OpenAI faced questions after ChatGPT surfaced app prompts for paid users

ChatGPT users complained after the system surfaced an unexpected Peloton suggestion during an unrelated conversation. The prompt appeared for a Pro Plan subscriber and triggered questions about ad-like behaviour. Many asked why paid chats were showing promotional-style links.

OpenAI said the prompt was part of early app-discovery tests, not advertising. Staff acknowledged that the suggestion was irrelevant to the query. They said the system is still being adjusted to avoid confusing or misplaced prompts.

Users reported other recommendations, including music apps that contradicted their stated preferences. The lack of an option to turn off these suggestions fuelled irritation. Paid subscribers warned that such prompts undermine the service’s reliability.

OpenAI described the feature as a step toward integrating apps directly into conversations. The aim is to surface tools when genuinely helpful. Early trials, however, have demonstrated gaps between intended relevance and actual outcomes.

The tests remain limited to selected regions and are not active in parts of Europe. Critics argue intrusive prompts risk pushing users to competitors. OpenAI said refinements will continue to ensure suggestions feel helpful, not promotional.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Regulators question transparency after Mixpanel data leak

Mixpanel is facing criticism after disclosing a security incident with minimal detail, providing only a brief note before the US Thanksgiving weekend. Analysts say the timing and lack of clarity set a poor example for transparency in breach reporting.

OpenAI later confirmed its own exposure, stating that analytics data linked to developer activity had been obtained from Mixpanel’s systems. It stressed that ChatGPT users were not affected and that it had halted its use of the service following the incident.

OpenAI said the stolen information included names, email addresses, coarse location data and browser details, raising concerns about phishing risks. It noted that no advertising identifiers were involved, limiting broader cross-platform tracking.

Security experts say the breach highlights long-standing concerns about analytics companies that collect detailed behavioural and device data across thousands of apps. Mixpanel’s session-replay tools are particularly sensitive, as they can inadvertently capture private information.

Regulators argue the case shows why analytics providers have become prime targets for attackers. They say that more transparent disclosure from Mixpanel is needed to assess the scale of exposure and the potential impact on companies and end-users.

OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

The initiative aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals, and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals with clear outputs, such as datasets, evaluation methods or practical insights, that can improve safety and guidance.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.

AI growth threatens millions of jobs across Asia

UN economists warned that millions of jobs in Asia could be at risk as AI widens the gap between digitally advanced nations and those lacking basic access and skills. The report compared the AI revolution to 19th-century industrialisation, which created a wealthy few and left many behind.

Women and young adults face the most significant threat from AI in the workplace, while the benefits in health, education, and income are unevenly distributed.

Countries such as China, Singapore, and South Korea have invested heavily in AI and reaped significant benefits. Still, entry-level workers in many South Asian nations remain highly vulnerable to automation and technological advancements.

The UN Development Programme urged governments to consider ethical deployment and inclusivity when implementing AI. Countries such as Cambodia, Papua New Guinea, and Vietnam are focusing on developing simple digital tools to help health workers and farmers who lack reliable internet access.

AI could generate nearly $1 trillion in economic gains across Asia over the next decade, boosting regional GDP growth by about two percentage points. Yet income disparities mean those benefits remain concentrated in wealthy countries, leaving poorer nations at a disadvantage.

Gemini Projects feature appears in Google app teardown

Google is preparing a Gemini feature called Projects, offering a small workspace for grouping chats by topic. Early clues show it works like a sandbox that keeps related conversations structured. The feature is still hidden and not yet active for any users.

An Android Authority teardown of the Google app revealed the interface and onboarding prompts. These mention isolating chats, choosing a focus area and adding files for context. The feature remains dormant until Google enables it on the server.

When opening a project, users can name it and then view a simple dashboard. This includes options to set project goals that guide Gemini’s behaviour. The aim is to keep longer tasks organised in one place.

The teardown shows a limit of ten file uploads per project, with no clarity on whether paid tiers will receive more. This may affect complex tasks that require a larger context. Users will also be able to pin projects for quicker access.

Because all information comes from hidden code, Google has not confirmed any details. The design or limits may change before launch. Until the Gemini feature is announced, the findings should be treated as provisional.

Greek businesses urged to accelerate AI adoption

AI is becoming a central factor in business development, according to Google Cloud executives visiting Athens for Google Cloud Day.

Marianne Janik and Boris Georgiev explained that AI is entering daily life more quickly than many expected, creating an urgent need for companies to strengthen their capabilities. Their visit coincided with the international launch of Gemini 3, the latest version of the company’s AI model.

They argued that enterprises in Greece should accelerate their adoption of AI tools to remain competitive. A slow transition could limit their position in both domestic and international markets.

They also underlined the importance of employees developing new skills that support digital transformation, noting that risk-taking has become a necessary element of strategic progress.

The financial sector is advancing faster than most, aided by its long-standing familiarity with digital and analytical tools.

Banks are investing heavily in compliance functions and customer onboarding. Retail is also undergoing a similar transformation, driven by consumer expectations and operational pressures.

Google Cloud Day in Athens brought together a large number of participants, highlighting the sector’s growing interest in practical AI applications and the role of advanced models in shaping business processes.

V3.2 models signal renewed DeepSeek momentum

DeepSeek has launched two new reasoning-focused models, V3.2 and V3.2-Speciale. The release marks a shift toward agent-style systems that emphasise efficiency. Both models are positioned as upgrades to the firm’s earlier experimental work.

The V3.2 model incorporates structured thinking into its tool-use behaviour. It supports fast and reflective modes while generating large training datasets. DeepSeek says this approach enables more exhaustive testing across thousands of tasks.

V3.2-Speciale is designed for high-intensity reasoning workloads and contests. DeepSeek reports performance levels comparable to top proprietary systems. Its Sparse Attention method keeps costs down for long and complex inputs.
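DeepSeek has not detailed the mechanism here, but the cost argument behind sparse attention can be sketched generically: if each query attends to only a fixed window of keys rather than every position, cost grows roughly linearly with sequence length instead of quadratically. The sliding-window pattern and window size below are illustrative assumptions, not DeepSeek’s actual Sparse Attention design.

```python
# Illustrative sliding-window sparse attention in NumPy.
# A generic sparsity pattern, NOT DeepSeek's actual method.
import numpy as np

def sparse_attention(Q, K, V, window=4):
    """Each query attends only to keys within `window` positions,
    so the useful work scales as O(n * window) rather than O(n^2)."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)   # full matrix here for clarity only;
                                    # real kernels skip masked blocks entirely
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) > window
    scores[mask] = -np.inf          # drop out-of-window query/key pairs
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((16, 8))
out = sparse_attention(Q, K, V)
assert out.shape == (16, 8)
```

Production systems realise the saving by never computing the masked entries at all; the dense-then-mask version above only demonstrates the pattern.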

The launch follows pressure from rapid advances by key rivals. DeepSeek argues the new line narrows capability gaps despite lower budgets. Earlier momentum came from strong pricing, but expectations have increased.

The company views the V3.2 series as supporting agent pipelines and research applications. It frames the update as proof that efficient models can still compete globally. Developers are expected to use the systems for analytical and technical tasks.

Thrive Holdings deepens AI collaboration with OpenAI for business transformation

OpenAI and Thrive Holdings have launched a partnership to accelerate enterprise adoption of AI. The work focuses on applying AI to high-volume business functions such as accounting and IT services. Both companies say these areas offer immediate gains in speed, accuracy, and cost efficiency.

OpenAI will place its teams inside Thrive Holdings’ companies to improve core workflows. The partners want a model they can replicate across other sectors. They say embedding AI in real operations delivers better results than external tools.

Executives say AI is reshaping how organisations deliver value in competitive markets. OpenAI’s Brad Lightcap described the collaboration as an example of rapid, organisation-wide transformation. He said the approach could guide other businesses seeking practical pathways to use advanced AI tools.

Thrive Holdings views the initiative as part of a broader shift in how technology is adopted. Founder Joshua Kushner said industry experts are now driving change from within their sectors. He added that Thrive’s portfolio offers the data and domain knowledge needed to refine AI for specialised tasks.

Both partners expect the model to scale into additional business areas as uptake grows. They see long-term opportunities to adapt the framework to more enterprise functions. The ambition is to demonstrate how embedded AI can boost performance and sustain operational improvements.

Dublin startup raises US$2.5m to protect AI data with encryption

Mirror Security, founded at University College Dublin, has announced a US$2.5 million (approx. €2.15 million) pre-seed funding round to develop what it describes as the next generation of secure AI infrastructure.

The startup’s core product, VectaX, is a fully homomorphic encryption (FHE) engine designed for AI workloads. This technology allows AI systems to process, train or infer on data that remains encrypted, meaning sensitive or proprietary data never has to be exposed in plaintext, even during computation.
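The core idea of computing on ciphertexts can be shown with a toy example. The sketch below uses the Paillier cryptosystem, which is only additively (partially) homomorphic and deliberately uses tiny primes; it illustrates the general principle, not how VectaX or any production FHE engine works.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Illustration only; real FHE engines use far stronger constructions
# and support arbitrary computation, not just addition.
import random
from math import gcd

def keygen(p=17, q=19):
    # Deliberately tiny primes for demonstration; never use in practice.
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1                                     # standard simple generator
    mu = pow(lam, -1, n)                          # valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while gcd(r, n) != 1:                         # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    ell = (pow(c, lam, n * n) - 1) // n           # Paillier's "L" function
    return (ell * mu) % n

pub, priv = keygen()
a, b = encrypt(pub, 20), encrypt(pub, 22)
# Multiplying ciphertexts adds the hidden plaintexts: 20 + 22 = 42,
# computed without ever decrypting the inputs.
total = (a * b) % (pub[0] ** 2)
assert decrypt(pub, priv, total) == 42
```

A fully homomorphic scheme extends this idea to both addition and multiplication, which is what allows arbitrary AI workloads to run over encrypted data.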

Backed by leading deep-tech investors such as Sure Valley Ventures (SVV) and Atlantic Bridge, Mirror Security plans to scale its engineering and AI-security teams across Ireland, the US and India, accelerate development of encrypted inferencing and secure fine-tuning, and target enterprise markets in the US.

As organisations increasingly adopt AI, often handling sensitive data, Mirror Security argues that conventional security measures (like policy-based controls) fall short. Its encryption-native approach aims to provide cryptographic guarantees rather than trust-based assurances, positioning the company as a ‘trust layer’ for the emerging AI economy.

The Irish startup also announced a strategic partnership with Inception AI (a subsidiary of G42) to deploy its full AI security stack across enterprise and government systems. Mirror has also formed collaborations with major technology players including Intel, MongoDB, and others.

From a digital policy and global technology governance perspective, this funding milestone is significant. It underlines how the increasing deployment of AI, especially in enterprise and government contexts, is creating demand for robust, privacy-preserving infrastructure. Mirror Security’s model offers a potential blueprint for how to reconcile AI’s power with data confidentiality, compliance, and sovereignty.

Australia stands firm on under-16 social media ban

Australia’s government defended its under-16 social media ban ahead of its introduction on 10 December. Minister Anika Wells said she would not be pressured by major platforms opposing the plan.

Tech companies argued that bans may prove ineffective, yet Wells countered that firms had been given years to address known harms. She insisted parents required stronger safeguards after repeated failures by global platforms.

Critics raised concerns about enforcement and the exclusion of online gaming despite widespread worries about Roblox. Two teenagers also launched a High Court challenge, claiming the policy violated children’s rights.

Wells accepted rollout difficulties but said wider social gains in Australia justified firm action. She added that policymakers must intervene when unsafe operating models place young people at risk.
