OpenAI faced questions after ChatGPT surfaced app prompts for paid users

ChatGPT users complained after the system surfaced an unexpected Peloton suggestion during an unrelated conversation. The prompt appeared for a Pro Plan subscriber and triggered questions about ad-like behaviour. Many asked why paid chats were showing promotional-style links.

OpenAI said the prompt was part of early app-discovery tests, not advertising. Staff acknowledged that the suggestion was irrelevant to the query. They said the system is still being adjusted to avoid confusing or misplaced prompts.

Users reported other recommendations, including music apps that contradicted their stated preferences. The lack of an option to turn off these suggestions fuelled irritation. Paid subscribers warned that such prompts undermine the service’s reliability.

OpenAI described the feature as a step toward integrating apps directly into conversations. The aim is to surface tools when genuinely helpful. Early trials, however, have demonstrated gaps between intended relevance and actual outcomes.

The tests remain limited to selected regions and are not active in parts of Europe. Critics argue intrusive prompts risk pushing users to competitors. OpenAI said refinements will continue to ensure suggestions feel helpful, not promotional.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Regulators question transparency after Mixpanel data leak

Mixpanel is facing criticism after disclosing a security incident with minimal detail, providing only a brief note before the US Thanksgiving weekend. Analysts say the timing and lack of clarity set a poor example for transparency in breach reporting.

OpenAI later confirmed its own exposure, stating that analytics data linked to developer activity had been obtained from Mixpanel’s systems. It stressed that ChatGPT users were not affected and that it had halted its use of the service following the incident.

OpenAI said the stolen information included names, email addresses, coarse location data and browser details, raising concerns about phishing risks. It noted that no advertising identifiers were involved, limiting broader cross-platform tracking.

Security experts say the breach highlights long-standing concerns about analytics companies that collect detailed behavioural and device data across thousands of apps. Mixpanel’s session-replay tools can be sensitive, as they can inadvertently capture private information.

Regulators argue the case shows why analytics providers have become prime targets for attackers. They say that more transparent disclosure from Mixpanel is needed to assess the scale of exposure and the potential impact on companies and end-users.

Irish regulator opens investigations into TikTok and LinkedIn

Regulators in Ireland have opened investigations into TikTok and LinkedIn under the EU Digital Services Act.

Coimisiún na Meán’s Investigations Team believes there may be shortcomings in how both platforms handle reports of suspected illegal material. Concerns emerged during an exhaustive review of Article 16 compliance that began last year and focused on the availability of reporting tools.

The review highlighted the potential for interface designs that could confuse users, particularly when choosing between reporting illegal content and content that merely violates platform rules.

The investigation will examine whether reporting tools are easy to access, user-friendly and capable of supporting anonymous reporting of suspected child sexual abuse material, as required under Article 16(2)(c).

It will also assess whether platform design may discourage users from reporting material as illegal under Article 25.

Coimisiún na Meán stated that several other providers made changes to their reporting systems following regulatory engagement. Those changes are being reviewed for effectiveness.

The regulator emphasised that platforms must avoid practices that could mislead users and must provide reliable reporting mechanisms instead of diverting people toward less protective options.

These investigations will proceed under Ireland's Digital Services Act 2024, which gives effect to the EU regulation. If either platform is found to be in breach of the DSA, the regulator can impose administrative penalties of up to six percent of global turnover.

Coimisiún na Meán noted that cooperation remains essential and that further action may be necessary if additional concerns about DSA compliance arise.

OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

The initiative aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals that can offer clear outputs, such as datasets, evaluation methods, or practical insights, that improve safety and guidance.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.

AI growth threatens millions of jobs across Asia

UN economists warned that millions of jobs in Asia could be at risk as AI widens the gap between digitally advanced nations and those lacking basic access and skills. The report compared the AI revolution to 19th-century industrialisation, which enriched a few and left many behind.

Women and young adults face the most significant threat from AI in the workplace, while the benefits in health, education, and income are unevenly distributed.

Countries such as China, Singapore, and South Korea have invested heavily in AI and reaped significant benefits. Still, entry-level workers in many South Asian nations remain highly vulnerable to automation and technological advancements.

The UN Development Programme urged governments to consider ethical deployment and inclusivity when implementing AI. Countries such as Cambodia, Papua New Guinea, and Vietnam are focusing on developing simple digital tools to help health workers and farmers who lack reliable internet access.

AI could generate nearly $1 trillion in economic gains across Asia over the next decade, boosting regional GDP growth by about two percentage points. Yet income disparities mean those benefits remain concentrated in wealthy countries, leaving poorer nations at a disadvantage.

eSafety highlights risks in connected vehicle technology

Australia’s eSafety regulator is drawing attention to concerns about how connected car features can be misused within domestic and family violence situations.

Reports from frontline workers indicate that remote access tools, trip records and location tracking can be exploited instead of serving their intended purpose as safety and convenience features.

The Australian regulator stresses that increased connectivity across vehicles and devices is creating new challenges for those supporting victim-survivors.

Smart cars often store detailed travel information and allow remote commands through apps and online accounts. These functions can be accessed by someone with shared credentials or linked accounts, which can expose sensitive information.

eSafety notes that misuse of connected vehicles forms part of a broader pattern of technology-facilitated coercive control, where multiple smart devices such as watches, tablets, cameras and televisions can play a role.

The regulator has produced updated guidance to help people understand potential risks and take practical steps with the support of specialist services.

Officials highlight the importance of stronger safeguards from industry, including simpler methods for revoking access, clearer account transfer processes during separation and more transparent logs showing when remote commands are used.

Retailers and dealerships are encouraged to ensure devices and accounts are reset when ownership changes. eSafety argues that design improvements introduced early can reduce the likelihood of harm, rather than requiring complex responses later.

Agencies and community services continue to assist those affected by domestic and family violence, offering advice on account security, safe device use and available support services.

The guidance aims to help people take protective measures in a controlled and safe way, while emphasising the importance of accessing professional assistance.

eSafety encourages ongoing cooperation between industry, government and frontline workers to manage risks linked to emerging automotive and digital technologies.

SIM-binding mandate forces changes to WhatsApp use in India

India plans to change how major messaging apps operate under new rules requiring SIM binding and frequent re-verification. The directive obliges platforms to confirm that the original SIM remains active, altering long-standing habits around device switching. Services have 90 days to comply with the order.

The Department of Telecom says continuous SIM checks will reduce misuse by linking each account to a live subscriber identity. Companion tools such as WhatsApp Web will automatically log out every 6 hours. Users will need to relink sessions with a QR code to stay connected.
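
The policy described above amounts to a session rule with two conditions: the companion session must be younger than six hours, and the bound SIM must still be live. A minimal sketch of that logic, with all names and checks as illustrative assumptions rather than any platform's actual implementation:

```python
# Sketch of the session policy described in the directive: a linked web
# session expires after six hours and also dies if the bound SIM goes
# inactive. Class and field names here are hypothetical, not a real API.
import time

SESSION_TTL = 6 * 60 * 60  # six hours, in seconds

class LinkedSession:
    def __init__(self, phone_number, linked_at=None):
        self.phone_number = phone_number
        # Timestamp of the QR-code handshake that created this session.
        self.linked_at = time.time() if linked_at is None else linked_at

    def is_valid(self, sim_active: bool, now=None) -> bool:
        # Both conditions from the directive must hold: the original SIM
        # is still live, and the session is younger than the TTL.
        now = time.time() if now is None else now
        return sim_active and (now - self.linked_at) < SESSION_TTL

s = LinkedSession("+91XXXXXXXXXX", linked_at=0)
print(s.is_valid(sim_active=True, now=5 * 3600))   # True: within 6 hours
print(s.is_valid(sim_active=True, now=7 * 3600))   # False: must re-scan QR
print(s.is_valid(sim_active=False, now=1 * 3600))  # False: SIM deactivated
```

Under such a rule, a Wi-Fi-only device can never keep a standing session on its own, which is exactly why the tablet and multi-device cases below become awkward.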

The rules apply to apps that rely on phone numbers, including WhatsApp, Signal, Telegram, and local platforms. The approach mirrors SIM-bound verification used in banking apps in India. It adds a deeper security layer that goes beyond one-time codes and registration checks.

The change may inconvenience people who use Wi-Fi-only tablets or older devices without an active SIM card. It also affects anyone who relies on WhatsApp Web for work or on multi-device setups under a single number. Messaging apps may need new login systems to ease the shift.

Officials argue that tighter controls are needed to limit cyber fraud and protect consumers. Users may still access services, but with reduced flexibility and more frequent verification. India’s move signals a broader push for stronger digital safeguards across core communications tools.

Dublin startup raises US$2.5 million to protect AI data with encryption

Mirror Security, founded at University College Dublin, has announced a US$2.5 million (approx. €2.15 million) pre-seed funding round to develop what it describes as the next generation of secure AI infrastructure.

The startup’s core product, VectaX, is a fully homomorphic encryption (FHE) engine designed for AI workloads. This technology allows AI systems to process, train or infer on data that remains encrypted, meaning sensitive or proprietary data never has to be exposed in plaintext, even during computation.
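
The core idea, computing on data that stays encrypted, can be illustrated with a far simpler scheme than FHE: the classic Paillier cryptosystem, which is only additively homomorphic. The toy below (small fixed primes, no padding, emphatically not VectaX's API) adds two numbers without ever decrypting the operands:

```python
# Toy Paillier cryptosystem: multiplying two ciphertexts yields a
# ciphertext of the SUM of the plaintexts. Illustrative only; real FHE
# engines support much richer computation and vastly larger keys.
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=104723, q=104729):
    # Fixed small primes for demonstration; real keys are thousands of bits.
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                    # standard simplification for g
    mu = pow(lam, -1, n)         # with g = n + 1, mu is lam^-1 mod n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while gcd(r, n) != 1:        # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 20), encrypt(pub, 22)
c_sum = (c1 * c2) % (pub[0] ** 2)  # homomorphic addition, still encrypted
print(decrypt(pub, priv, c_sum))   # 42
```

Fully homomorphic schemes extend this property to arbitrary additions and multiplications, which is what makes encrypted inference and training on AI workloads possible in principle.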

Backed by leading deep-tech investors such as Sure Valley Ventures (SVV) and Atlantic Bridge, Mirror Security plans to scale its engineering and AI-security teams across Ireland, the US and India, accelerate development of encrypted inferencing and secure fine-tuning, and target enterprise markets in the US.

As organisations increasingly adopt AI, often handling sensitive data, Mirror Security argues that conventional security measures, such as policy-based controls, fall short. Its encryption-native approach aims to provide cryptographic guarantees rather than trust-based assurances, positioning the company as a ‘trust layer’ for the emerging AI economy.

The Irish startup also announced a strategic partnership with Inception AI (a subsidiary of G42) to deploy its full AI security stack across enterprise and government systems. Mirror has also formed collaborations with major technology players including Intel, MongoDB, and others.

From a digital policy and global technology governance perspective, this funding milestone is significant. It underlines how the increasing deployment of AI, especially in enterprise and government contexts, is creating demand for robust, privacy-preserving infrastructure. Mirror Security’s model offers a potential blueprint for how to reconcile AI’s power with data confidentiality, compliance, and sovereignty.

Australia stands firm on under 16 social media ban

Australia’s government defended its under-16 social media ban ahead of its introduction on 10 December. Minister Anika Wells said she would not be pressured by major platforms opposing the plan.

Tech companies argued that bans may prove ineffective, yet Wells maintained firms had years to address known harms. She insisted parents required stronger safeguards after repeated failures by global platforms.

Critics raised concerns about enforcement and the exclusion of online gaming despite widespread worries about Roblox. Two teenagers also launched a High Court challenge, claiming the policy violated children’s rights.

Wells accepted rollout difficulties but said wider social gains in Australia justified firm action. She added that policymakers must intervene when unsafe operating models place young people at risk.

Italy secures new EU support for growth and reform

The European Commission has endorsed Italy’s latest request for funding under the Recovery and Resilience Facility, marking an important step in the country’s economic modernisation.

The approval covers 12.8 billion euros, combining grants and loans, and supports efforts to strengthen competitiveness and long-term growth across key sectors of national life.

Italy completed 32 milestones and targets connected to the eighth instalment, enabling progress in public administration, procurement, employment, education, research, tourism, renewable energy and the circular economy.

Thousands of schools have gained new resources to improve multilingual learning and build stronger skills in science, technology, engineering, arts and mathematics.

Many primary and secondary schools have also secured modern digital tools to enhance teaching quality instead of relying on outdated systems.

Health research forms another major part of the package. Projects focused on rare diseases, cancer and other high-impact conditions have gained fresh funding to support scientific work and improve treatment pathways.

These measures contribute to a broader transformation programme financed through 194.4 billion euros, representing one of the largest recovery plans in the EU.

A four-week review by the Economic and Financial Committee will follow before the payment can be released. Once completed, Italy’s total receipts will exceed 153 billion euros, covering more than 70 percent of its full Recovery and Resilience Facility allocation.
