Public backlash grows as Coupang faces scrutiny over massive data leak

South Korea is facing broader concerns about data governance following Coupang’s confirmation of a breach affecting 33.7 million accounts. Investigators say the leak began months before it was detected, highlighting weak access controls and delayed monitoring across major firms.

Authorities believe a former employee exploited long-valid server tokens and unrevoked permissions to extract customer records. Officials say the scale of the incident underscores persistent gaps in offboarding processes and basic internal safeguards.

Regulators have launched parallel inquiries to assess compliance violations and examine whether structural weaknesses extend beyond a single company. Recent leaks at telecom and financial institutions have raised similar questions about systemic risk.

Public reaction has been intense, with online groups coordinating class-action filings and documenting spikes in spam after the exposure. Many argue that repeated incidents point to a deeper corporate reluctance to invest meaningfully in security.

Lawmakers are now signalling plans for more substantial penalties and tighter oversight. Analysts warn that unless companies elevate data protection standards, South Korea will continue to face cascading breaches that damage public trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK ministers advance energy plans for AI expansion

The final AI Energy Council meeting of 2025 took place in London, led by AI Minister Kanishka Narayan alongside energy ministers Lord Vallance and Michael Shanks.

Regulators and industry representatives reviewed how the UK can expedite grid connections and support the necessary infrastructure for expanding AI activity nationwide.

Council members examined progress on government measures intended to accelerate connections for AI data centres. Plans include support for AI Growth Zones, with discounted electricity available for sites able to draw on excess capacity, which is expected to reduce pressure on the wider network.

Ministers underlined AI’s role in national economic ambitions, noting recent announcements of new AI Growth Zones in North East England and in North and South Wales.

They also discussed how forthcoming reforms are expected to help deliver AI-related infrastructure by easing access to grid capacity.

The meeting concluded with a focus on long-term energy needs for AI development. Participants explored ways to unlock additional capacity and considered innovative options for power generation, including self-build solutions.

The council will reconvene in early 2026 to continue work on sustainable approaches for future AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI faces questions after ChatGPT surfaces app prompts for paid users

ChatGPT users complained after the system surfaced an unexpected Peloton suggestion during an unrelated conversation. The prompt appeared for a Pro Plan subscriber and triggered questions about ad-like behaviour. Many asked why paid chats were showing promotional-style links.

OpenAI said the prompt was part of early app-discovery tests, not advertising. Staff acknowledged that the suggestion was irrelevant to the query. They said the system is still being adjusted to avoid confusing or misplaced prompts.

Users reported other recommendations, including music apps that contradicted their stated preferences. The lack of an option to turn off these suggestions fuelled irritation. Paid subscribers warned that such prompts undermine the service’s reliability.

OpenAI described the feature as a step toward integrating apps directly into conversations. The aim is to surface tools when genuinely helpful. Early trials, however, have demonstrated gaps between intended relevance and actual outcomes.

The tests remain limited to selected regions and are not active in parts of Europe. Critics argue intrusive prompts risk pushing users to competitors. OpenAI said refinements will continue to ensure suggestions feel helpful, not promotional.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Regulators question transparency after Mixpanel data leak

Mixpanel is facing criticism after disclosing a security incident with minimal detail, providing only a brief note before the US Thanksgiving weekend. Analysts say the timing and lack of clarity set a poor example for transparency in breach reporting.

OpenAI later confirmed its own exposure, stating that analytics data linked to developer activity had been obtained from Mixpanel’s systems. It stressed that ChatGPT users were not affected and that it had halted its use of the service following the incident.

OpenAI said the stolen information included names, email addresses, coarse location data and browser details, raising concerns about phishing risks. It noted that no advertising identifiers were involved, limiting broader cross-platform tracking.

Security experts say the breach highlights long-standing concerns about analytics companies that collect detailed behavioural and device data across thousands of apps. Mixpanel’s session-replay tools are a particular concern, as they can inadvertently capture private information.

Regulators argue the case shows why analytics providers have become prime targets for attackers. They say that more transparent disclosure from Mixpanel is needed to assess the scale of exposure and the potential impact on companies and end-users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Irish regulator opens investigations into TikTok and LinkedIn

Regulators in Ireland have opened investigations into TikTok and LinkedIn under the EU Digital Services Act.

Coimisiún na Meán’s Investigations Team believes there may be shortcomings in how both platforms handle reports of suspected illegal material. Concerns emerged during an exhaustive review of Article 16 compliance that began last year and focused on the availability of reporting tools.

The review highlighted the potential for interface designs that could confuse users, particularly when choosing between reporting illegal content and content that merely violates platform rules.

The investigation will examine whether reporting tools are easy to access, user-friendly and capable of supporting anonymous reporting of suspected child sexual abuse material, as required under Article 16(2)(c).

It will also assess whether platform design may discourage users from reporting material as illegal under Article 25.

Coimisiún na Meán stated that several other providers made changes to their reporting systems following regulatory engagement. Those changes are being reviewed for effectiveness.

The regulator emphasised that platforms must avoid practices that could mislead users and must provide reliable reporting mechanisms instead of diverting people toward less protective options.

These investigations will proceed under the Broadcasting Act of Ireland. If either platform is found to be in breach of the DSA, the regulator can impose administrative penalties that may reach six percent of global turnover.

Coimisiún na Meán noted that cooperation remains essential and that further action may be necessary if additional concerns about DSA compliance arise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

The initiative aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals with clear outputs, such as datasets, evaluation methods or practical insights that improve safety and guidance.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

eSafety highlights risks in connected vehicle technology

Australia’s eSafety regulator is drawing attention to concerns about how connected car features can be misused within domestic and family violence situations.

Reports from frontline workers indicate that remote access tools, trip records and location tracking can be exploited instead of serving their intended purpose as safety and convenience features.

The Australian regulator stresses that increased connectivity across vehicles and devices is creating new challenges for those supporting victim-survivors.

Smart cars often store detailed travel information and allow remote commands through apps and online accounts. These functions can be accessed by someone with shared credentials or linked accounts, which can expose sensitive information.

eSafety notes that misuse of connected vehicles forms part of a broader pattern of technology-facilitated coercive control, where multiple smart devices such as watches, tablets, cameras and televisions can play a role.

The regulator has produced updated guidance to help people understand potential risks and take practical steps with the support of specialist services.

Officials highlight the importance of stronger safeguards from industry, including simpler methods for revoking access, clearer account transfer processes during separation and more transparent logs showing when remote commands are used.

Retailers and dealerships are encouraged to ensure devices and accounts are reset when ownership changes. eSafety argues that design improvements introduced early can reduce the likelihood of harm and avoid the need for complex responses later.

Agencies and community services continue to assist those affected by domestic and family violence, offering advice on account security, safe device use and available support services.

The guidance aims to help people take protective measures in a controlled and safe way, while emphasising the importance of accessing professional assistance.

eSafety encourages ongoing cooperation between industry, government and frontline workers to manage risks linked to emerging automotive and digital technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini Projects feature appears in Google app teardown

Google is preparing a Gemini feature called Projects, offering a small workspace for grouping chats by topic. Early clues suggest it works like a sandbox that keeps related conversations structured. The feature is still hidden and not yet active for any users.

An Android Authority teardown of the Google app revealed the interface and onboarding prompts. These mention isolating chats, choosing a focus area and adding files for context. The feature remains dormant until Google enables it on the server.

When opening a project, users can name it and then view a simple dashboard. This includes options to set project goals that guide Gemini’s behaviour. The aim is to keep longer tasks organised in one place.

The teardown shows a limit of ten file uploads per project, with no clarity on whether paid tiers will receive more. This may affect complex tasks that require a larger context. Users will also be able to pin projects for quicker access.

All of this information comes from hidden code, and Google has not confirmed any details. The design or limits may change before launch, so until the feature is announced, the findings should be treated as provisional.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greek businesses urged to accelerate AI adoption

AI is becoming a central factor in business development, according to Google Cloud executives visiting Athens for Google Cloud Day.

Marianne Janik and Boris Georgiev explained that AI is entering daily life more quickly than many expected, creating an urgent need for companies to strengthen their capabilities. Their visit coincided with the international launch of Gemini 3, the latest version of the company’s AI model.

They argued that enterprises in Greece should accelerate their adoption of AI tools to remain competitive. A slow transition could limit their position in both domestic and international markets.

They also underlined the importance of employees developing new skills that support digital transformation, noting that risk-taking has become a necessary element of strategic progress.

The financial sector is advancing at a faster pace, aided by its long-standing familiarity with digital and analytical tools.

Banks are investing heavily in compliance functions and customer onboarding. Retail is also undergoing a similar transformation, driven by consumer expectations and operational pressures.

Google Cloud Day in Athens brought together a large number of participants, highlighting the sector’s growing interest in practical AI applications and the role of advanced models in shaping business processes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves forward with Bulgaria payment review

The European Commission has given partial approval to Bulgaria’s request for €1.6 billion under the Recovery and Resilience Facility. The assessment followed the country’s submission in early October and confirmed that most reforms and investments linked to the payment were completed.

Progress spanned the green and digital transition, research, innovation, healthcare, social protection, sustainable transport and business modernisation.

Officials confirmed that 48 of the 50 milestones were met, supporting Bulgaria’s efforts to strengthen economic growth and improve long-term competitiveness without delaying structural change.

Measures covered a prohibition on new coal or lignite power installations, limits on emissions from existing plants, investment in renewable energy and steps to make healthcare careers more appealing.

The Commission noted that these areas formed core elements of Bulgaria’s recovery plan.

Two milestones were considered incomplete. The first relates to the establishment of an operational anti-corruption body; the second concerns aspects of legal acts linked to criminal proceedings and the accountability of the Prosecutor General.

Additionally, the Commission proposed a temporary deferral for the portion of funding connected to those elements, allowing Bulgaria to receive money for milestones already achieved instead of holding back the entire request.

The next stage involves a review by the Economic and Financial Committee within four weeks. Bulgaria will also have one month to respond to the Commission’s concerns. If issues remain unresolved, part of the payment will be withheld until the outstanding milestones are met.

Once corrective actions are completed, the remaining funds will be released in line with the standard procedure for the Recovery and Resilience Facility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!