EU opens antitrust probe into Meta’s WhatsApp AI rollout

Brussels has opened an antitrust inquiry into Meta over how AI features were added to WhatsApp, focusing on whether the updated access policies hinder market competition. Regulators say scrutiny is needed as integrated assistants become central to messaging platforms.

Meta AI has been built into WhatsApp across Europe since early 2025, prompting questions about whether external AI providers face unfair barriers. Meta rejects the accusations and argues that users can reach rival tools through other digital channels.

Italy launched a related proceeding in July and expanded it in November, examining claims that Meta curtailed access for competing chatbots. Authorities worry that dominance in messaging could influence the wider AI services market.

EU officials confirmed the case will proceed under standard antitrust rules rather than the Digital Markets Act. Investigators aim to understand how embedded assistants reshape competitive dynamics in services used by millions.

European regulators say outcomes could guide future oversight as generative AI becomes woven into essential communications. The case signals growing concern about concentrated power in fast-evolving AI ecosystems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Uzbekistan sets principles for responsible AI

Uzbekistan has adopted a new ethical framework for the development and use of AI technologies.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework forms part of presidential decrees and resolutions aimed at advancing AI innovation across the country. It also emphasises legality, transparency, fairness, accountability, and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space, and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.


UNESCO launches AI guidelines for courts and tribunals

UNESCO has launched new Guidelines for the Use of AI Systems in Courts and Tribunals to ensure AI strengthens rather than undermines human-led justice. The initiative arrives as courts worldwide face millions of pending cases and limited resources.

In Argentina, AI-assisted legal tools have increased case processing by nearly 300%, while automated transcription in Egypt is improving court efficiency.

Judicial systems are increasingly encountering AI-generated evidence, AI-assisted sentencing, and automated administrative processes. AI misuse can have serious consequences, as seen in the UK High Court where false AI-generated arguments caused delays, extra costs, and fines.

UNESCO’s Guidelines aim to prevent such risks by emphasising human oversight, auditability, and ethical AI use.

The Guidelines outline 15 principles and provide recommendations for judicial organisations and individual judges throughout the AI lifecycle. They also serve as a benchmark for developing national and regional standards.

UNESCO’s Judges’ Initiative, which has trained over 36,000 judicial operators in 160 countries, played a key role in shaping and peer-reviewing the Guidelines.

The official launch will take place at the Athens Roundtable on AI and the Rule of Law in London on 4 December 2025. UNESCO aims for the standards to ensure responsible AI use, improve court efficiency, and uphold public trust in the judiciary.


Meta expands global push against online scam networks

US tech giant Meta outlined an expanded strategy to limit online fraud by combining technical defences with stronger collaboration across industry and law enforcement.

The company described scams as a threat to user safety and as a direct risk to the credibility of its advertising ecosystem, which remains central to its business model.

Executives emphasised that large criminal networks continue to evolve and that a faster, coordinated response is essential instead of fragmented efforts.

Meta presented recent progress, noting that more than 134 million scam advertisements were removed in 2025 and that reports about misleading advertising fell significantly in the last fifteen months.

It also provided details about disrupted criminal networks that operated across Facebook, Instagram and WhatsApp.

Facial recognition tools played a crucial role in detecting scam content that used images of public figures, increasing the volume of removals during testing before such material could circulate more widely.

Cooperation with law enforcement remains central to Meta’s approach. The company supported investigations that targeted criminal centres in Myanmar and illegal online gambling operations connected to transfers through anonymous accounts.

Information shared with financial institutions and partners in the Global Signal Exchange contributed to the removal of thousands of accounts. At the same time, legal action continued against those who used impersonation or bulk messaging to deceive users.

Meta stated that it backs bipartisan legislation designed to support a national response to online fraud. The company argued that new laws are necessary to weaken transnational groups behind large-scale scam operations and to protect users more effectively.

A broader aim is to strengthen trust across Meta’s services, rather than allowing criminal activity to undermine user confidence and advertiser investment.


YouTube criticises Australia’s new youth social-media restrictions

Australia’s forthcoming ban on social media accounts for users under 16 has prompted intense criticism from YouTube, which argues that the new law will undermine existing child safety measures.

From 10 December, young users will be logged out of their accounts and barred from posting or uploading content, though they will still be able to watch videos without signing in.

YouTube said the policy will remove key parental-control tools, such as content filters, channel blocking and well-being reminders, which only function for logged-in accounts.

Rachel Lord, Google and YouTube public-policy lead for Australia, described the measure as ‘rushed regulation’ and warned the changes could make children ‘less safe’ by stripping away long-established protections.

Communications Minister Anika Wells rejected this criticism as ‘outright weird’, arguing that if YouTube believes its own platform is unsafe for young users, it must address that problem itself.

The debate comes as Australia’s eSafety Commissioner investigates other youth-focused apps such as Lemon8 and Yope, which have seen a surge in downloads ahead of the ban.

Regulators reversed YouTube’s earlier exemption in July after identifying it as the platform where 10- to 15-year-olds most frequently encountered harmful content.

Under the new Social Media Minimum Age Act, companies must deactivate underage accounts, prevent new sign-ups and halt any technical workarounds or face penalties of up to A$49.5m.

Officials say the measure responds to concerns about the impact of algorithms, notifications and constant connectivity on Gen Alpha. Wells said the law aims to reduce the ‘dopamine drip’ that keeps young users hooked to their feeds, calling it a necessary step to shield children from relentless online pressures.

YouTube has reportedly considered challenging its inclusion in the ban, but has not confirmed whether it will take legal action.


Mistral AI unveils new open models with broader capabilities

Yesterday, Mistral AI introduced Mistral 3 as a new generation of open multimodal and multilingual models that aim to support developers and enterprises through broader access and improved efficiency.

The company presented both small dense models and a new mixture-of-experts system called Mistral Large 3, offering open-weight releases to encourage wider adoption across different sectors.

Developers are encouraged to build on models in compressed formats that reduce deployment costs, rather than relying on heavier, closed solutions.

The organisation highlighted that Large 3 was trained with extensive resources on NVIDIA hardware to improve performance in multilingual communication, image understanding and general instruction tasks.

Mistral AI underlined its cooperation with NVIDIA, Red Hat and vLLM to deliver faster inference and easier deployment, providing optimised support for data centres along with options suited for edge computing.

The partnership introduced lower-precision execution and improved kernels to increase throughput for frontier-scale workloads.

Attention was also given to the Ministral 3 series, which includes models designed for local or edge settings in three sizes. Each version supports image understanding and multilingual tasks, with instruction and reasoning variants that aim to strike a balance between accuracy and cost efficiency.

Moreover, the company stated that these models produce fewer tokens in real-world use cases, rather than generating unnecessarily long outputs, a choice that aims to reduce operational burdens for enterprises.

Mistral AI continued by noting that all releases will be available through major platforms and cloud partners, offering both standard and custom training services. Organisations that require specialised performance are invited to adapt the models to domain-specific needs under the Apache 2.0 licence.

The company emphasised a long-term commitment to open development and encouraged developers to explore and customise the models to support new applications across different industries.


Honolulu in the US pushes for transparency in government AI use

Growing pressure from Honolulu residents in the US is prompting city leaders to consider stricter safeguards surrounding the use of AI. Calls for greater transparency have intensified as AI has quietly become part of everyday government operations.

Several city departments already rely on automated systems for tasks such as building-plan screening, customer service support and internal administrative work. Advocates now want voters to decide whether the charter should require a public registry of AI tools, human appeal rights and routine audits.

Concerns have deepened after the police department began testing AI-assisted report-writing software without broad consultation. Supporters of reform argue that stronger oversight is crucial to maintain public trust, especially if AI starts influencing high-stakes decisions that impact residents’ lives.


OpenAI faces questions after ChatGPT surfaced app prompts for paid users

ChatGPT users complained after the system surfaced an unexpected Peloton suggestion during an unrelated conversation. The prompt appeared for a Pro Plan subscriber and triggered questions about ad-like behaviour. Many asked why paid chats were showing promotional-style links.

OpenAI said the prompt was part of early app-discovery tests, not advertising. Staff acknowledged that the suggestion was irrelevant to the query. They said the system is still being adjusted to avoid confusing or misplaced prompts.

Users reported other recommendations, including music apps that contradicted their stated preferences. The lack of an option to turn off these suggestions fuelled irritation. Paid subscribers warned that such prompts undermine the service’s reliability.

OpenAI described the feature as a step toward integrating apps directly into conversations. The aim is to surface tools when genuinely helpful. Early trials, however, have demonstrated gaps between intended relevance and actual outcomes.

The tests remain limited to selected regions and are not active in parts of Europe. Critics argue intrusive prompts risk pushing users to competitors. OpenAI said refinements will continue to ensure suggestions feel helpful, not promotional.


Irish regulator opens investigations into TikTok and LinkedIn

Regulators in Ireland have opened investigations into TikTok and LinkedIn under the EU Digital Services Act.

Coimisiún na Meán’s Investigations Team believes there may be shortcomings in how both platforms handle reports of suspected illegal material. Concerns emerged during a review of Article 16 compliance that began last year and focused on the availability of reporting tools.

The review highlighted the potential for interface designs that could confuse users, particularly when choosing between reporting illegal content and content that merely violates platform rules.

The investigation will examine whether reporting tools are easy to access, user-friendly and capable of supporting anonymous reporting of suspected child sexual abuse material, as required under Article 16(2)(c).

It will also assess whether platform design may discourage users from reporting material as illegal under Article 25.

Coimisiún na Meán stated that several other providers made changes to their reporting systems following regulatory engagement. Those changes are being reviewed for effectiveness.

The regulator emphasised that platforms must avoid practices that could mislead users and must provide reliable reporting mechanisms instead of diverting people toward less protective options.

These investigations will proceed under the Broadcasting Act of Ireland. If either platform is found to be in breach of the DSA, the regulator can impose administrative penalties that may reach six percent of global turnover.

Coimisiún na Meán noted that cooperation remains essential and that further action may be necessary if additional concerns about DSA compliance arise.


OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

The initiative aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals that can offer clear outputs, such as datasets, evaluation methods, or practical insights, that improve safety and guidance.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.
