United Nations warns AI-driven advertising could deepen information crisis

The United Nations has warned that the rapid adoption of AI in advertising could deepen a global information integrity crisis. With worldwide advertising spending now exceeding $1 trillion annually, concerns are growing over how automated systems influence what users see, trust, and engage with online.

A briefing by the Department of Global Communications and the Conscious Advertising Network places advertising at the centre of the digital information ecosystem. It argues that advertising helps fund and shape the systems that influence what people see and believe, while AI-driven tools are increasingly being used in media buying and content generation in ways that can amplify disinformation, hate speech, and opaque decision-making.

Transparency gaps in AI advertising systems are also raising concerns over fraud, inefficiency, and declining trust in digital platforms. The report warns that these pressures can weaken independent journalism and reduce advertising effectiveness as confidence in online environments continues to erode.

UN officials and industry representatives are calling for stronger governance, clearer oversight of AI supply chains, and closer cooperation between regulators, advertisers, and civil society. The core message is that without stronger guardrails, AI may accelerate the breakdown of information ecosystem integrity rather than simply improve commercial performance.

Why does it matter?

AI is becoming embedded in systems that shape the online flow of information, which means advertising is no longer only a commercial mechanism but also a force affecting public perception and trust. As automation expands without clear oversight, risks can spread beyond brand safety into disinformation, media sustainability, and democratic discourse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Croatia faces additional European Commission action over Digital Services Act enforcement

The European Commission has stepped up its enforcement action against Croatia by issuing an additional letter of formal notice over shortcomings in the national implementation of the Digital Services Act. The move reflects continued concern about whether Croatia’s enforcement structure is fully equipped to apply the regulation in practice.

Although Croatia adopted implementing legislation in 2025, the Commission considers that important obligations remain unmet. In particular, the national authority designated to oversee the regulation has not been given sufficient powers to enforce the Digital Services Act effectively.

Further concerns relate to the penalty regime. According to the Commission, Croatian law does not yet fully meet EU requirements on maximum penalties, proportionality, and deterrence. It also lacks certain provisions needed to sanction individuals for non-cooperation or for providing inaccurate information.

Croatia has been given two months to respond and address the issues raised. If the response is not satisfactory, the Commission may move to the next stage of the infringement process by issuing a reasoned opinion.

Why does it matter?

The case matters because the Digital Services Act depends not only on EU-level rules, but on whether member states give their national authorities the powers needed to enforce them. Croatia’s case shows that even after implementing legislation is adopted, gaps in enforcement design, penalties, and institutional authority can still weaken how the DSA works in practice across the EU.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF and Immaterialism expand efforts to combat child abuse content online

Domain registrar Immaterialism has joined the Internet Watch Foundation to strengthen efforts against the spread of child sexual abuse material online.

The partnership introduces IWF tools designed to accelerate the identification of harmful domains and enable faster intervention when abusive activity is detected. By adopting Registrar Alerts and related datasets, the registrar aims to improve its ability to respond to criminal content across the domains under its management.

The collaboration reflects a broader shift towards more proactive action at the domain infrastructure layer. By integrating intelligence tools into operational processes, the initiative aims to disrupt both the deliberate distribution of abusive material and the continued availability of domains linked to it.

The IWF says the volume of detected child sexual abuse material continues to rise, reinforcing the need for coordinated responses between safety organisations and private-sector actors. In that sense, the partnership points to closer alignment between domain service providers and specialist online safety groups working to strengthen protections for children online.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU pushes Android changes to open AI competition

The European Commission has outlined draft measures requiring Google to improve interoperability on Android as part of ongoing proceedings under the Digital Markets Act. Regulators are focusing on how third-party AI services can interact with hardware and software features controlled by the Android operating system.

The proposed measures are intended to give competing AI services access to key Android features already used by Google’s own AI services, including Gemini. In practice, that could allow rival services to support actions such as sending messages, sharing content, or completing tasks through user-preferred applications rather than being limited by Google’s default ecosystem.

The Commission’s approach could also make it easier for users to activate alternative AI assistants through customised interactions and device-level features, reducing dependence on default system tools. The broader aim is to give third-party providers a more equal opportunity to innovate and compete in the fast-moving market for AI services on mobile devices.

Feedback on the proposed measures is being gathered as part of the Commission’s specification proceedings under the DMA. The consultation forms part of a wider regulatory effort to enforce fair access to core platform features and strengthen digital competition across European markets, including in the AI sector.

Why does it matter?

The move targets one of the most important control points in the digital economy: the operating system layer. If rival AI services gain access equivalent to that available to Google’s own tools, as the Commission’s stated objective implies, Google’s structural advantage could narrow and power could shift towards a more competitive, multi-provider mobile ecosystem.

Greater interoperability under the Digital Markets Act could reshape how AI reaches users, turning smartphones into more open platforms rather than tightly controlled default environments. At the same time, the case also shows how strongly the EU is trying to apply competition law to the next phase of AI distribution, not only to search, app stores, and browsers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

EU advances GPAI framework with focus on forecasting systemic risks

At the third meeting of the Signatory Taskforce, the European Commission took forward discussions on strengthening oversight of advanced AI systems through the General-Purpose AI Code of Practice, with a particular focus on risk forecasting and harmful manipulation.

The latest GPAI taskforce meeting focused on improving how providers assess and anticipate systemic risks linked to high-impact AI models. A central proposal would require providers to estimate when future systems may exceed the highest systemic risk tier already reached by any of their existing models, using structured forecasting methods.

The Commission is also considering using aggregate forecasts across the industry to provide a broader view of technological trends, including compute capacity, algorithmic efficiency, and data availability. The aim is to improve visibility into how capabilities may evolve across the sector rather than only at the level of individual providers.
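The briefing does not specify an aggregation method, but the idea can be illustrated with a toy sketch (all provider names and figures below are hypothetical): pooling individual providers’ forecasts of when a capability threshold might first be crossed into simple sector-level summary statistics.

```python
from statistics import median

# Hypothetical forecasts: each provider estimates the year in which one of
# its future models could first exceed its current highest systemic risk tier.
provider_forecasts = {
    "provider_a": 2027,
    "provider_b": 2026,
    "provider_c": 2029,
    "provider_d": 2027,
}

years = sorted(provider_forecasts.values())
print("median forecast year:", median(years))          # sector-level central estimate
print("earliest / latest:", years[0], "/", years[-1])  # spread across providers
```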

Attention was also directed towards harmful manipulation, which the Code treats as a recognised systemic risk. Discussions focused on how providers should develop realistic scenarios for testing and evaluating model behaviour, including in deployment settings such as chatbot interfaces, third-party applications, and agentic systems.

The initiative reflects a wider EU regulatory approach centred on transparency, accountability, and proactive governance in AI development. Rather than waiting for harms to materialise, the Code of Practice is being used to push providers to identify risks earlier and to adopt more structured safety planning for general-purpose AI models with systemic risk.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia aligns privacy and online safety regulation

Australia’s eSafety Commissioner and the Office of the Australian Information Commissioner have signed a new agreement to strengthen cooperation on online privacy and safety regulation.

The Memorandum of Understanding formalises coordination between the two bodies as digital risks increasingly overlap across their respective mandates.

The agreement focuses on joint oversight of age-assurance technologies and compliance with social media minimum-age requirements. Both regulators say they want to ensure that systems designed to protect children from harmful or inappropriate content also respect privacy obligations under Australian law.

Officials also highlighted the growing complexity of online risks, particularly with the rapid development of AI and other emerging technologies. The framework is intended to support more consistent regulatory responses by improving communication, information sharing, and enforcement coordination.

Why does it matter?

Officials from both agencies said closer collaboration will help address digital harms more effectively while ensuring privacy protections remain central to online safety measures. The initiative reflects a broader shift towards more integrated regulation of technology-driven risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Meta expands parental oversight with new AI conversation insights for teens

Meta has introduced new supervision features that allow parents to see the topics their teenagers discuss with its AI assistant across Facebook, Messenger, and Instagram.

The update provides visibility into activity over the previous seven days, grouping interactions into areas such as education, health and well-being, lifestyle, travel, and entertainment. Parents can review these themes through a new Insights tab, although they will not see the exact prompts their teen sent or Meta AI’s responses.

The feature forms part of Meta’s broader effort to strengthen safeguards for younger users as AI becomes more embedded in everyday digital experiences. For more sensitive issues, including suicide and self-harm, Meta says it is developing additional alerts to notify parents when teens try to engage in those types of conversations with its AI assistant.

Meta has also partnered with external experts, including the Cyberbullying Research Center, to develop structured conversation prompts that help families talk about AI use. The company says these tools are intended to support informed, non-judgemental dialogue rather than passive monitoring.

Alongside these updates, Meta has created an AI Wellbeing Expert Council to provide input on the development of age-appropriate AI systems for teens. The move reflects a wider shift towards embedding safety, transparency, and parental involvement into AI-driven platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MIT method tackles AI overconfidence problem

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new training approach designed to address a persistent issue in AI systems: excessive confidence in uncertain answers.

The study identifies overconfidence as a by-product of standard reinforcement learning methods, which reward correct outputs without accounting for how those answers are reached.

The proposed method, known as RLCR (Reinforcement Learning with Calibration Rewards), enables models to generate both answers and associated confidence estimates.

By introducing a calibration-based reward mechanism, the system penalises incorrect high-confidence responses and unnecessary uncertainty in correct ones. Across multiple benchmarks, the approach reduced calibration error by up to 90 percent while maintaining or improving accuracy.
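The article does not reproduce the exact reward, but the description suggests a Brier-style formulation. The sketch below is an illustrative guess at that shape, not CSAIL’s actual implementation: correctness earns reward, and the squared gap between the model’s stated confidence and the real outcome is subtracted as a calibration penalty.

```python
def rlcr_style_reward(is_correct: bool, confidence: float) -> float:
    """Reward correctness, then subtract a Brier-style calibration penalty:
    the squared gap between stated confidence and the actual outcome
    (1.0 if the answer was correct, 0.0 if not)."""
    outcome = 1.0 if is_correct else 0.0
    calibration_penalty = (confidence - outcome) ** 2
    return outcome - calibration_penalty

# A confident correct answer scores highest; a confident wrong answer is
# penalised most; honestly reported uncertainty sits in between.
print(rlcr_style_reward(True, 0.95))   #  0.9975
print(rlcr_style_reward(False, 0.95))  # -0.9025
print(rlcr_style_reward(False, 0.30))  # -0.09
```

Under this scoring, a confidently wrong answer is the worst possible outcome, which is precisely the behaviour the method aims to discourage.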

Findings suggest that conventional reinforcement learning frameworks unintentionally encourage models to guess confidently, even in the absence of sufficient evidence.

Researchers argue that this behaviour poses risks in applied settings, particularly in sectors such as healthcare, law, and finance, where users may rely heavily on perceived certainty in AI outputs.

Results also indicate that improved confidence calibration enhances practical performance during inference. Selecting answers based on model-reported confidence improves accuracy, suggesting uncertainty-aware reasoning can deliver measurable benefits in deployment.
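The article does not detail the selection mechanism; one simple way to use model-reported confidence at inference time, assuming each sampled answer carries its own confidence estimate, is confidence-weighted voting across several samples:

```python
from collections import defaultdict

def select_by_confidence(samples: list[tuple[str, float]]) -> str:
    """Pick a final answer from multiple (answer, confidence) samples.
    Repeated answers pool their confidence, so the choice does not
    hinge on any single sample."""
    scores: dict[str, float] = defaultdict(float)
    for answer, confidence in samples:
        scores[answer] += confidence
    return max(scores, key=scores.get)

samples = [("42", 0.9), ("41", 0.4), ("42", 0.7), ("40", 0.2)]
print(select_by_confidence(samples))  # "42"
```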

Why does it matter? 

Improving how AI systems express uncertainty directly affects their reliability in real-world use. Models that distinguish between strong and weak answers reduce the risk of users over-relying on incorrect outputs presented with undue confidence.

Better-calibrated systems also enable more informed decision-making, as confidence signals can be used to filter, rank or combine responses. Overall, uncertainty-aware reasoning strengthens trust, safety and practical performance as AI becomes more widely integrated into critical decision processes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

IWF data shows 63% of global child abuse content hosted in the EU

New data from the Internet Watch Foundation (IWF) points to a stark imbalance in global online child protection, with EU member states hosting the majority of confirmed child sexual abuse material URLs identified by the organisation. In 2025, IWF analysts actioned 310,437 URLs, 63% of which were traced to hosting services in EU member states.

A small cluster of countries, including Bulgaria and the Netherlands, accounted for a large share of that hosting concentration, highlighting structural vulnerabilities in hosting infrastructure and uneven enforcement across jurisdictions. The IWF notes that such concentrations often reflect a combination of high-volume sites, migration between hosting locations, and inconsistent takedown speeds.

These findings come shortly after the EU failed to preserve legal continuity for the temporary framework that had allowed companies to carry out certain voluntary detection measures while negotiations on a permanent child sexual abuse law continued. That lapse has intensified concerns about a widening gap between the scale of online abuse and the legal tools available to detect and disrupt it.

The IWF argues that fragmented regulation and uneven infrastructure responses make it easier for criminal content to persist online. Where abuse material remains concentrated on a few high-volume sites in jurisdictions with slower or less consistent takedown practices, it stays accessible for longer and is more likely to be copied, redistributed, or reposted elsewhere.

Takedown performance, by contrast, varies sharply across jurisdictions. The UK accounted for just 951 actioned URLs in 2025, or 0.30% of the total, a figure the IWF links to a much stronger domestic removal framework and closer operational cooperation.

The broader message of the data is that child sexual abuse material cannot be tackled effectively through fragmented national responses alone. The IWF is using the figures to press for a more coherent international framework for detection, reporting, and removal, warning that without aligned rules and stronger accountability, systemic weaknesses in digital governance will continue to leave serious gaps in child protection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

eSafety Commissioner of Australia issues notices to Roblox, Minecraft, Fortnite and Steam

Australia’s eSafety Commissioner has issued legally enforceable transparency notices to Roblox, Minecraft, Fortnite and Steam over concerns that online games are being used by individuals seeking to groom children and by extremist groups to spread violent propaganda and radicalise young people.

The notices require the platforms to explain how they identify, prevent and respond to harms including grooming, cyberbullying, online hate, sexual extortion and violent extremism. They also ask how systems, staffing and safety-by-design measures align with the Australian Government’s Basic Online Safety Expectations.

eSafety Commissioner Julie Inman Grant said online games and gaming-adjacent services can serve as first points of contact between children and offenders in cases involving serious online harm. She said: ‘What we often see after these offenders make contact with children in online game environments, they then move children to private messaging services.’

Inman Grant also said: ‘Predatory adults know this and target children through grooming or embedding terrorist and violent extremist narratives in gameplay, increasing the risks of contact offending, radicalisation and other off-platform harms.’

eSafety said it publishes reports based on transparency notices to provide the public, including parents, with more information about safety risks and existing mitigations, while also increasing pressure on technology companies to adopt Safety by Design. Online game platforms must also comply with Australia’s Online Safety Codes and Standards, and a breach of a direction to comply with a code or standard can attract penalties of up to A$49.5 million per breach.

Compliance with a transparency notice is mandatory. If companies fail to respond, eSafety has enforcement options, including financial penalties of up to A$825,000 a day.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!