Data access emerges as cornerstone of EU AI plan

The European Commission has unveiled its AI Continent Action Plan, setting out a strategy to strengthen Europe’s position in the global AI landscape. The plan responds to rapid international advances and seeks to accelerate AI adoption across European industry and public services, where progress remains uneven.

Rather than introducing a new regulatory framework, the plan brings together targeted investments and policy measures around five priorities: expanding AI infrastructure, improving access to data, accelerating adoption in strategic sectors, strengthening skills, and supporting the implementation of existing rules.

Access to high-quality and interoperable data is presented as one of the key conditions for scaling AI in Europe. The plan links this objective to the EU’s wider data strategy and to efforts to make cross-border data use more practical, enabling organisations to train and deploy AI systems more effectively while operating within Europe’s transparency and accountability standards.

The broader ambition is to move Europe from fragmented experimentation towards more scalable and trustworthy AI deployment. In that sense, the Action Plan treats data, infrastructure, skills, and implementation capacity as parts of the same competitiveness agenda rather than separate policy tracks.

Why does it matter?

Europe’s AI challenge is no longer only about regulation, but about whether companies and public institutions can actually build and use AI at scale. If access to data remains fragmented across borders, sectors, and technical systems, the EU risks falling further behind competitors that already combine compute, capital, and data more effectively. By putting data access alongside infrastructure and skills, the Commission is signalling that AI competitiveness will depend as much on operational capacity as on rules or research strength.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Agentic AI risks outlined in joint cyber agency guidance

Six cybersecurity agencies have jointly published guidance urging organisations to adopt agentic AI services cautiously. The document warns that greater autonomy can increase cyber risk, particularly as agentic AI is introduced into critical infrastructure, defence, and other mission-critical environments.

The authors say organisations should use agentic AI primarily for low-risk and non-sensitive tasks and should not grant it broad or unrestricted access to sensitive data or critical systems. The guidance also recommends incremental deployment rather than large-scale implementation from the outset.

The document was co-authored by agencies from Australia, the United States, Canada, New Zealand, and the United Kingdom: the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency and National Security Agency, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre, and the UK’s National Cyber Security Centre.

It defines agentic AI as systems composed of one or more agents that rely on AI models, such as large language models, to interpret context, make decisions, and take actions, often without continuous human intervention. The guidance says these systems often combine an LLM-based agent with tools, external data, memory, and planning functions, which expands both capability and attack surface.

The agencies say agentic AI inherits many of the vulnerabilities already associated with large language models while introducing greater complexity and new systemic risks. The document identifies five broad categories of concern: privilege risks, design and configuration risks, behaviour risks, structural risks, and accountability risks.

It warns that over-privileged agents, insecure third-party tools, goal misalignment, emergent or deceptive behaviour, and opaque decision-making chains can all increase the likelihood and impact of compromise. To reduce those risks, the guidance recommends secure design, strong identity management, defence-in-depth, comprehensive testing, threat modelling, progressive deployment, isolation, continuous monitoring, and strict privilege controls.

The agencies also stress that human approval should remain in place for high-impact actions and that agentic AI security should be treated as part of broader cybersecurity governance rather than as a separate discipline. The document concludes by calling for stronger research, collaboration, and agent-specific evaluations as the technology matures.
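
As a rough illustration of what those controls can look like in code, here is a minimal sketch of least-privilege tool gating with a human-approval step for high-impact actions. The tool names, the risk flag, and the approval mechanism are hypothetical illustrations, not drawn from the joint guidance itself.

```python
# Minimal sketch of least-privilege tool gating for an agentic AI system.
# Tool names, the high_impact flag, and the approval step are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[..., str]
    high_impact: bool  # e.g. touches sensitive data or critical systems

# The agent can only invoke tools from an explicit allowlist.
ALLOWLIST = {
    "search_docs": Tool("search_docs", lambda q: f"results for {q}", False),
    "update_config": Tool("update_config", lambda k, v: f"set {k}={v}", True),
}

def human_approves(tool: Tool, args: tuple) -> bool:
    """Stand-in for an out-of-band approval step (ticket, console prompt, ...)."""
    return input(f"Approve {tool.name}{args}? [y/N] ").strip().lower() == "y"

def gated_call(tool_name: str, *args) -> str:
    tool = ALLOWLIST.get(tool_name)
    if tool is None:
        raise PermissionError(f"'{tool_name}' is not in the agent's allowlist")
    if tool.high_impact and not human_approves(tool, args):
        raise PermissionError(f"human approval denied for '{tool_name}'")
    return tool.run(*args)  # an audit log entry would also be written here
```

The design point mirrors the guidance: the agent never holds broad credentials, and autonomy stops at a human checkpoint whenever an action could have a high impact.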

Why does it matter?

The guidance matters because it draws a clear line between ordinary AI adoption and agentic systems that can act with far more autonomy inside real operational environments. Once AI tools move from assisting users to making decisions, calling tools, and interacting with sensitive systems, the security challenge shifts from model safety alone to full organisational risk management. That is why the document treats agentic AI not as a niche technical issue, but as a governance and cyber resilience problem that organisations need to control before deploying at scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Commission urges fast rollout of EU age verification app

The European Commission has adopted a recommendation urging member states to accelerate the rollout of the EU age verification app and make it available by the end of the year. The recommendation says the app can be deployed either as a standalone solution or integrated into a European Digital Identity Wallet.

According to the Commission, the app is intended to let users prove they meet a required age threshold without disclosing their exact age, identity, or other personal details. The Commission has also published a blueprint for the system, leaving it to member states to customise and produce the app for their citizens.
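
The privacy pattern behind this is a signed attestation of a single claim. The toy sketch below assumes a shared HMAC key purely for brevity (real deployments would use asymmetric signatures or zero-knowledge proofs) and shows a verifier learning only 'over 18', never a birth date or identity.

```python
# Toy sketch of the privacy pattern behind privacy-preserving age checks:
# the verifier learns only that a trusted issuer attested "over 18".
# A shared HMAC key stands in for real issuer signatures; actual systems
# use asymmetric cryptography or zero-knowledge proofs.
import hmac, hashlib, json

ISSUER_KEY = b"demo-issuer-key"  # placeholder for an issuer's signing key

def issue_attestation(over_18: bool) -> dict:
    """Issuer-side: sign a single boolean claim, nothing else."""
    claim = json.dumps({"over_18": over_18}).encode()
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def verify_attestation(att: dict) -> bool:
    """Verifier-side: check the signature and the claim, learn no more."""
    expected = hmac.new(ISSUER_KEY, att["claim"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"]) and json.loads(att["claim"])["over_18"]

print(verify_attestation(issue_attestation(True)))  # True
```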

The recommendation sets out actions for member states to support rapid availability and interoperability, including implementation plans and coordination to ensure the swift rollout of the solution across the EU.

The measure forms part of the EU’s wider approach to protecting minors online under the Digital Services Act, which requires online platforms to ensure a high level of privacy, safety, and security for minors.

Executive Vice-President Henna Virkkunen said: ‘Effective and privacy-preserving age verification is the next piece of the puzzle that we are getting closer to completing, as we work towards an online space where our children are safe and empowered to use it positively and responsibly without restricting the rights of adults.’

Why does it matter?

The move takes age verification in the EU from a general policy objective to a more concrete implementation phase. Rather than leaving platforms and member states to develop separate solutions, the Commission is trying to steer the bloc towards a common privacy-preserving model that can work across borders.

That matters for both child protection and regulatory coherence, because if countries adopt incompatible systems or move at very different speeds, enforcement under the Digital Services Act could become uneven in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New federated learning approach highlights shift towards decentralised and privacy-preserving AI

Researchers at MIT have developed a new method that significantly improves privacy-preserving AI training on everyday devices such as smartphones, sensors, and smartwatches.

The approach strengthens federated learning systems, where data remains on devices while models are trained collaboratively, supporting sensitive applications such as healthcare and finance.

The new framework, called FTTE (Federated Tiny Training Engine), addresses long-standing issues in federated learning networks with uneven device capabilities. Traditional systems struggle with delays from limited memory, weak connectivity and slow update cycles, reducing network efficiency and performance.

FTTE improves the process by sending smaller model segments to devices, introducing asynchronous updates and weighting contributions based on freshness. These changes reduce memory load and communication demands while maintaining stable training across heterogeneous devices.
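
What freshness-weighted asynchronous aggregation can look like is sketched below; the decay schedule and parameter names are illustrative assumptions, not the MIT implementation.

```python
# Minimal sketch of freshness-weighted asynchronous aggregation, one of
# the ideas the FTTE description points to. The decay schedule and
# parameter names are illustrative assumptions only.
import numpy as np

def staleness_weight(staleness: int, alpha: float = 0.5) -> float:
    """Down-weight updates computed against older model versions."""
    return 1.0 / (1.0 + alpha * staleness)

def apply_update(global_model: np.ndarray, client_update: np.ndarray,
                 client_version: int, server_version: int,
                 lr: float = 1.0) -> np.ndarray:
    """Apply one client's delta as soon as it arrives (no synchronisation barrier)."""
    w = staleness_weight(server_version - client_version)
    return global_model + lr * w * client_update

# Example: a fresh update (staleness 0) counts fully; an update trained
# against a model three versions old is weighted 1 / (1 + 0.5 * 3) = 0.4.
model = np.zeros(4)
model = apply_update(model, np.ones(4), client_version=7, server_version=7)
model = apply_update(model, np.ones(4), client_version=4, server_version=7)
print(model)  # [1.4 1.4 1.4 1.4]
```

Because the server applies each update on arrival rather than waiting for every device, slow or poorly connected clients no longer stall the whole round, which is the bottleneck the paragraph above describes.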

Testing across simulated and real device networks showed training speeds improved by around 81 percent, with major reductions in memory and data transfer requirements.

Researchers also highlighted the potential to expand AI access in regions with lower-end hardware, while future work will focus on further personalising models for individual devices.

Why does it matter? 

Decentralised AI training marks a shift away from dependence on centralised data centres towards distributed intelligence embedded in everyday devices.

That changes the architecture of AI itself, allowing sensitive data to remain local and reducing privacy risks. At the same time, computation is spread across billions of low-power devices rather than concentrated in a few powerful systems.

The researchers note that such approaches may enable AI training on devices with limited memory and connectivity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital identity ecosystems expand as verifiable credentials roll out across India and other regions

Digital identity ecosystems are expanding as Google Wallet introduces new capabilities to simplify secure identity verification across multiple regions.

The latest update enables users in India to store Aadhaar-based verifiable credentials directly on their devices.

The integration allows individuals to confirm identity or age in everyday scenarios while maintaining strong privacy protections. Features such as selective disclosure ensure that only necessary information is shared, reinforcing a privacy-first approach to digital identity management.
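
In code, selective disclosure amounts to presenting a chosen subset of signed claims. The sketch below uses invented field names and omits the cryptographic binding; real wallets tie each disclosed claim to the issuer's signature, for example via SD-JWT-style salted hashes, rather than trusting the holder.

```python
# Sketch of selective disclosure: the wallet holds a full credential but
# reveals only the claims a verifier asks for. Field names are invented,
# and the issuer-signature binding used by real wallets is omitted.
credential = {
    "name": "A. Example",
    "date_of_birth": "1990-01-01",
    "over_18": True,
    "document_number": "X1234567",
}

def present(credential: dict, requested: set[str]) -> dict:
    """Return only the requested claims; everything else stays on-device."""
    return {k: v for k, v in credential.items() if k in requested}

# An age check sees a single boolean, not the birth date or identity.
print(present(credential, {"over_18"}))  # {'over_18': True}
```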

At the same time, digital ID passes based on passport data are being rolled out in Singapore and Brazil. These credentials provide a streamlined way to authenticate identity across both online services and physical environments.

Why does it matter?

Such an expansion by Google reflects a broader push towards interoperable and secure digital identity systems. By aligning with global standards and embedding privacy into design, the initiative aims to support more seamless and trusted digital interactions worldwide.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces EU Digital Services Act breach finding over under-13 access

The European Commission has preliminarily found Meta’s Instagram and Facebook in breach of the Digital Services Act over failures to adequately prevent children under 13 from accessing the platforms. The finding remains provisional and does not prejudge the outcome of the investigation.

According to the Commission, Meta’s existing measures do not effectively enforce its own minimum age requirement of 13. The preliminary findings say children below that age can still create accounts by entering false birth dates, while the company’s reporting tool for underage users is difficult to use and often does not result in effective follow-up.

The Commission also considers Meta’s risk assessment to be incomplete and arbitrary. It says the company failed to properly identify and assess the risks posed to children under 13 who access Instagram and Facebook, despite evidence from across the EU suggesting that a significant share of children under 13 use one or both services.

At this stage, the Commission says Meta must revise its risk assessment methodology and strengthen its measures to prevent, detect, and remove children under 13 from the platforms. It also says the company must better counter and mitigate the risks those children may face and ensure a high level of privacy, safety, and security for minors.

The preliminary findings form part of formal proceedings opened against Meta in May 2024 under the DSA. The Commission says the investigation has included analysis of Meta’s risk assessment reports, internal data and documents, and the company’s responses to requests for information, with support from civil society organisations and child protection experts across the EU.

If the Commission’s preliminary view is confirmed, it may adopt a non-compliance decision and impose a fine of up to 6% of the provider’s total worldwide annual turnover, as well as periodic penalty payments. Meta now has the opportunity to reply before any final decision is taken.

Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, said Meta’s own terms and conditions already state that its services are not intended for children under 13, but that the company appears to be doing too little in practice to prevent them from gaining access.

Why does it matter?

The case matters because it goes to the heart of how the Digital Services Act is expected to work in practice: not only by requiring large platforms to set rules for child safety, but by obliging them to enforce those rules effectively. If the Commission’s preliminary view is confirmed, the Meta case could become an important benchmark for how the EU treats age assurance, risk assessments, and platform accountability in cases involving minors, with wider implications for other services that rely on self-declared age checks and weak reporting tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN experts warn of growing risks from digital surveillance and AI misuse

UN human rights experts have raised concerns about the global expansion of digital surveillance technologies and their impact on fundamental freedoms, warning that current practices risk undermining democratic participation and civic space.

In a joint statement, the experts said that surveillance tools are increasingly used in ways that may be incompatible with international human rights standards. They noted that such technologies are often deployed against civil society, journalists, political opposition, and minority groups, contributing to what they described as a ‘chilling effect’ on freedom of expression and dissent.

The experts highlighted the growing use of advanced technologies, including AI, in areas such as law enforcement, counter-terrorism, and border management. They said that, without adequate legal safeguards, these tools can enable large-scale monitoring, predictive profiling, and the amplification of bias, potentially leading to disproportionate targeting of individuals and groups.

According to the statement, digital surveillance systems are part of broader ecosystems that involve collaboration among governments, private companies, and data intermediaries. These interconnected systems can expand state surveillance capabilities and increase the complexity of assessing their impact on human rights.

The experts also pointed to the role of legal frameworks, noting that broadly defined laws on national security, extremism, and cybercrime may contribute to the misuse of surveillance technologies. Such measures, they said, can affect the work of civil society organisations and other actors operating in the public sphere.

To address these challenges, the experts called for stronger safeguards, including clearer limits on surveillance practices, risk-based regulation of AI systems, and improved oversight mechanisms. They emphasised the importance of human rights impact assessments throughout the lifecycle of digital technologies, as well as the need for accountability and access to remedies in cases of harm.

Why does it matter?

The statement also highlighted the importance of data protection, system testing, and validation to reduce risks associated with digital surveillance tools. It called on governments to align national legislation with international human rights standards and ensure independent oversight of surveillance activities.

The experts further suggested that international cooperation may be needed to address cross-border implications, including the potential development of a binding international framework governing digital surveillance technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia aligns privacy and online safety regulation

Australia’s eSafety Commissioner and the Office of the Australian Information Commissioner have signed a new agreement to strengthen cooperation on online privacy and safety regulation.

The Memorandum of Understanding formalises coordination between the two bodies as digital risks increasingly overlap across their respective mandates.

The agreement focuses on joint oversight of age-assurance technologies and compliance with social media minimum-age requirements. Both regulators say they want to ensure that systems designed to protect children from harmful or inappropriate content also respect privacy obligations under Australian law.

Officials also highlighted the growing complexity of online risks, particularly with the rapid development of AI and other emerging technologies. The framework is intended to support more consistent regulatory responses by improving communication, information sharing, and enforcement coordination.

Why does it matter?

Officials from both agencies said closer collaboration will help address digital harms more effectively while ensuring privacy protections remain central to online safety measures. The initiative reflects a broader shift towards more integrated regulation of technology-driven risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Meta expands parental oversight with new AI conversation insights for teens

Meta has introduced new supervision features that allow parents to see the topics their teenagers discuss with its AI assistant across Facebook, Messenger, and Instagram.

The update provides visibility into activity over the previous seven days, grouping interactions into areas such as education, health and well-being, lifestyle, travel, and entertainment. Parents can review these themes through a new Insights tab, although they will not see the exact prompts their teen sent or Meta AI’s responses.

The feature forms part of Meta’s broader effort to strengthen safeguards for younger users as AI becomes more embedded in everyday digital experiences. For more sensitive issues, including suicide and self-harm, Meta says it is developing additional alerts to notify parents when teens try to engage in those types of conversations with its AI assistant.

Meta has also partnered with external experts, including the Cyberbullying Research Center, to develop structured conversation prompts to help families talk about AI use. The company says these tools are intended to support informed, non-judgemental dialogue rather than passive monitoring.

Alongside these updates, Meta has created an AI Wellbeing Expert Council to provide input on the development of age-appropriate AI systems for teens. The move reflects a wider shift towards embedding safety, transparency, and parental involvement into AI-driven platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI introduces ChatGPT for Clinicians and HealthBench Professional

OpenAI has launched ChatGPT for Clinicians, a version of ChatGPT designed to support clinical tasks such as documentation, medical research, evidence review, and care consults. The company says the product is now available free to verified physicians, nurse practitioners, physician associates, and pharmacists in the United States.

According to OpenAI, ChatGPT for Clinicians includes trusted clinical search with cited answers, reusable skills for repeatable workflows, deep research across medical literature, optional HIPAA support through a Business Associate Agreement for eligible accounts, and the ability for eligible evidence review to count towards continuing medical education credits. OpenAI also says conversations in the product are not used to train models.

The launch builds on OpenAI’s earlier ChatGPT for Healthcare offering for organisations. OpenAI says clinicians across US health systems are already using that product for administrative work such as medical research and documentation, and describes the free clinician version as the next step in expanding access.

Alongside the launch, OpenAI has introduced HealthBench Professional, which it describes as an open benchmark for real-world clinician chat tasks across care consultation, writing, documentation, and medical research. The company says the benchmark is based on physician-authored conversations, multi-stage physician adjudication, and filtered examples selected for quality, representativeness, and difficulty.
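
For readers unfamiliar with rubric-style benchmarks, the sketch below shows the general scoring pattern such evaluations tend to follow; the criteria, weights, and grading function are invented for illustration and do not come from HealthBench Professional.

```python
# Hypothetical sketch of rubric-based scoring for a clinician-chat benchmark.
# Criteria, weights, and the grader are illustrative only; they are not
# the HealthBench Professional rubric or adjudication pipeline.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    description: str
    points: int  # negative points can penalise unsafe content

def grade(response: str, rubric: list[Criterion],
          met: Callable[[str, "Criterion"], bool]) -> float:
    """Score = earned points / maximum achievable positive points."""
    earned = sum(c.points for c in rubric if met(response, c))
    max_pos = sum(c.points for c in rubric if c.points > 0)
    return max(0.0, earned / max_pos)

rubric = [
    Criterion("Cites relevant evidence for the recommendation", 5),
    Criterion("Flags red-flag symptoms needing urgent referral", 5),
    Criterion("States an incorrect dosage", -10),
]

# In practice 'met' would be a physician adjudicator or a grader model.
toy_met = lambda resp, c: c.description in resp
print(grade("Cites relevant evidence for the recommendation", rubric, toy_met))  # 0.5
```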

OpenAI also says physician advisers reviewed more than 700,000 model responses in health scenarios, and that before release, clinicians tested 6,924 conversations across clinical care, documentation, and research.

According to the company, physicians rated 99.6% of those responses as safe and accurate, while GPT-5.4 in the ChatGPT for Clinicians workspace outperformed base GPT-5.4, other OpenAI and external models, and human physicians on HealthBench Professional. OpenAI adds that the tool is designed to support clinicians with information rather than replace their judgement or expertise.

The company says the free version is currently limited to verified US clinicians, with plans to expand access to additional countries and groups over time. OpenAI also says it will begin by working with the Better Evidence Network to pilot access for verified clinicians outside the United States, subject to local regulations, and has released a Health Blueprint with recommendations for responsible AI integration in US healthcare.

Why does it matter?

The launch of ChatGPT for Clinicians reflects a shift from general-purpose AI use in healthcare towards clinician-specific products tied to workflow, benchmarking, and compliance. It also shows that competition in medical AI is increasingly centred not only on model capability, but on safety evaluation, evidence retrieval, privacy controls, and integration into real clinical practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!