European Commission urges fast rollout of EU age verification app

The European Commission has adopted a recommendation urging member states to accelerate the rollout of the EU age verification app and make it available by the end of the year. The recommendation says the app can be deployed either as a standalone solution or integrated into a European Digital Identity Wallet.

According to the Commission, the app is intended to let users prove they meet a required age threshold without disclosing their exact age, identity, or other personal details. The Commission has also published a blueprint for the system, leaving it to member states to customise and produce the app for their citizens.
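The privacy goal here is that a verifier learns only a yes/no answer to "is this user over the threshold?", never the birthdate or identity behind it. Real deployments would rely on signed attestations or zero-knowledge proofs from the wallet; the sketch below only illustrates the data-minimisation idea, and the issuer key, token format, and function names are illustrative assumptions.

```python
# Minimal sketch of the privacy goal behind EU-style age verification:
# the verifier learns only "over 18: yes/no", never the birthdate itself.
# Real systems use signed attestations or zero-knowledge proofs; the
# shared issuer key and message format here are illustrative assumptions.
import hmac, hashlib
from datetime import date

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical trusted issuer key

def issue_age_token(birthdate: date, threshold: int, today: date) -> dict:
    """Issuer checks the birthdate once, then signs only the boolean result."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    claim = f"over_{threshold}={age >= threshold}"
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}  # no birthdate leaves the issuer

def verify_age_token(token: dict) -> bool:
    """Verifier sees only the claim string, not age or identity."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["sig"])
            and token["claim"].endswith("=True"))

token = issue_age_token(date(2000, 6, 1), 18, date(2026, 1, 15))
print(token["claim"])           # over_18=True
print(verify_age_token(token))  # True
```

Note that a symmetric key is used here purely for brevity; a real scheme would use issuer signatures the verifier can check without being able to forge.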

The recommendation sets out actions for member states to support rapid availability and interoperability, including implementation plans and coordination to ensure the swift rollout of the solution across the EU.

The measure forms part of the EU’s wider approach to protecting minors online under the Digital Services Act, which requires online platforms to ensure a high level of privacy, safety, and security for minors.

Executive Vice-President Henna Virkkunen said: ‘Effective and privacy-preserving age verification is the next piece of the puzzle that we are getting closer to completing, as we work towards an online space where our children are safe and empowered to use [it] positively and responsibly without restricting the rights of adults.’

Why does it matter?

The move takes age verification in the EU from a general policy objective to a more concrete implementation phase. Rather than leaving platforms and member states to develop separate solutions, the Commission is trying to steer the bloc towards a common privacy-preserving model that can work across borders.

That matters for both child protection and regulatory coherence, because if countries adopt incompatible systems or move at very different speeds, enforcement under the Digital Services Act could become uneven in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New federated learning approach highlights shift towards decentralised and privacy-preserving AI

Researchers at MIT have developed a new method that significantly improves privacy-preserving AI training on everyday devices such as smartphones, sensors, and smartwatches.

The approach strengthens federated learning systems, where data remains on devices while models are trained collaboratively, supporting sensitive applications such as healthcare and finance.

The new framework, called FTTE (Federated Tiny Training Engine), addresses long-standing issues in federated learning networks with uneven device capabilities. Traditional systems struggle with delays from limited memory, weak connectivity and slow update cycles, reducing network efficiency and performance.

FTTE improves the process by sending smaller model segments to devices, introducing asynchronous updates and weighting contributions based on freshness. These changes reduce memory load and communication demands while maintaining stable training across heterogeneous devices.
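The freshness-weighting idea described above can be sketched as an aggregation rule in which stale updates get exponentially less influence on the global model. The decay constant and weighting formula below are illustrative assumptions, not FTTE's published algorithm.

```python
# Sketch of freshness-weighted asynchronous aggregation: clients send
# partial updates whenever ready, and the server discounts each one by
# how many global rounds have passed since the client's model snapshot.
# The exponential decay rule is an illustrative assumption.
import numpy as np

def staleness_weight(current_round: int, client_round: int,
                     decay: float = 0.5) -> float:
    """Older (staler) updates get exponentially smaller influence."""
    return decay ** (current_round - client_round)

def aggregate(global_model: np.ndarray, updates: list) -> np.ndarray:
    """Each update is (client_round, delta), delta being a model-segment diff."""
    current_round = max(r for r, _ in updates)
    weights = np.array([staleness_weight(current_round, r) for r, _ in updates])
    weights /= weights.sum()  # normalise so contributions blend, not stack
    blended = sum(w * d for w, (_, d) in zip(weights, updates))
    return global_model + blended

model = np.zeros(4)
updates = [
    (10, np.array([1.0, 0.0, 1.0, 0.0])),  # fresh update, full weight
    (8,  np.array([0.0, 4.0, 0.0, 4.0])),  # two rounds stale, weight 0.25x
]
model = aggregate(model, updates)
print(model)  # [0.8 0.8 0.8 0.8]
```

Because the server never waits for slow devices, a phone with weak connectivity can still contribute; its update simply counts for less the longer it is delayed.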

Testing across simulated and real device networks showed training speeds improved by around 81 percent, with major reductions in memory and data transfer requirements.

Researchers also highlighted the potential to expand AI access in regions with lower-end hardware, while future work will focus on further personalising models for individual devices.

Why does it matter? 

Decentralised AI training marks a shift away from dependence on centralised data centres towards distributed intelligence embedded in everyday devices.

That changes the architecture of AI itself, allowing sensitive data to remain local and reducing privacy risks. At the same time, computation is spread across billions of low-power devices rather than concentrated in a few powerful systems.

The researchers note that such approaches may enable AI training on devices with limited memory and connectivity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital identity ecosystems expand as verifiable credentials roll out across India and other regions

Digital identity ecosystems are expanding as Google Wallet introduces new capabilities to simplify secure identity verification across multiple regions.


The latest update enables users in India to store Aadhaar-based verifiable credentials directly on their devices.

The integration allows individuals to confirm identity or age in everyday scenarios while maintaining strong privacy protections. Features such as selective disclosure ensure that only necessary information is shared, reinforcing a privacy-first approach to digital identity management.
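Selective disclosure works by letting the credential commit to many claims while the holder reveals only the one a given transaction needs. A minimal sketch of the salted-hash approach (the general idea behind formats such as SD-JWT) is below; the field names and the omitted issuer-signature step are simplified assumptions.

```python
# Sketch of selective disclosure: the credential commits to many claims,
# but the holder reveals only one (e.g. "over_18"), and the verifier
# checks it against the issuer's commitments without seeing the rest.
# Field names and the issuer-signature step are simplified assumptions.
import hashlib, secrets

def commit(claims: dict) -> tuple:
    """Issuer: salt and hash each claim; the digest list is what gets signed."""
    salted = {k: (secrets.token_hex(8), v) for k, v in claims.items()}
    digests = sorted(
        hashlib.sha256(f"{salt}:{k}:{v}".encode()).hexdigest()
        for k, (salt, v) in salted.items())
    return salted, digests  # holder keeps `salted`; verifier trusts `digests`

def disclose(salted: dict, key: str) -> dict:
    """Holder: reveal only one claim, plus the salt that proves it."""
    salt, value = salted[key]
    return {"key": key, "value": value, "salt": salt}

def verify(disclosure: dict, digests: list) -> bool:
    """Verifier: recompute the digest and check it is among the signed ones."""
    d = hashlib.sha256(
        f"{disclosure['salt']}:{disclosure['key']}:{disclosure['value']}".encode()
    ).hexdigest()
    return d in digests

salted, digests = commit(
    {"name": "A. Sample", "id_last4": "1234", "over_18": True})
proof = disclose(salted, "over_18")
print(verify(proof, digests), proof["value"])  # True True
```

The salts prevent a verifier from guessing undisclosed claims by brute-forcing hashes, which is what makes the hidden attributes stay hidden.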

At the same time, digital ID passes based on passport data are being rolled out in Singapore and Brazil. These credentials provide a streamlined way to authenticate identity across both online services and physical environments.

Why does it matter?

Such an expansion by Google reflects a broader push towards interoperable and secure digital identity systems. By aligning with global standards and embedding privacy into design, the initiative aims to support more seamless and trusted digital interactions worldwide.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces EU Digital Services Act breach finding over under-13 access

The European Commission has preliminarily found Meta’s Instagram and Facebook in breach of the Digital Services Act over failures to adequately prevent children under 13 from accessing the platforms. The finding remains provisional and does not prejudge the outcome of the investigation.

According to the Commission, Meta’s existing measures do not effectively enforce its own minimum age requirement of 13. The preliminary findings say children below that age can still create accounts by entering false birth dates, while the company’s reporting tool for underage users is difficult to use and often does not result in effective follow-up.

The Commission also considers Meta’s risk assessment to be incomplete and arbitrary. It says the company failed to properly identify and assess the risks posed to children under 13 who access Instagram and Facebook, despite evidence from across the EU suggesting that a significant share of children under 13 use one or both services.

At this stage, the Commission says Meta must revise its risk assessment methodology and strengthen its measures to prevent, detect, and remove children under 13 from the platforms. It also says the company must better counter and mitigate the risks those children may face and ensure a high level of privacy, safety, and security for minors.

The preliminary findings form part of formal proceedings opened against Meta in May 2024 under the DSA. The Commission says the investigation has included analysis of Meta’s risk assessment reports, internal data and documents, and the company’s responses to requests for information, with support from civil society organisations and child protection experts across the EU.

If the Commission’s preliminary view is confirmed, it may adopt a non-compliance decision and impose a fine of up to 6% of the provider’s total worldwide annual turnover, as well as periodic penalty payments. Meta now has the opportunity to reply before any final decision is taken.

Henna Virkkunen, Executive Vice President for Tech Sovereignty, Security and Democracy, said Meta’s own terms and conditions already state that its services are not intended for children under 13, but that the company appears to be doing too little in practice to prevent them from gaining access.

Why does it matter?

The case matters because it goes to the heart of how the Digital Services Act is expected to work in practice: not only by requiring large platforms to set rules for child safety, but by obliging them to enforce those rules effectively. If the Commission’s preliminary view is confirmed, the Meta case could become an important benchmark for how the EU treats age assurance, risk assessments, and platform accountability in cases involving minors, with wider implications for other services that rely on self-declared age checks and weak reporting tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN experts warn of growing risks from digital surveillance and AI misuse

UN human rights experts have raised concerns about the global expansion of digital surveillance technologies and their impact on fundamental freedoms, warning that current practices risk undermining democratic participation and civic space.

In a joint statement, the experts said that surveillance tools are increasingly used in ways that may be incompatible with international human rights standards. They noted that such technologies are often deployed against civil society, journalists, political opposition, and minority groups, contributing to what they described as a ‘chilling effect’ on freedom of expression and dissent.

The experts highlighted the growing use of advanced technologies, including AI, in areas such as law enforcement, counter-terrorism, and border management. They said that, without adequate legal safeguards, these tools can enable large-scale monitoring, predictive profiling, and the amplification of bias, potentially leading to disproportionate targeting of individuals and groups.

According to the statement, digital surveillance systems are part of broader ecosystems that involve collaboration among governments, private companies, and data intermediaries. These interconnected systems can expand state surveillance capabilities and increase the complexity of assessing their impact on human rights.

The experts also pointed to the role of legal frameworks, noting that broadly defined laws on national security, extremism, and cybercrime may contribute to the misuse of surveillance technologies. Such measures, they said, can affect the work of civil society organisations and other actors operating in the public sphere.

To address these challenges, the experts called for stronger safeguards, including clearer limits on surveillance practices, risk-based regulation of AI systems, and improved oversight mechanisms. They emphasised the importance of human rights impact assessments throughout the lifecycle of digital technologies, as well as the need for accountability and access to remedies in cases of harm.

Why does it matter?

The statement also highlighted the importance of data protection, system testing, and validation to reduce risks associated with digital surveillance tools. It called on governments to align national legislation with international human rights standards and ensure independent oversight of surveillance activities.

The experts further suggested that international cooperation may be needed to address cross-border implications, including the potential development of a binding international framework governing digital surveillance technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia aligns privacy and online safety regulation

Australia’s eSafety Commissioner and the Office of the Australian Information Commissioner have signed a new agreement to strengthen cooperation on online privacy and safety regulation.

The Memorandum of Understanding formalises coordination between the two bodies as digital risks increasingly overlap across their respective mandates.

The agreement focuses on joint oversight of age-assurance technologies and compliance with social media minimum-age requirements. Both regulators say they want to ensure that systems designed to protect children from harmful or inappropriate content also respect privacy obligations under Australian law.

Officials also highlighted the growing complexity of online risks, particularly with the rapid development of AI and other emerging technologies. The framework is intended to support more consistent regulatory responses by improving communication, information sharing, and enforcement coordination.

Why does it matter?

Officials from both agencies said closer collaboration will help address digital harms more effectively while ensuring privacy protections remain central to online safety measures. The initiative reflects a broader shift towards more integrated regulation of technology-driven risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Meta expands parental oversight with new AI conversation insights for teens

Meta has introduced new supervision features that allow parents to see the topics their teenagers discuss with its AI assistant across Facebook, Messenger, and Instagram.

The update provides visibility into activity over the previous seven days, grouping interactions into areas such as education, health and well-being, lifestyle, travel, and entertainment. Parents can review these themes through a new Insights tab, although they will not see the exact prompts their teen sent or Meta AI’s responses.

The feature forms part of Meta’s broader effort to strengthen safeguards for younger users as AI becomes more embedded in everyday digital experiences. For more sensitive issues, including suicide and self-harm, Meta says it is developing additional alerts to notify parents when teens try to engage in those types of conversations with its AI assistant.

Meta has also partnered with external experts, including the Cyberbullying Research Centre, to develop structured conversation prompts to help families talk about AI use. The company says these tools are intended to support informed, non-judgemental dialogue rather than passive monitoring.

Alongside these updates, Meta has created an AI Wellbeing Expert Council to provide input on the development of age-appropriate AI systems for teens. The move reflects a wider shift towards embedding safety, transparency, and parental involvement into AI-driven platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI introduces ChatGPT for Clinicians and HealthBench Professional

OpenAI has launched ChatGPT for Clinicians, a version of ChatGPT designed to support clinical tasks such as documentation, medical research, evidence review, and care consults. The company says the product is now available free to verified physicians, nurse practitioners, physician associates, and pharmacists in the United States.

According to OpenAI, ChatGPT for Clinicians includes trusted clinical search with cited answers, reusable skills for repeatable workflows, deep research across medical literature, optional HIPAA support through a Business Associate Agreement for eligible accounts, and the ability for eligible evidence review to count towards continuing medical education credits. OpenAI also says conversations in the product are not used to train models.

The launch builds on OpenAI’s earlier ChatGPT for Healthcare offering for organisations. OpenAI says clinicians across US health systems are already using that product for administrative work such as medical research and documentation, and describes the free clinician version as the next step in expanding access.

Alongside the launch, OpenAI has introduced HealthBench Professional, which it describes as an open benchmark for real-world clinician chat tasks across care consultation, writing, documentation, and medical research. The company says the benchmark is based on physician-authored conversations, multi-stage physician adjudication, and filtered examples selected for quality, representativeness, and difficulty.

OpenAI also says physician advisers reviewed more than 700,000 model responses in health scenarios, and that before release, clinicians tested 6,924 conversations across clinical care, documentation, and research.

According to the company, physicians rated 99.6% of those responses as safe and accurate, while GPT-5.4 in the ChatGPT for Clinicians workspace outperformed base GPT-5.4, other OpenAI and external models, and human physicians on HealthBench Professional. OpenAI adds that the tool is designed to support clinicians with information rather than replace their judgement or expertise.

The company says the free version is currently limited to verified US clinicians, with plans to expand access to additional countries and groups over time. OpenAI also says it will begin by working with the Better Evidence Network to pilot access for verified clinicians outside the United States, subject to local regulations, and has released a Health Blueprint with recommendations for responsible AI integration in US healthcare.

Why does it matter?

The launch of ChatGPT for Clinicians reflects a shift from general-purpose AI use in healthcare towards clinician-specific products tied to workflow, benchmarking, and compliance. It also shows that competition in medical AI is increasingly centred not only on model capability, but on safety evaluation, evidence retrieval, privacy controls, and integration into real clinical practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Italy issues guidelines requiring consent for email tracking pixels

Italy’s Data Protection Authority has issued new guidelines on tracking pixels used in email communications, requiring organisations to inform users and obtain consent before deploying the hidden monitoring tools.

Published on 17 April 2026, the Garante per la Protezione dei Dati Personali guidelines address the invasive nature of tracking pixels, which silently monitor whether recipients open and read emails without their knowledge.

Tracking pixels are tiny, often invisible images embedded in emails that automatically send information back to the sender when recipients open the message. The pixels can collect data, including device type, IP address, and exact time of access.
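The mechanism can be sketched in a few lines: the sender embeds a 1x1 image whose URL carries a per-recipient token, so the moment a mail client renders the message it makes an HTTP request the sender's server can log, together with IP address, user agent, and timestamp. The domain and parameter names below are illustrative assumptions.

```python
# Sketch of how an email tracking pixel works: a 1x1 image whose URL
# carries a per-recipient token, so the sender's server logs an "open"
# event the moment the mail client fetches it. The tracking domain and
# query-parameter names are illustrative.
from urllib.parse import urlencode, urlparse, parse_qs

def pixel_html(recipient_id: str, campaign: str) -> str:
    """What the sender embeds: invisible to the reader, but a live request."""
    query = urlencode({"r": recipient_id, "c": campaign})
    return (f'<img src="https://track.example.com/open.gif?{query}" '
            f'width="1" height="1" alt="">')

def log_open(request_url: str, client_ip: str, user_agent: str) -> dict:
    """What the tracking server learns when the image is fetched."""
    params = parse_qs(urlparse(request_url).query)
    return {"recipient": params["r"][0], "campaign": params["c"][0],
            "ip": client_ip, "ua": user_agent}

html = pixel_html("user-42", "newsletter-jan")
event = log_open(
    "https://track.example.com/open.gif?r=user-42&c=newsletter-jan",
    "203.0.113.7", "Mozilla/5.0")
print(event["recipient"])  # user-42
```

This is why the Garante treats the technique as invasive: the recipient takes no visible action, yet opening the message transmits device and timing data to the sender.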

The Authority identified limited exceptions to the consent requirement, including statistical measurements of email open rates, security protocols during user authentication, and mandatory institutional communications such as fraud alerts or contractual notifications.

The guidelines allow organisations six months from publication to achieve compliance with the new standards. Users in Italy must be able to revoke consent easily and granularly, meaning they can withdraw permission for tracking whilst continuing to receive emails.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Philippines and Bermuda sign data protection agreement to strengthen cross-border cooperation

The National Privacy Commission of the Republic of the Philippines has signed a memorandum of understanding with the Office of the Privacy Commissioner of the Islands of Bermuda to strengthen cooperation on personal data protection.

The agreement focuses on cross-border enforcement and regulatory collaboration, enabling the exchange of information on investigations and mutual assistance in addressing potential violations of data privacy laws. It also supports coordination in cross-border data breach cases.

The agreement outlines cooperation on developing compatible data protection mechanisms, including certification frameworks and trusted data flow systems. It also promotes training, knowledge sharing and collaboration on emerging privacy issues.

The authority states that the partnership between the Philippines and Bermuda aims to strengthen accountability and global data protection standards, and that the agreement was signed during an international privacy summit in Washington.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!