Child safety concerns dominate Europe’s digital agenda

A growing majority of Europeans believe stronger online protections for children and young people should remain a top policy priority, according to new findings from the Special Eurobarometer on the Digital Decade.

The European Commission said 92% of Europeans consider further action to protect children and young people online a top priority, reflecting sustained concern over the impact of digital platforms on younger users.

Mental health risks linked to social media ranked among the biggest concerns, with 93% of respondents calling for stronger protections. Cyberbullying, online harassment, and better age-restriction mechanisms for inappropriate content were also highlighted by 92% of respondents.

Concerns over AI and online manipulation also remain high. The survey found that 39% of respondents cited privacy or data protection as a barrier to using AI, followed by accuracy or incorrect information at 36% and ethical issues or misuse of generative AI tools at 32%.

Around 87% of Europeans agreed that online manipulation, including disinformation, foreign interference, AI-generated content and deepfakes, poses a threat to democratic processes. Another 80% said AI development should be carefully regulated to ensure safety, even if oversight places constraints on developers.

The findings also show continuing concern over online platforms. Europeans reported being personally affected by fake news and disinformation, misuse of personal data and insufficient protections for minors, with concerns over fake news and child protection showing the sharpest increases since 2024.

Why does it matter?

The findings show that public concern over digital technologies in Europe is increasingly centred on safety, rights and accountability, particularly for children and young people. They also suggest that trust in platforms and AI systems will depend not only on innovation and access, but also on visible safeguards against manipulation, harmful content, privacy risks, and weak protections for minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

Council of the EU pushes for human-centred AI in education systems

The Council of the European Union has approved conclusions calling for an ethical, safe and human-centred approach to AI in education, stressing that teachers should remain at the heart of the learning process as AI tools become more widely used across schools and universities.

The Council said the conclusions focus on strengthening digital skills and AI literacy, guaranteeing inclusion and fairness, empowering teachers, and supporting the well-being of both teachers and learners. It also noted that the relationship between AI and teaching is being addressed for the first time in EU education policy.

The EU ministers highlighted both the opportunities and risks associated with AI-driven education systems. The Council said AI could improve accessibility, support disadvantaged learners, enable more individualised teaching and assessment methods, and reduce administrative workloads for educators.

At the same time, the conclusions raise concerns about misinformation, algorithmic bias, over-reliance on technology, reduced teacher autonomy, data protection risks and the widening of digital inequalities across Europe. The Council also warned that AI could affect learners’ concentration and skill acquisition, while raising broader societal and environmental concerns.

The conclusions call on national governments to strengthen teachers’ AI and digital skills through training, while encouraging the development and use of education-specific AI tools that provide clear pedagogical value and align with data protection, accountability and risk-awareness requirements.

The Council also said teachers should have opportunities to contribute to the design and evaluation of AI tools used in education, reflecting a digital humanism approach focused on human agency and democratic values.

Member states are urged to ensure AI deployment does not undermine teachers’ autonomy or sustainable working conditions, and that digital tools remain accessible and suitable for all learners. The European Commission was encouraged to support international cooperation, research, ethical guidance, peer-to-peer exchanges and capacity-building as AI adoption accelerates across European education systems.

Why does it matter?

AI is moving into classrooms not only as a learning tool, but as part of how teaching, assessment, administration and student support are organised. The Council’s conclusions underline that education policy will need to address more than technical adoption, including teacher autonomy, digital inequality, learner well-being, data protection and the risk of over-reliance on automated systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybercrime Atlas launches open-source map of criminal networks

Cybercrime Atlas has launched Cosmos, an open-source platform designed to map global cybercrime networks and strengthen cooperation among defenders, investigators, prosecutors and policymakers.

Hosted by the World Economic Forum’s Centre for Cybersecurity, Cybercrime Atlas aims to build a shared understanding of cybercriminal ecosystems at a time when ransomware, fraud and illicit digital services are becoming increasingly organised and industrialised.

The initiative responds to a long-standing problem in cybercrime disruption: fragmented terminology, isolated investigations and inconsistent reporting structures. Cosmos aims to standardise definitions, organise threat intelligence into a shared structure and help different actors coordinate more effectively across borders.

The first version of the platform contains nine core categories, 229 identified cybercrime-related elements and 849 mapped connections showing how criminal networks, tools and services interact. The dataset is designed to expand as the wider community contributes new intelligence.

Why does it matter?

Cybercrime increasingly functions as an interconnected ecosystem, with specialised groups, tools, infrastructure providers and illicit services supporting one another across borders. A shared map of those relationships could help shift cyber defence from isolated incident response towards more coordinated disruption of criminal networks, while giving investigators and policymakers a clearer view of how digital crime is organised.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!

Google warns adversaries are industrialising AI-enabled cyberattacks

Google Threat Intelligence Group says cyber adversaries are moving from early AI experimentation towards the industrial-scale use of generative models across malicious workflows.

In a new report, GTIG says it has identified, for the first time, a threat actor using a zero-day exploit that it believes was developed with AI. The criminal actor had planned to use the exploit in a mass exploitation campaign involving a two-factor authentication bypass, but Google said its proactive discovery may have prevented the campaign from going ahead.

The findings describe several uses of AI in cyber operations. Threat actors linked to the People’s Republic of China and the Democratic People’s Republic of Korea have used AI for vulnerability research, including persona-based prompting, specialised vulnerability datasets and automated analysis of vulnerabilities and proof-of-concept exploits.

Other actors have used AI-assisted coding to support defence evasion, including the development of obfuscation tools, relay infrastructure and malware containing AI-generated decoy logic. Google said these uses show how generative models can accelerate development cycles and make malicious tools harder to detect.

Google also highlights PROMPTSPY, an Android backdoor that uses Gemini API capabilities to interpret device interfaces, generate structured commands, simulate gestures and support more autonomous malware behaviour. The company said it had disabled assets linked to the activity and that no apps containing PROMPTSPY were found on Google Play at the time of detection.

AI systems are also becoming direct targets. Google says attackers are compromising AI software dependencies, open-source agent skills, API connectors and AI gateway tools such as LiteLLM. The report warns that such supply-chain attacks could expose API secrets, enable ransomware activity or allow intruders to use internal AI systems for reconnaissance, data theft and deeper network access.

Why does it matter?

Google’s findings suggest that AI-enabled cyber activity is moving beyond basic phishing support or faster research. Generative models are now being used in vulnerability discovery, exploit development, malware obfuscation, autonomous device interaction, information operations and attacks on AI infrastructure itself. That could make some attacks faster, more adaptive and harder to detect, while also turning AI platforms, integrations and supply chains into part of the cyberattack surface.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New research initiative targets biology with quantum computing and AI

Google has launched REPLIQA, a life sciences and quantum AI research programme backed by a $10 million commitment to five universities. The initiative aims to apply advanced quantum science and AI to biological research, with a long-term focus on improving understanding of human biology and health.

Google Quantum AI and Google.org lead the programme and will support research into complex molecular interactions, including biological processes such as protein folding and cellular responses to new drugs. Google says classical computers often struggle to simulate such interactions accurately, while quantum technologies operate according to the same physical principles that govern molecules.

The funding will support work at Harvard University, the Massachusetts Institute of Technology, the University of California, San Diego, the University of California, Santa Barbara, and the University of Arizona. Google says the programme is intended to build a shared scientific ecosystem around quantum science, AI and life sciences.

The initiative will focus on foundational tools such as quantum sensors and quantum-enhanced AI algorithms that could support future discoveries in biological science and drug development. Google describes REPLIQA as a long-term research effort rather than a programme expected to produce immediate results.

Why does it matter?

REPLIQA points to growing interest in combining quantum science, AI and life sciences to address biological problems that are difficult for classical computing to model. Its significance lies less in immediate health applications and more in the research infrastructure it aims to build: sensors, algorithms and academic partnerships that could eventually improve biological simulations and support future medical discovery.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

UNESCO explores how AI and design can reshape culture and creativity

UNESCO’s Regional Office for East Asia has launched a global call for good practice cases on how AI and design are being used to support culture, creativity, education, sustainability and social inclusion.

The call invites submissions from organisations, institutions, practitioners, educators and innovators using AI together with design approaches to create positive outcomes in cultural and creative sectors. UNESCO says the initiative is looking for practical examples that support culture, creativity, livelihoods, learning, sustainability and social inclusion.

The call focuses on four thematic areas: cultural heritage protection, documentation and interpretation; cultural tourism and visitor experience design; fashion and creative industry innovation; and design education and capacity development.

Selected projects may receive UNESCO recognition, be included in a publication or catalogue, participate in exhibitions or showcases, receive invitations to talks or events, and gain visibility through UNESCO communication channels.

The initiative reflects growing international interest in how AI can support creative and cultural sectors beyond industrial productivity. UNESCO’s framing places design principles such as inclusion, accessibility, cultural relevance and people-centred use at the centre of responsible AI deployment in cultural and educational contexts.

Submissions are open until 15 June 2026, with selected cases scheduled to be announced on 15 July 2026. Applications may be submitted in English or Chinese and are expected to demonstrate practical examples of AI supporting learning, livelihoods, creativity or sustainable development through design-oriented approaches.

Why does it matter?

The call points to a wider effort to shape AI use in culture and creativity around public value rather than solely on automation. By focusing on heritage, tourism, fashion and design education, UNESCO is encouraging examples where AI supports local knowledge, creative livelihoods, cultural access and inclusive innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada invests in AI and quantum technology firms in British Columbia

Gregor Robertson, Minister of Housing and Infrastructure and Minister responsible for Pacific Economic Development Canada (PacifiCan), announced more than C$17.3 million in funding for eight British Columbia technology companies to accelerate the commercialisation and adoption of AI and quantum technologies.

Through PacifiCan, the federal government is supporting projects focused on robotics, semiconductor manufacturing, AI infrastructure, and quantum supply chains as part of a broader strategy to strengthen domestic innovation and sovereign technology capabilities.

A major share of the investment will support Human in Motion Robotics, which received C$3 million to commercialise its AI-powered XoMotion wearable robotic exoskeleton. The company plans to integrate AI into mobility systems, expand manufacturing, and move the technology beyond clinical environments into homes and community settings for people with spinal cord injuries and neurological conditions.

Another funded company, Dream Photonics, will receive more than C$1.1 million to establish pilot manufacturing for optical interconnect technologies used in AI and quantum chips. The project aims to strengthen Canada’s domestic semiconductor and quantum ecosystem while creating skilled technology jobs in British Columbia.

The announcement also highlighted the rapid expansion of British Columbia’s AI ecosystem, which now includes nearly 600 AI companies. Canadian officials linked the investments to broader efforts to secure domestic compute infrastructure, strengthen AI supply chains, and position Canada competitively in emerging technologies ahead of events such as Web Summit Vancouver.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada advances sovereign AI data centre strategy with TELUS

The Canadian government and TELUS are advancing plans to develop large-scale sovereign AI infrastructure as part of Ottawa’s broader strategy to strengthen domestic compute capacity and support the country’s AI ecosystem.

The initiative was announced by Evan Solomon (Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario) and focuses on a proposed AI data centre project in British Columbia designed to support researchers, businesses, and academic institutions.

The project forms part of Canada’s ‘Enabling large-scale sovereign AI data centres’ initiative, introduced under Budget 2025. Ottawa stated that sovereign compute infrastructure is increasingly important for maintaining national competitiveness in AI while ensuring Canadian data, intellectual property, and economic value remain within the country.

The government also confirmed that no formal funding commitments have yet been made, with discussions currently progressing through non-binding memoranda of understanding with selected industry participants.

Local officials argued that large-scale compute infrastructure has become a strategic economic requirement as governments worldwide race to expand AI processing capabilities. Canada believes it holds competitive advantages due to its colder climate, sustainable energy resources, and network infrastructure, all of which could help attract future AI investment and hyperscale data centre development.

Why does it matter?

The race for sovereign AI infrastructure is rapidly becoming one of the most important geopolitical and economic competitions of the digital era. The Canada-TELUS partnership illustrates how countries are moving beyond AI model development alone and shifting focus towards the physical infrastructure required to sustain future AI ecosystems, including data centres, energy capacity, semiconductors, and domestic compute networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Joint cybersecurity agencies publish guidance on secure adoption of agentic AI

Cybersecurity agencies from Australia, Canada, New Zealand, the United Kingdom and the United States have published joint guidance on the careful adoption of agentic AI services in organisational IT environments.

The guidance is intended to help organisations design, develop, deploy and operate agentic AI systems, and to make informed risk assessments and mitigations. It primarily focuses on large-language-model-based agentic AI systems.

The publication examines threats to and vulnerabilities within agentic AI systems, including risks introduced through system components, integrations and downstream use. It also considers broader risks arising from agentic AI behaviour in IT environments.

The guidance covers wider agentic AI security considerations, specific security risks, best practices for securing agentic AI systems and steps organisations can take to prepare for emerging and future threats.

It was co-authored by the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency, the US National Security Agency, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre and the UK National Cyber Security Centre.

Why does it matter?

Agentic AI systems can act with greater autonomy than conventional software tools, including by interacting with other systems, using integrations and taking steps towards defined goals. That creates new cybersecurity risks when such tools are embedded in organisational IT environments. The joint guidance shows that major cyber agencies are treating agentic AI as an emerging operational security issue, not only as a question of AI policy or experimentation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Health New Zealand issues guidance on use of generative AI and large language models

Health New Zealand has published new guidance on generative AI and large language models for healthcare settings.

The guidance states that the National Artificial Intelligence and Algorithm Expert Advisory Group evaluates the use of generative AI tools and LLMs and recommends caution in their application across Health New Zealand environments. It notes that further data is needed to assess risks and benefits in the New Zealand health context.

Employees and contractors are prohibited from entering personal, confidential or sensitive patient or organisational information into unapproved LLMs or generative AI tools. The guidance also says such tools must not be used for clinical decisions or personalised patient advice.

Staff using generative AI tools in other contexts must take full responsibility for checking the information generated and acknowledge when generative AI has been used to create content. Anyone planning to use generative AI or LLMs is also asked to seek advice from the advisory group.

The guidance highlights potential risks including privacy breaches, inaccurate or misleading outputs, bias in training data, lack of transparency in model outputs, data sovereignty concerns and intellectual property risks. It also notes that generative AI systems may not adequately support te reo Māori and other minority languages spoken in Aotearoa New Zealand.

Why does it matter?

The guidance shows how health systems are beginning to set practical boundaries for generative AI before its use becomes routine in clinical and administrative settings. By prohibiting unapproved tools for patient data, clinical decisions and personalised advice, Health New Zealand is drawing a clear line between limited productivity uses and high-risk healthcare applications. At the same time, its references to Māori data sovereignty and language support widen the governance frame to include equity, cultural rights and data protection concerns that standard technology policies may not fully address.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!