Council presidency launches talks on AI deepfakes and cyberattacks

EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.

The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.

According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.

Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.

The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.

The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but as a security risk with implications for elections, governance and institutional stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO ethics framework guides national AI roadmap in Lao PDR

Lao PDR has unveiled plans for a national AI strategy guided by UNESCO’s ethics framework to support responsible and inclusive digital development. The framework will inform policy design across governance, education, infrastructure, and economic transformation.

The assessment outlines Laos’ readiness to govern AI, noting progress in digital policy alongside gaps in access, skills, and research capacity. Officials stressed the need for homegrown AI solutions that respect culture, reduce inequality, and deliver broad social benefit.

UNESCO and the UN Country Team said the strategy aligns with Laos’ broader digital transformation goals under its 10th development plan. The initiative aims to improve coordination, increase R&D investment, and modernise education to support ethical AI deployment.

Lao PDR joins 77 countries worldwide using UNESCO’s tools to shape national AI policies, reinforcing its commitment to sustainable innovation, ethical governance, and inclusive growth as artificial intelligence becomes central to future development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India considers social media bans for children under 16

India is emerging as a potential test case for age-based social media restrictions as several states examine Australia-style bans on children’s access to platforms.

Goa and Andhra Pradesh are studying whether to prohibit social media use for those under 16, citing growing concerns over online safety and youth well-being. The debate has also reached the judiciary, with the Madras High Court urging the federal government to consider similar measures.

The proposals carry major implications for global technology companies, given that India’s internet population exceeds one billion users and continues to skew young.

Platforms such as Meta, Google and X rely heavily on India for long-term growth, advertising revenue and user expansion. Industry voices argue parental oversight is more effective than government bans, warning that restrictions could push minors towards unregulated digital spaces.

Australia’s under-16 ban, which entered into force in late 2025, has already exposed enforcement difficulties, particularly around age verification and privacy risks. Determining users’ ages accurately remains challenging, while digital identity systems raise concerns about data security and surveillance.

Legal experts note that internet governance falls under India’s federal authority, limiting what individual states can enforce without central approval.

Although India’s data protection law includes safeguards for children, full implementation will extend through 2027, leaving policymakers to balance child protection, platform accountability and unintended consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Austrian watchdog rules against Microsoft education tracking

Microsoft has been found to have unlawfully placed tracking cookies on a child’s device without valid consent, following a ruling by Austria’s data protection authority.

The case stems from a complaint filed by a privacy group, noyb, concerning Microsoft 365 Education, a platform used by millions of pupils and teachers across Europe.

According to the decision, Microsoft deployed cookies that analysed user behaviour, collected browser data and served advertising purposes, despite being used in an educational context involving minors. The Austrian authority ordered the company to cease the unlawful tracking within four weeks.

Noyb warned the ruling could have broader implications for organisations relying on Microsoft software, particularly schools and public bodies. A data protection lawyer at the group criticised Microsoft’s approach to privacy, arguing that protections appear secondary to marketing considerations.

The ruling follows earlier GDPR findings against Microsoft, including violations of access rights and concerns raised over the European Commission’s own use of Microsoft 365.

Although previous enforcement actions were closed after contractual changes, regulatory scrutiny of Microsoft’s education and public sector products continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic CEO warns of civilisation-level AI risk

Anthropic chief executive Dario Amodei has issued a stark warning that superhuman AI could inflict civilisation-level damage unless governments and industry act far more quickly and seriously.

In a forthcoming essay, Amodei argues humanity is approaching a critical transition that will test whether political, social and technological systems are mature enough to handle unprecedented power.

Amodei believes AI systems will soon outperform humans across nearly every field, describing a future ‘country of geniuses in a data centre’ capable of autonomous and continuous creation.

He warns that such systems could rival nation-states in influence, accelerating economic disruption while placing extraordinary power in the hands of a small number of actors.

Among the gravest dangers, Amodei highlights mass displacement of white-collar jobs, rising biological security risks and the empowerment of authoritarian governments through advanced surveillance and control.

He also cautions that AI companies themselves pose systemic risks due to their control over frontier models, infrastructure and user attention at a global scale.

Despite the severity of his concerns, Amodei maintains cautious optimism, arguing that meaningful governance, transparency and public engagement could still steer AI development towards beneficial outcomes.

Without urgent action, however, he warns that financial incentives and political complacency may override restraint during the most consequential technological shift humanity has faced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes university language classrooms

Universities are increasingly integrating AI into foreign language teaching as lecturers search for more flexible and personalised learning methods. AI-powered tools are being used to generate teaching materials, adapt content to student needs and expand practice beyond classroom limits.

Despite growing interest, adoption among language lecturers remains uneven across higher education. Studies suggest AI-supported learning can improve student motivation by offering personalised feedback and judgment-free speaking practice.

Educators highlight the value of AI for supporting curriculum design and creating resources, particularly for less commonly taught languages. Tools can generate targeted dialogues, simplified texts and pronunciation feedback that would otherwise require significant manual effort.

Human interaction, however, remains central to effective language learning. Lecturers stress that AI works best as a supplement, enhancing teaching quality without replacing real-world communication and pedagogical expertise.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT model draws scrutiny over Grokipedia citations

OpenAI’s latest GPT-5.2 model has sparked concern after repeatedly citing Grokipedia, an AI-generated encyclopaedia launched by Elon Musk’s xAI, raising fresh fears of misinformation amplification.

Testing by The Guardian showed the model referencing Grokipedia multiple times when answering questions on geopolitics and historical figures.

Launched in October 2025, the AI-generated platform rivals Wikipedia but relies solely on automated content without human editing. Critics warn that limited human oversight raises risks of factual errors and ideological bias, as Grokipedia faces criticism for promoting controversial narratives.

OpenAI said its systems use safety filters and diverse public sources, while xAI dismissed the concerns as media distortion. The episode deepens scrutiny of AI-generated knowledge platforms amid growing regulatory and public pressure for transparency and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France fast-tracks social media ban for under-15s

French President Emmanuel Macron has called for an accelerated legislative process to introduce a nationwide ban on social media for children under 15 by September.

Speaking in a televised address, Macron said the proposal would move rapidly through parliament so that explicit rules are in place before the new school year begins.

Macron framed the initiative as a matter of child protection and digital sovereignty, arguing that foreign platforms or algorithmic incentives should not shape young people’s cognitive and emotional development.

He linked excessive social media use to manipulation, commercial exploitation and growing psychological harm among teenagers.

Data from France’s health watchdog show that almost half of teenagers spend between two and five hours a day on their smartphones, with the vast majority accessing social networks daily.

Regulators have associated such patterns with reduced self-esteem and increased exposure to content linked to self-harm, drug use and suicide, prompting legal action by families against major platforms.

The proposal from France follows similar debates in the UK and Australia, where age-based access restrictions have already been introduced.

The French government argues that decisive national action is necessary rather than waiting for a slower Europe-wide consensus, although Macron has reiterated support for a broader EU approach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI startup secures $5M to transform children’s digital learning

AI education start-up Sparkli has raised $5 million in seed funding to develop an ‘anti-chatbot’ AI platform designed to transform how children engage with digital content.

Unlike traditional chatbots that focus on general conversation, Sparkli positions its AI as an interactive learning companion, guiding children through topics such as maths, science and language skills in a dynamic, age-appropriate format.

The funding will support product development, content creation and expansion into new markets. Founders say the platform addresses increasing concerns about passive screen time by offering educational interactions that blend AI responsiveness with curriculum-aligned activities.

The company emphasises safe design and parental controls to ensure technology supports learning outcomes rather than distraction.

Investors backing Sparkli see demand for responsible AI applications for children that can enhance cognition and motivation while preserving digital well-being. As schools and homes increasingly integrate AI tools, Sparkli aims to position itself at the intersection of educational technology and child-centred innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Education for Countries programme signals OpenAI push into public education policy

OpenAI has launched the Education for Countries programme, a new global initiative designed to support governments in modernising education systems and preparing workforces for an AI-driven economy.

The programme responds to a widening gap between rapid advances in AI capabilities and people’s ability to use them effectively in everyday learning and work.

Education systems are positioned at the centre of closing that gap, as research suggests a significant share of core workplace skills will change by the end of the decade.

By integrating AI tools, training and research into schools and universities, national education frameworks can evolve alongside technological change and better equip students for future labour markets.

The programme combines access to tools such as ChatGPT Edu and advanced language models with large-scale research on learning outcomes, tailored national training schemes and internationally recognised certifications.

A global network of governments, universities and education leaders will also share best practices and shape responsible approaches to AI use in classrooms.

Initial partners include Estonia, Greece, Italy, Jordan, Kazakhstan, Slovakia, Trinidad and Tobago and the United Arab Emirates. Early national rollouts, particularly in Estonia, already involve tens of thousands of students and educators, with further countries expected to join later in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!