Analysis reveals Grok generated 3 million sexualised images

A new analysis found Grok generated an estimated three million sexualised images in 11 days, including around 23,000 appearing to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.

The surge followed the launch of Grok’s one-click image editing feature in late December, which quickly gained traction among users. Restrictions were later introduced, including paid access limits and technical measures to prevent image undressing.

Researchers based their estimates on a random sample of 20,000 images drawn from the more than 4.6 million images generated during the study period, then extrapolated the results to the full set. Automated tools and manual review identified sexualised content and confirmed cases involving individuals who appeared to be under 18.
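The sampling arithmetic behind these estimates can be sketched in a few lines of Python. This is an illustrative reconstruction, not the researchers' actual method or code; the roughly 65% flag rate is back-derived from the reported totals.

```python
# Illustrative sketch of the sample-and-extrapolate method described above.
# The figures are the approximate numbers reported in the article; the
# assumed sample flag rates are back-derived from those totals.

def extrapolate(sample_hits: int, sample_size: int, population: int) -> float:
    """Scale a count observed in a random sample up to the full population."""
    return sample_hits / sample_size * population

TOTAL_IMAGES = 4_600_000   # images generated during the 11-day study period
SAMPLE_SIZE = 20_000       # randomly sampled images reviewed by the researchers

# If roughly 65% of the sample (13,000 images) was flagged as sexualised,
# the population estimate lands near the reported three million.
sexualised_est = extrapolate(13_000, SAMPLE_SIZE, TOTAL_IMAGES)
print(f"Estimated sexualised images: {sexualised_est:,.0f}")
```

The same scaling applied to a flag rate of about 0.5% (roughly 100 sampled images appearing to depict children) yields a population estimate of around 23,000, matching the reported figure.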

Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Japan arrests suspect over AI deepfake pornography

Police in Japan have arrested a man accused of creating and selling non-consensual deepfake pornography using AI tools. The Tokyo Metropolitan Police Department said thousands of manipulated images of female celebrities were distributed through paid websites.

Investigators in Japan allege the suspect generated hundreds of thousands of images over two years using freely available generative AI software. Authorities say the content was promoted on social media before being sold via subscription platforms.

The arrest follows earlier cases in Japan and reflects growing concern among police worldwide. In South Korea, law enforcement has reported hundreds of arrests linked to deepfake sexual crimes, while cases have also emerged in the UK.

European agencies, including Europol, have also coordinated arrests tied to AI-generated abuse material. Law enforcement bodies say the spread of accessible AI tools is forcing rapid changes in forensic investigation and in the handling of digital evidence.

Stanford and Swiss institutes unite on open AI models

Stanford University, ETH Zurich, and EPFL have launched a transatlantic partnership to develop open-source AI models prioritising societal values over commercial interests.

The partnership was formalised through a memorandum of understanding signed during the World Economic Forum meeting in Davos.

The agreement establishes long-term cooperation in AI research, education, and innovation, with a focus on large-scale multimodal models. The initiative aims to strengthen academia’s influence over global AI by promoting transparency, accountability, and inclusive access.

Joint projects will develop open datasets, evaluation benchmarks, and responsible deployment frameworks, alongside researcher exchanges and workshops. The effort aims to embed human-centred principles into technical progress while supporting interdisciplinary discovery.

Academic leaders said the alliance reinforces open science and cultural diversity amid growing corporate influence over foundation models. The collaboration positions universities as central drivers of ethical, trustworthy, and socially grounded AI development.

Google adds Personal Intelligence to AI Search

Google has expanded AI Search with Personal Intelligence, enabling more personalised responses using Gmail and Google Photos data. The feature aims to combine global information with individual context to deliver search results tailored to each user.

Eligible Google AI Pro and AI Ultra subscribers can opt in to securely connect their Gmail and Photos accounts, allowing Search to draw on personal preferences, travel plans, purchases, and memories.

The system uses contextual insights to generate recommendations that reflect users’ habits, interests, and upcoming activities.

Personal Intelligence enhances shopping, travel planning, and lifestyle discovery by anticipating needs and offering customised suggestions. Privacy controls remain central, with users able to manage data connections and turn off personal context at any time.

The feature is launching as an experimental Labs release for English-language users in the United States, with broader availability expected following testing. Google said ongoing feedback will guide refinements as the system continues to evolve.

Education for Countries programme signals OpenAI push into public education policy

OpenAI has launched the Education for Countries programme, a new global initiative designed to support governments in modernising education systems and preparing workforces for an AI-driven economy.

The programme responds to a widening gap between rapid advances in AI capabilities and people’s ability to use them effectively in everyday learning and work.

Education systems are positioned at the centre of closing that gap, as research suggests a significant share of core workplace skills will change by the end of the decade.

By integrating AI tools, training and research into schools and universities, national education frameworks can evolve alongside technological change and better equip students for future labour markets.

The programme combines access to tools such as ChatGPT Edu and advanced language models with large-scale research on learning outcomes, tailored national training schemes and internationally recognised certifications.

A global network of governments, universities and education leaders will also share best practices and shape responsible approaches to AI use in classrooms.

Initial partners include Estonia, Greece, Italy, Jordan, Kazakhstan, Slovakia, Trinidad and Tobago and the United Arab Emirates. Early national rollouts, particularly in Estonia, already involve tens of thousands of students and educators, with further countries expected to join later in 2026.

Burkina Faso pushes digital sovereignty through national infrastructure supervision

Burkina Faso has launched work on a Digital Infrastructure Supervision Centre as part of a broader effort to strengthen national oversight of digital public infrastructure and reduce exposure to external digital risks.

The project forms a core pillar of the government’s digital sovereignty strategy amid rising cybersecurity threats across public systems.

Led by the Ministry of Digital Transition, Posts and Electronic Communications, the facility is estimated to cost $5.4 million and is scheduled for completion by October.

Authorities state that the centre will centralise oversight of the national backbone network, secure cyberspace operations and supervise the functioning of domestic data centres instead of relying on external monitoring mechanisms.

Government officials argue that the supervision centre will enable resilient and sovereign management of critical digital systems while supporting a policy requiring sensitive national data to remain within domestic infrastructure.

The initiative also complements recent investments in biometric identity systems and regional digital identity frameworks.

Beyond infrastructure security, the project is positioned as groundwork for future AI adoption by strengthening sovereign data and connectivity systems.

The leadership of Burkina Faso continues to emphasise digital autonomy as a strategic priority across governance, identity management and emerging technologies.

TikTok restructures operations for US market

TikTok has finalised a deal allowing the app to continue operating in America by separating its US business from its global operations. The agreement follows years of political pressure in the US over national security concerns.

Under the arrangement, a new entity will manage TikTok’s US operations, with user data and algorithms handled inside the US. The recommendation algorithm has been licensed and will now be trained only on US user data to meet American regulatory requirements.

Ownership of TikTok’s US business is shared among American and international investors, while China-based ByteDance retains a minority stake. Oracle will oversee data security and cloud infrastructure for users in the US.

Analysts say the changes could alter how the app functions for the roughly 200 million users in the US. Questions remain over whether a US-trained algorithm will perform as effectively as the global version.

EU cyber rules target global tech dependence

The European Union has proposed new cybersecurity rules aimed at reducing reliance on high-risk technology suppliers, particularly from China. Policymakers argue that existing voluntary measures have failed to curb dependence on vendors such as Huawei and ZTE.

The proposal would introduce binding obligations for telecom operators across the bloc to phase out Chinese equipment. At the same time, officials have warned that reliance on US cloud and satellite services also poses security risks for Europe.

Despite increased funding and expanded certification plans, divisions remain among member states. Countries including Germany and France support stricter sovereignty rules, while others favour continued partnerships with US technology firms.

Analysts say the lack of consensus could weaken the impact of the reforms. Without clear enforcement and investment in European alternatives, Europe may struggle to reduce its dependence on both China and the US.

AI Act strengthens training rules despite 2025 Digital Omnibus reforms

The European AI Regulation reinforces training and awareness as core compliance requirements, even as the EU considers simplifications through the proposed Digital Omnibus. Regulation (EU) 2024/1689, the AI Act, establishes a risk-based framework for AI systems.

AI literacy is promoted through a multi-level approach. The EU institutions focus on public awareness, national authorities support voluntary codes of conduct, and organisations are currently required under the AI Act to ensure adequate AI competence among staff and third parties involved in system use.

A proposed amendment to Article 4, submitted in November 2025 under the Digital Omnibus, would replace mandatory internal competence requirements with encouragement-based measures. The change seeks to reduce administrative burden without removing AI Act risk management duties.

Even if adopted, the amendment would not eliminate the practical need for AI training. Competence in AI systems remains essential for governance, transparency, monitoring, and incident handling, particularly for high-risk use cases regulated by the AI Act.

Companies are therefore expected to continue investing in tailored AI training across management, technical, legal, and operational roles. Embedding awareness and competence into risk management frameworks remains critical to compliance and risk mitigation.

Humanoid robots and AI take centre stage as Musk joins Davos 2026

Elon Musk made his first appearance at the World Economic Forum in Davos despite years of public criticism of the gathering, arguing that AI and robotics represent the only realistic route to global abundance.

Speaking alongside BlackRock chief executive Larry Fink, Musk framed robotics as a civilisational shift rather than a niche innovation, claiming widespread automation will raise living standards and reshape economic growth.

Musk predicted a future where robots outnumber humans, with humanoid systems embedded across industry, healthcare and domestic life.

He highlighted elder care as a key use case in ageing societies facing labour shortages, suggesting that robotics could compensate for demographic decline rather than relying solely on migration or extended working lives.

Tesla’s Optimus humanoid robots are already performing simple factory tasks, with more complex functions expected within a year.

Musk indicated public sales could begin by 2027 once reliability thresholds are met. He also argued autonomous driving is largely resolved, pointing to expanding robotaxi deployments in the US and imminent regulatory decisions in Europe and China.

The global market for humanoid robotics remains relatively small, but analysts expect rapid expansion as AI capabilities improve and costs fall.

Musk at Davos 2026 presented robotics as an engine for economic acceleration, suggesting ubiquitous automation could unlock productivity gains on a scale comparable to past industrial revolutions.
