Analysis reveals Grok generated 3 million sexualised images

A new analysis found Grok generated an estimated three million sexualised images in 11 days, including around 23,000 appearing to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.

The surge followed the launch of Grok’s one-click image editing feature in late December, which quickly gained traction among users. Restrictions were later introduced, including paid-access limits and technical measures to prevent images from being digitally ‘undressed’.

Researchers based their estimates on a random sample of 20,000 images drawn from the more than 4.6 million images generated during the study period, extrapolating the sample results to the full set. Automated tools and manual review identified sexualised content and confirmed cases involving individuals appearing to be under 18.
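For a rough sense of how such sample-based estimates scale up, the sketch below reproduces the extrapolation arithmetic. The per-sample counts are illustrative assumptions back-calculated from the reported totals, not figures published in the analysis.

```python
# Minimal sketch of sample-based extrapolation.
# The *_in_sample counts are assumed for illustration (back-calculated from
# the reported totals); they are not numbers taken from the analysis itself.

sample_size = 20_000            # images reviewed in the random sample
total_generated = 4_600_000     # images generated during the 11-day study period

sexualised_in_sample = 13_043   # assumed count flagged as sexualised
minors_in_sample = 100          # assumed count appearing to depict under-18s

scale = total_generated / sample_size  # each sampled image stands for ~230 images

est_sexualised = round(sexualised_in_sample * scale)  # ~3.0 million
est_minors = round(minors_in_sample * scale)          # ~23,000

print(f"Estimated sexualised images: {est_sexualised:,}")
print(f"Estimated images appearing to depict minors: {est_minors:,}")
```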

Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Stanford and Swiss institutes unite on open AI models

Stanford University, ETH Zurich, and EPFL have launched a transatlantic partnership to develop open-source AI models prioritising societal values over commercial interests.

The partnership was formalised through a memorandum of understanding signed during the World Economic Forum meeting in Davos.

The agreement establishes long-term cooperation in AI research, education, and innovation, with a focus on large-scale multimodal models. The initiative aims to strengthen academia’s influence over global AI by promoting transparency, accountability, and inclusive access.

Joint projects will develop open datasets, evaluation benchmarks, and responsible deployment frameworks, alongside researcher exchanges and workshops. The effort aims to embed human-centred principles into technical progress while supporting interdisciplinary discovery.

Academic leaders said the alliance reinforces open science and cultural diversity amid growing corporate influence over foundation models. The collaboration positions universities as central drivers of ethical, trustworthy, and socially grounded AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google adds Personal Intelligence to AI Search

Google has expanded AI Search with Personal Intelligence, enabling more personalised responses using Gmail and Google Photos data. The feature aims to combine global information with individual context to deliver search results tailored to each user.

Eligible Google AI Pro and AI Ultra subscribers can opt in to securely connect their Gmail and Photos accounts, allowing Search to draw on personal preferences, travel plans, purchases, and memories.

The system uses contextual insights to generate recommendations that reflect users’ habits, interests, and upcoming activities.

Personal Intelligence enhances shopping, travel planning, and lifestyle discovery by anticipating needs and offering customised suggestions. Privacy controls remain central, with users able to manage data connections and turn off personal context at any time.

The feature is launching as an experimental Labs release for English-language users in the United States, with broader availability expected following testing. Google said ongoing feedback will guide refinements as the system continues to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Education for Countries programme signals OpenAI push into public education policy

OpenAI has launched the Education for Countries programme, a new global initiative designed to support governments in modernising education systems and preparing workforces for an AI-driven economy.

The programme responds to a widening gap between rapid advances in AI capabilities and people’s ability to use them effectively in everyday learning and work.

Education systems are positioned at the centre of closing that gap, as research suggests a significant share of core workplace skills will change by the end of the decade.

By integrating AI tools, training and research into schools and universities, national education frameworks can evolve alongside technological change and better equip students for future labour markets.

The programme combines access to tools such as ChatGPT Edu and advanced language models with large-scale research on learning outcomes, tailored national training schemes and internationally recognised certifications.

A global network of governments, universities and education leaders will also share best practices and shape responsible approaches to AI use in classrooms.

Initial partners include Estonia, Greece, Italy, Jordan, Kazakhstan, Slovakia, Trinidad and Tobago and the United Arab Emirates. Early national rollouts, particularly in Estonia, already involve tens of thousands of students and educators, with further countries expected to join later in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Burkina Faso pushes digital sovereignty through national infrastructure supervision

Burkina Faso has launched work on a Digital Infrastructure Supervision Centre as part of a broader effort to strengthen national oversight of digital public infrastructure and reduce exposure to external digital risks.

The project forms a core pillar of the government’s digital sovereignty strategy amid rising cybersecurity threats across public systems.

Led by the Ministry of Digital Transition, Posts and Electronic Communications, the facility is estimated to cost $5.4 million and is scheduled for completion by October.

Authorities say the centre will centralise oversight of the national backbone network, secure cyberspace operations and supervise the functioning of domestic data centres, rather than relying on external monitoring mechanisms.

Government officials argue that the supervision centre will enable resilient and sovereign management of critical digital systems while supporting a policy requiring sensitive national data to remain within domestic infrastructure.

The initiative also complements recent investments in biometric identity systems and regional digital identity frameworks.

Beyond infrastructure security, the project is positioned as groundwork for future AI adoption by strengthening sovereign data and connectivity systems.

The leadership of Burkina Faso continues to emphasise digital autonomy as a strategic priority across governance, identity management and emerging technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cambodia Internet Governance Forum marks major step toward inclusive digital policy

The first national Internet Governance Forum in Cambodia has taken place, establishing a new platform for digital policy dialogue. The Cambodia Internet Governance Forum (CamIGF) included civil society, private sector and youth participants.

The forum follows an Internet Universality Indicators assessment led by UNESCO and national partners. The assessment recommended a permanent multistakeholder platform for digital governance, grounded in human rights, openness, accessibility and participation.

Opening remarks from national and international stakeholders framed the CamIGF as a move toward people-centred and rights-based digital transformation. Speakers stressed the need for cross-sector cooperation to ensure connectivity, innovation and regulation deliver public benefit.

Discussions focused on online safety in the age of AI, meaningful connectivity, youth participation and digital rights. The programme also included Cambodia’s Youth Internet Governance Forum, highlighting young people’s role in addressing data protection and digital skills gaps.

By institutionalising a national IGF, Cambodia joins a growing global network using multistakeholder dialogue to guide digital policy. UNESCO confirmed continued support for implementing assessment recommendations and strengthening inclusive digital governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WEF paper warns of widening AI investment gap

Policy-makers are being urged to take a more targeted approach to ‘sovereign AI’ spending, as a new paper released alongside the World Economic Forum meeting in Davos argues that no country can realistically build every part of the AI stack alone. Instead, the authors recommend treating AI sovereignty as ‘strategic interdependence’, combining selective domestic investment with trusted partnerships and alliances.

The paper, co-authored by the World Economic Forum and Bain & Co, highlights how heavily the United States and China dominate the global AI landscape. It estimates that the two countries capture around 65% of worldwide investment across the AI value chain, reflecting a full-stack model, from chips and cloud infrastructure to applications, that most other economies cannot match at the same scale.

For smaller and mid-sized economies, that imbalance can translate into a competitive disadvantage, because AI infrastructure, such as data centres and computing capacity, is increasingly viewed as the backbone of national AI capability. Still, the report argues that faster-moving countries can carve out a niche by focusing on a few priority areas, pooling regional capacity, or securing access through partnerships rather than trying to replicate the US-China approach.

The message was echoed in Davos by Nvidia chief executive Jensen Huang, who said every country should treat AI as essential infrastructure, comparable to electricity grids and transport networks. He argued that building AI data centres could drive demand for well-paid skilled trades, from electricians and plumbers to network engineers, framing the boom as a major job creator rather than a trigger for widespread job losses.

At the same time, the paper warns that physical constraints could slow expansion, including the availability of land, energy and water, as well as shortages of highly skilled workers. It also notes that local regulation can delay projects, although some industry groups argue that regulatory and cost pressures may push countries to innovate sooner in efficiency and greener data-centre design.

In the UK, industry body UKAI says high energy prices, limited grid capacity, complex planning rules and public scrutiny already create the same hurdles many other countries may soon face. It argues these constraints are helping drive improvements in efficiency, system design and coordination, seen as building blocks for more sustainable AI infrastructure.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tata’s $11 billion Innovation City plan gains global visibility at Davos

Tata Sons plans to invest $11 billion to build a large ‘Innovation City’ near the upcoming Navi Mumbai International Airport, according to Maharashtra Chief Minister Devendra Fadnavis, speaking at the World Economic Forum (WEF) in Davos. He said the project has drawn strong interest from international investors and will include major infrastructure upgrades alongside a data centre.

Fadnavis said the aim is to turn Mumbai and its wider region into a global, ‘plug-and-play’ innovation hub where companies can quickly set up and scale new technologies. He described the initiative as the first of its kind in India and said work is expected to begin within six to eight months.

The location next to the Adani Group–developed Navi Mumbai Airport is being positioned as an advantage, linking global connectivity with the high-tech industry. The project also reflects a broader global rush to expand data centres as companies roll out AI services, with firms such as Microsoft, Alphabet, and Amazon investing heavily in new capacity worldwide.

Maharashtra, which contributes more than 10 percent of India’s GDP and hosts the country’s financial capital, is also pushing a wider infrastructure drive, including a $30 billion plan to upgrade Mumbai. State leaders have framed these investments as part of an effort to boost growth and respond to economic pressures, including unemployment.

The Innovation City is expected to support India’s ambitions in AI and semiconductors, with national officials pointing to a public-private partnership approach rather than leaving development solely to big tech companies. Alongside this, the state is exploring energy innovation, including potential collaborations on small modular nuclear reactors, following recent legislative support for smaller-scale nuclear projects.

Taken together, the plan is being presented as a bid to attract global investment, accelerate high-tech development, and strengthen India’s role in emerging industrial and technology shifts centred on AI, advanced manufacturing, and digital infrastructure.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Act strengthens training rules despite 2025 Digital Omnibus reforms

The European AI Regulation reinforces training and awareness as core compliance requirements, even as the EU considers simplifications through the proposed Digital Omnibus. Regulation (EU) 2024/1689, the AI Act, establishes a risk-based framework for AI systems.

AI literacy is promoted through a multi-level approach. The EU institutions focus on public awareness, national authorities support voluntary codes of conduct, and organisations are currently required under the AI Act to ensure adequate AI competence among staff and third parties involved in system use.

A proposed amendment to Article 4, submitted in November 2025 under the Digital Omnibus, would replace mandatory internal competence requirements with encouragement-based measures. The change seeks to reduce administrative burden without removing AI Act risk management duties.

Even if adopted, the amendment would not eliminate the practical need for AI training. Competence in AI systems remains essential for governance, transparency, monitoring, and incident handling, particularly for high-risk use cases regulated by the AI Act.

Companies are therefore expected to continue investing in tailored AI training across management, technical, legal, and operational roles. Embedding awareness and competence into risk management frameworks remains critical to compliance and risk mitigation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The House of Lords backs social media ban for under-16s

The House of Lords, the upper house of the UK Parliament, has voted in favour of banning under-16s from social media platforms, backing an amendment to the government’s schools bill by 261 votes to 150. The proposal would require ministers to define restricted platforms and enforce robust age verification within a year.

Political momentum for tighter youth protections has grown after Australia’s similar move, with cross-party support emerging at Westminster. More than 60 Labour MPs have joined Conservatives in urging a UK ban, increasing pressure ahead of a Commons vote.

Supporters argue that excessive social media use contributes to declining mental health, online radicalisation, and classroom disruption. Critics warn that a blanket ban could push teenagers toward less regulated platforms and cut them off from its benefits, and instead urge more vigorous enforcement of existing safety rules.

The government has rejected the amendment and launched a three-month consultation on age checks, curfews, and curbing compulsive online behaviour. Ministers maintain that further evidence is needed before introducing new legal restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!