ChatGPT model draws scrutiny over Grokipedia citations

OpenAI’s latest GPT-5.2 model has sparked concern after repeatedly citing Grokipedia, an AI-generated encyclopaedia launched by Elon Musk’s xAI, raising fresh fears of misinformation amplification.

Testing by The Guardian showed the model referencing Grokipedia multiple times when answering questions on geopolitics and historical figures.

Launched in October 2025, the platform positions itself as a rival to Wikipedia but relies entirely on automated content without human editing. Critics warn that this lack of human oversight raises the risk of factual errors and ideological bias, and Grokipedia has already drawn criticism for promoting controversial narratives.

OpenAI said its systems use safety filters and diverse public sources, while xAI dismissed the concerns as media distortion. The episode deepens scrutiny of AI-generated knowledge platforms amid growing regulatory and public pressure for transparency and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok faces regulatory scrutiny in South Korea over explicit AI content

South Korea has moved towards regulatory action against Grok, the generative AI chatbot developed by xAI, following allegations that the system was used to generate and distribute sexually exploitative deepfake images.

The country’s Personal Information Protection Commission has launched a preliminary fact-finding review to assess whether violations occurred and whether the matter falls within its legal remit.

The review follows international reports accusing Grok of facilitating the creation of explicit and non-consensual images of real individuals, including minors.

Under the Personal Information Protection Act of South Korea, generating or altering sexual images of identifiable people without consent may constitute unlawful handling of personal data, exposing providers to enforcement action.

Concerns have intensified after civil society groups estimated that millions of explicit images were produced through Grok over a short period, with thousands involving children.

Several governments, including those in the US, Europe and Canada, have opened inquiries, while parts of Southeast Asia have opted to block access to the service altogether.

In response, xAI has introduced technical restrictions preventing users from generating or editing images of real people. Korean regulators have also demanded stronger youth protection measures from X, warning that failure to address criminal content involving minors could result in administrative penalties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Major European banks unite to develop euro-backed stablecoin

A consortium of 10 major European banks has established a new company, Qivalis, to develop and issue a euro-pegged stablecoin, targeting a launch in the second half of 2026, subject to regulatory approval.

The initiative seeks to offer a European alternative to US dollar-dominated digital payment systems and strengthen the region’s strategic autonomy in digital finance.

The participating banks include BNP Paribas, ING, UniCredit, KBC, Danske Bank, SEB, CaixaBank, DekaBank, Banca Sella, and Raiffeisen Bank International, with BNP Paribas joining after the initial announcement.

Former Coinbase Germany chief executive Jan-Oliver Sell will lead Qivalis as CEO, while former NatWest chair Howard Davies has been appointed chair. The Amsterdam-based company plans to build a workforce of up to 50 employees over the next two years.

Initial use cases will focus on crypto trading, enabling fast, low-cost payments and settlements, with broader applications planned later. The project emerges as the stablecoin market grows rapidly, dominated by dollar-backed tokens, while the scarcity of euro-denominated alternatives is driving regulatory interest and ECB engagement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oklahoma advances voluntary Bitcoin payments framework

Oklahoma lawmakers have introduced Senate Bill 2064, proposing a legal framework that allows businesses, state employees, and residents to receive payments in Bitcoin without designating it as legal tender.

The bill recognises Bitcoin as a financial instrument, aligning with constitutional limits while enabling its voluntary use across payroll, procurement, and private transactions.

Under the proposal, state employees could opt to receive wages in Bitcoin, US dollars, or a combination of both at the start of each pay period. Payments would be settled at prevailing market rates and deposited into either self-hosted wallets or approved custodial accounts.

Vendors contracting with the state could also choose Bitcoin on a per-transaction basis, while crypto-native firms would benefit from reduced regulatory friction.

The legislation instructs the State Treasurer to appoint a payment processor and develop operational rules, with contracts targeted for completion by early 2027.

If approved, the framework would take effect in November 2026, positioning Oklahoma among a small group of US states exploring direct Bitcoin integration into public finance, alongside initiatives already launched in Texas and New Hampshire.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Writers challenge troubling AI assumptions about language and style

A growing unease among writers is emerging as AI tools reshape how language is produced and perceived. Long-established habits, including the use of em dashes and semicolons, are increasingly being viewed with suspicion as machine-generated text becomes more common.

The concern is not opposition to AI itself, but the blurring of boundaries between human expression and automated output. Writers whose work was used to train large language models without consent say stylistic traits developed over decades are now being misread as algorithmic authorship.

Academic and editorial norms are also shifting under this pressure. Teaching practices that once valued rhythm, voice, and individual cadence are increasingly challenged by stricter stylistic rules, sometimes framed as safeguards against sloppy or machine-like writing rather than as matters of taste or craft.

At the same time, productivity tools embedded into mainstream software continue to intervene in the writing process, offering substitutions and revisions that prioritise clarity and efficiency over nuance. Such interventions risk flattening language and discouraging the idiosyncrasies that define human authorship.

As AI becomes embedded in publishing, education, and professional writing, the debate is shifting from detection to preservation. Many writers warn that protecting human voice and stylistic diversity is essential, arguing that affectless, uniform prose would erode creativity and trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Stanford and Swiss institutes unite on open AI models

Stanford University, ETH Zurich, and EPFL have launched a transatlantic partnership to develop open-source AI models prioritising societal values over commercial interests.

The partnership was formalised through a memorandum of understanding signed during the World Economic Forum meeting in Davos.

The agreement establishes long-term cooperation in AI research, education, and innovation, with a focus on large-scale multimodal models. The initiative aims to strengthen academia’s influence over global AI by promoting transparency, accountability, and inclusive access.

Joint projects will develop open datasets, evaluation benchmarks, and responsible deployment frameworks, alongside researcher exchanges and workshops. The effort aims to embed human-centred principles into technical progress while supporting interdisciplinary discovery.

Academic leaders said the alliance reinforces open science and cultural diversity amid growing corporate influence over foundation models. The collaboration positions universities as central drivers of ethical, trustworthy, and socially grounded AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Education for Countries programme signals OpenAI push into public education policy

OpenAI has launched the Education for Countries programme, a new global initiative designed to support governments in modernising education systems and preparing workforces for an AI-driven economy.

The programme responds to a widening gap between rapid advances in AI capabilities and people’s ability to use them effectively in everyday learning and work.

Education systems are seen as central to closing that gap, as research suggests a significant share of core workplace skills will change by the end of the decade.

By integrating AI tools, training and research into schools and universities, national education frameworks can evolve alongside technological change and better equip students for future labour markets.

The programme combines access to tools such as ChatGPT Edu and advanced language models with large-scale research on learning outcomes, tailored national training schemes and internationally recognised certifications.

A global network of governments, universities and education leaders will also share best practices and shape responsible approaches to AI use in classrooms.

Initial partners include Estonia, Greece, Italy, Jordan, Kazakhstan, Slovakia, Trinidad and Tobago and the United Arab Emirates. Early national rollouts, particularly in Estonia, already involve tens of thousands of students and educators, with further countries expected to join later in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Burkina Faso pushes digital sovereignty through national infrastructure supervision

Burkina Faso has launched work on a Digital Infrastructure Supervision Centre as part of a broader effort to strengthen national oversight of digital public infrastructure and reduce exposure to external digital risks.

The project forms a core pillar of the government’s digital sovereignty strategy amid rising cybersecurity threats across public systems.

Led by the Ministry of Digital Transition, Posts and Electronic Communications, the facility is estimated to cost $5.4 million and is scheduled for completion by October.

Authorities say the centre will centralise oversight of the national backbone network, support cybersecurity operations and supervise domestic data centres, rather than relying on external monitoring mechanisms.

Government officials argue that the supervision centre will enable resilient and sovereign management of critical digital systems while supporting a policy requiring sensitive national data to remain within domestic infrastructure.

The initiative also complements recent investments in biometric identity systems and regional digital identity frameworks.

Beyond infrastructure security, the project is positioned as groundwork for future AI adoption by strengthening sovereign data and connectivity systems.

The leadership of Burkina Faso continues to emphasise digital autonomy as a strategic priority across governance, identity management and emerging technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cambodia Internet Governance Forum marks major step toward inclusive digital policy

The first national Internet Governance Forum in Cambodia has taken place, establishing a new platform for digital policy dialogue. The Cambodia Internet Governance Forum (CamIGF) included civil society, private sector and youth participants.

The forum follows an Internet Universality Indicators assessment led by UNESCO and national partners. The assessment recommended a permanent multistakeholder platform for digital governance, grounded in human rights, openness, accessibility and participation.

Opening remarks from national and international stakeholders framed the CamIGF as a move toward people-centred and rights-based digital transformation. Speakers stressed the need for cross-sector cooperation to ensure connectivity, innovation and regulation deliver public benefit.

Discussions focused on online safety in the age of AI, meaningful connectivity, youth participation and digital rights. The programme also included Cambodia’s Youth Internet Governance Forum, highlighting young people’s role in addressing data protection and digital skills gaps.

By institutionalising a national IGF, Cambodia joins a growing global network using multistakeholder dialogue to guide digital policy. UNESCO confirmed continued support for implementing assessment recommendations and strengthening inclusive digital governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI Act strengthens training rules despite 2025 Digital Omnibus reforms

The European AI Regulation reinforces training and awareness as core compliance requirements, even as the EU considers simplifications through the proposed Digital Omnibus. Regulation (EU) 2024/1689, the AI Act, establishes a risk-based framework for AI systems.

AI literacy is promoted through a multi-level approach. The EU institutions focus on public awareness, national authorities support voluntary codes of conduct, and organisations are currently required under the AI Act to ensure adequate AI competence among staff and third parties involved in system use.

A proposed amendment to Article 4, submitted in November 2025 under the Digital Omnibus, would replace mandatory internal competence requirements with encouragement-based measures. The change seeks to reduce administrative burden without removing AI Act risk management duties.

Even if adopted, the amendment would not eliminate the practical need for AI training. Competence in AI systems remains essential for governance, transparency, monitoring, and incident handling, particularly for high-risk use cases regulated by the AI Act.

Companies are therefore expected to continue investing in tailored AI training across management, technical, legal, and operational roles. Embedding awareness and competence into risk management frameworks remains critical to compliance and risk mitigation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!