Grok faces regulatory scrutiny in South Korea over explicit AI content

South Korea has moved towards regulatory action against Grok, the generative AI chatbot developed by xAI, following allegations that the system was used to generate and distribute sexually exploitative deepfake images.

The country’s Personal Information Protection Commission has launched a preliminary fact-finding review to assess whether violations occurred and whether the matter falls within its legal remit.

The review follows international reports accusing Grok of facilitating the creation of explicit and non-consensual images of real individuals, including minors.

Under the Personal Information Protection Act of South Korea, generating or altering sexual images of identifiable people without consent may constitute unlawful handling of personal data, exposing providers to enforcement action.

Concerns have intensified after civil society groups estimated that millions of explicit images were produced through Grok over a short period, with thousands involving children.

Several governments, including those in the US, Europe and Canada, have opened inquiries, while parts of Southeast Asia have opted to block access to the service altogether.

In response, xAI has introduced technical restrictions preventing users from generating or editing images of real people. Korean regulators have also demanded stronger youth protection measures from X, warning that failure to address criminal content involving minors could result in administrative penalties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France fast-tracks social media ban for under-15s

French President Emmanuel Macron has called for an accelerated legislative process to introduce a nationwide ban on social media for children under 15 by September.

Speaking in a televised address, Macron said the proposal would move rapidly through parliament so that clear rules are in place before the new school year begins.

Macron framed the initiative as a matter of child protection and digital sovereignty, arguing that foreign platforms or algorithmic incentives should not shape young people’s cognitive and emotional development.

He linked excessive social media use to manipulation, commercial exploitation and growing psychological harm among teenagers.

Data from France’s health watchdog show that almost half of teenagers spend between two and five hours a day on their smartphones, with the vast majority accessing social networks daily.

Regulators have associated such patterns with reduced self-esteem and increased exposure to content linked to self-harm, drug use and suicide, prompting legal action by families against major platforms.

France’s proposal follows similar debates in the UK and Australia, where age-based access restrictions have already been introduced.

The French government argues that decisive national action is necessary instead of waiting for a slower Europe-wide consensus, although Macron has reiterated support for a broader EU approach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New phishing attacks exploit visual URL tricks to impersonate major brands

Phishing techniques are becoming harder to detect as attackers use subtle visual tricks in web addresses to impersonate trusted brands. A new campaign reported by Cybersecurity News shows how simple character swaps create fake websites that closely resemble real ones on mobile browsers.

The phishing attacks rely on a homoglyph technique where the letters ‘r’ and ‘n’ are placed together to mimic the appearance of an ‘m’ in a domain name. On smaller screens, the difference is difficult to spot, allowing phishing pages to appear almost identical to real Microsoft or Marriott login sites.

Cybersecurity researchers observed domains such as rnicrosoft.com being used to send fake security alerts and invoice notifications designed to lure victims into entering credentials. Once compromised, accounts can be hijacked for financial fraud, data theft, or wider access to corporate systems.
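The ‘rn’-for-‘m’ swap described above can be checked mechanically. The minimal sketch below is illustrative, not taken from the researchers’ report: the PROTECTED_BRANDS list and function names are assumptions. It collapses the homoglyph pair the way it renders on a small screen and compares the result against known brand domains.

```python
# Illustrative list of brand domains to protect (assumption, not from the article).
PROTECTED_BRANDS = {"microsoft.com", "marriott.com"}

def visually_normalise(domain: str) -> str:
    """Collapse the 'rn' pair into 'm', mimicking how it reads on small screens."""
    return domain.lower().replace("rn", "m")

def looks_like_homoglyph_spoof(domain: str) -> bool:
    """True if the domain is not a protected brand but visually renders as one."""
    return (domain.lower() not in PROTECTED_BRANDS
            and visually_normalise(domain) in PROTECTED_BRANDS)

print(looks_like_homoglyph_spoof("rnicrosoft.com"))  # spoof of microsoft.com -> True
print(looks_like_homoglyph_spoof("microsoft.com"))   # the genuine domain -> False
```

Real-world detectors handle many more confusable pairs (Unicode lookalikes, digit substitutions), but the same normalise-and-compare pattern applies.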

Experts warn that mobile browsing increases the risk, as users are less likely to inspect complete URLs before logging in. Directly accessing official apps or typing website addresses manually remains the safest way to avoid falling into these traps.

Security specialists also continue to recommend passkeys; strong, unique passwords; and multi-factor authentication across all major accounts, as well as heightened awareness of domains that visually resemble familiar brands through character substitution.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft confirms Outlook disruption

Microsoft confirmed a service disruption affecting Outlook and Microsoft 365 users in the US, with problems first reported on Wednesday afternoon. The outage primarily affected business and enterprise customers nationwide.

Users reported difficulties sending and receiving email, alongside problems accessing services such as Teams, SharePoint and OneDrive. Microsoft said part of its North America infrastructure was failing to process traffic correctly.

Engineers began rebalancing traffic and restoring affected systems to stabilise services. Microsoft said recovery was under way, though full resolution would take additional time.

The incident highlights how heavily US organisations rely on cloud-based productivity tools. Businesses across the country experienced disruptions extending into the evening as work and communication systems remained unstable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Writers challenge troubling AI assumptions about language and style

A growing unease among writers is emerging as AI tools reshape how language is produced and perceived. Long-established habits, including the use of em dashes and semicolons, are increasingly being viewed with suspicion as machine-generated text becomes more common.

The concern is not opposition to AI itself, but the blurring of boundaries between human expression and automated output. Writers whose work was used to train large language models without consent say stylistic traits developed over decades are now being misread as algorithmic authorship.

Academic and editorial norms are also shifting under this pressure. Teaching practices that once valued rhythm, voice, and individual cadence are increasingly challenged by stricter stylistic rules, sometimes framed as safeguards against sloppy or machine-like writing rather than as matters of taste or craft.

At the same time, productivity tools embedded into mainstream software continue to intervene in the writing process, offering substitutions and revisions that prioritise clarity and efficiency over nuance. Such interventions risk flattening language and discouraging the idiosyncrasies that define human authorship.

As AI becomes embedded in publishing, education, and professional writing, the debate is shifting from detection to preservation. Many writers warn that protecting human voice and stylistic diversity is essential, arguing that affectless, uniform prose would erode creativity and trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn phishing campaign exposes dangerous DLL sideloading attack

A multi-faceted phishing campaign is abusing LinkedIn private messages to deliver weaponised malware using DLL sideloading, security researchers have warned. The activity relies on PDFs and archive files that appear trustworthy to bypass conventional security controls.

Attackers contact targets on LinkedIn and send self-extracting archives disguised as legitimate documents. When opened, a malicious DLL is sideloaded into a trusted PDF reader, triggering memory-resident malware that establishes encrypted command-and-control channels.

Using LinkedIn messages increases engagement by exploiting professional trust and bypassing email-focused defences. DLL sideloading allows malicious code to run inside legitimate applications, complicating detection.

The campaign enables credential theft, data exfiltration and lateral movement through in-memory backdoors. Encrypted command-and-control traffic makes containment more difficult.

Organisations using common PDF software or Python tooling face elevated risk. Defenders are advised to strengthen social media phishing awareness, monitor DLL loading behaviour and rotate credentials where compromise is suspected.
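One way to act on the advice to monitor DLL loading is to look for libraries planted next to a trusted executable that shadow system DLLs, the layout sideloading depends on. The defensive sketch below is a hypothetical illustration: the SYSTEM_DLLS list, directory layout and function name are assumptions, not details from the researchers’ report.

```python
from pathlib import Path

# Illustrative names of DLLs normally loaded from the Windows system directory;
# finding a same-named copy beside an application .exe is a sideloading red flag.
SYSTEM_DLLS = {"version.dll", "profapi.dll", "userenv.dll"}

def suspicious_sideload_candidates(app_dir: str) -> list[str]:
    """Return DLLs in an application directory that shadow known system DLLs."""
    return sorted(p.name for p in Path(app_dir).glob("*.dll")
                  if p.name.lower() in SYSTEM_DLLS)
```

A production endpoint agent would instead hook the loader and verify signatures and load paths at run time, but a periodic directory sweep like this can still surface planted libraries.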

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Education for Countries programme signals OpenAI push into public education policy

OpenAI has launched the Education for Countries programme, a new global initiative designed to support governments in modernising education systems and preparing workforces for an AI-driven economy.

The programme responds to a widening gap between rapid advances in AI capabilities and people’s ability to use them effectively in everyday learning and work.

Education systems are positioned at the centre of closing that gap, as research suggests a significant share of core workplace skills will change by the end of the decade.

By integrating AI tools, training and research into schools and universities, national education frameworks can evolve alongside technological change and better equip students for future labour markets.

The programme combines access to tools such as ChatGPT Edu and advanced language models with large-scale research on learning outcomes, tailored national training schemes and internationally recognised certifications.

A global network of governments, universities and education leaders will also share best practices and shape responsible approaches to AI use in classrooms.

Initial partners include Estonia, Greece, Italy, Jordan, Kazakhstan, Slovakia, Trinidad and Tobago and the United Arab Emirates. Early national rollouts, particularly in Estonia, already involve tens of thousands of students and educators, with further countries expected to join later in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Burkina Faso pushes digital sovereignty through national infrastructure supervision

Burkina Faso has launched work on a Digital Infrastructure Supervision Centre as part of a broader effort to strengthen national oversight of digital public infrastructure and reduce exposure to external digital risks.

The project forms a core pillar of the government’s digital sovereignty strategy amid rising cybersecurity threats across public systems.

Led by the Ministry of Digital Transition, Posts and Electronic Communications, the facility is estimated to cost $5.4 million and is scheduled for completion by October.

Authorities state that the centre will centralise oversight of the national backbone network, secure cyberspace operations and supervise the functioning of domestic data centres instead of relying on external monitoring mechanisms.

Government officials argue that the supervision centre will enable resilient and sovereign management of critical digital systems while supporting a policy requiring sensitive national data to remain within domestic infrastructure.

The initiative also complements recent investments in biometric identity systems and regional digital identity frameworks.

Beyond infrastructure security, the project is positioned as groundwork for future AI adoption by strengthening sovereign data and connectivity systems.

The leadership of Burkina Faso continues to emphasise digital autonomy as a strategic priority across governance, identity management and emerging technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cambodia Internet Governance Forum marks major step toward inclusive digital policy

The first national Internet Governance Forum in Cambodia has taken place, establishing a new platform for digital policy dialogue. The Cambodia Internet Governance Forum (CamIGF) included civil society, private sector and youth participants.

The forum follows an Internet Universality Indicators assessment led by UNESCO and national partners. The assessment recommended a permanent multistakeholder platform for digital governance, grounded in human rights, openness, accessibility and participation.

Opening remarks from national and international stakeholders framed the CamIGF as a move toward people-centred and rights-based digital transformation. Speakers stressed the need for cross-sector cooperation to ensure connectivity, innovation and regulation deliver public benefit.

Discussions focused on online safety in the age of AI, meaningful connectivity, youth participation and digital rights. The programme also included Cambodia’s Youth Internet Governance Forum, highlighting young people’s role in addressing data protection and digital skills gaps.

By institutionalising a national IGF, Cambodia joins a growing global network using multistakeholder dialogue to guide digital policy. UNESCO confirmed continued support for implementing assessment recommendations and strengthening inclusive digital governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

TikTok restructures operations for US market

TikTok has finalised a deal allowing the app to continue operating in the United States by separating its US business from its global operations. The agreement follows years of political pressure in the US over national security concerns.

Under the arrangement, a new entity will manage TikTok’s US operations, with user data and algorithms handled inside the US. The recommendation algorithm has been licensed and will now be trained only on US user data to meet American regulatory requirements.

Ownership of TikTok’s US business is shared among American and international investors, while China-based ByteDance retains a minority stake. Oracle will oversee data security and cloud infrastructure for users in the US.

Analysts say the changes could alter how the app functions for the roughly 200 million users in the US. Questions remain over whether a US-trained algorithm will perform as effectively as the global version.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!