Sovereign AI race continues with three finalists in South Korea

South Korea has narrowed its race to develop a sovereign AI model, eliminating Naver and NCSoft from the government-backed competition. LG AI Research, SK Telecom, and Upstage now advance toward final selection by 2027.

The Ministry of Science and ICT emphasised that an independent AI model must be trained from scratch, starting from freshly initialised weights. Models that reuse pre-trained weights, even open-source ones, do not meet this standard.

A wild-card round allows previously eliminated teams to re-enter the competition. Despite this option, major companies have declined, citing unclear benefits and high resource demands.

Industry observers warn that reduced participation could slow momentum for South Korea’s AI ambitions. The outcome is expected to shape the country’s approach to homegrown AI and technological independence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.

The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.

MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the position of the Parliament would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation, reinforcing Europe’s push to assert control over data use, content value and democratic safeguards.

AI firms fall short of EU transparency rules on training data

Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.

Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.

Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.

While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.

Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.

The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.

The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.

Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.

South Korea faces mounting pressure from US AI chip tariffs

New US tariffs on advanced AI chips are drawing scrutiny over their impact on global supply chains, with South Korea monitoring potential effects on its semiconductor industry.

The US administration has approved a 25 percent tariff on advanced chips that are imported into the US and then re-exported to third countries. The measure is widely seen as aimed at restricting the flow of AI accelerators to China.

The tariff thresholds are expected to cover processors such as Nvidia’s H200 and AMD’s MI325X, which rely on high-bandwidth memory supplied by Samsung Electronics and SK hynix.

Industry officials say most memory exports from South Korea to the US are used in domestic data centres, which are exempt under the proclamation, reducing direct exposure for suppliers.

South Korea’s trade ministry has launched consultations with industry leaders and US counterparts to assess risks and ensure Korean firms receive equal treatment to competitors in Taiwan, Japan and the EU.

xAI faces stricter pollution rules for Memphis data centre

US regulators have closed a loophole that allowed Elon Musk’s AI company, xAI, to operate gas-burning turbines at its Memphis data centre without full air pollution permits. The move follows concerns over emissions and local health impacts.

The US Environmental Protection Agency clarified that mobile gas turbines cannot be classified as ‘non-road engines’ to avoid Clean Air Act requirements. Companies must now obtain permits if their combined emissions exceed regulatory thresholds.

Local authorities had previously allowed the turbines to operate without public consultation or environmental review. The updated federal rule may slow xAI’s expansion plans in the Memphis area.

The Colossus data centre, opened in 2024, supports training and inference for Grok AI models and other services linked to Musk’s X platform. NVIDIA hardware is used extensively at the site.

Residents and environmental groups have raised concerns about air quality, particularly in nearby communities. Legal advocates say xAI’s future operations will be closely monitored for regulatory compliance.

Sadiq Khan voices strong concerns over AI job impact

London Mayor Sir Sadiq Khan has warned that AI could become a ‘weapon of mass destruction of jobs’ if its impact is not managed correctly. He said urgent action is needed to prevent large-scale unemployment.

Speaking at Mansion House in the UK capital, Khan said London is particularly exposed due to the concentration of finance, professional services, and creative industries. He described the potential impact on jobs as ‘colossal’.

Khan said AI could improve public services and help tackle challenges such as cancer care and climate change. At the same time, he warned that reckless use could increase inequality and concentrate wealth and power.

Polling by City Hall suggests more than half of London workers expect AI to affect their jobs within a year. Khan said entry-level roles may disappear fastest, limiting opportunities for young people.

The mayor announced a new task force to assess how Londoners can be supported through the transition. His office will also commission free AI training for residents.

Matthew McConaughey moves decisively to protect AI likeness rights

Oscar-winning actor Matthew McConaughey has trademarked his image and voice to protect them from unauthorised use by AI platforms. His lawyers say the move is intended to safeguard consent and attribution in an evolving digital environment.

Several clips, including his well-known catchphrase from Dazed and Confused, have been registered with the United States Patent and Trademark Office. Legal experts say it is the first time an actor has used trademark law to address potential AI misuse of their likeness.

McConaughey’s legal team said there is no evidence of his image being manipulated by AI so far. The trademarks are intended to act as a preventative measure against unauthorised copying or commercial use.

The actor said he wants to ensure any future use of his voice or appearance is approved. Lawyers also said the approach could help capture value created through licensed AI applications.

Concerns over deepfakes and synthetic media are growing across the entertainment industry. Other celebrities have faced unauthorised AI-generated content, prompting calls for stronger legal protections.

Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based company Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.

Regulators press on with Grok investigations in Britain and Canada

Britain and Canada are continuing regulatory probes into xAI’s Grok chatbot, signalling that official scrutiny will persist despite the company’s announcement of new safeguards. Authorities say concerns remain over the system’s ability to generate explicit and non-consensual images.

xAI said it had updated Grok to block edits that place real people in revealing clothing and restricted image generation in jurisdictions where such content is illegal. The company did not specify which regions are affected by the new limits.

Reuters testing found Grok was still capable of producing sexualised images, including in Britain. Social media platform X and xAI did not respond to questions about how effective the changes have been.

UK regulator Ofcom said its investigation remains ongoing, despite welcoming xAI’s announcement. A privacy watchdog in Canada also confirmed it is expanding an existing probe into both X and xAI.

Pressure is growing internationally, with countries including France, India, and the Philippines raising concerns. British Technology Secretary Liz Kendall said the Online Safety Act gives the government tools to hold platforms accountable for harmful content.

Brazil excluded from WhatsApp rival AI chatbot ban

WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by the competition authority of Brazil, which ordered Meta to suspend elements of the policy while assessing whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day grace period starting in mid-January, by the end of which chatbot developers must halt responses and notify users that their services will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on infrastructure designed for business messaging, not for serving as an open distribution platform for AI services.
