AI sovereignty test in South Korea reaches a critical phase

South Korea’s flagship AI foundation model project has entered a decisive phase after accusations that leading participants relied on foreign open-source components instead of building systems entirely independently.

The controversy has reignited debate over how ‘from scratch’ development should be defined within government-backed AI initiatives aimed at strengthening national sovereignty.

Scrutiny has focused on Naver Cloud after developers found that its vision encoder was nearly identical to models released by Alibaba, alongside disclosures that its audio components drew on OpenAI technology.

The dispute now sits with the Ministry of Science and ICT, which must determine whether independence applies only to a model’s core or extends to all major components.

The outcome is expected to shape South Korea’s AI strategy, balancing deeper self-reliance against the realities of global open-source ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to strengthen Digital Markets Act oversight

Rivals of major technology firms have criticised the European Commission for weak enforcement of the Digital Markets Act, arguing that slow procedures and limited transparency undermine the regulation’s effectiveness.

Feedback gathered during a Commission consultation highlights concerns about delaying tactics, interface designs that restrict user choice, and circumvention strategies used by designated gatekeepers.

The Digital Markets Act became fully applicable in March 2024, prompting several non-compliance investigations against Apple, Meta and Google. Although Apple and Meta have already faced fines, follow-up proceedings remain ongoing, while Google has yet to receive sanctions.

Smaller technology firms argue that enforcement lacks urgency, particularly in areas such as self-preferencing, data sharing, interoperability and digital advertising markets.

Concerns also extend to AI and cloud services, where respondents say the current framework fails to reflect market realities.

Generative AI tools, such as large language models, raise questions about whether existing platform categories remain adequate or whether new classifications are necessary. Cloud services face similar scrutiny, as major providers often fall below formal thresholds despite acting as critical gateways.

The Commission plans to submit a review report to the European Parliament and the Council by early May, drawing on findings from the consultation.

Proposed changes include binding timelines and interim measures aimed at strengthening enforcement and restoring confidence in the bloc’s flagship competition rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telegram bonds frozen amid ongoing international sanctions framework

Around $500 million in bonds issued by Telegram remain frozen within Russia’s financial settlement system following the application of international sanctions.

The situation reflects how global regulatory measures can continue to affect corporate assets even when companies operate across multiple jurisdictions.

According to reports, the frozen bonds were issued in 2021 and are held at Russia’s National Settlement Depository.

Telegram said its more recent $1.7 billion bond issuance in 2025 involved international investors, with no participation from Russian capital, and was purchased mainly by institutional funds based outside Russia.

Telegram stated that bond repayments follow established international procedures through intermediaries, meaning payment obligations are fulfilled regardless of whether individual bondholders face restrictions.

Financial results for 2025 also showed losses, linked in part to a decline in cryptocurrency valuations, which reflected broader market conditions rather than company-specific factors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, with three in five US adults reporting use in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice has caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU pushes for open-source commercialisation to reduce tech dependence

The European Commission is preparing a strategy to commercialise European open-source software in an effort to strengthen digital sovereignty and reduce dependence on foreign technology providers.

The plan follows a consultation highlighting that EU funding has delivered innovation, although commercial scale has often emerged outside Europe instead of within it.

Open-source software plays a strategic role by decentralising development and limiting reliance on dominant technology firms.

Commission officials argue that research funding alone cannot deliver competitive alternatives, particularly when public and private contracts continue to favour proprietary systems operated by non-European companies.

The upcoming strategy, due alongside the Cloud and AI Development Act in early 2026, will prioritise community upscaling, industrial deployment and market integration.

Governance reforms and stronger supply chain security are expected to address vulnerabilities that can affect widely used open-source components.

Financial sustainability will also feature prominently, with public sector partnerships encouraged to support long-term viability.

Brussels hopes wider public adoption of open-source tools will replace expensive or data-extractive proprietary software, reinforcing Europe’s technological autonomy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New UK cyber strategy focuses on trust in online public services

The UK government has announced new measures to strengthen the security and resilience of online public services as more interactions with the state move online. Ministers say public confidence is essential as citizens increasingly rely on digital systems for everyday services.

Backed by more than £210 million, the UK Government Cyber Action Plan outlines how cyber defences and digital resilience will be improved across the public sector. A new Government Cyber Unit will coordinate risk identification, incident response, and action on complex threats spanning multiple departments.

The plan underpins wider efforts to digitise public services, including benefits applications, tax payments, and healthcare access. Officials argue that secure systems can reduce bureaucracy and improve efficiency, but only if users trust that their data is protected.

The announcement coincides with parliamentary debate on the Cyber Security and Resilience Bill, which sets clearer expectations for companies supplying services to the government. The legislation is intended to strengthen cyber resilience across critical supply chains.

Ministers also highlighted new steps to address software supply chain risks, including a Software Security Ambassador Scheme promoting basic security practices. The government says stronger cyber resilience is essential to protect public services and maintain public trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Morgan Stanley files to launch Bitcoin and Solana ETFs as Wall Street embraces crypto

In the US, Morgan Stanley has moved to launch exchange-traded funds linked to Bitcoin and Solana, signalling that major banks are no longer prepared to watch the crypto market from the sidelines.

Filings submitted to the Securities and Exchange Commission show the bank intends to offer funds tied to the prices of both crypto assets, making it the first of the ten biggest US banks by assets to pursue crypto ETFs directly.

Interest from Wall Street has been strengthened by regulatory changes introduced under the Trump administration, which created clearer rules for stablecoins and crypto-related investment products.

BlackRock’s Bitcoin ETF has already become a major source of revenue, encouraging banks to seek a more active role instead of limiting themselves to custody services.

The trend is expected to have implications for European investors. US-listed crypto ETFs cannot normally be sold to retail investors in the EU because they do not comply with UCITS requirements.

However, Morgan Stanley has been developing an EU-compliant ETF platform and is working with partners to align with both UCITS and the EU’s Markets in Crypto-Assets framework.

The shift suggests crypto has become too commercially significant for Wall Street institutions to ignore, with banks increasingly treating digital assets as part of mainstream financial services rather than a peripheral experiment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chatbots under scrutiny in China over AI ‘boyfriend’ and ‘girlfriend’ services

China’s cyberspace regulator has proposed new limits on AI ‘boyfriend’ and ‘girlfriend’ chatbots, tightening oversight of emotionally interactive artificial intelligence services.

Draft rules released on 27 December would require platforms to intervene when users express suicidal or self-harm tendencies, while strengthening protections for minors and restricting harmful content.

The regulator defines the services as AI systems that simulate human personality traits and emotional interaction. The proposals are open for public consultation until 25 January.

The draft bans chatbots from encouraging suicide, engaging in emotional manipulation, or producing obscene, violent, or gambling-related content. Minors would need guardian consent to access AI companionship.

Platforms would also be required to disclose clearly that users are interacting with AI rather than humans. Legal experts in China warn that enforcement may be challenging, particularly in identifying suicidal intent through language cues alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California launches DROP tool to erase data broker records

Residents in California now have a simpler way to force data brokers to delete their personal information.

The state has launched the Delete Requests and Opt-Out Platform, known as DROP, allowing residents to submit one verified deletion request that applies to every registered data broker instead of contacting each company individually.

The system follows the Delete Act, passed in 2023, and is intended to create a single control point for consumer data removal.

Once a resident submits a request, data brokers must begin processing it from August 2026 and will have 90 days to act. If data is not deleted, residents may need to provide extra identifying details so brokers can match their records.

First-party data collected directly by companies can still be retained, while data from public records, such as voter rolls, remains exempt. Highly sensitive data may fall under separate legal protections such as HIPAA.

The California Privacy Protection Agency argues that broader data deletion could reduce identity theft, AI-driven impersonation, fraud risk and unwanted marketing contact.

Penalties for non-compliance include daily fines for brokers who fail to register or ignore deletion orders. The state hopes the tool will make data rights meaningful instead of purely theoretical.

The launch comes as regulators worldwide examine how personal data is used, traded and exploited.

California is positioning itself as a leader in consumer privacy enforcement, while questions continue about how effectively DROP will operate when the deadline arrives in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk says users are liable for illegal Grok content

Scrutiny has intensified around X after its Grok chatbot was found generating non-consensual explicit images when prompted by users.

Grok had been positioned as a creative AI assistant, yet regulators reacted swiftly once altered photos were linked to content involving minors. Governments and rights groups renewed pressure on platforms to prevent abusive use of generative AI.

India’s Ministry of Electronics and IT issued a notice to X demanding an Action Taken Report within 72 hours, citing failure to restrict unlawful content.

Authorities in France referred similar cases to prosecutors and urged enforcement under the EU’s Digital Services Act, signalling growing international resolve to control AI misuse.

Elon Musk responded by stating that users, rather than Grok, would be legally responsible for illegal material generated through prompts. The company said offenders would face permanent bans and that it would cooperate with law enforcement.

Critics argue that transferring liability to users does not remove the platform’s duty to embed stronger safeguards.

Independent reports suggest Grok has previously been used to create deepfakes, fuelling a wider debate about accountability in the AI sector. The outcome could shape expectations worldwide for how platforms design and police powerful AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!