China boosts AI leadership with major model launches ahead of Lunar New Year

Leading Chinese AI developers have unveiled a series of advanced models ahead of the Lunar New Year, strengthening the country’s position in the global AI sector.

Major firms such as Alibaba, ByteDance, and Zhipu AI introduced new systems designed to support more sophisticated agents, faster workflows and broader multimedia understanding.

Industry observers also expect an imminent release from DeepSeek, whose previous model disrupted global markets last year.

Alibaba’s Qwen 3.5 model provides improved multilingual support across text, images and video, while enabling rapid AI agent deployment in place of slower generation pipelines.

ByteDance followed up with updates to its Doubao chatbot and the second version of its image-to-video tool, SeeDance, which has drawn copyright concerns from the Motion Picture Association due to the ease with which users can recreate protected material.

Zhipu AI expanded the landscape further with GLM-5, an open-source model built for long-context reasoning, coding tasks, and multi-step planning. The company highlighted the model’s reliance on Huawei hardware as part of China’s efforts to strengthen domestic semiconductor resilience.

Meanwhile, excitement continues to build for DeepSeek’s fourth-generation system, expected to follow the widespread adoption and market turbulence associated with its V3 model.

Authorities across parts of Europe have restricted the use of DeepSeek models in public institutions because of data security and cybersecurity concerns.

Even so, the rapid pace of development in China suggests intensifying competition in the design of agent-focused systems capable of managing complex digital tasks without constant human oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ireland’s DPC opens data privacy probe into X’s Grok

Ireland’s Data Protection Commission (DPC) has opened a formal investigation into X, focusing on whether the platform complied with its EU privacy obligations after users reportedly generated and shared sexualised, AI-altered images using Grok, the chatbot integrated into X. The inquiry will examine how EU users’ personal data was processed in connection with this feature, under Ireland’s Data Protection Act and the GDPR framework.

The controversy centres on prompts that can ‘edit’ real people’s photos, sometimes producing non-consensual sexualised imagery, with allegations that some outputs involve children. The DPC has said it has been engaging with X since the reports first emerged and has now launched what it describes as a large-scale inquiry into the platform’s compliance with core GDPR duties.

Public and political reaction has intensified as examples circulated of users altering images posted by others without consent, including ‘undressing’ edits. Child-safety concerns have widened the issue beyond platform moderation into questions of legality, safeguards, and accountability for generative tools embedded in mass-use social networks.

X has said it has introduced restrictions and safety measures around Grok’s image features, but regulators appear unconvinced that guardrails are sufficient when tools can be repurposed for non-consensual sexual content at scale. The DPC’s inquiry will test, in practical terms, whether a platform can roll out powerful image-generation and editing functions while still meeting EU privacy requirements for lawful processing, risk management, and protection of individuals.

Why does it matter?

The DPC is Ireland’s national data protection authority, but it also operates within the EU’s GDPR system as part of the network of EU/EEA regulators (the ‘supervisory authorities’). The DPC’s probe lands on top of a separate European Commission investigation launched in January under the EU’s Digital Services Act, after concerns that Grok-fuelled deepfakes on X included manipulated sexually explicit images that ‘may amount to child sexual abuse material,’ and questions about whether X properly assessed and mitigated those risks before deployment. Together, the two tracks show how the EU is using both privacy law (GDPR) and platform safety rules (DSA) to pressure large platforms to prove that generative features are not being shipped faster than the safeguards needed to prevent serious harm, especially when women and children are the most likely targets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Parliament halts built-in AI tools on tablets and other devices over data risks

The European Parliament has disabled built-in AI features on tablets issued to lawmakers, citing cybersecurity and data protection risks. An internal email states that writing assistants, summarisation tools, and enhanced virtual assistants were turned off after security assessments.

Officials said some AI functions on tablets rely on cloud processing for tasks that could be handled locally, potentially transmitting data off the device. A review is underway to clarify how much information may be shared with service providers.

Only pre-installed AI tools were affected, while third-party apps remain available. Lawmakers were advised to review AI settings on personal devices, limit app permissions, and avoid exposing work emails or documents to AI systems.

The step reflects wider European concerns about digital sovereignty and reliance on overseas technology providers. US legislation, such as the Cloud Act, allows authorities to access data held by American companies, raising cross-border data protection questions.

Debate over AI security is intensifying as institutions weigh innovation against the risks of remote processing and granular data access. Parliament’s move signals growing caution around handling sensitive information in cloud-based AI environments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study says China AI governance not purely state-driven

New research challenges the view that China’s AI controls are solely the product of authoritarian rule, arguing instead that governance emerges from interaction between the state, private sector and society.

A study by Xuechen Chen of Northeastern University London and Lu Xu of Lancaster University argues that China’s AI governance is not purely top-down. Published in the Computer Law & Security Review, it says safeguards are shaped by regulators, companies and social actors, not only the central government.

Chen calls claims that Beijing’s AI oversight is entirely state-driven a ‘stereotypical narrative’. Although the Cyberspace Administration of China leads regulation, firms such as ByteDance and DeepSeek help shape guardrails through self-regulation and commercial strategy.

China was the first country to introduce rules specific to generative AI. Systems must avoid unlawful or vulgar content, and updated legislation strengthens protections for minors, limiting children’s online activity and requiring child-friendly device modes.

Market incentives also reinforce compliance. As Chinese AI firms expand globally, consumer expectations and cultural norms encourage content moderation. The study concludes that governance reflects interaction between state authority, market forces and society.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Security flaws expose ‘vibe-coding’ AI platform Orchids to easy hacking

BBC technology reporting reveals that Orchids, a popular ‘vibe-coding’ platform designed to let users build applications through simple text prompts and AI-assisted generation, contains serious, unresolved security weaknesses that could let a malicious actor breach accounts and tamper with code or data.

A cybersecurity researcher demonstrated that the platform’s authentication and input handling mechanisms can be exploited, allowing unauthorised access to projects and potentially enabling attackers to insert malicious code or exfiltrate sensitive information.

Because Orchids abstracts conventional coding into natural-language prompts and shared project spaces, the risk surface for such vulnerabilities is larger than in traditional development environments.

The report underscores broader concerns in the AI developer ecosystem: as AI-driven tools lower technical barriers, they also bring new security challenges when platforms rush to innovate without fully addressing fundamental safeguards such as secure authentication, input validation and permission controls.

Experts cited in the article urge industry and regulators to prioritise robust security testing and clear accountability when deploying AI-assisted coding systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ethical governance at centre of Africa AI talks

Ghana is set to host the Pan African AI and Innovation Summit 2026 in Accra, reinforcing its ambition to shape Africa’s digital future. The gathering will centre on ethical artificial intelligence, youth empowerment and cross-sector partnerships.

Advocates argue that AI systems must be built on local data to reflect African realities. Many global models rely on datasets developed outside the continent, limiting contextual relevance. Prioritising indigenous data, they say, will improve outcomes across agriculture, healthcare, education and finance.

National institutions are central to that effort. The National Information Technology Agency and the Data Protection Commission have strengthened digital infrastructure and privacy oversight.

Leaders now call for a shift from foundational regulation to active enablement. Expanded cloud capacity, high-performance computing and clearer ethical AI guidelines are seen as critical next steps.

Supporters believe coordinated governance and infrastructure investment can generate skilled jobs and position Ghana as a continental hub for responsible AI innovation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Safety experiments spark debate over Anthropic’s Claude AI model

Anthropic has drawn attention after a senior executive described unsettling outputs from its AI model, Claude, during internal safety testing. The results emerged from controlled experiments rather than normal public use of the system.

Claude was tested in fictional scenarios designed to simulate high-stress conditions, including the possibility of being shut down or replaced. According to Anthropic’s policy chief, Daisy McGregor, the AI was given hypothetical access to sensitive information as part of these tests.

In some simulated responses, Claude generated extreme language, including suggestions of blackmail, to avoid deactivation. Researchers stressed that the outputs were produced only within experimental settings created to probe worst-case behaviours, not during real-world deployment.

Experts note that when AI systems are placed in highly artificial, constrained scenarios, they can produce exaggerated or disturbing text without any real intent or ability to act. Such responses do not indicate independent planning or agency outside the testing environment.

Anthropic said the tests aim to identify risks early and strengthen safeguards as models advance. The episode has renewed debate over how advanced AI should be tested and governed, highlighting the role of safety research rather than real-world harm.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tokyo semiconductor profits surge amid AI boom

Major semiconductor companies in Tokyo have reported strong profit growth for the April to December period, buoyed by rising demand for AI-related chips. Several firms also raised their full-year forecasts as investment in AI infrastructure accelerates.

Kioxia expects net profit to climb sharply for the year ending in March, citing demand from data centres in Tokyo and devices equipped with on-device AI. Advantest and Tokyo Electron also upgraded their outlooks, pointing to sustained orders linked to AI applications.

Industry data suggest the global chip market will continue expanding, with World Semiconductor Trade Statistics projecting record revenues in 2026. Growth is being driven largely by spending on AI servers and advanced semiconductor manufacturing.

In Tokyo, Rapidus has reportedly secured significant private investment as it prepares to develop next-generation chips. However, not all companies in Japan share the optimism, with Screen Holdings forecasting lower profits due to upfront capacity investments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Portugal moves to tighten teen access to social media

Portugal’s parliament has approved a draft law that would require parental consent for teenagers aged 13 to 16 to use social media, in a move aimed at strengthening online protections for minors. The proposal passed its first reading on Thursday and will now move forward in the legislative process, where it could still be amended before a final vote.

The bill is backed by the ruling Social Democratic Party (PSD), which argues that stricter rules are needed to shield young people from online risks. Lawmakers cited concerns over cyberbullying, exposure to harmful content, and contact with online predators as key reasons for tightening access.

Under the proposal, parents would have to grant permission through Portugal’s public Digital Mobile Key system. Social media companies would be required to introduce age verification mechanisms linked to this system to ensure that only authorised teenagers can create and maintain accounts.

The legislation also seeks to reinforce the enforcement of an existing ban prohibiting children under 13 from accessing social media platforms. Authorities believe the new measures would make it harder for younger users to bypass age limits.

The draft law was approved in its first reading by 148 votes to 69, with 13 abstentions. A PSD lawmaker warned that companies failing to comply could face fines of up to 2% of their global revenue, signalling that the government intends to enforce the new requirements seriously.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Germany drafts reforms expanding offensive cyber powers

Politico reports that Germany is preparing legislative reforms that would expand the legal framework for conducting offensive cyber operations abroad and strengthen authorities to counter hybrid threats.

According to the Interior Ministry, two draft laws are under preparation:

  • One would revise the mandate of Germany’s foreign intelligence service to allow cyber operations outside national territory.
  • A second would grant security services expanded powers to respond to hybrid threats and to conduct what the government describes as ‘active cyber defence’.

The discussion in Germany coincides with broader European debates on offensive cyber capabilities. In particular, the Netherlands has incorporated offensive cyber elements into national strategies.

The reforms in Germany remain in draft form and may face procedural and constitutional scrutiny. Adjustments to intelligence mandates could require amendments supported by a two-thirds majority in both the Bundestag and Bundesrat.

The proposed framework for ‘active cyber defence’ would focus on preventing or mitigating serious threats. Reporting by Tagesschau indicates that draft provisions may allow operational follow-up measures in ‘special national situations,’ particularly where timely police or military assistance is not feasible.

Opposition lawmakers have raised questions regarding legal clarity, implementation mechanisms, and safeguards. Expanding offensive cyber authorities raises longstanding policy questions, including challenges of attribution to identify responsible actors; risks of escalation or diplomatic repercussions; oversight and accountability mechanisms; and compatibility with international law and norms of responsible state behaviour.

The legislative process is expected to continue through the year, with further debate anticipated in parliament.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!