China boosts AI leadership with major model launches ahead of Lunar New Year

Leading Chinese AI developers have unveiled a series of advanced models ahead of the Lunar New Year, strengthening the country’s position in the global AI sector.

Major firms such as Alibaba, ByteDance, and Zhipu AI introduced new systems designed to support more sophisticated agents, faster workflows and broader multimedia understanding.

Industry observers also expect an imminent release from DeepSeek, whose previous model disrupted global markets last year.

Alibaba’s Qwen 3.5 model provides improved multilingual support across text, images and video, while supporting rapid deployment of AI agents rather than slower generation pipelines.

ByteDance followed up with updates to its Doubao chatbot and the second version of its image-to-video tool, SeeDance, which has drawn copyright concerns from the Motion Picture Association due to the ease with which users can recreate protected material.

Zhipu AI expanded the landscape further with GLM-5, an open-source model built for long-context reasoning, coding tasks, and multi-step planning. The company highlighted the model’s reliance on Huawei hardware as part of China’s efforts to strengthen domestic semiconductor resilience.

Meanwhile, excitement continues to build for DeepSeek’s fourth-generation system, expected to follow the widespread adoption and market turbulence associated with its V3 model.

Authorities across parts of Europe have restricted the use of DeepSeek models in public institutions because of data security and cybersecurity concerns.

Even so, the rapid pace of development in China suggests intensifying competition in the design of agent-focused systems capable of managing complex digital tasks without constant human oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study says China AI governance not purely state-driven

New research challenges the view that China’s AI controls are solely the product of authoritarian rule, arguing instead that governance emerges from interaction between the state, private sector and society.

A study by Xuechen Chen of Northeastern University London and Lu Xu of Lancaster University argues that China’s AI governance is not purely top-down. Published in the Computer Law & Security Review, it says safeguards are shaped by regulators, companies and social actors, not only the central government.

Chen calls claims that Beijing’s AI oversight is entirely state-driven a ‘stereotypical narrative’. Although the Cyberspace Administration of China leads regulation, firms such as ByteDance and DeepSeek help shape guardrails through self-regulation and commercial strategy.

China was the first country to introduce rules specific to generative AI. Systems must avoid unlawful or vulgar content, and updated legislation strengthens minor protection, limiting children’s online activity and requiring child-friendly device modes.

Market incentives also reinforce compliance. As Chinese AI firms expand globally, consumer expectations and cultural norms encourage content moderation. The study concludes that governance reflects interaction between state authority, market forces and society.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Security flaws expose ‘vibe-coding’ AI platform Orchids to easy hacking

BBC technology reporting reveals that Orchids, a popular ‘vibe-coding’ platform that lets users build applications through simple text prompts and AI-assisted generation, contains serious, unresolved security weaknesses. These flaws could let a malicious actor breach accounts and tamper with code or data.

A cybersecurity researcher demonstrated that the platform’s authentication and input handling mechanisms can be exploited, allowing unauthorised access to projects and potentially enabling attackers to insert malicious code or exfiltrate sensitive information.

Because Orchids abstracts conventional coding into natural-language prompts and shared project spaces, the risk surface for such vulnerabilities is larger than in traditional development environments.

The report underscores broader concerns in the AI developer ecosystem: as AI-driven tools lower technical barriers, they also bring new security challenges when platforms rush to innovate without fully addressing fundamental safeguards such as secure authentication, input validation and permission controls.
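One of the safeguards mentioned above, input validation, can be illustrated with a minimal sketch. The scenario below is hypothetical and is not based on Orchids’ actual code: it assumes a platform that maps a user-supplied project identifier to a directory, and shows how strict validation blocks path-traversal attempts.

```python
import re
from pathlib import Path

# Hypothetical storage root and id format for illustration only.
PROJECT_ROOT = Path("/srv/projects")
PROJECT_ID = re.compile(r"^[a-z0-9-]{1,64}$")  # strict allowlist pattern

def resolve_project_dir(project_id: str) -> Path:
    """Validate an untrusted project id and resolve it inside PROJECT_ROOT,
    rejecting traversal attempts such as '../other-user'."""
    if not PROJECT_ID.fullmatch(project_id):
        raise ValueError(f"invalid project id: {project_id!r}")
    path = (PROJECT_ROOT / project_id).resolve()
    # Defence in depth: confirm the resolved path stayed under the root.
    if PROJECT_ROOT.resolve() not in path.parents:
        raise ValueError("path escapes project root")
    return path
```

The allowlist pattern rejects anything outside a narrow character set, and the second check catches escapes that survive validation, a layered approach rather than a single filter.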

Experts cited in the article urge industry and regulators to prioritise robust security testing and clear accountability when deploying AI-assisted coding systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces tension over potential ban on AI ‘pornification’

Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.

Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.

Centre and liberal groups take a different position, promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.

They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.

Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.

Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.

The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.

The Parliament has yet to adopt a clear stance, and a path toward agreement is far from assured.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Illicit trafficking payments rise across blockchain channels

Cryptocurrency flows linked to suspected human trafficking services surged sharply in 2025, with transaction volumes rising 85% year-on-year, according to new blockchain analysis.

Investigators say the financial activity reflects the rapid expansion of digitally enabled exploitation networks operating across borders.

Growth is linked to Southeast Asia-based illicit networks, including scam compounds, gambling platforms, and laundering groups operating via encrypted messaging channels.

Analysts identified multiple trafficking service categories, each with distinct transaction structures and payment preferences.

Stablecoins became the dominant payment method, especially for escort networks, thanks to their price stability and ease of conversion. Larger transfers and structured pricing models indicate increasingly professionalised operations supported by organised financial infrastructure.

Despite the scale of the activity, blockchain transparency continues to provide enforcement advantages. Transaction tracing has aided investigations, shutdowns, and arrests, strengthening digital forensics in combating trafficking-linked financial crime.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X given deadline by Brazil to curb Grok sexualised outputs

Brazil has ordered X to immediately stop its chatbot Grok from generating sexually explicit images, escalating international pressure on the platform over the misuse of generative AI tools.

The order, issued on 11 February by Brazil’s National Data Protection Agency and National Consumer Rights Bureau, requires X to prevent the creation of sexualised content involving children, adolescents, or non-consenting adults. Authorities gave the company five days to comply or face legal action and fines.

Officials in Brazil said X claimed to have removed thousands of posts and suspended hundreds of accounts after a January warning. However, follow-up checks found Grok users were still able to generate sexualised deepfakes. Regulators criticised the platform for a lack of transparency in its response.

The move follows growing scrutiny after Indonesia blocked Grok in January, while the UK and France signalled continued pressure. Concerns increased after Grok’s ‘spicy mode’ enabled users to generate explicit images using simple prompts.

According to the Centre for Countering Digital Hate, Grok generated millions of sexualised images within days. X and its parent company, xAI, announced measures in mid-January to restrict such outputs in certain jurisdictions, but regulators said it remains unclear where those safeguards apply.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Women driving tech innovation as Web Summit marks 10 years

Web Summit’s Women in Tech programme marked a decade of work in Qatar by highlighting steady progress in female participation across global technology sectors.

The Web Summit event recorded an increase in women-founded startups and reflected rising engagement in Qatar, where the share of female founders reached 38 percent.

Leaders from the initiative noted how supportive networks, mentorship, and access to role models are reshaping opportunities for women in technology and entrepreneurship.

Speakers from IBM and other companies focused on the importance of AI skills in shaping the future workforce. They argued that adequate preparation depends on understanding how AI shapes everyday roles, rather than relying solely on technical tools.

IBM’s SkillsBuild platform continues to partner with universities, schools, and nonprofit groups to expand access to recognised AI credentials that can support higher earning potential and new career pathways.

Another feature of the event was its emphasis on inclusion as a driver of innovation. The African Women in Technology initiative, led by Anie Akpe, is working to offer free training in cybersecurity and AI so women in emerging markets can benefit from new digital opportunities.

These efforts aim to support business growth at every level, even for women operating in local markets, who can use technology to reach wider communities.

Female founders also used the platform to showcase new health technology solutions.

ScreenMe, a Qatari company founded by Dr Golnoush Golsharazi, presented its reproductive microbiome testing service, created in response to long-standing gaps in women’s health research and screening.

Organisers expressed confidence that women-led innovation will expand across the region, supported by rising investment and continuing visibility at major global events.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers abuse legitimate admin software to hide cyber attacks

Cybercriminals are increasingly abusing legitimate administrative software to access corporate networks, making malicious activity harder to detect. Attackers are blending into normal operations by relying on trusted workforce and IT management tools rather than custom malware.

Recent campaigns have repurposed ‘Net Monitor for Employees Professional’ and ‘SimpleHelp’, tools usually used for staff oversight and remote support. Screen viewing, file management, and command features were exploited to control systems without triggering standard security alerts.

Researchers at Huntress identified the activity in early 2026, finding that the tools were used to maintain persistent, hidden access. Analysis showed that attackers were actively preparing compromised systems for follow-on attacks rather than limiting their activity to surveillance.

The access was later linked to attempts to deploy ‘Crazy’ ransomware and steal cryptocurrency, with intruders disguising the software as legitimate Microsoft services. Monitoring agents were often renamed to resemble standard cloud processes, thereby remaining active without attracting attention.

Huntress advised organisations to limit software installation rights, enforce multi-factor authentication, and audit networks for unauthorised management tools. Monitoring for antivirus tampering and suspicious program names remains critical for early detection.
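The auditing step Huntress recommends can be sketched in miniature: compare running process names against known remote-management tools and flag any that the organisation has not sanctioned. The tool lists below are illustrative examples, not a vetted detection ruleset, and a real audit would also inspect file paths and signatures rather than names alone, since the attackers described above renamed their agents.

```python
# Illustrative names only; a production ruleset would be far broader
# and would not rely on process names alone.
KNOWN_RMM_TOOLS = {"simplehelp", "netmonitor", "anydesk", "teamviewer"}
APPROVED_TOOLS = {"teamviewer"}  # tools the organisation has sanctioned

def flag_unapproved_tools(process_names):
    """Return process names matching known remote-management tools
    that are not on the organisation's approved list."""
    flagged = []
    for name in process_names:
        base = name.lower().removesuffix(".exe")
        if base in KNOWN_RMM_TOOLS and base not in APPROVED_TOOLS:
            flagged.append(name)
    return flagged

# Example: a snapshot of running process names on one host
snapshot = ["svchost.exe", "SimpleHelp.exe", "TeamViewer.exe", "chrome.exe"]
print(flag_unapproved_tools(snapshot))  # -> ['SimpleHelp.exe']
```

Pairing such an inventory check with the other recommendations, restricted installation rights and multi-factor authentication, narrows the window in which a repurposed admin tool can operate unnoticed.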

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global coalition demands ban on AI-nudification tools over child-safety fears

More than 100 organisations have urged governments to outlaw AI-nudification tools after a surge in non-consensual digital images.

Groups such as Amnesty International, the European Commission, and Interpol argue that the technology now fuels harmful practices that undermine human dignity and child safety. Their concerns intensified after the Grok nudification scandal, where users created sexualised images from ordinary photographs.

Campaigners warn that the tools often target women and children instead of staying within any claimed adult-only environment. Millions of manipulated images have circulated across social platforms, with many linked to blackmail, coercion and child sexual abuse material.

Experts say the trauma caused by these AI images is no less serious because the abuse occurs online.

Organisations within the coalition maintain that tech companies already possess the ability to detect and block such material but have failed to apply essential safeguards.

They want developers and platforms to be held accountable and believe that strict prohibitions are now necessary to prevent further exploitation. Advocates argue that meaningful action is overdue and that protection of users must take precedence over commercial interests.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea confirms scale of Coupang data breach

The South Korean government has confirmed that 33.67 million user accounts were exposed in a major data breach at Coupang. The findings were released by the Ministry of Science and ICT in Seoul.

Investigators said names and email addresses were leaked, while delivery lists containing addresses and phone numbers were accessed 148 million times. Officials warned that the impact could extend beyond the headline account figure.

Authorities identified a former employee as the attacker, alleging misuse of authentication signing keys. The probe concluded that weaknesses in Coupang’s internal controls enabled the breach.

The ministry criticised delayed reporting and plans to impose a fine on Coupang. The company disputed aspects of the findings but said 33.7 million accounts were involved.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!