From Milan-Cortina to factory floors, AI powers Zhejiang manufacturing

As Chinese skater Sun Long stood on the Milan-Cortina Winter Olympics podium, the vivid red of his uniform reflected more than national pride. It also highlighted AI’s expanding role in China’s textile manufacturing.

In Shaoxing, AI-powered image systems calibrate fabric colours in real time. Factory managers say digital printing has lifted pass rates from about 50% to above 90%, easing longstanding production bottlenecks.

Tyre manufacturing firm Zhongce Rubber Group uses AI to generate multiple 3D designs in minutes. Engineers report shorter development cycles and reduced manual input across research and testing.

Electric vehicle maker Zeekr uses AI visual inspection in its 5G-enabled factory. Officials say tyre verification now takes seconds, helping eliminate assembly errors.

Provincial authorities report that Zhejiang’s large industrial firms are fully digitalised. The province plans to integrate AI further by 2027, expanding smart factories and industrial intelligence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study says China AI governance not purely state-driven

New research challenges the view that China’s AI controls are solely the product of authoritarian rule, arguing instead that governance emerges from interaction between the state, private sector and society.

A study by Xuechen Chen of Northeastern University London and Lu Xu of Lancaster University argues that China’s AI governance is not purely top-down. Published in the Computer Law & Security Review, it says safeguards are shaped by regulators, companies and social actors, not only the central government.

Chen calls claims that Beijing’s AI oversight is entirely state-driven a ‘stereotypical narrative’. Although the Cyberspace Administration of China leads regulation, firms such as ByteDance and DeepSeek help shape guardrails through self-regulation and commercial strategy.

China was the first country to introduce rules specific to generative AI. Systems must avoid unlawful or vulgar content, and updated legislation strengthens protections for minors, limiting children’s online activity and requiring child-friendly device modes.

Market incentives also reinforce compliance. As Chinese AI firms expand globally, consumer expectations and cultural norms encourage content moderation. The study concludes that governance reflects interaction between state authority, market forces and society.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood groups challenge ByteDance over Seedance 2.0 copyright concerns

ByteDance is facing scrutiny from Hollywood organisations over its AI video generator Seedance 2.0. Industry groups allege the system uses actors’ likenesses and copyrighted material without permission.

The Motion Picture Association said the tool reflects large-scale unauthorised use of protected works. Chairman Charles Rivkin called on ByteDance to halt what he described as infringing activities that undermine creators’ rights and jobs.

SAG-AFTRA also criticised the platform, citing concerns over the use of members’ voices and images. Screenwriter Rhett Reese warned that rapid AI development could reshape opportunities for creative professionals.

ByteDance acknowledged the concerns and said it would strengthen safeguards to prevent misuse of intellectual property. The company reiterated its commitment to respecting copyright while addressing complaints.

The dispute underscores wider tensions between technological innovation and rights protection as generative AI tools expand. Legal experts say the outcome could influence how AI video systems operate within existing copyright frameworks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Qwen3.5 debuts with hybrid architecture and expanded multimodal capabilities

Alibaba has released Qwen3.5-397B-A17B, the first open-weight model in its Qwen3.5 series. Designed as a native vision-language system, it contains 397 billion parameters, though only 17 billion are activated per forward pass to improve efficiency.

The model uses a hybrid architecture that combines sparse mixture-of-experts with linear attention via Gated Delta Networks. According to the company, this design improves inference speed while maintaining strong results across reasoning, coding, and agent benchmarks.
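
The parameter figures above follow the standard sparse mixture-of-experts pattern: a router scores a pool of expert networks for each token and only the top-k experts run, so most of the model’s weights stay idle on any given forward pass. The toy sketch below (plain NumPy, arbitrary sizes, hypothetical names, not Alibaba’s implementation) illustrates that routing idea only; the gated linear-attention side of the hybrid is omitted.

```python
# Minimal sparse mixture-of-experts routing sketch (illustrative only, not Qwen3.5 code):
# the router picks the top-k experts per token, so only a small fraction of the
# total parameters is activated on each forward pass.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, N_EXPERTS, TOP_K = 64, 8, 2  # toy sizes chosen arbitrarily

# Each expert is a simple weight matrix; the router scores experts per token.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """x: (tokens, d_model) -> (tokens, d_model), running only TOP_K experts per token."""
    logits = x @ router_w                            # (tokens, n_experts) routing scores
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]    # indices of the k best experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = top[t]
        weights = np.exp(logits[t, sel] - logits[t, sel].max())
        weights /= weights.sum()                     # softmax over the selected experts only
        for w, e in zip(weights, sel):
            out[t] += w * (x[t] @ experts[e])        # only k of n experts ever execute
    return out

tokens = rng.standard_normal((4, D_MODEL))
print(moe_forward(tokens).shape)                     # (4, 64)
```

In a real model each expert is a full feed-forward block and routing is batched rather than looped, but the activation ratio works the same way: the number of experts selected per token, not the total expert count, determines the active parameter count.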

Multilingual coverage expands from 119 to 201 languages and dialects, supported by a 250,000-token vocabulary and larger visual-text pretraining datasets. Alibaba says the model achieves performance comparable to significantly larger predecessors.

A hosted version, Qwen3.5-Plus, is available through Alibaba Cloud Model Studio, with a 1-million-token context window and built-in adaptive tool use. Reinforcement learning environments were scaled to prioritise generalisation across tasks rather than narrow optimisation.

Infrastructure upgrades include an FP8 training pipeline and an asynchronous reinforcement learning framework to improve efficiency and stability. Alibaba positions Qwen3.5 as a base for multimodal agents that support reasoning, search, and coding.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Prominent United Nations leaders to attend AI Impact Summit 2026

Senior United Nations leaders, including António Guterres, will take part in the AI Impact Summit 2026, set to be held in New Delhi from 16 to 20 February. The event will be the first global AI summit of this scale to be convened in the Global South.

The Summit is organised by the Ministry of Electronics and Information Technology and will bring together governments, international organisations, industry, academia, and civil society. Talks will focus on responsible AI development aligned with the Sustainable Development Goals.

More than 30 United Nations-led side events will accompany the Summit, spanning food security, health, gender equality, digital infrastructure, disaster risk reduction, and children’s safety. Guterres said shared understandings are needed to build guardrails and unlock the potential of AI for the common good.

Other participants include Volker Türk, Amandeep Singh Gill, Kristalina Georgieva, and leaders from the International Labour Organization, International Telecommunication Union, and other UN bodies. Senior representatives from UNDP, UNESCO, UNICEF, UN Women, FAO, and WIPO are also expected to attend.

The Summit follows the United Nations General Assembly’s appointment of 40 members, including IIT Madras expert Balaraman Ravindran, to a new international scientific panel on AI. The body will publish annual evidence-based assessments to support global AI governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Quebec examines AI debt collection practices

Quebec’s financial regulator has opened a review into how AI tools are being used to collect consumer debt across the province. The Autorité des marchés financiers is examining whether the automated systems comply with governance, privacy and fairness standards.

Draft guidelines released in 2025 require institutions to maintain registries of AI systems, conduct bias testing and ensure human oversight. Public consultations closed in November, with regulators stressing that automation must remain explainable and accountable.

Many debt collection platforms now rely on predictive analytics to tailor the timing, tone and frequency of messages sent to borrowers. Regulators are assessing whether such personalisation risks undue pressure or opaque decision making.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Security flaws expose ‘vibe-coding’ AI platform Orchids to easy hacking

BBC technology reporting reveals serious, unresolved security weaknesses in Orchids, a popular ‘vibe-coding’ platform that lets users build applications through simple text prompts and AI-assisted generation. The flaws could allow a malicious actor to breach accounts and tamper with code or data.

A cybersecurity researcher demonstrated that the platform’s authentication and input handling mechanisms can be exploited, allowing unauthorised access to projects and potentially enabling attackers to insert malicious code or exfiltrate sensitive information.

Because Orchids abstracts conventional coding into natural-language prompts and shared project spaces, the risk surface for such vulnerabilities is larger than in traditional development environments.

The report underscores broader concerns in the AI developer ecosystem: as AI-driven tools lower technical barriers, they also bring new security challenges when platforms rush to innovate without fully addressing fundamental safeguards such as secure authentication, input validation and permission controls.
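
As a generic illustration of the safeguards named above (a minimal sketch with hypothetical names, not a description of how Orchids or any specific platform works), the snippet below checks authentication, permissions and input validity before an AI-generated edit to a shared project is accepted.

```python
# Illustrative sketch of basic server-side safeguards (hypothetical names, not Orchids' code):
# authenticate the caller, check permissions, and validate input before applying a change.
import hmac
import re

PROJECT_EDITORS = {"proj-1": {"alice"}}            # hypothetical permission table
SESSION_TOKENS = {"alice": "secret-token"}         # hypothetical session store
SAFE_NAME = re.compile(r"^[A-Za-z0-9_\-]{1,64}$")  # allowlist for file identifiers

def apply_edit(user: str, token: str, project: str, filename: str, patch: str) -> str:
    # 1. Authentication: constant-time comparison of the session token.
    expected = SESSION_TOKENS.get(user, "")
    if not hmac.compare_digest(expected, token):
        raise PermissionError("authentication failed")
    # 2. Permission control: only listed editors may modify the project.
    if user not in PROJECT_EDITORS.get(project, set()):
        raise PermissionError("user may not edit this project")
    # 3. Input validation: reject unexpected file names and oversized patches.
    if not SAFE_NAME.match(filename) or len(patch) > 10_000:
        raise ValueError("rejected edit request")
    return f"patch accepted for {project}/{filename}"

print(apply_edit("alice", "secret-token", "proj-1", "app_py", "print('hello')"))
```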

Experts cited in the article urge industry and regulators to prioritise robust security testing and clear accountability when deploying AI-assisted coding systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI startup raises $100m to predict human behaviour

Artificial intelligence startup Simile has raised $100m to develop a model designed to predict human behaviour in commercial and corporate contexts. The funding round was led by Index Ventures with participation from Bain Capital Ventures and other investors.

The company is building a foundation model trained on interviews, transaction records and behavioural science research. Its AI simulations aim to forecast customer purchases and anticipate questions analysts may raise during earnings calls.

Simile says the technology could offer an alternative to traditional focus groups and market testing. Retail trials have included using the system to guide decisions on product placement and inventory.

Founded by Stanford-affiliated researchers, the startup recently emerged from stealth after months of development. Prominent AI figures, including Fei-Fei Li and Andrej Karpathy, joined the funding round as the company seeks to scale predictive decision-making tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI adoption reshapes hiring policy at UK scale-ups

AI adoption is prompting UK scale-ups to recalibrate workforce policies. Survey data indicates that 33% of founders anticipate job cuts within the next year, while 58% are already delaying or scaling back recruitment as automation expands. The prevailing approach centres on cautious workforce management rather than immediate restructuring.

Instead of large-scale redundancies, many firms are prioritising hiring freezes and reduced vacancy postings. This policy choice allows companies to contain costs and integrate AI gradually, limiting workforce growth while assessing long-term operational needs.

The trend aligns with broader labour market caution in the UK, where vacancies have cooled amid rising business costs and technological transition. Globally, the technology sector has experienced significant layoffs in 2026, reinforcing concerns about how AI-driven efficiency strategies are reshaping employment models.

At the same time, workforce readiness remains a structural policy challenge. Only a small proportion of founders consider the UK workforce prepared for widespread AI adoption, underscoring calls for stronger investment in skills development and reskilling frameworks as automation capabilities advance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ethical governance at centre of Africa AI talks

Ghana is set to host the Pan African AI and Innovation Summit 2026 in Accra, reinforcing its ambition to shape Africa’s digital future. The gathering will centre on ethical artificial intelligence, youth empowerment and cross-sector partnerships.

Advocates argue that AI systems must be built on local data to reflect African realities. Many global models rely on datasets developed outside the continent, limiting contextual relevance. Prioritising indigenous data, they say, will improve outcomes across agriculture, healthcare, education and finance.

National institutions are central to that effort. The National Information Technology Agency and the Data Protection Commission have strengthened digital infrastructure and privacy oversight.

Leaders now call for a shift from foundational regulation to active enablement. Expanded cloud capacity, high-performance computing and clearer ethical AI guidelines are seen as critical next steps.

Supporters believe coordinated governance and infrastructure investment can generate skilled jobs and position Ghana as a continental hub for responsible AI innovation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!