China boosts AI leadership with major model launches ahead of Lunar New Year

Leading Chinese AI developers have unveiled a series of advanced models ahead of the Lunar New Year, strengthening the country’s position in the global AI sector.

Major firms such as Alibaba, ByteDance, and Zhipu AI introduced new systems designed to support more sophisticated agents, faster workflows and broader multimedia understanding.

Industry observers also expect an imminent release from DeepSeek, whose previous model disrupted global markets last year.

Alibaba’s Qwen 3.5 model provides improved multilingual support across text, images and video and is geared towards rapid deployment of AI agents rather than slower generation pipelines.

ByteDance followed with updates to its Doubao chatbot and the second version of its image-to-video tool, SeeDance. The tool has drawn copyright concerns from the Motion Picture Association because of the ease with which users can recreate protected material.

Zhipu AI expanded the landscape further with GLM-5, an open-source model built for long-context reasoning, coding tasks, and multi-step planning. The company highlighted the model’s reliance on Huawei hardware as part of China’s efforts to strengthen domestic semiconductor resilience.

Meanwhile, excitement continues to build for DeepSeek’s fourth-generation system, expected to follow the widespread adoption and market turbulence associated with its V3 model.

Authorities across parts of Europe have restricted the use of DeepSeek models in public institutions because of data security and cybersecurity concerns.

Even so, the rapid pace of development in China suggests intensifying competition in the design of agent-focused systems capable of managing complex digital tasks without constant human oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta explores AI system for digital afterlife

Meta has been granted a patent describing an AI system that could simulate a person’s social media activity, even after their death. The patent, originally filed in 2023 and approved in late December, outlines how AI could replicate a user’s online presence by drawing on their past posts, messages and interactions.

According to the filing, a large language model could analyse a person’s digital history, including comments, chats, voice messages and reactions, to generate new content that mirrors their tone and behaviour. The system could respond to other users, publish updates and continue conversations in a way that resembles the original account holder.
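
In engineering terms, the behaviour described in the filing resembles few-shot style conditioning: a model is primed with samples of a person’s past posts before generating new ones. Below is a minimal sketch of that general pattern, using the OpenAI client purely as a stand-in (the patent names no model or API, and the sample posts are invented):

```python
# Illustrative sketch of few-shot style conditioning: an LLM is primed with a
# user's past posts so its replies mimic their tone. Stand-in API and invented
# data; this is not Meta's system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

past_posts = [  # hypothetical sample of a user's digital history
    "Just finished a 10k run - legs are jelly but worth it!",
    "Anyone else think the season finale was a letdown?",
]

def reply_in_persona(incoming_message: str) -> str:
    """Generate a reply that mirrors the tone of the past posts."""
    style_examples = "\n".join(f"- {post}" for post in past_posts)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Write replies in the voice of a user whose past "
                        f"posts are:\n{style_examples}"},
            {"role": "user", "content": incoming_message},
        ],
    )
    return response.choices[0].message.content

print(reply_in_persona("Hey, are you coming to the reunion next month?"))
```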

The patent suggests the technology could be used when someone is temporarily absent from a platform, but it also explicitly addresses the possibility of continuing activity after a user’s death. It notes that such a scenario would carry more permanent implications, as the person would not be able to return and reclaim control of the account.

More advanced versions of the concept could potentially simulate voice or even video interactions, effectively creating a digital persona capable of engaging with others in real time. The idea aligns with previous comments by Meta CEO Mark Zuckerberg, who has said AI could one day help people interact with digital representations of loved ones, provided consent mechanisms are in place.

Meta has stressed that the patent does not signal an imminent product launch, describing it as a protective filing for a concept that may never be developed. Still, similar services offered by startups have already sparked ethical debate, raising questions about digital identity, consent and the emotional impact of recreating the online presence of someone who has died.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI cheating allegation sparks discrimination lawsuit

A University of Michigan student has filed a federal lawsuit accusing the university of disability discrimination after professors allegedly claimed she used AI to write her essays. The student, identified in court documents as ‘Jane Doe,’ denies using AI and argues that symptoms linked to her medical conditions were wrongly interpreted as signs of cheating.

According to the complaint, Doe has obsessive-compulsive disorder and generalised anxiety disorder. Her lawyers argue that traits associated with those conditions, including a formal tone, structured writing and consistent style, were cited by instructors as evidence that her work was AI-generated. They say she provided supporting evidence, including medical documentation, but was still subjected to disciplinary action and prevented from graduating.

The lawsuit alleges that the university failed to provide appropriate disability-related accommodations during the academic integrity process. It also claims that the same professor who raised the concerns remained responsible for grading and overseeing remedial work, despite what the complaint describes as subjective judgments and questionable AI-detection methods.

The case highlights broader tensions on campuses as educators grapple with the rapid rise of generative AI tools. Professors across the United States report growing difficulty distinguishing between student work and machine-generated text, while students have increasingly challenged accusations they say rely on unreliable detection software.

Similar legal disputes have emerged elsewhere, with students and families filing lawsuits after being accused of submitting AI-written assignments. Research has suggested that some AI-detection systems can produce inaccurate results, raising concerns about fairness and due process in academic settings.

The University of Michigan has been asked to comment on the lawsuit, which is likely to intensify debate over how institutions balance academic integrity, disability rights, and the limits of emerging AI detection technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers teach AI to interpret complex scientific data from brain scans to alloy design

Research teams are developing artificial intelligence systems designed to assist scientists in making sense of complex, high-dimensional data across disciplines such as neuroscience and materials engineering.

Traditional analysis methods often require extensive human expertise and time; AI models trained to identify patterns, reduce noise, and suggest hypotheses could significantly accelerate research cycles.

In neuroscience, AI is being used to extract meaningful features from detailed brain imaging datasets, enabling better understanding of neural processes and potentially enhancing diagnosis and treatment development.
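
A common building block in such pipelines is dimensionality reduction, which compresses thousands of noisy measurements into a few underlying signals. Here is a minimal sketch using classical PCA on synthetic data, offered as an illustrative baseline rather than any team’s actual method:

```python
# Minimal sketch: compress noisy, high-dimensional recordings into a few
# latent components, the kind of step such AI pipelines automate.
# Synthetic data only; not a real imaging dataset.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for a recording: 200 samples x 1,000 noisy features driven by
# 3 underlying signals.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 1000))
data = latent @ mixing + 0.5 * rng.normal(size=(200, 1000))

pca = PCA(n_components=3)
components = pca.fit_transform(data)  # denoised low-dimensional summary
print(f"Variance explained: {pca.explained_variance_ratio_.sum():.1%}")
```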

In materials science, generative and predictive models help identify promising alloy compositions and properties by learning from vast experimental datasets, reducing reliance on trial-and-error experimentation.
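
The predictive half of that workflow can be pictured as learning a mapping from composition to property, then screening candidates computationally. A toy sketch on synthetic data follows (the element weights and the ‘hardness’ property are invented for illustration):

```python
# Toy sketch: learn a composition -> property mapping from (synthetic)
# experiments, then screen new candidates instead of testing each one.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Fractions of four elements per candidate alloy (each row sums to 1).
compositions = rng.dirichlet(np.ones(4), size=300)
# Invented target property with measurement noise.
hardness = compositions @ np.array([2.0, 5.0, 1.0, 3.5]) \
           + 0.1 * rng.normal(size=300)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(compositions, hardness)

# Screen 1,000 untested compositions and surface the most promising one.
candidates = rng.dirichlet(np.ones(4), size=1000)
best = candidates[np.argmax(model.predict(candidates))]
print("Most promising candidate composition:", np.round(best, 3))
```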

Researchers emphasise that these AI tools do not replace domain expertise but rather augment scientists’ abilities to navigate complex datasets, improve reproducibility and prioritise experiments with higher scientific payoff.

Ethical considerations and careful validation remain important to ensure models do not propagate biases or misinterpret subtle signals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE launches first AI clinical platform

A Pakistani American surgeon has launched what is described as the UAE’s first AI clinical intelligence platform across the country’s public healthcare system. The rollout was announced in Dubai in partnership with Emirates Health Services.

Boston Health AI, founded by Dr Adil Haider, introduced the platform known as Amal at a major health expo in Dubai. The system conducts structured medical interviews in Arabic, English and Urdu before consultations, generating summaries for physicians.

The company said the technology aims to reduce documentation burdens and cognitive load on clinicians in the UAE. By organising patient histories and symptoms in advance, Amal is designed to support clinical decision making and improve workflow efficiency in Dubai and other emirates.

Before entering the UAE market, Boston Health AI deployed its platform in Pakistan across more than 50 healthcare facilities. The firm states that over 30,000 patient interactions were recorded in Pakistan, where a local team continues to develop and refine the AI system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quebec examines AI debt collection practices

Quebec’s financial regulator has opened a review into how AI tools are being used to collect consumer debt across the province. The Autorité des marchés financiers is examining whether automated systems comply with governance, privacy and fairness standards in Quebec.

Draft guidelines released in 2025 require institutions in Quebec to maintain registries of AI systems, conduct bias testing and ensure human oversight. Public consultations closed in November, with regulators stressing that automation must remain explainable and accountable.

Many debt collection platforms now rely on predictive analytics to tailor the timing, tone and frequency of messages sent to borrowers in Quebec. Regulators are assessing whether such personalisation risks undue pressure or opaque decision making.
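
Such personalisation can be as simple as a classifier trained on past outreach that scores when a borrower is most likely to respond. A hypothetical sketch of that mechanism on synthetic data, not any vendor’s actual system:

```python
# Hypothetical sketch of the practice under review: a classifier trained on
# past outreach predicts the hour a borrower is likeliest to respond.
# Synthetic data; not any vendor's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Invented history: (hour of day, days since last contact) -> responded?
hours = rng.integers(8, 21, size=500)
recency = rng.integers(1, 30, size=500)
responded = (rng.random(500) < 0.2 + 0.02 * (hours > 17)).astype(int)

model = LogisticRegression().fit(np.column_stack([hours, recency]), responded)

# Score each hour for one borrower: the automated decision regulators want
# to be explainable and subject to human oversight.
candidate_hours = np.arange(8, 21)
features = np.column_stack([candidate_hours,
                            np.full(len(candidate_hours), 7)])
scores = model.predict_proba(features)[:, 1]
print("Model's preferred contact hour:", candidate_hours[np.argmax(scores)])
```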

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Security flaws expose ‘vibe-coding’ AI platform Orchids to easy hacking

BBC technology reporting reveals that Orchids, a popular ‘vibe-coding’ platform that lets users build applications through simple text prompts and AI-assisted generation, contains serious, unresolved security weaknesses. The flaws could allow a malicious actor to breach accounts and tamper with code or data.

A cybersecurity researcher demonstrated that the platform’s authentication and input handling mechanisms can be exploited, allowing unauthorised access to projects and potentially enabling attackers to insert malicious code or exfiltrate sensitive information.

Because Orchids abstracts conventional coding into natural-language prompts and shared project spaces, the risk surface for such vulnerabilities is larger than in traditional development environments.

The report underscores broader concerns in the AI developer ecosystem: as AI-driven tools lower technical barriers, they also bring new security challenges when platforms rush to innovate without fully addressing fundamental safeguards such as secure authentication, input validation and permission controls.
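
The safeguards named are conventional ones: verifying who is making a request before acting on it, and bounding what the request may contain. A minimal sketch of that pattern in a web handler (all names are hypothetical; this is not Orchids’ code):

```python
# Minimal sketch of the missing safeguard class: authenticate the caller and
# verify project ownership before applying an AI-generated change.
# All names are hypothetical; this is not Orchids' code.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

SESSIONS = {"token-abc": "alice"}        # stand-in session store
PROJECT_OWNERS = {"proj-1": "alice"}     # stand-in permission store

@app.post("/projects/<project_id>/apply")
def apply_generated_code(project_id: str):
    # 1. Authentication: reject requests without a valid session token.
    user = SESSIONS.get(request.headers.get("Authorization", ""))
    if user is None:
        abort(401)
    # 2. Authorisation: only the project owner may modify it (blocks the
    #    cross-account tampering the researcher demonstrated).
    if PROJECT_OWNERS.get(project_id) != user:
        abort(403)
    # 3. Input validation: bound the type and size of the prompt payload.
    payload = request.get_json(silent=True) or {}
    prompt = payload.get("prompt", "")
    if not isinstance(prompt, str) or len(prompt) > 10_000:
        abort(400)
    return jsonify({"status": "accepted"})

if __name__ == "__main__":
    app.run()
```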

Experts cited in the article urge industry and regulators to prioritise robust security testing and clear accountability when deploying AI-assisted coding systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Five lesser-known SPACs tapping AI, quantum and digital asset innovation

In a recent episode of Ticker Take, financial analysts spotlight five SPACs that fly under the radar but are linked with next-generation tech sectors such as quantum computing, artificial intelligence infrastructure, tokenised assets and genomics/health tech.

The list reflects renewed investor interest in SPACs as an alternative route to public markets for early-stage innovators outside mainstream IPO pipelines.

Crane Harbor Acquisition Corp (CHAC) is targeting Xanadu Quantum Technologies, a Canadian quantum computing company planning to go public via SPAC, aiming to accelerate quantum hardware development.

Churchill Capital Corp X (CCCX) is set to merge with Infleqtion, a firm building quantum computers and precision sensing systems, in a deal valued at roughly $1.8 billion.

Cantor Equity Partners II (CEPT) is associated with Securitize, a digital securities platform enabling regulated tokenisation of real-world assets (including potentially AI/tech-linked assets).

Willow Lane Acquisition (WLAC) is linked to Boost Run, an AI-enabled delivery-optimisation platform, offering exposure to logistics tech with generative and predictive capabilities.

Perceptive Capital Solutions Corp (PCSC) is connected to Freenome, a company focused on AI-driven early cancer detection and genomics, blending AI with life-science innovation.

Together, these SPAC deals illustrate how blank-check vehicles are resurfacing in markets for AI, quantum and digital transformation, offering investors early access to companies that might otherwise take longer to reach public markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI startup raises $100m to predict human behaviour

Artificial intelligence startup Simile has raised $100m to develop a model designed to predict human behaviour in commercial and corporate contexts. The funding round was led by Index Ventures with participation from Bain Capital Ventures and other investors.

The company is building a foundation model trained on interviews, transaction records and behavioural science research. Its AI simulations aim to forecast customer purchases and anticipate questions analysts may raise during earnings calls.

Simile says the technology could offer an alternative to traditional focus groups and market testing. Retail trials have included using the system to guide decisions on product placement and inventory.

Founded by Stanford-affiliated researchers, the startup recently emerged from stealth after months of development. Prominent AI figures, including Fei-Fei Li and Andrej Karpathy, joined the funding round as the company seeks to scale predictive decision-making tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI adoption reshapes UK scale-up hiring policy framework

AI adoption is prompting UK scale-ups to recalibrate workforce policies. Survey data indicates that 33% of founders anticipate job cuts within the next year, while 58% are already delaying or scaling back recruitment as automation expands. The prevailing approach centres on cautious workforce management rather than immediate restructuring.

Instead of large-scale redundancies, many firms are prioritising hiring freezes and reduced vacancy postings. This policy choice allows companies to contain costs and integrate AI gradually, limiting workforce growth while assessing long-term operational needs.

The trend aligns with broader labour market caution in the UK, where vacancies have cooled amid rising business costs and technological transition. Globally, the technology sector has experienced significant layoffs in 2026, reinforcing concerns about how AI-driven efficiency strategies are reshaping employment models.

At the same time, workforce readiness remains a structural policy challenge. Only a small proportion of founders consider the UK workforce prepared for widespread AI adoption, underscoring calls for stronger investment in skills development and reskilling frameworks as automation capabilities advance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!