Multiply Labs targets automation in cell therapy manufacturing

Robotics firm Multiply Labs is introducing automation into cell therapy manufacturing, aiming to cut costs by more than 70% and increase output. The startup applies industrial robotics to clean-room environments, replacing slow and contamination-prone manual processes.

Founded in 2016, the San Francisco-based company collaborates with leading cell therapy developers, including Kyverna Therapeutics and Legend Biotech. Its robotic systems perform sterile, precision tasks involved in producing gene-modified cell therapies at scale.

Multiply Labs uses NVIDIA Omniverse to create digital twins of laboratory environments and Isaac Sim to train robots for specialised workflows. Humanoid robots built on NVIDIA’s Isaac GR00T model are also being developed to assist with material handling while maintaining hygiene standards.

Cell therapies involve modifying patient or donor cells to treat various conditions, including cancers, autoimmune diseases, and genetic disorders. The highly customised nature of these treatments makes production costly and sensitive to human error, increasing the risk of failed batches.

By automating thousands of delicate steps, robotics improves consistency, reduces contamination, and preserves expert knowledge. Multiply Labs states that automation could enable the mass production of life-saving therapies at a lower cost and greater availability.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase over recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

AI enters Colorado classrooms as schools experiment with generative tools

Teachers across Colorado are exploring how AI can serve as an assistant to support classroom instruction and student learning.

Some educators are experimenting with generative AI tools that help with tasks like lesson planning, summarising material and creating examples, while also educating students on responsible use of AI.

The broader trend mirrors state and district efforts to develop AI strategies for education. Reports indicate that many districts are establishing steering committees and policies to guide the safe and effective use of AI in classrooms.

Other districts, by contrast, limit student access due to privacy concerns, underscoring the need for training and clear guidelines.

Teachers have noted benefits such as time savings and personalised support, as well as challenges, including ethical questions about plagiarism and student independence. The mixed picture reflects a period of experimentation and adjustment as AI becomes part of mainstream education.

Young people worry about jobs and inflation

Rising living costs and economic instability are the biggest worries for young people worldwide. A World Economic Forum survey shows inflation dominates personal and global concerns.

Many young people fear that AI-driven automation will shrink entry-level job opportunities. Two-thirds expect fewer early-career roles despite growing engagement with AI tools.

Nearly 60 per cent already use AI to build skills and improve employability. Side hustles and freelance work are increasingly common responses to economic pressure.

Youth respondents call for quality jobs, better education access and affordable housing. Climate change also ranks among the most serious long-term global risks.

China pushes frontier tech from research to real-world applications

Innovations across China are moving rapidly from laboratories into everyday use, spanning robotics, autonomous vehicles and quantum computing. Airports, hotels and city streets are increasingly becoming testing grounds for advanced technologies.

In Hefei, humanoid cleaning robots developed by local start-up Zerith are already operating in public venues across major cities. The company scaled from prototype to mass production within a year, securing significant commercial orders.

Beyond robotics, frontier research is finding industrial applications in energy, healthcare and manufacturing. Advances from fusion research and quantum mechanics are being adapted for cancer screening, battery safety and precision measurement.

Policy support and investment are accelerating this transition from research to market. National planning and local funding initiatives aim to turn scientific breakthroughs into scalable technologies with global reach.

AI gap reflects China’s growing technological ambitions

China’s AI sector could narrow the technological gap with the United States through growing risk-taking and innovation, according to leading researchers. Despite export controls on advanced chipmaking tools, Chinese firms are accelerating development across multiple AI fields.

Yao Shunyu, a former senior researcher at ChatGPT maker OpenAI and now an AI scientist at Tencent, said a Chinese company could become the world’s leading AI firm within three to five years. He pointed to China’s strengths in electricity supply and infrastructure as key advantages.

Yao said the main bottlenecks remain production capacity, including access to advanced lithography machines, and the lack of a mature software ecosystem. These limits still restrict China’s ability to manufacture the most advanced semiconductors and to close the AI gap with the US.

China has developed a working prototype of an extreme-ultraviolet lithography machine that could eventually rival Western technology. However, Reuters reported the system has not yet produced functioning chips.

Sources familiar with the project said commercial chip production using the machine may not begin until around 2030. Until then, Chinese AI ambitions are likely to remain constrained by hardware limitations.

Teen victim turns deepfake experience into education

A US teenager targeted by explicit deepfake images has helped create a new training course. The programme aims to support students, parents and school staff facing online abuse.

The course explains how AI tools are used to create sexualised fake images. It also outlines legal rights, reporting steps and available victim support resources.

Research shows deepfake abuse is spreading among teenagers, despite stronger laws. One in eight US teens knows someone targeted by non-consensual fake images.

Developers say education remains critical as AI tools become easier to access. Schools are encouraged to adopt training to protect students and prevent harm.

Digital twins gain momentum through AI

AI is accelerating the creation of digital twins by reducing the time and labour required to build complex models. Consulting firm McKinsey says specialised virtual replicas can take six months or more to develop, but generative AI tools can now automate much of the coding process.

McKinsey analysts say AI can structure inputs and synthesise outputs for these simulations, while the models provide safe testing environments for AI systems. Together, the technologies can reduce costs, shorten development cycles, and accelerate deployment.

Quantum Elements, a startup backed by QNDL Participations and the USC Viterbi School of Engineering, is applying this approach to quantum computing. Its Constellation platform combines AI agents, natural language tools, and simulation software.

The company says quantum systems are hard to model because qubits behave differently across hardware types such as superconducting circuits, trapped ions, and photonics. These variations affect stability, error rates, and performance.

By using digital twins, developers can test algorithms, simulate noise, and evaluate error correction without building physical hardware. Quantum Elements says this can cut testing time from months to minutes.

Patients notified months after Canopy Healthcare cyber incident

Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.

The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.

Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.

Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.

The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.

Cyber Fortress strengthens European cyber resilience

Luxembourg has hosted its largest national cyber defence exercise, Cyber Fortress, bringing together military and civilian specialists to practise responding to real-time cyberattacks on digital systems.

Since its launch in 2021, Cyber Fortress has evolved beyond a purely technical drill. The exercise now includes a realistic fictional scenario supported by media injections, creating a more immersive and practical training environment for participants.

This year’s edition expanded its international reach, with teams joining from Belgium, Latvia, Malta and the EU Cyber Rapid Response Teams. Around 100 participants also took part from a parallel site in Latvia, working alongside Luxembourg-based teams.

The exercise focuses on interoperability during cyber crises. Participants respond to multiple simulated attacks while protecting critical services, including systems linked to drone operations and other sensitive infrastructure.

Cyber Fortress now covers technical, procedural and management aspects of cyber defence. A new emphasis on disinformation, deepfakes and fake news reflects the growing importance of information warfare.
