Eli Lilly and NVIDIA invest in AI-driven pharmaceutical innovation

NVIDIA and Eli Lilly have announced a joint AI co-innovation lab aimed at advancing drug discovery by combining AI with pharmaceutical research.

The partnership combines Lilly’s experience in medicine development with NVIDIA’s expertise in accelerated computing and AI infrastructure.

The two companies plan to invest up to $1 billion over five years in research capacity, computing resources and specialist talent.

Based in the San Francisco Bay Area, the lab will support large-scale data generation and model development using NVIDIA platforms rather than relying solely on traditional laboratory workflows.

Beyond early research, the collaboration is expected to explore applications of AI across manufacturing, clinical development and supply chain operations.

Both NVIDIA and Eli Lilly say the initiative is designed to enhance efficiency and scalability in medicine production while fostering long-term innovation in the life sciences sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Morocco outlines national AI roadmap to 2030

Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation.

The strategy seeks to modernise public services, improve interoperability across digital systems and enhance economic competitiveness, according to officials ahead of the ‘AI Made in Morocco’ event in Rabat.

A central element of the plan involves the creation of Al Jazari Institutes, a national network of AI centres of excellence connecting academic research with innovation and regional economic needs.

The roadmap prioritises technological autonomy, trusted AI use, skills development, support for local innovation and balanced territorial coverage rather than fragmented deployment.

The initiative builds on the Digital Morocco 2030 strategy launched in 2024, which places AI at the core of national digital policy.

Authorities expect the combined efforts to generate around 240,000 digital jobs and contribute approximately $10 billion to gross domestic product by 2030, while improving Morocco’s international AI readiness ranking.

Additional measures include the establishment of a General Directorate for AI and Emerging Technologies to oversee public policy and the development of an Arab African regional digital hub in partnership with the United Nations Development Programme.

Both measures are intended to support sustainable and responsible digital innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Multiply Labs targets automation in cell therapy manufacturing

Robotics firm Multiply Labs is introducing automation into cell therapy manufacturing to cut costs by more than 70% and increase output. The startup applies industrial robotics to clean-room environments, replacing slow and contamination-prone manual processes.

Founded in 2016, the San Francisco-based company collaborates with leading cell therapy developers, including Kyverna Therapeutics and Legend Biotech. Its robotic systems perform sterile, precision tasks involved in producing gene-modified cell therapies at scale.

Multiply Labs uses NVIDIA Omniverse to create digital twins of laboratory environments and Isaac Sim to train robots for specialised workflows. Humanoid robots built on NVIDIA’s Isaac GR00T model are also being developed to assist with material handling while maintaining hygiene standards.

Cell therapies involve modifying patient or donor cells to treat various conditions, including cancers, autoimmune diseases, and genetic disorders. The highly customised nature of these treatments makes production costly and sensitive to human error, increasing the risk of failed batches.

By automating thousands of delicate steps, robotics improves consistency, reduces contamination, and preserves expert knowledge. Multiply Labs states that automation could enable the mass production of life-saving therapies at lower cost and with wider availability.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase over recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude expands into healthcare and life sciences

Healthcare and life sciences organisations face increasing administrative pressure, fragmented systems, and rapidly evolving research demands. At the same time, regulatory compliance, safety, and trust remain critical requirements across all clinical and scientific operations.

Anthropic has launched new tools and connectors for Claude in Microsoft Foundry to support enterprise-scale AI workflows. Built on Azure’s secure infrastructure, the platform promotes responsible integration across data, compliance, and workflow automation environments.

The new capabilities are designed specifically for healthcare and life sciences use cases, including prior authorisation review, claims appeals processing, care coordination, and patient triage.

In research and development, the tools support protocol drafting, regulatory submissions, bioinformatics analysis, and experimental design.

According to Anthropic, the updates build on significant improvements in Claude’s underlying models, delivering stronger performance in areas such as scientific interpretation, computational biology, and protein understanding.

The aim is to enable faster, more reliable decision-making across regulated, real-world workflows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI enters Colorado classrooms as schools experiment with generative tools

Teachers across Colorado are exploring how AI can be used as an assistant to support classroom instruction and student learning.

Some educators are experimenting with generative AI tools that help with tasks like lesson planning, summarising material and creating examples, while also educating students on responsible use of AI.

The broader trend mirrors state and district efforts to develop AI strategies for education. Reports indicate that many districts are establishing steering committees and policies to guide the safe and effective use of AI in classrooms.

In contrast, others limit student access due to privacy concerns, underscoring the need for training and clear guidelines.

Teachers have noted both benefits, such as time savings and personalised support, and challenges, including ethical questions about plagiarism and student independence. The mix highlights a period of experimentation and adjustment as AI becomes part of mainstream education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered toys navigate safety concerns after early missteps

Toy makers at the Consumer Electronics Show highlighted efforts to improve AI in playthings following troubling early reports of chatbots giving unsuitable responses to children’s questions.

A recent Public Interest Research Group report found that some AI toys, such as an AI-enabled teddy bear, produced inappropriate advice, prompting companies like FoloToy to update their models and suspend problematic products.

Among newer devices, Curio’s Grok toy, which refuses to answer questions deemed inappropriate and allows parental overrides, has earned independent safety certification. However, concerns remain about continuous listening and data privacy.

Experts advise parents to be cautious about toys that retain information over time or engage in ongoing interactions with young users.

Some manufacturers are positioning AI toys as educational tools, for example, language-learning companions with time-limited, guided chat interactions. Others have built in flags to alert parents when inappropriate content arises.

Despite these advances, critics argue that self-regulation is insufficient and call for clearer guardrails and possible regulation to protect children in AI-toy environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Welsh government backs AI adoption with £2.1m support

The Welsh Government is providing £2.1 million in funding to support small and medium-sized businesses across Wales in adopting AI. The initiative aims to promote the ethical and practical use of AI, enhancing productivity and competitiveness.

Business Wales will receive £600,000 to deliver an AI awareness and adoption programme, following recent reviews on SME productivity. Additional funding will enhance tourism and events through targeted AI projects and practical workshops.

A further £1 million will expand AI upskilling through the Flexible Skills Programme, addressing digital skills gaps across regions and sectors. Employers will contribute part of the training costs to support inclusive growth.

Swansea-based Something Different Wholesale is already using AI to automate tasks, analyse market data and improve customer services. Welsh ministers say the funding supports the responsible adoption of AI, aligned with the AI Plan for Wales.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Young people worry about jobs and inflation

Rising living costs and economic instability are the biggest worries for young people worldwide. A World Economic Forum survey shows inflation dominates personal and global concerns.

Many young people fear that AI-driven automation will shrink entry-level job opportunities. Two-thirds expect fewer early career roles despite growing engagement with AI tools.

Nearly 60 per cent already use AI to build skills and improve employability. Side hustles and freelance work are increasingly common responses to economic pressure.

Youth respondents call for quality jobs, better education access and affordable housing. Climate change also ranks among the most serious long-term global risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI gap reflects China’s growing technological ambitions

China’s AI sector could narrow the technological gap with the United States through growing risk-taking and innovation, according to leading researchers. Despite export controls on advanced chipmaking tools, Chinese firms are accelerating development across multiple AI fields.

Yao Shunyu, a former senior researcher at ChatGPT maker OpenAI who is now an AI scientist at Tencent, said a Chinese company could become the world’s leading AI firm within three to five years. He pointed to China’s strengths in electricity supply and infrastructure as key advantages.

Yao said the main bottlenecks remain production capacity, including access to advanced lithography machines, and a mature software ecosystem. Such limits still restrict China’s ability to manufacture the most advanced semiconductors and to narrow the AI gap with the US.

China has developed a working prototype of an extreme-ultraviolet lithography machine that could eventually rival Western technology. However, Reuters reported the system has not yet produced functioning chips.

Sources familiar with the project said commercial chip production using the machine may not begin until around 2030. Until then, Chinese AI ambitions are likely to remain constrained by hardware limitations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!