Eli Lilly and NVIDIA invest in AI-driven pharmaceutical innovation

NVIDIA and Eli Lilly have announced a joint AI co-innovation lab aimed at advancing drug discovery by combining AI with pharmaceutical research.

The partnership combines Lilly’s experience in drug development with NVIDIA’s expertise in accelerated computing and AI infrastructure.

The two companies plan to invest up to $1 billion over five years in research capacity, computing resources and specialist talent.

Based in the San Francisco Bay Area, the lab will support large-scale data generation and model development using NVIDIA platforms, rather than relying solely on traditional laboratory workflows.

Beyond early research, the collaboration is expected to explore applications of AI across manufacturing, clinical development and supply chain operations.

Both companies say the initiative is designed to enhance efficiency and scalability in pharmaceutical production while fostering long-term innovation in the life sciences sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Morocco outlines national AI roadmap to 2030

Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation.

The strategy seeks to modernise public services, improve interoperability across digital systems and enhance economic competitiveness, according to officials ahead of the ‘AI Made in Morocco’ event in Rabat.

A central element of the plan involves the creation of Al Jazari Institutes, a national network of AI centres of excellence connecting academic research with innovation and regional economic needs.

The roadmap prioritises technological autonomy, trusted AI use, skills development, support for local innovation and balanced territorial coverage, rather than fragmented deployment.

The initiative builds on the Digital Morocco 2030 strategy launched in 2024, which places AI at the core of national digital policy.

Authorities expect the combined efforts to generate around 240,000 digital jobs and contribute approximately $10 billion to gross domestic product by 2030, while improving Morocco’s international AI readiness ranking.

Additional measures include the establishment of a General Directorate for AI and Emerging Technologies to oversee public policy and the development of an Arab African regional digital hub in partnership with the United Nations Development Programme.

The overarching goal of these measures is to support sustainable and responsible digital innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China pushes frontier tech from research to real-world applications

Innovations across China are moving rapidly from laboratories into everyday use, spanning robotics, autonomous vehicles and quantum computing. Airports, hotels and city streets are increasingly becoming testing grounds for advanced technologies.

In Hefei, humanoid cleaning robots developed by local start-up Zerith are already operating in public venues across major cities. The company scaled from prototype to mass production within a year, securing significant commercial orders.

Beyond robotics, frontier research is finding industrial applications in energy, healthcare and manufacturing. Advances from fusion research and quantum mechanics are being adapted for cancer screening, battery safety and precision measurement.

Policy support and investment are accelerating this transition from research to market. National planning and local funding initiatives aim to turn scientific breakthroughs into scalable technologies with global reach.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Photonic secures $130 million to scale quantum computing systems

Canadian quantum computing company Photonic has raised $130 million in the first close of a new investment round led by Planet First Partners. New backers include RBC and TELUS, alongside returning investors.

The funding brings Photonic’s total capital raised to $271 million and supports the development of fault-tolerant quantum systems. The company combines silicon-based qubits with built-in photonic connectivity.

Photonic’s entanglement-first architecture is designed to scale across existing global telecom networks. The approach aims to enable large, distributed quantum computers rather than isolated machines.

Headquartered in Vancouver, Photonic plans to utilise the investment to accelerate key product milestones and expand its team. Investors see strong potential across finance, sustainability, telecommunications and security sectors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Patients notified months after Canopy Healthcare cyber incident

Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.

The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.

Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.

Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.

The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cyber Fortress strengthens European cyber resilience

Luxembourg has hosted its largest national cyber defence exercise, Cyber Fortress, bringing together military and civilian specialists to practise responding to real-time cyberattacks on digital systems.

Since its launch in 2021, Cyber Fortress has evolved beyond a purely technical drill. The exercise now includes a realistic fictional scenario supported by media injections, creating a more immersive and practical training environment for participants.

This year’s edition expanded its international reach, with teams joining from Belgium, Latvia, Malta and the EU Cyber Rapid Response Teams. Around 100 participants also took part from a parallel site in Latvia, working alongside Luxembourg-based teams.

The exercise focuses on interoperability during cyber crises. Participants respond to multiple simulated attacks while protecting critical services, including systems linked to drone operations and other sensitive infrastructure.

Cyber Fortress now covers technical, procedural and management aspects of cyber defence. A new emphasis on disinformation, deepfakes and fake news reflects the growing importance of information warfare.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK considers regulatory action after Grok’s deepfake images on X

UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.

The discussions focus on shared regulatory approaches rather than immediate bans.

X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.

In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.

Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.

X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.

European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India mandates live identity checks for crypto users

India’s Financial Intelligence Unit has tightened crypto compliance, requiring live identity checks, location verification, and stronger Client Due Diligence. The measures aim to prevent money laundering, terrorist financing, and misuse of digital asset services.

Crypto platforms must now collect multiple identifiers from users, including IP addresses, device IDs, wallet addresses, transaction hashes, and timestamps.
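Purely as an illustration of what logging those data points could look like (the field names below are hypothetical, not taken from the FIU guidelines), a platform might represent each transaction record and check it for completeness along these lines:

```python
from dataclasses import dataclass

# Hypothetical field names mirroring the identifiers listed above.
REQUIRED_IDENTIFIERS = {
    "ip_address", "device_id", "wallet_address", "transaction_hash", "timestamp",
}

@dataclass
class TransactionRecord:
    """One logged crypto transaction with the identifiers described above."""
    ip_address: str
    device_id: str
    wallet_address: str
    transaction_hash: str
    timestamp: str  # e.g. ISO 8601: "2026-01-15T10:30:00+05:30"

def missing_identifiers(record: dict) -> set:
    """Return which mandated identifiers are absent or empty in a raw log entry."""
    return {key for key in REQUIRED_IDENTIFIERS if not record.get(key)}

# Example: an entry without its transaction hash fails the completeness check.
entry = {
    "ip_address": "203.0.113.7",
    "device_id": "android-9f2c",
    "wallet_address": "0xabc123",
    "timestamp": "2026-01-15T10:30:00+05:30",
}
print(missing_identifiers(entry))  # {'transaction_hash'}
```

This is only a sketch of the record-keeping idea; real compliance tooling would also need secure storage, retention policies and audit trails.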

Verification also requires users to provide a Permanent Account Number and a secondary ID, such as a passport, Aadhaar, or voter ID, alongside OTP confirmation for email and phone numbers.

Bank accounts must be validated via a penny-drop mechanism to confirm ownership and operational status.

Enhanced due diligence will apply to high-risk transactions and relationships, particularly those involving users from designated high-risk jurisdictions and tax havens. Platforms must monitor red flags and apply extra scrutiny to comply with the new guidelines.

Industry experts have welcomed the updated rules, describing them as a positive step for India’s crypto ecosystem. The measures are viewed as enhancing transparency, protecting users, and aligning the sector with global anti-money laundering standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!