CERN unveils AI strategy to advance research and operations

CERN has approved a comprehensive AI strategy to guide its use across research, operations, and administration. The strategy unites initiatives under a coherent framework to promote responsible and impactful AI for science and operational excellence.

It focuses on four main goals: accelerating scientific discovery, improving productivity and reliability, attracting and developing talent, and enabling AI at scale through strategic partnerships with industry and member states.

Common tools and shared experiences across sectors will strengthen CERN’s community and ensure effective deployment.

Implementation will involve prioritised plans and collaboration with EU programmes, industry, and member states to build capacity, secure funding, and expand infrastructure. Applications of AI will support high-energy physics experiments, future accelerators, detectors, and data-driven decision-making.

AI is now central to CERN’s mission, transforming research methodologies and operations. From intelligent automation to scalable computational insight, the technology is no longer optional but a strategic imperative for the organisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI drives a new identity security crisis

New research from Rubrik Zero Labs warns that agentic AI is reshaping the identity landscape faster than organisations can secure it.

The study reveals a surge in non-human identities created through automation and API-driven workflows, with numbers now exceeding human users by a striking margin.

Most firms have already introduced AI agents into their identity systems or plan to do so, yet many struggle to govern the growing volume of machine credentials.

Experts argue that identity has become the primary attack surface as remote work, cloud adoption and AI expansion remove traditional boundaries. Threat actors increasingly rely on valid credentials instead of technical exploits, which makes weaknesses in identity governance far more damaging.

Rubrik’s researchers and external analysts agree that a single compromised key or forgotten agent account can provide broad access to sensitive environments.

Industry specialists highlight that agentic AI disrupts established IAM practices by blurring distinctions between human and machine activity.

Organisations often cannot determine whether a human or an automated agent performed a critical action, which undermines incident investigations and weakens zero-trust strategies. Poor logging, weak lifecycle controls and abandoned machine identities further expand the attack surface.

Rubrik argues that identity resilience is becoming essential, since IAM tools alone cannot restore trust after a breach. Many firms have already switched IAM providers, reflecting widespread dissatisfaction with current safeguards.

Analysts recommend tighter control of agent creation, stronger credential governance and a clearer understanding of how AI-driven identities reshape operational and security risks.

Microsoft expands AI model Aurora to improve global weather forecasts

Extreme weather displaced over 800,000 people worldwide in 2024, highlighting the importance of accurate forecasts for saving lives, protecting infrastructure, and supporting economies. Farmers, coastal communities, and energy operators rely on timely forecasts to prepare and respond effectively.

Microsoft is reaffirming its commitment to Aurora, an AI model designed to help scientists better understand Earth systems. Trained on vast datasets, Aurora can predict weather, track hurricanes, monitor air quality, and model ocean waves and energy flows.

The platform will remain open-source, enabling researchers worldwide to innovate, collaborate, and apply it to new climate and weather challenges.

Through partnerships with Professor Rich Turner at the University of Cambridge and initiatives like SPARROW, Microsoft is expanding access to high-quality environmental data.

Community-deployable weather stations are improving data coverage and forecast reliability in underrepresented regions. Aurora’s open-source releases, including model weights and training pipelines, will let scientists and developers adapt and build upon the platform.

The AI model has applications beyond research, with energy companies, commodity traders, and national meteorological services exploring its use.

By supporting forecasting systems tailored to local environments, Aurora aims to improve resilience against extreme weather, optimise renewable energy, and drive innovation across multiple industries, from humanitarian aid to financial services.

Anthropic uncovers a major AI-led cyberattack

The US AI research firm Anthropic has revealed details of the first known cyber espionage operation largely executed by an autonomous AI system.

Suspicious activity detected in September 2025 led to an investigation that uncovered an attack framework, which used Claude Code as an automated agent to infiltrate about thirty high-value organisations across technology, finance, chemicals and government.

The attackers relied on recent advances in model intelligence, agency and tool access.

By breaking tasks into small prompts and presenting Claude as a defensive security assistant instead of an offensive tool, they bypassed safeguards and pushed the model to analyse systems, identify weaknesses, write exploit code and harvest credentials.

The AI completed most of the work with only occasional human direction, operating at a scale and speed that human hackers would struggle to match.

Anthropic responded by banning accounts, informing affected entities and working with authorities as evidence was gathered. The company argues that the case shows how easily sophisticated operations can now be carried out by less-resourced actors who use agentic AI instead of traditional human teams.

Errors such as hallucinated credentials remain a limitation, yet the attack marks a clear escalation in capability and ambition.

The firm maintains that the same model abilities exploited by the attackers are needed for cyber defence. Greater automation in threat detection, vulnerability analysis and incident response is seen as vital.

Safeguards, stronger monitoring and wider information sharing are presented as essential steps for an environment where adversaries are increasingly empowered by autonomous AI.

Baidu launches new AI chips amid China’s self-sufficiency push

In a strategic move aligned with national technology ambitions, Baidu announced two newly developed AI chips, the M100 and the M300, at its annual developer and client event.

The M100, designed by Baidu’s chip subsidiary Kunlunxin Technology, targets inference efficiency for large models using mixture-of-experts techniques, while the M300 is engineered for training very large multimodal models comprising trillions of parameters.

The M100 is slated for release in early 2026 and the M300 in 2027, according to Baidu, which claims they will deliver ‘powerful, low-cost and controllable AI computing power’ to support China’s drive for technological self-sufficiency.

Baidu also revealed plans for clustered architectures such as the Tianchi256 stack in the first half of 2026 and the Tianchi512 in the second half of 2026, intended to boost inference capacity through large-scale interconnects of chips.

This announcement illustrates how China’s tech ecosystem is accelerating efforts to reduce dependence on foreign silicon, particularly amid export controls and geopolitical tensions. Domestically designed AI processors from Baidu and other firms such as Huawei Technologies, Cambricon Technologies and Biren Technology are increasingly positioned to substitute for Western hardware platforms.

From a policy and digital diplomacy perspective, the development raises questions about the global semiconductor supply chain, compute sovereignty, and how competition in AI hardware may reshape power dynamics.

New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Explainable AI predicts cardiovascular events in hospitalised COVID-19 patients

In an article published in BMC Infectious Diseases, researchers developed predictive models using machine learning (LightGBM) to identify cardiovascular complications (such as arrhythmia, acute heart failure and myocardial infarction) in 10,700 hospitalised COVID-19 patients across Brazil.

The study reports moderate discriminative performance, with AUROC values of 0.752 and 0.760 for the two models, and high overall accuracy (~94.5%) that largely reflects the predominance of non-event cases.

However, due to the rarity of cardiovascular events (~5.3% of cases), the F1-scores for detecting the event class remained very low (5.2% and 4.2%, respectively), signalling that the models struggle to reliably identify the minority class despite efforts to rebalance the data.
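
The gap between high accuracy and very low F1 follows directly from the class imbalance. As a stdlib-only sketch using the article's approximate figures (~10,700 patients, ~5.3% events; the counts below are illustrative), even a degenerate model that never predicts an event achieves ~94.7% accuracy while its F1-score is zero:

```python
# Why high accuracy can coexist with near-zero F1 on imbalanced data.
# Illustrative counts based on the article: ~10,700 patients, ~5.3% events.
n_total = 10_700
n_events = round(n_total * 0.053)      # ~567 minority-class (event) cases
n_no_events = n_total - n_events

# A degenerate model that always predicts "no event":
tp, fp = 0, 0                          # it never predicts the event class
fn = n_events                          # every real event is missed
tn = n_no_events                       # every non-event is "correct"

accuracy = (tp + tn) / n_total
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
f1 = (2 * precision * recall / (precision + recall)
      if (precision + recall) else 0.0)

print(f"accuracy = {accuracy:.1%}, F1-score = {f1:.1%}")
```

This is why the authors report F1 on the event class separately: accuracy alone is dominated by the majority class and says little about whether cardiovascular events are actually being detected.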

Using SHAP (SHapley Additive exPlanations) values, the researchers identified the most influential predictors: age, urea level, platelet count and the SatO₂/FiO₂ (oxygen saturation to inspired oxygen fraction) ratio.
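
SHAP attributions are Shapley values from cooperative game theory: each feature's contribution is averaged over all possible feature subsets. The study used LightGBM with SHAP tooling; the sketch below is only a stdlib illustration of the underlying idea, computing exact Shapley values by brute force for a hypothetical three-feature linear model (weights, baseline and instance values are made up), where the attribution for feature i reduces to w_i · (x_i − baseline_i):

```python
# Exact Shapley-value feature attribution for a toy linear model.
from itertools import combinations
from math import factorial

weights = [0.5, -1.2, 2.0]        # hypothetical model coefficients
baseline = [50.0, 40.0, 30.0]     # reference values (e.g. cohort means)
x = [70.0, 35.0, 33.0]            # one instance's feature values

def predict(features):
    return sum(w * f for w, f in zip(weights, features))

def value(subset):
    # Features in `subset` take the instance's values; the rest stay at baseline.
    mixed = [x[i] if i in subset else baseline[i] for i in range(len(x))]
    return predict(mixed)

def shapley(i, n):
    # Average marginal contribution of feature i over all subsets of the others.
    total = 0.0
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for s in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (value(set(s) | {i}) - value(set(s)))
    return total

n = len(x)
phis = [shapley(i, n) for i in range(n)]
# Efficiency property: attributions sum to prediction minus baseline prediction.
assert abs(sum(phis) - (predict(x) - predict(baseline))) < 1e-9
print([round(p, 2) for p in phis])
```

In practice, tree-model SHAP implementations compute these values efficiently rather than by the exponential enumeration above, but the interpretation is the same: a ranking of how strongly each predictor (age, urea, platelets, SatO₂/FiO₂) pushes an individual prediction away from the baseline.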

The authors emphasise that while the approach shows promise for resource-constrained settings and contributes to risk stratification, the limitations around class imbalance and generalisability remain significant obstacles for clinical use.

New AI platforms approved for Surrey Schools classrooms

Surrey Schools has approved MagicSchool, SchoolAI, and TeachAid for classroom use, giving teachers access through the ONE portal with parental consent. The district says the tools are intended to support instruction while maintaining strong privacy and safety safeguards.

Officials say each platform passes rigorous reviews covering educational value, data protection, and technical security before approval. Teachers receive structured guidance on appropriate use, supported by professional development aligned with wider standards for responsible AI in education.

A two-year digital literacy programme helps staff explore online identity, digital habits, and safe technology use as AI becomes more common in lessons. Students use AI to generate ideas, check code, and analyse scientific or mathematical problems, reinforcing critical reasoning.

Educators stress that pupils are taught to question AI outputs rather than accept them at face value. Leaders argue this approach builds judgment and confidence, preparing young people to navigate automated systems with greater agency beyond school settings.

Families and teachers can access AI safety resources through the ONE platform, including videos, podcasts and the ‘Navigating an AI Future’ series. Materials include recordings from earlier workshops and parent sessions, supporting shared understanding of AI’s benefits and risks across the community.

AI credentials grow as AWS launches practical training pathway

AWS is launching four solutions to help close the AI skills gap as demand rises and job requirements shift. The company positions the new tools as a comprehensive learning journey, offering structured pathways that progress from foundational knowledge to hands-on practice and formal validation.

AWS Skill Builder now hosts over 220 free AI courses, ranging from beginner introductions to advanced topics in generative and agentic AI. The platform enables learners to build skills at their own pace, with flexible study options that accommodate work schedules.

Practical experience anchors the new suite. The Meeting Simulator helps learners explain AI concepts to realistic personas and refine communication with instant feedback. Cohorts Studio offers team-based training through study groups, boot camps, and game-based challenges.

AWS is expanding its credential portfolio with the AWS Certified Generative AI Developer – Professional certification. The exam helps cloud practitioners demonstrate proficiency in foundation models, RAG architectures, and responsible deployment, supported by practice tasks and simulated environments.

Learners can validate hands-on capability through new microcredentials that require troubleshooting and implementation in real AWS settings. Combined credentials signal both conceptual understanding and task-ready skills, with Skill Builder’s more expansive library offering a clear starting point for career progression.

AI could cut two-thirds of jobs at major UK online retailer

Automation and AI could drastically reduce jobs at one of the UK’s largest online retailers. Buy It Direct, which employs over 800 staff, predicts more than 500 positions may be lost within three years, as AI and robotics take over office and warehouse roles.

Chief executive Nick Glynne cited rises in the national living wage and National Insurance contributions as factors accelerating the company’s shift towards automation.

The firm has already started outsourcing senior roles overseas, including accountants, managers and IT specialists, in response to higher domestic costs.

HM Treasury defended its policies, highlighting reforms to business rates and international trade deals, alongside corporation tax capped at 25%.

Meanwhile, concern is growing across the UK about AI replacing jobs, with graduates in fields such as graphic design and computer science facing mounting competition from automated tools.
