UK backs Isomorphic Labs to strengthen sovereign AI and drug discovery

The UK government has announced a new investment in London-based Isomorphic Labs through its Sovereign AI Fund, strengthening national efforts to support homegrown AI companies developing strategic technologies.

The company focuses on using frontier AI systems to redesign how medicines are discovered and developed. Isomorphic Labs builds on the scientific foundations of AlphaFold, the DeepMind system capable of predicting protein structures with high accuracy, while expanding into broader AI-driven drug design models across multiple therapeutic areas.

The investment forms part of a wider fundraising round as the company scales efforts to accelerate medicine development and reduce the time traditionally required for pharmaceutical research. British officials described the initiative as part of a broader strategy to strengthen sovereign AI capabilities, support domestic innovation, and ensure future AI breakthroughs remain anchored in the UK economy.

The Sovereign AI programme, launched in 2026, combines venture capital investment with government-backed support for promising UK AI firms. Officials say supported companies must maintain a meaningful British presence while contributing to domestic economic growth, technological leadership, and high-skilled employment.

Why does it matter?

AI is increasingly moving beyond consumer applications and into strategic sectors such as biotechnology, pharmaceuticals, and healthcare infrastructure. The UK’s backing of Isomorphic Labs reflects growing international competition to secure sovereign AI capabilities tied to scientific research, intellectual property, and future economic advantage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea reviews AI cyber threat response

The Office of National Security of South Korea held a cybersecurity meeting to review how government agencies are responding to AI-driven cyber threats. The session focused on the growing risks posed by the misuse of advanced AI technologies.

Officials from multiple ministries attended, including science, defence and intelligence bodies, to coordinate responses. The government warned that AI-enabled hacking is becoming an increasingly real threat as global technology companies release more advanced models.

Authorities have instructed relevant agencies to strengthen cooperation with businesses and institutions and distributed guidance on responding to AI-based security risks. Discussions also covered practical measures to support rapid responses to cybersecurity vulnerabilities across public and private sectors.

The government plans to establish a joint technical response team to improve information sharing and enable immediate action. Officials emphasised that while AI increases cyber risks, it also offers opportunities to strengthen security capabilities in South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian Senate opens inquiry into AI data centres

The Australian Greens announced that the Senate has established a parliamentary inquiry into AI data centres, according to its official statement. The move follows growing concern over the rapid expansion of energy-intensive AI infrastructure and limited federal oversight.

The inquiry will examine environmental, economic and social impacts, including energy and water use, effects on communities, and the regulatory framework governing AI. It aims to better understand how these facilities influence resources and infrastructure.

Greens Senator Sarah Hanson-Young said communities have raised concerns about pressure on energy supply, water availability and environmental protection. She also called for greater transparency and parliamentary scrutiny of agreements involving global technology companies.

The party warned against repeating past regulatory failures and stressed the need for accountability as AI infrastructure expands. The inquiry is expected to gather input from affected communities and stakeholders across Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK backs stronger cooperation on AI and frontier technologies at OSCE

The UK has highlighted both the opportunities and risks linked to frontier technologies during a high-level conference organised by the Organization for Security and Co-operation in Europe in Geneva.

Speaking at the event, UK Tech Envoy Sarah Spencer said AI could support early warning and early action in humanitarian crises, but could also amplify misinformation and instability if misused or deployed without adequate safeguards.

Spencer said responsible governance of frontier technologies requires partnerships between states, institutions, industry and civil society, arguing that such cooperation matters more than individual products in building inclusive, responsible and sustainable digital ecosystems.

She also highlighted the OSCE’s role in fostering dialogue on frontier technologies, reducing misunderstandings and supporting anticipatory approaches to governance. The UK said it was ready to support efforts to ensure technological progress contributes to a safer, more secure and more humane future.

The conference, titled ‘Anticipating technologies – for a safe and humane future’, brought together participants to discuss how emerging technologies are affecting security, stability and international cooperation.

Why does it matter?

The statement places AI and other frontier technologies within a security and diplomacy context, rather than treating them only as innovation issues. It highlights growing concern that emerging technologies can support humanitarian and development goals, but also create risks for misinformation, conflict escalation and strategic stability if governance and cooperation lag behind deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU Commission reviews Android DMA rules on interoperability

The European Commission is consulting third parties on proposed measures requiring Alphabet to ensure effective interoperability between Google Android and AI services under the Digital Markets Act.

The draft measures focus on AI services’ access to key Android capabilities, including wake-word activation, contextual data, integration with applications, and access to hardware and software resources needed for reliable and responsive services.

The Commission opened proceedings in January 2026 to specify how Alphabet should comply with DMA interoperability obligations for features relevant to AI services. Its proposed measures cover invocation, context, actions on apps and the operating system, access to resources, and general requirements such as free access, documented frameworks and APIs, technical assistance and reporting.

Stakeholders were asked to comment on the effectiveness, completeness, feasibility and implementation timelines of the proposed measures, particularly from the perspective of AI service providers and Android device manufacturers.

Input from Alphabet and interested third parties may lead to adjustments before the Commission adopts a final decision that makes the measures legally binding. The Commission is expected to adopt that decision by 27 July 2026.

Why does it matter?

The case shows how the DMA is being applied to the emerging competitive landscape for AI assistants and mobile operating systems. If third-party AI services need access to Android features such as wake words, contextual data, app actions and on-device resources to compete effectively, interoperability rules could shape which AI tools reach users and how much control gatekeepers retain over mobile AI ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic launches Claude Platform on AWS with managed AI agent tools

Anthropic has made Claude Platform on AWS generally available, giving AWS customers access to Claude Platform features through AWS authentication, billing and infrastructure integrations.

The platform includes Claude Managed Agents, code execution, web search, web fetch, prompt caching, batch processing, citations, support for the Files API, and support for Skills and MCP connectors. Anthropic said new Claude models and beta tools will become available on AWS at the same time they launch on the native Claude API.

Authentication runs through AWS Identity and Access Management, while audit logging is handled through AWS CloudTrail and billing through a single AWS invoice. Anthropic said the service is designed for organisations seeking native Claude Platform functionality while staying within existing AWS credentials, permissions and operational workflows.

The company also clarified the distinction between Claude Platform on AWS and Claude on Amazon Bedrock. Under the new platform, Anthropic operates the service and data is processed outside the AWS boundary.

By contrast, Claude on Amazon Bedrock keeps AWS as the data processor and operates within the AWS boundary, making it more suitable for customers with strict regional data residency requirements or those needing data processed exclusively within AWS infrastructure.

Why does it matter?

The launch shows how competition between major AI providers is shifting towards enterprise deployment, cloud integration and agent-based automation. For organisations, the choice is no longer only about model performance, but also about where data is processed, how access is controlled, how audit logs are handled and whether AI agents can be deployed within existing cloud governance systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore cooperation with Japan targets AI in patent examination

The Intellectual Property Office of Singapore and the Japan Patent Office have announced a new cooperation initiative on the use of AI in patent substantive examination, as patent offices adapt to rapid technological change.

The initiative was announced after a bilateral meeting in Singapore between IPOS Chief Executive Tan Kong Hwee and JPO Commissioner Yasuyuki Kasai. It builds on a Memorandum of Cooperation signed in Tokyo last November.

Under the initiative, IPOS and JPO will launch a bilateral patent examiner exchange programme and hold regular technical exchanges on the use of AI in patent examination. The two offices said the cooperation is intended to strengthen capabilities, share best practices and develop robust processes for high-quality and trusted patent examination.

Tan said AI is reshaping innovation and work processes, making it necessary for IP offices to evolve while maintaining examination quality and trust. Kasai said the cooperation would bring together the experience and expertise of both offices and support innovation in both countries.

The cooperation will also cover patent search and examination quality management, benchmarking of examination practices, IT infrastructure development, operational management and IP policy exchanges. Both offices will also coordinate initiatives to support enterprises, including SMEs, and strengthen trade and IP flows between Singapore and Japan.

IPOS and JPO said the partnership reflects their shared commitment to addressing emerging challenges in the intellectual property landscape and keeping innovation ecosystems trusted, efficient and future-ready.

Why does it matter?

Patent offices are increasingly facing pressure to handle more complex applications while maintaining examination quality, consistency and trust. Cooperation between Singapore and Japan on AI-assisted examination shows how intellectual property authorities are beginning to adapt their own administrative systems to AI, not only to regulate AI-related inventions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New IRIS report links AI narratives to civic action

A report by the International Resource for Impact and Storytelling (IRIS) examines how organisations worldwide are adapting to AI and algorithm-driven platforms. It focuses on how technology and storytelling are being used to support democracy and counter harmful narratives.

The study draws on insights from 10 organisations, identifying key approaches such as co-opting technology, countering surveillance and disinformation, and innovating in storytelling. These strategies aim to reshape narratives and challenge authoritarian pressures.

Examples include campaigns addressing digital surveillance, projects using journalism to amplify marginalised voices, and creative approaches to civic engagement. The report also highlights the role of artists and storytellers in influencing how AI is understood.

The findings highlight the growing importance of narrative and culture in the digital landscape, as organisations experiment with new forms of communication and resistance. The research reflects global efforts to align AI with democratic values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Council of the EU pushes for human-centred AI in education systems

The Council of the European Union has approved conclusions calling for an ethical, safe and human-centred approach to AI in education, stressing that teachers should remain at the heart of the learning process as AI tools become more widely used across schools and universities.

The Council said the conclusions focus on strengthening digital skills and AI literacy, guaranteeing inclusion and fairness, empowering teachers, and supporting the well-being of both teachers and learners. It also noted that the relationship between AI and teaching is being addressed for the first time in EU education policy.

The EU ministers highlighted both the opportunities and risks associated with AI-driven education systems. The Council said AI could improve accessibility, support disadvantaged learners, enable more individualised teaching and assessment methods, and reduce administrative workloads for educators.

At the same time, the conclusions raise concerns about misinformation, algorithmic bias, over-reliance on technology, reduced teacher autonomy, data protection risks and the widening of digital inequalities across Europe. The Council also warned that AI could affect learners’ concentration and skill acquisition, while raising broader societal and environmental concerns.

The conclusions call on national governments to strengthen teachers’ AI and digital skills through training, while encouraging the development and use of education-specific AI tools that provide clear pedagogical value and align with data protection, accountability and risk-awareness requirements.

The Council also said teachers should have opportunities to contribute to the design and evaluation of AI tools used in education, reflecting a digital humanism approach focused on human agency and democratic values.

Member states are urged to ensure AI deployment does not undermine teachers’ autonomy or sustainable working conditions, and that digital tools remain accessible and suitable for all learners. The European Commission was encouraged to support international cooperation, research, ethical guidance, peer-to-peer exchanges and capacity-building as AI adoption accelerates across European education systems.

Why does it matter?

AI is moving into classrooms not only as a learning tool, but as part of how teaching, assessment, administration and student support are organised. The Council’s conclusions underline that education policy will need to address more than technical adoption, including teacher autonomy, digital inequality, learner well-being, data protection and the risk of over-reliance on automated systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google warns adversaries are industrialising AI-enabled cyberattacks

Google Threat Intelligence Group says cyber adversaries are moving from early AI experimentation towards the industrial-scale use of generative models across malicious workflows.

In a new report, GTIG says it has identified, for the first time, a threat actor using a zero-day exploit that it believes was developed with AI. The criminal actor had planned to use the exploit in a mass exploitation campaign involving a two-factor authentication bypass, but Google said its proactive discovery may have prevented the campaign from going ahead.

The findings describe several uses of AI in cyber operations. Threat actors linked to the People’s Republic of China and the Democratic People’s Republic of Korea have used AI for vulnerability research, including persona-based prompting, specialised vulnerability datasets and automated analysis of vulnerabilities and proof-of-concept exploits.

Other actors have used AI-assisted coding to support defence evasion, including the development of obfuscation tools, relay infrastructure and malware containing AI-generated decoy logic. Google said these uses show how generative models can accelerate development cycles and make malicious tools harder to detect.

Google also highlights PROMPTSPY, an Android backdoor that uses Gemini API capabilities to interpret device interfaces, generate structured commands, simulate gestures and support more autonomous malware behaviour. The company said it had disabled assets linked to the activity and that no apps containing PROMPTSPY were found on Google Play at the time of detection.

AI systems are also becoming direct targets. Google says attackers are compromising AI software dependencies, open-source agent skills, API connectors and AI gateway tools such as LiteLLM. The report warns that such supply-chain attacks could expose API secrets, enable ransomware activity or allow intruders to use internal AI systems for reconnaissance, data theft and deeper network access.

Why does it matter?

Google’s findings suggest that AI-enabled cyber activity is moving beyond basic phishing support or faster research. Generative models are now being used in vulnerability discovery, exploit development, malware obfuscation, autonomous device interaction, information operations and attacks on AI infrastructure itself. That could make some attacks faster, more adaptive and harder to detect, while also turning AI platforms, integrations and supply chains into part of the cyberattack surface.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!