UK backs Isomorphic Labs to strengthen sovereign AI and drug discovery

The UK government has announced a new investment in London-based Isomorphic Labs through its Sovereign AI Fund, strengthening national efforts to support homegrown AI companies developing strategic technologies.

The company focuses on using frontier AI systems to redesign how medicines are discovered and developed. Isomorphic Labs builds on the scientific foundations of AlphaFold, the DeepMind system capable of predicting protein structures with high accuracy, while expanding into broader AI-driven drug design models across multiple therapeutic areas.

The investment forms part of a wider fundraising round as the company scales efforts to accelerate medicine development and reduce the time traditionally required for pharmaceutical research. British officials described the initiative as part of a broader strategy to strengthen sovereign AI capabilities, support domestic innovation, and ensure future AI breakthroughs remain anchored in the UK economy.

The Sovereign AI programme, launched in 2026, combines venture capital investment with government-backed support for promising UK AI firms. Officials say supported companies must maintain a meaningful British presence while contributing to domestic economic growth, technological leadership, and high-skilled employment.

Why does it matter?

AI is increasingly moving beyond consumer applications and into strategic sectors such as biotechnology, pharmaceuticals, and healthcare infrastructure. The UK’s backing of Isomorphic Labs reflects growing international competition to secure sovereign AI capabilities tied to scientific research, intellectual property, and future economic advantage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea reviews AI cyber threat response

South Korea’s Office of National Security held a cybersecurity meeting to review how government agencies are responding to AI-driven cyber threats. The session focused on the growing risks posed by the misuse of advanced AI technologies.

Officials from multiple ministries and agencies, including science, defence and intelligence bodies, attended to coordinate responses. The government warned that AI-enabled hacking is becoming an increasingly realistic threat as global technology companies release ever more advanced models.

Authorities have instructed relevant agencies to strengthen cooperation with businesses and institutions and distributed guidance on responding to AI-based security risks. Discussions also covered practical measures to support rapid responses to cybersecurity vulnerabilities across public and private sectors.

The government plans to establish a joint technical response team to improve information sharing and enable immediate action. Officials emphasised that while AI increases cyber risks, it also offers opportunities to strengthen security capabilities in South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australian Senate opens inquiry into AI data centres

The Australian Greens announced in an official statement that the Senate has established a parliamentary inquiry into AI data centres. The move follows growing concern over the rapid expansion of energy-intensive AI infrastructure and limited federal oversight.

The inquiry will examine environmental, economic and social impacts, including energy and water use, effects on communities, and the regulatory framework governing AI. It aims to better understand how these facilities influence resources and infrastructure.

Greens Senator Sarah Hanson-Young said communities have raised concerns about pressure on energy supply, water availability and environmental protection. She also called for greater transparency and parliamentary scrutiny of agreements involving global technology companies.

The party warned against repeating past regulatory failures and stressed the need for accountability as AI infrastructure expands. The inquiry is expected to gather input from affected communities and stakeholders across Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK backs stronger cooperation on AI and frontier technologies at OSCE

The UK has highlighted both the opportunities and risks linked to frontier technologies during a high-level conference organised in Geneva by the Organization for Security and Co-operation in Europe (OSCE).

Speaking at the event, UK Tech Envoy Sarah Spencer said AI could support early warning and early action in humanitarian crises, but could also amplify misinformation and instability if misused or deployed without adequate safeguards.

Spencer said responsible governance of frontier technologies requires partnerships between states, institutions, industry and civil society, arguing that such cooperation matters more than individual products in building inclusive, responsible and sustainable digital ecosystems.

She also highlighted the OSCE’s role in fostering dialogue on frontier technologies, reducing misunderstandings and supporting anticipatory approaches to governance. The UK said it was ready to support efforts to ensure technological progress contributes to a safer, more secure and more humane future.

The conference, titled ‘Anticipating technologies – for a safe and humane future’, brought together participants to discuss how emerging technologies are affecting security, stability and international cooperation.

Why does it matter?

The statement places AI and other frontier technologies within a security and diplomacy context, rather than treating them only as innovation issues. It highlights growing concern that emerging technologies can support humanitarian and development goals, but also create risks for misinformation, conflict escalation and strategic stability if governance and cooperation lag behind deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU Commission reviews Android DMA rules on interoperability

The European Commission is consulting third parties on proposed measures requiring Alphabet to ensure effective interoperability between Google Android and AI services under the Digital Markets Act.

The draft measures focus on AI services’ access to key Android capabilities, including wake-word activation, contextual data, integration with applications, and access to hardware and software resources needed for reliable and responsive services.
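To make the stakes concrete: Android already exposes wake-word plumbing, but only to the app the user has selected as the system voice interaction service. The Kotlin sketch below uses the platform’s existing VoiceInteractionService and AlwaysOnHotwordDetector APIs to show roughly what that hook looks like; the class name and keyphrase are illustrative, not taken from the Commission’s draft measures.

```kotlin
// Illustrative only: a third-party assistant using Android's existing
// wake-word hook. Today this path is reserved for the device's active
// voice interaction service; the draft DMA measures concern whether
// competing AI services get equivalent access.
import android.service.voice.AlwaysOnHotwordDetector
import android.service.voice.VoiceInteractionService
import java.util.Locale

class ThirdPartyAssistantService : VoiceInteractionService() {

    private var detector: AlwaysOnHotwordDetector? = null

    override fun onReady() {
        super.onReady()
        // Ask the platform for an always-on detector bound to a keyphrase
        // (the keyphrase here is a made-up example).
        detector = createAlwaysOnHotwordDetector(
            "hey example",
            Locale.UK,
            object : AlwaysOnHotwordDetector.Callback() {
                override fun onAvailabilityChanged(status: Int) {
                    // Recognition can start only once the keyphrase is enrolled.
                    if (status == AlwaysOnHotwordDetector.STATE_KEYPHRASE_ENROLLED) {
                        detector?.startRecognition(
                            AlwaysOnHotwordDetector.RECOGNITION_FLAG_NONE
                        )
                    }
                }

                override fun onDetected(event: AlwaysOnHotwordDetector.EventPayload) {
                    // Wake word heard: hand off to the assistant's own session.
                }

                override fun onError() {
                    // Detector failed; a real assistant would retry or fall back.
                }

                override fun onRecognitionPaused() {}
                override fun onRecognitionResumed() {}
            }
        )
    }
}
```

Because only the system-selected service can create such a detector, access of this kind is precisely the sort of capability the proposed interoperability measures would require Alphabet to open up.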

The Commission opened proceedings in January 2026 to specify how Alphabet should comply with DMA interoperability obligations for features relevant to AI services. Its proposed measures cover invocation, context, actions on apps and the operating system, access to resources, and general requirements such as free access, documented frameworks and APIs, technical assistance and reporting.

Stakeholders have been asked to comment on the effectiveness, completeness, feasibility and implementation timelines of the proposed measures, particularly from the perspective of AI service providers and Android device manufacturers.

Input from Alphabet and interested third parties may lead to adjustments before the Commission adopts a final decision making the measures legally binding. The Commission is expected to adopt that decision by 27 July 2026.

Why does it matter?

The case shows how the DMA is being applied to the emerging competitive landscape for AI assistants and mobile operating systems. If third-party AI services need access to Android features such as wake words, contextual data, app actions and on-device resources to compete effectively, interoperability rules could shape which AI tools reach users and how much control gatekeepers retain over mobile AI ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta tests compromise plan in EU WhatsApp AI access dispute

European Commission officials are examining whether Meta’s policy on access to WhatsApp for AI providers may raise competition concerns in the European Economic Area.

Changes to the WhatsApp Business Solution terms are at the centre of the investigation, particularly as they affect how third-party AI providers can offer services on the platform. The Commission is assessing whether the policy could limit access for competing AI services and reduce choice for users and businesses.

Messaging platforms are becoming important distribution channels for AI-powered services. As chatbots and AI assistants become more integrated into everyday communication tools, access to widely used platforms such as WhatsApp may become an important factor in competition between providers.

Commission officials have said they will examine whether Meta’s conduct complies with EU competition rules. Opening an investigation does not mean that the Commission has reached a conclusion or found an infringement.

Broader EU scrutiny of large digital platforms is increasingly focused on how access to infrastructure, services and user ecosystems is managed as AI tools become more widely adopted.

Why does it matter?

Competition questions are expanding into AI distribution channels. Messaging platforms can shape which AI services reach users and businesses at scale, making access rules an important part of the emerging AI market. The outcome could influence how major platforms design access policies for third-party AI providers while regulators seek to preserve competition and user choice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!

Data Protection Act regulations bring AI code requirement into force

The UK has brought into force regulations requiring the Information Commissioner to prepare a code of practice on the processing of personal data in relation to AI and automated decision-making.

The Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026 were made on 16 April, laid before Parliament on 21 April, and came into force on 12 May. The regulations apply across England and Wales, Scotland and Northern Ireland.

Under the regulations, the Information Commissioner must prepare a code giving guidance on good practice in the processing of personal data under the UK GDPR and the Data Protection Act 2018 when developing and using AI and automated decision-making systems.

The code must also include guidance on good practice in the processing of children’s personal data. Automated decision-making is defined by reference to provisions in the UK GDPR and the Data Protection Act 2018 inserted through the Data (Use and Access) Act 2025.

The instrument also modifies the panel requirements for preparing or amending the code. Any panel established to consider the code must not consider or report on aspects relating to national security.

The explanatory note states that no full impact assessment was prepared for the instrument because the regulations themselves are not expected to have a significant impact on the private, voluntary or public sectors. The Information Commissioner must produce an impact assessment when preparing the code.

Why does it matter?

The regulations move UK guidance on AI, automated decision-making and personal data onto a statutory track. The eventual code could become an important reference point for organisations using AI systems that process personal data, particularly where automated decisions or children’s data are involved. For now, the main development is procedural: the Information Commissioner is required to prepare the code, while the practical compliance details will follow through that process.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI education guidelines updated by European Commission

Updated European Commission guidelines on the ethical use of AI and data in teaching and learning aim to help teachers and school leaders use the technology safely and responsibly in line with EU values.

The revised edition updates the Commission’s 2022 guidance to reflect the rapid growth of generative AI in education and the implications of the EU AI Act. The document is non-binding and is intended to support teachers, school leaders and education authorities, rather than serve as enforcement guidance on the AI Act.

AI tools can support lesson planning, personalised learning, assessment, feedback, school administration and the early identification of learning needs, according to the guidelines. At the same time, they warn that general-purpose AI tools were not designed specifically for education and may lack appropriate safeguards.

Ethical and legal considerations should not be treated as an add-on to AI use in schools, but as fundamental to how the technology is understood, adopted and applied, the Commission says. The guidelines highlight risks linked to bias, privacy, lack of transparency, over-reliance, academic integrity and the use of student data by commercial technology providers.

Rules under the EU AI Act and the General Data Protection Regulation are also explained in the document. Some AI systems used for admissions, grading, behavioural monitoring, student progress tracking or detecting prohibited behaviour during tests may be classified as high-risk, while emotion recognition systems are prohibited in educational settings except for medical or safety-related reasons.

Key ethical considerations identified in the guidelines include human dignity, fairness, trustworthiness, academic integrity and justified choice. They also provide guiding questions for teachers and schools on human oversight, transparency, explainability, diversity, inclusion, privacy, safety and accountability.

Executive Vice-President Roxana Mînzatu says the ethical use of AI must remain the guiding principle and that teachers are ‘uniquely placed to act as ethical guardians for their students’. The Commission frames the update as part of wider EU work on digital education, skills, AI literacy and the future of education systems in the age of AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China AI ethics draft translated by Georgetown’s CSET

The Center for Security and Emerging Technology (CSET), a policy research organisation within Georgetown University’s Walsh School of Foreign Service, has published an English translation of China’s draft trial measures on ethics reviews for AI technology.

The translated draft says the measures would apply to AI-related scientific and technological activities conducted within China that may pose ethical risks to human health, human dignity, the ecological environment, public order, or sustainable development. It covers universities, research institutions, medical and health institutions, enterprises, and other organisations involved in AI research and development.

Under the draft, organisations with the necessary conditions would be expected to establish AI technology ethics committees, while others could commission specialised ethics service centres to conduct reviews. Review applications would need to include details on the AI activity, algorithms, data sources, data cleaning methods, testing and evaluation, expected applications, user groups, risk assessments, and risk prevention plans.

The review process would focus on fairness and impartiality; controllability and trustworthiness; transparency and explainability; accountability and traceability; and whether the activity has scientific and social value. Committees or service centres would generally have 30 days to approve, reject, or request revisions to an application.

Higher-risk activities would require expert reconsideration. The draft list includes human-computer fusion systems that strongly affect behaviour, psychological or emotional states, or health; AI models and systems able to mobilise public opinion or channel social consciousness; and highly autonomous automated decision-making systems used in safety or personal health-risk scenarios.

Approved AI activities would also be subject to follow-up reviews, generally at intervals of no more than 12 months, while activities requiring expert reconsideration would be subject to follow-up reviews at least every 6 months. Emergency ethics reviews would normally have to be completed within 72 hours.

CSET notes that China released a final trial version of the regulation in April 2026, and the centre is now translating that final text. The newly published draft translation therefore provides insight into the regulatory structure that preceded the final version, including committee-based ethics review, external service centres, expert reconsideration, and oversight roles for the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and other departments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New South Wales criminalises AI sexual deepfakes

Australia’s New South Wales state has clarified that creating, sharing, or threatening to share sexually explicit images, videos, or audio of a person without consent is a criminal offence, including where the material has been digitally altered or generated using AI.

The state government strengthened protections in 2025 by amending the Crimes Act 1900 to cover digitally generated deepfakes. The law already applied to sexually explicit image material, but now also covers content created or altered by AI to place someone in a sexual situation they were never in.

The reforms mean that non-consensual sexual images or audio are covered regardless of how they were made. Threatening to create or share such material is also a criminal offence in New South Wales, with penalties of up to three years in prison, a fine of up to A$11,000, or both.

Courts can also order offenders to remove or delete the material. Failure to comply with such an order can result in up to two years’ imprisonment, a fine of up to A$5,500, or both.

The law operates alongside existing child abuse material offences. Under criminal law, any material depicting a person under 18 in a sexually explicit way can be treated as child abuse material, including AI-generated content.

Criminal proceedings against people under 16 can begin only with the approval of the Director of Public Prosecutions, which is intended to ensure that only the most serious matters involving young people enter the criminal justice system.

Limited exemptions apply for proper purposes, such as those genuinely related to medicine, science, law enforcement, or legal proceedings. A review of the law will take place 12 months after it comes into effect to assess how it is working and whether changes are needed.

The changes are intended to address the misuse of AI and deepfake technology to harass, shame, or exploit people through fake digital content. New South Wales says its criminal law works alongside national online safety frameworks, including the work of Australia’s eSafety Commissioner, as it seeks to keep privacy and consent protections aligned with emerging technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!