Scotland sets up national AI agency

The Scottish government has launched a dedicated national agency to drive AI strategy and support local tech companies. Leaders say this effort could help boost the economy and establish the nation as a hub for AI development.

Scotland’s strategy highlights existing tech firms and data projects, including plans for major computing campuses and partnerships with global technology companies. Several research institutions and supercomputing initiatives are contributing to innovation.

Healthcare is a focus for AI adoption, with studies showing that AI tools could improve cancer detection, speed up diagnoses, and reduce workload. Academic projects also aim to develop tools to detect early signs of dementia.

Scottish government officials have acknowledged ethical, workforce and environmental concerns around AI deployment. They say policies will include responsible use, job planning and efforts to maximise renewable energy in support of data infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Deepfakes scandal puts Elon Musk and X under scrutiny in France

French prosecutors have escalated concerns about deepfakes linked to Elon Musk’s platform X, alerting US authorities to suspicions that manipulated content may have been used to influence the company’s valuation.

According to the Paris prosecutor’s office, the controversy surrounding sexually explicit deepfakes generated by Grok, X’s AI tool, may have been deliberately amplified to artificially boost the value of X and its associated AI entity ahead of a planned stock market listing in June 2026.

Authorities in France confirmed they had contacted the US Department of Justice and legal representatives at the Securities and Exchange Commission to share findings related to the deepfakes investigation and potential financial implications.

The case builds on an ongoing French probe into X, which initially focused on alleged algorithmic interference in domestic politics. Investigations have since expanded to include the spread of Holocaust denial content and the dissemination of sexualised deepfakes through Grok.

French regulators have taken additional steps, including summoning Musk for a voluntary interview and conducting searches at X’s local offices, actions he has described as politically motivated. Parallel investigations have also been launched in the UK and across the European Union into the use of AI tools to generate harmful deepfakes involving women and minors.

Deepfake abuse crisis escalates worldwide

AI-generated deepfake abuse is emerging as a serious global threat, with women and girls disproportionately affected by non-consensual and harmful digital content. Advances in AI make it easy to create manipulated content that can spread across platforms within minutes and reach millions.

Data highlights the scale of the issue. The vast majority of deepfake content online consists of explicit material, overwhelmingly targeting women.

Accessible and often free tools have lowered the barrier to entry, enabling widespread misuse. At the same time, the ability to endlessly replicate and share such content makes removal nearly impossible once it is published.

Legal responses remain fragmented, with many pre-existing laws leaving gaps in addressing AI-generated deepfake abuse. Enforcement issues, such as cross-border challenges and limited digital forensics capabilities, make it unlikely that perpetrators will face consequences.

Pressure is mounting on governments and technology platforms to act. Calls for reform include clearer legislation, faster obligations to remove content, improved law enforcement capabilities, and stronger support systems for victims.

Without coordinated global action, deepfake abuse is set to expand alongside the technologies enabling it.

Telefónica Tech moves to combine AI and quantum computing

Telefónica Tech has partnered with three European firms to bring AI and quantum computing closer together. The collaboration aims to improve how advanced models are developed and deployed across different environments.

The initiative brings together Qilimanjaro Quantum Tech, Multiverse Computing and Qcentroid. Their combined expertise is expected to support more efficient, compact and locally deployable AI systems.

Quantum computing is seen as a way to reduce the heavy processing demands of large AI models. Faster computation could yield more accurate results while reducing the time required to solve complex problems.

Each partner contributes specialised capabilities, from quantum hardware and algorithms to software platforms and orchestration tools. These technologies could support applications such as simulations, edge AI and rapid prototyping.

Telefónica Tech is also strengthening its role in integrating AI and quantum solutions for enterprise clients. The move reflects a broader push to build scalable, sovereign and next-generation digital infrastructure in Europe.

Essex strawberry-picking robot wins national award for industry collaboration

A University of Essex robotics project designed to automate crop harvesting has won the Best Research Project (Industry Collaboration) award at the 2026 UKRI AI & Robotics Research Awards.

The Sustainable smArt Robotic Agriculture (SARA) project was developed in collaboration with industry partners Wilkin and Sons, JEPCO, and GyroPlant, and addresses three interconnected challenges: food security, labour shortages, and sustainability.

Central to the project is the development of low-cost AgriRobotics systems capable of adapting to different crops, tasks, and growing environments, automating repetitive, labour-intensive farm work whilst reducing wastage, carbon footprint, and dependence on increasingly scarce agricultural labour.

The team delivered a live strawberry-harvesting demonstration at the Innovate UK Robotics Industry Showcase in March, an event aligned with UKRI’s announcement of a £52 million competition for Robotics Adoption Hubs.

Building on the project’s success, lead researchers Professor Klaus McDonald-Maier and Dr Vishwanathan Mohan have launched a spinout company, Versatile RobotX, to accelerate the commercialisation of the technology and extend its global impact.

The SARA project previously won the Best Demonstration category at the same awards in 2025.

Inspired Education introduces AI-driven learning for primary schools

Inspired Education has unveiled a new AI-enabled primary teaching model designed to modernise traditional learning systems. The programme aims to better align education with how children learn in a digital and fast-changing environment.

The model combines core academic subjects in the morning with applied learning in the afternoon. Students focus on life skills such as problem-solving, entrepreneurship and communication alongside standard curriculum content.

Learning is structured around mastery rather than age, allowing children to progress at their own pace. AI-powered tools are used to personalise lessons and support faster and more adaptive learning outcomes.

The first early-access programme will launch in Central London in January 2027. Further rollouts are planned across cities, including Lisbon, Milan, Madrid, Mexico City, São Paulo and Auckland.

Developers say the approach responds to growing demand from parents for AI-integrated education. The initiative reflects broader efforts to prepare students with digital, practical and future-ready skills.

US senator proposes AI rules for children

A US senator has introduced a draft framework to establish nationwide AI rules, with a focus on child safety and copyright protection. The proposal seeks to create a unified federal approach to replace a patchwork of differing state laws.

The plan would require developers to implement safeguards for minors, including age verification, data protection and mechanisms to report harm. Companies could also face legal action over failures linked to AI system design.

Copyright measures include new standards for identifying AI-generated content and preventing tampering. Authorities would also develop cybersecurity guidelines to support the transparency and authenticity of content.

Debate in the US continues over the balance between regulation and innovation, with some stakeholders warning of legal and economic risks. Discussions between lawmakers and the administration are expected to shape a final framework.

OpenAI acquires Astral to expand Codex developer tools

OpenAI is acquiring Astral as developer tooling becomes a bigger focus, with the deal aimed at boosting the capabilities of its Codex platform. The move is expected to bring widely used open-source Python tools into the ecosystem, including uv, Ruff, and ty, which are already embedded in millions of developer workflows.

The acquisition is intended to strengthen Codex’s role across the full software development lifecycle, moving beyond code generation toward more integrated and autonomous systems.

The company has positioned Codex as a system that can plan changes, modify codebases, run tools, and verify results, with usage already growing rapidly. OpenAI reported a threefold increase in users and a fivefold increase in activity this year, bringing its total to more than 2 million weekly active users.

Astral’s tools are seen as a natural fit for this vision, given their role in managing dependencies, enforcing code quality, and improving reliability in Python-based development. Integrating these tools could allow AI agents to interact more directly with the environments developers already use.

The acquisition also reinforces the importance of Python as a core language in modern software development, particularly across AI, data science, and backend systems. OpenAI said it plans to continue supporting Astral’s open-source projects while exploring deeper integration with Codex.

The deal remains subject to regulatory approval, and both companies will operate independently until completion. Once finalised, Astral’s team is expected to join OpenAI’s Codex division as the company continues building AI systems designed to collaborate across the development workflow.

EU digital wallet nears rollout

Interoperability tests for the European Digital Identity Wallet have marked a significant step towards deployment, following a major industry-wide exercise. Systems were tested under real conditions to ensure compatibility across providers.

The initiative forms part of the EU’s plan to provide citizens with a secure digital wallet for identification and online services. The system will allow users to store identity data and access services, including electronic signatures.

Results showed that most test scenarios were successfully completed, confirming that independent systems can work together effectively. The exercise also highlighted areas requiring further refinement ahead of wider implementation.

EU officials and industry leaders said the progress supports the development of a unified digital ecosystem. The wallet is expected to simplify everyday services while strengthening security and trust in digital identity solutions.

Malaysia tightens rules on data centres

Malaysia has quietly restricted new data centre approvals to projects linked to AI, signalling a strategic shift in its digital economy. Authorities confirmed that approvals for non-AI projects have been halted for nearly two years.

The policy reflects mounting pressure on energy and water resources as demand for data centres accelerates. Officials aim to ensure infrastructure supports high-value AI projects rather than lower-impact investments.

Rapid growth has positioned Malaysia as a key regional hub, attracting major global technology firms. Concerns remain over whether the country risks hosting infrastructure without building local innovation capacity.

Leaders say future efforts will focus on balancing investment with domestic benefits and energy sustainability. Plans include expanding power supply and strengthening national AI capabilities to secure long-term gains.
