Data centre security evolves with rise of robot dog patrols

Rising demand for AI and cloud computing is driving a surge in data centre construction, pushing operators to adopt new security solutions. Companies are increasingly deploying robotic dogs to patrol sites and monitor operations.

These four-legged machines can inspect equipment, detect anomalies and alert staff before issues escalate. Merry Frayne, senior director of product management at Boston Dynamics, noted a sharp increase in interest as investment in data infrastructure continues to grow.

Developed by firms such as Boston Dynamics and Ghost Robotics, the robots are designed to support rather than replace human guards. Their use can reduce costs by requiring fewer personnel while maintaining continuous monitoring.

The machines can travel long distances on a single charge and operate across both external and internal environments. Some facilities already use them on pre-programmed patrols to collect data and flag unusual activity.

At the same time, competition in robotics is intensifying globally, with companies exploring humanoid and AI-powered systems. Advances from firms like Nvidia and Tesla highlight how automation is expanding beyond security into broader industrial use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Government commits up to £2 billion to quantum technologies

The UK Government has announced up to £2 billion in funding for quantum technologies, including more than £1 billion over the next four years (confirmed by UKRI in December 2025) and a new procurement programme, ProQure, designed to support the scaling of quantum computing across the UK.

The announcement is being billed as the country’s ‘Quantum Leap’, positioning the UK as a first mover in quantum commercialisation.

The funding is distributed across several areas: over £500 million for quantum computing to help companies scale and develop applications in pharmaceuticals, financial services, and energy; £125 million for quantum networking; and £205 million for quantum sensing and navigation, with dedicated applications in medical diagnostics, greenhouse gas monitoring, and ultra-secure communications.

A further £13.8 million will be injected into the UK’s five National Quantum Research Hubs, with an additional £90 million for quantum infrastructure and £20 million for skills and commercialisation programmes. 

techUK welcomed the announcement, noting that the UK is already home to 11% of the world’s quantum startups and has attracted 12% of global quantum private equity investment.

The trade association highlighted the ProQure procurement programme as a step in the right direction, but cautioned that sustained, long-term private investment will be essential to support deep-tech companies through lengthy development cycles. 

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Advanced AI education unlocks powerful opportunities across Africa

Advanced AI education is expanding across Africa. Google DeepMind has launched new courses to support the next generation of technical learners and reduce the gap between AI talent and opportunities on the continent.

The initiative is supported by targeted funding: Google.org is providing $4 million to train lecturers and develop educational toolkits, aiming to strengthen local capacity and scale AI education.

Moreover, the curriculum focuses on practical and technical skills. Learners gain hands-on experience with generative AI models and transformers, including building and fine-tuning language models, moving beyond basic AI literacy.

In addition, the programme is adapted to African contexts. Developed with input from local experts and institutions, such as the African Institute for Mathematical Sciences, the courses include real-world use cases relevant to the continent.

Furthermore, the initiative aims to address Africa’s underrepresentation in AI research. By expanding access to advanced training, it seeks to increase participation and ensure more inclusive global AI development.

Finally, the programme is designed to scale through educators and institutions. Universities and NGOs can integrate the curriculum, supported by training programmes that equip educators to deliver AI courses effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK quantum ambitions get a boost as Cambridge joins forces with IonQ

The University of Cambridge has announced its largest-ever corporate research partnership: US quantum technology company IonQ will install a 256-qubit quantum computer at the Cavendish Laboratory. Once installed, it will be the most powerful quantum computer in the UK.

The system will be housed in the newly created IonQ Quantum Innovation Centre at the Ray Dolby Centre, Cambridge’s new physics home.

As part of the collaboration, Innovate UK will provide UKRI’s National Quantum Computing Centre with access and computing time over three years, enabling researchers and early-stage companies across the UK to use the first commercial-scale quantum computer installed at a British university.

The centre’s research portfolio will span quantum computing, networking, sensing, and security.

The partnership aligns with the UK Government’s National Quantum Strategy and its five ‘Quantum Missions’, which set milestones for investment and research to secure the UK’s position as a world leader in quantum technology.

IonQ has been rapidly expanding its capabilities through acquisitions, including Oxford Ionics for $1.08 billion in September 2025 and chipmaker SkyWater Technology for $1.8 billion in January 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU delays tech sovereignty package with AI and Chips Act 2

The European Commission has delayed a flagship tech sovereignty package for the second time, according to its latest College agenda. The measures are now scheduled for adoption on 27 May, after previously being postponed from March to April.

The package includes several major initiatives, such as the Cloud and AI Development Act, the Chips Act 2, an open-source strategy, and a roadmap for digitalisation and AI in energy. European Commission officials have not provided a reason for the latest delay.

The Cloud and AI Development Act is expected to define what constitutes a ‘sovereign’ cloud and simplify rules for building data centres. The proposal is designed to accelerate infrastructure development as Europe seeks to compete in the global AI race.

Chips Act 2 will follow up on the EU’s earlier semiconductor strategy, which struggled to boost domestic chip production significantly. The new proposal is expected to refine industrial policy efforts to reduce reliance on foreign suppliers.

Meanwhile, the planned open-source strategy aims to support European software ecosystems and reduce dependence on large US technology firms. By encouraging commercially viable open-source projects, the EU hopes to strengthen its long-term digital autonomy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN calls for global action against online scam networks

Online scam networks operating across Southeast Asia are defrauding victims worldwide, using AI, impersonation techniques, and complex cyber tools to steal billions of dollars.

At the Global Fraud Summit in Vienna, the UN Office on Drugs and Crime (UNODC) and INTERPOL brought together governments, law enforcement, and private-sector actors to strengthen international cooperation against these crimes.

Victims include individuals from diverse backgrounds, often highly educated and financially experienced. One Australian couple, Kim and Allan Sawyer, lost more than $2.5 million after engaging with what appeared to be a legitimate investment opportunity. ‘The scammer was extraordinarily believable,’ Kim Sawyer said. ‘He had a British accent, used all the right financial market terms and knew how to induce us by appearing credible every time.’

UNODC officials warn that these operations extend beyond fraud, forming part of a broader criminal ecosystem driven by organised scam networks, involving human trafficking, corruption, and money laundering.

‘We need to be looking into prosecuting high-level criminals, following the money through financial investigations and identifying the giant networks that operate behind these operations,’ said Delphine Schantz, UNODC’s regional representative for Southeast Asia and the Pacific.

Authorities say the scale and complexity of these crimes require a coordinated global response to dismantle scam networks effectively. ‘The complexity of these crimes requires an equally complex, whole-of-government approach and enhanced coordination among governments, financial intelligence units and digital banks,’ Schantz added.

Investigations in countries such as the Philippines and Cambodia have revealed how scam networks operate on the ground. In Manila, a raid on a former scam compound uncovered facilities used to control trafficked workers, along with evidence of corruption linked to local officials. ‘How do you prove a cybercrime in 36 hours? It is not possible,’ said the Philippines’ Presidential Anti-Organised Crime Commission (PAOCC) operations director, recalling the challenges investigators faced during early raids.

In Cambodia, international prosecutors and investigators have focused on improving cooperation mechanisms, including extradition, asset recovery, and the handling of digital evidence. These efforts are seen as critical in addressing the cross-border nature of scam networks.

Despite increased enforcement efforts, these networks continue to adapt and relocate, maintaining a global reach. At recent international meetings, including a summit in Bangkok involving nearly 60 countries and major technology firms, officials agreed on the need for shared intelligence, joint investigations and coordinated prosecutions.

Victims continue to call for stronger responses. ‘The scammer works twice: they take your money, and they take your soul. They really do. They take your self-worth. And then, you feel like you’re being scammed again, by the authorities’ lack of response,’ Sawyer said.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI skills initiative to support Europe’s workforce transition

At the Future of Work Forum, Google introduced ‘AI Works for Europe’, a programme aimed at strengthening digital skills and supporting workforce adaptation to AI across the region.

Funding of $30 million will be directed through Google.org to expand training opportunities, alongside broader access to AI certification programmes designed to help individuals and businesses adopt new technologies in practical contexts.

A central focus involves preparing workers and students for labour market changes.

Partnerships with organisations such as INCO are supporting the development of targeted training programmes, particularly in sectors where demand for AI-related skills is increasing, including finance, logistics and marketing.

New educational pathways are also being introduced, including an expanded AI Professional Certificate available in multiple European languages. These initiatives aim to improve AI literacy and provide hands-on experience aligned with employer expectations.

Collaboration with local organisations and institutions remains a key element, reflecting a broader strategy to ensure access to training across different regions and communities.

Efforts to expand AI capabilities across Europe highlight the growing importance of skills development as AI becomes more integrated into economic activity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT research highlights embedded and enacted risks in AI

Generative AI offers major productivity and growth opportunities, but also brings new risks as organisations move from experiments to full deployment. MIT research highlights key risk areas, including training data, foundation models, user prompts, and system prompts.

Researchers identify two types of risk.

Embedded risks come from the technology itself, shaped by model behaviour, data quality, and vendor updates, and are mostly outside an organisation’s control.

Enacted risks arise from choices in deploying AI, from prompt design to agent permissions, and require strong governance.

Advanced uses such as retrieval-augmented generation (RAG) and autonomous AI agents increase exposure. RAG uses internal data to improve outputs, but may reveal sensitive information or control gaps. AI agents acting across multiple tools can lead to ‘autonomy creep,’ performing tasks without proper oversight.
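The RAG control gap described above can be sketched in a few lines. This is an illustrative example only, not drawn from the MIT research: the document labels, the `retrieve` helper, and the access groups are all hypothetical, showing how enforcing access control *before* prompt assembly prevents restricted internal data from leaking into a model’s context.

```python
# Toy RAG retrieval step (hypothetical): documents carry an access label,
# and retrieval filters on it before any text reaches the prompt.

DOCS = [
    {"text": "Q3 revenue was $12M", "access": "finance"},
    {"text": "Office wifi password policy", "access": "all"},
    {"text": "Pending layoff plan", "access": "hr-restricted"},
]

def retrieve(query_terms, user_groups):
    """Return matching documents the requesting user is cleared to see."""
    hits = []
    for doc in DOCS:
        if doc["access"] not in user_groups:
            continue  # enforce access control BEFORE prompt assembly
        if any(term in doc["text"].lower() for term in query_terms):
            hits.append(doc["text"])
    return hits

def build_prompt(question, user_groups):
    """Assemble the model prompt from permitted context only."""
    context = retrieve(question.lower().split(), user_groups)
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
```

Omitting the access check, or granting an autonomous agent broader groups than its human operator holds, is exactly the kind of enacted risk the researchers flag.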

To manage AI risk, organisations should map tools, assign ownership, track outputs, and use separate strategies for embedded and enacted risks. Vendor engagement, governance frameworks, and technical controls are essential for safe AI use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered MRI previews aim to reduce errors and rescans

Philips is creating AI-driven predictive MRI previews to improve scan planning and reduce operator variability. Using NVIDIA accelerated computing and foundation models, the system creates a pre-scan image to validate protocols, optimise positioning, and spot potential issues.

The technology is based on a dedicated MR foundation model trained on diverse datasets covering anatomies, field strengths, protocols, and artefacts.

When combined with NVIDIA’s NV‑Generate, NV‑Segment, and NV‑Reason models, the platform integrates image generation, segmentation, and interpretation. It creates a single intelligent workflow that supports consistent and efficient MRI procedures.

Predictive previews reduce rescans, enhance image quality, and increase technologist confidence, especially in complex exams or areas with limited expertise. Early guidance helps confirm protocols, optimise positioning, and flag issues that could affect diagnostic outcomes.

Philips envisions autonomous MRI, with AI monitoring image quality, guiding positioning, and assisting radiologists with actionable insights. Predictive imaging boosts consistency, efficiency, and access, improving patient experience and expanding MRI availability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New licensing rules for crypto platforms in Australia

Australia is advancing plans to regulate digital asset platforms under its financial services framework. The Senate committee recommended passing the Digital Assets Framework Bill 2025, bringing Australia closer to licensing crypto exchanges and tokenisation platforms.

Industry groups have raised concerns about definitions such as ‘digital token’ and ‘factual control.’ Broad wording could inadvertently cover infrastructure providers, including multi-party wallet systems, potentially classifying them as financial service operators.

Ripple Labs emphasised the need for precise language to avoid unintended regulation.

The committee supported the Treasury’s approach while planning to refine technical details through future regulations. Coinbase welcomed the progress but noted ongoing banking challenges for crypto firms.

The bill now proceeds to the Senate for debate and a final vote, which could reshape digital asset operations in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!