Rising demand for AI and cloud computing is driving a surge in data centre construction, pushing operators to adopt new security solutions. Companies are increasingly deploying robotic dogs to patrol sites and monitor operations.
These four-legged machines can inspect equipment, detect anomalies and alert staff before issues escalate. Merry Frayne, senior director of product management at Boston Dynamics, noted a sharp increase in interest as investment in data infrastructure continues to grow.
Developed by firms such as Boston Dynamics and Ghost Robotics, the robots are designed to support rather than replace human guards. Their use can reduce costs by requiring fewer personnel while maintaining continuous monitoring.
The machines can travel long distances on a single charge and operate across both external and internal environments. Some facilities already use them on pre-programmed patrols to collect data and flag unusual activity.
At the same time, competition in robotics is intensifying globally, with companies exploring humanoid and AI-powered systems. Advances from firms like Nvidia and Tesla highlight how automation is expanding beyond security into broader industrial use.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The UK Government has announced up to £2 billion in funding for quantum technologies. The package includes more than £1 billion over the next four years, confirmed by UKRI in December 2025, and a new procurement programme, ProQure, designed to support the scaling of quantum computing across the UK.
The announcement is being billed as the country’s ‘Quantum Leap’, positioning the UK as a first mover in quantum commercialisation.
The funding is distributed across several areas: over £500 million for quantum computing to help companies scale and develop applications in pharmaceuticals, financial services, and energy; £125 million for quantum networking; and £205 million for quantum sensing and navigation, with dedicated applications in medical diagnostics, greenhouse gas monitoring, and ultra-secure communications.
A further £13.8 million will be injected into the UK’s five National Quantum Research Hubs, with an additional £90 million for quantum infrastructure and £20 million for skills and commercialisation programmes.
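As a rough sanity check, the itemised allocations above can be tallied against the headline figure (a sketch; the balance up to the £1 billion-plus headline presumably sits in programmes not itemised here):

```python
# Announced UK quantum allocations (£ millions), per the figures above.
allocations = {
    "quantum computing": 500,
    "quantum networking": 125,
    "sensing and navigation": 205,
    "National Quantum Research Hubs": 13.8,
    "quantum infrastructure": 90,
    "skills and commercialisation": 20,
}

total = sum(allocations.values())
print(f"Itemised total: £{total:.1f}m")  # 953.8 — just under the £1bn headline
```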
techUK welcomed the announcement, noting that the UK is already home to 11% of the world’s quantum startups and has attracted 12% of global quantum private equity investment.
The trade association highlighted the ProQure procurement programme as a step in the right direction, but cautioned that sustained, long-term private investment will be essential to support deep-tech companies through lengthy development cycles.
Growing concern over AI in filmmaking emerged at a major conference, where veteran director Steven Spielberg rejected its use as a replacement for human creativity. He emphasised that storytelling should remain in human hands rather than being driven by automation.
Rapid advances in AI video tools have unsettled the industry, raising fears among editors and visual effects workers. Joshua Davies, chief innovation officer at a video platform, pointed to concerns over jobs, copyright and future production methods.
Current tools remain limited, particularly when handling complex camera movements or maintaining consistency across scenes. AI is instead being used to support production by filling gaps where footage cannot be filmed due to time or budget limits.
Studios are already exploring how AI can be integrated into production pipelines following recent industry disruptions. A quickly produced, low-cost Super Bowl advert highlighted the technology's potential, although human creative input remained essential.
Lower production costs are expected, but full automation is still unlikely in the near term. AI could help independent creators compete, while strong storytelling continues to define success.
Calls for an EU-wide digital services tax are growing, as Pasquale Tridico, chair of the European Parliament’s subcommittee on tax matters, urged Brussels to act despite strong opposition from the US. He argued that such a measure would make Europe’s tax system fairer in a market dominated by foreign tech firms.
Tensions have increased as Washington threatens tariffs on countries introducing digital taxes targeting major platforms. Existing national levies in countries like France contrast with the absence of a unified EU approach due to member state control over taxation.
The proposal comes amid wider strain in transatlantic relations, with disputes over trade, regulation and influence on EU policymaking. US criticism has also focused on European rules such as the Digital Services Act and the Digital Markets Act.
Supporters argue that a digital tax would apply equally to global companies, not only US firms, while addressing imbalances between sectors. Digital businesses can generate large profits without the same physical costs faced by traditional industries.
Further proposals include new approaches to taxing wealth, reflecting how digitalisation blurs the line between income and capital. Advocates say such reforms are needed to adapt taxation to the modern economy.
Microsoft has announced new integrations between Microsoft Purview and Microsoft Fabric, aimed at helping organisations identify AI-driven data risks, prevent sensitive data from being overshared, and strengthen governance across their data estates.
The updates come as enterprises accelerate AI adoption and face growing pressure to ensure that the data powering those systems is both protected and trustworthy.
Key new capabilities include Data Loss Prevention policies for Fabric workloads such as Warehouse and databases, and Insider Risk Management tools that can detect risky actions such as unauthorised data exports from Fabric lakehouses. New preview features for managing AI data exposure add the ability to identify sensitive data appearing in Copilot prompts and responses.
Data Security Posture Management tools provide risk assessments to surface unprotected assets and recommend corrective action.
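To illustrate what prompt-level data loss prevention involves in principle, the sketch below scans AI prompts for sensitive patterns. This is a hypothetical, simplified illustration, not Purview's actual API; the pattern names and rules are invented for the example.

```python
import re

# Hypothetical patterns standing in for an organisation's sensitive-data rules.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the labels of sensitive-data patterns found in an AI prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

hits = scan_prompt("Summarise the refund for card 4111 1111 1111 1111")
print(hits)  # ['credit_card']
```

A real DLP system would also classify responses, apply policy actions (block, redact, alert), and log matches for insider-risk review rather than simply reporting them.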
On the governance side, updates to Microsoft Purview Unified Catalogue introduce centralised workflows for data owners to control the publication of data products and run quality checks on unmanaged assets, enabling faster validation at scale.
Microsoft describes the combined offering as an ‘integrated and unified foundation’ that allows organisations to innovate with AI whilst keeping their data protected, governed, and trusted.
The European Commission has delayed a flagship tech sovereignty package for the second time, according to its latest College agenda. The measures are now scheduled for adoption on 27 May, after previously being postponed from March to April.
The package includes several major initiatives aimed at strengthening the bloc's technological autonomy, such as the Cloud and AI Development Act, the Chips Act 2, an open-source strategy, and a roadmap for digitalisation and AI in energy. European Commission officials have not provided a reason for the latest delay.
The Cloud and AI Development Act is expected to define what constitutes a ‘sovereign’ cloud and simplify rules for building data centres. The proposal is designed to accelerate infrastructure development as Europe seeks to compete in the global AI race.
Chips Act 2 will follow up on the EU’s earlier semiconductor strategy, which struggled to boost domestic chip production significantly. The new proposal is expected to refine industrial policy efforts to reduce reliance on foreign suppliers.
Meanwhile, the planned open-source strategy aims to support European software ecosystems and reduce dependence on large US technology firms. By encouraging commercially viable open-source projects, the EU hopes to strengthen its long-term digital autonomy.
AI agents are rapidly gaining traction, raising questions about whether existing EU rules can keep pace. Unlike chatbots, these systems can act autonomously and interact with digital tools on behalf of users.
Experts warn that AI agents require deeper access to personal data and online services to function effectively. Regulators in Europe are monitoring potential risks as the technology becomes more integrated into daily life.
Lawmakers are examining whether current legislation, such as the AI Act and GDPR, adequately covers agent-based systems. Legal experts highlight challenges around contracts, liability and accountability when AI acts independently.
Despite concerns, many governments remain reluctant to introduce new rules, citing regulatory fatigue. Policymakers may rely on existing frameworks unless major incidents force a reassessment of AI oversight.
Legal pressure is increasing on OpenAI as Encyclopaedia Britannica and Merriam-Webster file a lawsuit accusing the company of large-scale copyright violations.
According to the complaint, nearly 100,000 copyrighted articles were allegedly used without authorisation to train large language models. Publishers also argue that AI-generated outputs can reproduce parts of their content, raising concerns about unauthorised distribution.
Additional claims focus on how AI systems retrieve and present information. The lawsuit argues that retrieval-augmented generation tools may rely on proprietary databases, potentially undermining publishers’ business models by reducing traffic to original sources.
Concerns are also raised about inaccurate outputs attributed to publishers, which could affect trust in established information providers. The case highlights ongoing tensions between AI development and intellectual property protections.
Growing legal disputes involving media organisations, including The New York Times, suggest that courts will play a key role in defining how copyrighted material can be used in AI training.
At the Future of Work Forum, Google introduced ‘AI Works for Europe’, a programme aimed at strengthening digital skills and supporting workforce adaptation to AI across the region.
Funding of $30 million will be directed through Google.org to expand training opportunities, alongside broader access to AI certification programmes designed to help individuals and businesses adopt new technologies in practical contexts.
A central focus involves preparing workers and students for labour market changes.
Partnerships with organisations such as INCO are supporting the development of targeted training programmes, particularly in sectors where demand for AI-related skills is increasing, including finance, logistics and marketing.
New educational pathways are also being introduced, including an expanded AI Professional Certificate available in multiple European languages. These initiatives aim to improve AI literacy and provide hands-on experience aligned with employer expectations.
Collaboration with local organisations and institutions remains a key element, reflecting a broader strategy to ensure access to training across different regions and communities.
Efforts to expand AI capabilities across Europe highlight the growing importance of skills development as AI becomes more integrated into economic activity.
Generative AI offers major productivity and growth opportunities, but also brings new risks as organisations move from experiments to full deployment. MIT research highlights key risk areas, including training data, foundation models, user prompts, and system prompts.
Researchers identify two types of risk.
Embedded risks come from the technology itself, shaped by model behaviour, data quality, and vendor updates, and are mostly outside an organisation’s control.
Enacted risks arise from choices in deploying AI, from prompt design to agent permissions, and require strong governance.
Advanced uses such as retrieval-augmented generation (RAG) and autonomous AI agents increase exposure. RAG draws on internal data to improve outputs, but can surface sensitive information where access controls have gaps. AI agents acting across multiple tools can lead to ‘autonomy creep,’ performing tasks without proper oversight.
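The RAG control gap can be sketched in a few lines: retrieval that ignores document-level permissions will happily stuff restricted content into a prompt. All names and data below are illustrative, not drawn from any real system.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_roles: set[str] = field(default_factory=set)  # who may see this document

# A toy internal corpus with per-document access controls.
corpus = [
    Document("Q3 revenue fell 4% (board only).", {"executive"}),
    Document("Office closed on bank holidays.", {"executive", "employee"}),
]

def retrieve(query: str, user_role: str, enforce_acl: bool) -> list[str]:
    """Naive keyword retrieval; the ACL filter is the step RAG pipelines can miss."""
    hits = [d for d in corpus
            if any(word in d.text.lower() for word in query.lower().split())]
    if enforce_acl:
        hits = [d for d in hits if user_role in d.allowed_roles]
    return [d.text for d in hits]

# Without the ACL filter, an ordinary employee's prompt is augmented
# with board-only material; with it, the restricted document is dropped.
print(retrieve("revenue", "employee", enforce_acl=False))  # leaks the board-only line
print(retrieve("revenue", "employee", enforce_acl=True))   # []
```

Production systems enforce this with document-level permissions propagated into the vector store, so that retrieval is filtered by the querying user's entitlements rather than by the agent's broader service account.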
To manage AI risk, organisations should map tools, assign ownership, track outputs, and use separate strategies for embedded and enacted risks. Vendor engagement, governance frameworks, and technical controls are essential for safe AI use.