HM Treasury and the Department for Science, Innovation and Technology have outlined plans to accelerate technological leadership, with a £2.5 billion investment targeting AI and quantum computing.
The ambition has been reinforced by Rachel Reeves, who positioned AI as a central driver of economic growth, alongside closer European ties and regional development. The strategy aims to secure the fastest adoption of AI across the G7 while supporting domestic innovation ecosystems.
Significant UK funding will be directed towards a Sovereign AI initiative, quantum infrastructure and research capacity. The plans include procurement of large-scale quantum systems and targeted investment in startups, helping companies scale while strengthening national capabilities in advanced technologies.
Quantum computing is framed as potentially transformative, with the capacity to reshape industries from healthcare to energy. The combined investment reflects a broader effort to align innovation policy with long-term economic growth and global competitiveness.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The University of Cambridge has partnered with the UK Atomic Energy Authority and the Department for Energy Security and Net Zero to deploy a major AI supercomputer for fusion energy. The system, named ‘Sunrise’, is designed to accelerate research into clean and sustainable power.
Developed with support from Dell Technologies, AMD and StackHPC, the GPU-powered machine will operate at 1.4 MW capacity. The project marks a significant step in strengthening the UK’s sovereign computing capabilities while supporting the Culham AI Growth Zone initiative.
Research will centre on solving complex fusion challenges, including plasma turbulence, advanced materials and fuel development. Advanced simulations and AI modelling are expected to play a key role in bringing fusion energy closer to commercial viability.
The plans support the UK’s long-term goal of delivering fusion power to the national grid in the 2040s. Sunrise is scheduled to become operational in June, forming part of a broader national strategy to expand AI infrastructure and scientific innovation.
AI systems are increasingly being tested on advanced mathematical problems as researchers assess their reasoning abilities. Competitions such as the Putnam exam have become benchmarks for evaluating performance.
Recent results suggest some AI models can achieve scores comparable to those of top human participants, although such benchmarks face scrutiny. Experts caution that exam-style tests may not reflect real-world mathematical research or practical problem-solving.
Researchers have also explored AI-generated proofs for longstanding mathematical questions. Verification tools are being used to confirm results and reduce errors often produced by AI systems.
Mathematicians say AI can support brainstorming and research, but still requires human oversight. Analysts describe performance as uneven, with strong results in some areas and clear limitations in others.
NVIDIA has unveiled the Vera CPU, designed specifically for agentic AI and reinforcement learning. The chip reportedly delivers 50% faster performance and double the energy efficiency, and has already been adopted by Alibaba, Meta, ByteDance, Oracle Cloud, CoreWeave and Lambda.
Vera features 88 Olympus cores, high-bandwidth memory, and advanced multithreading, supporting large-scale AI deployments. Liquid-cooled racks can host over 22,500 concurrent CPU environments, allowing enterprises and research labs to scale agentic AI efficiently.
The CPU integrates with NVIDIA GPUs via NVLink-C2C and includes ConnectX SuperNIC and BlueField-4 DPUs to enhance networking, storage, and security. Early users like Cursor and Redpanda report major gains in AI agent throughput and real-time data processing.
High performance, energy efficiency and GPU integration position Vera as a platform for faster, more scalable and more responsive AI systems. The platform supports coding assistants, reinforcement learning and large-scale data processing, making it suitable for enterprise and scientific use.
India’s $300bn outsourcing industry is facing mounting pressure as AI tools threaten to disrupt traditional business models. A recent sell-off in technology stocks reflects investor concern over automation replacing labour-intensive services.
Fears intensified after new AI tools demonstrated the ability to automate legal, compliance and data processes. Analysts warn such advances could reduce demand for routine IT services and reshape client engagements.
Industry leaders in India argue AI will also create opportunities, particularly in consulting and system modernisation. Firms expect partnerships with AI developers to drive new areas of growth despite near-term disruption.
Revenue growth may slow, and hiring could remain subdued as the sector adapts. Analysts in India expect a gradual shift towards outcome-based services while companies invest in new AI capabilities.
Growing concern over AI in filmmaking emerged at a major conference, where veteran director Steven Spielberg rejected its use as a replacement for human creativity. He emphasised that storytelling should remain in human hands rather than being driven by automation.
Rapid advances in AI video tools have unsettled the industry, raising fears among editors and visual effects workers. Joshua Davies, chief innovation officer at a video platform, pointed to concerns over jobs, copyright and future production methods.
Current tools remain limited, particularly when handling complex camera movements or maintaining consistency across scenes. AI is instead being used to support production by filling gaps where footage cannot be filmed due to time or budget limits.
Studios are already exploring how AI can be integrated into production pipelines following recent disruptions. A fast and low-cost Super Bowl advert highlighted its potential, although human creative input remained essential.
Lower production costs are expected, but full automation is still unlikely in the near term. AI could help independent creators compete, while strong storytelling continues to define success.
AI agents are rapidly gaining traction, raising questions about whether existing EU rules can keep pace. Unlike chatbots, these systems can act autonomously and interact with digital tools on behalf of users.
Experts warn that AI agents require deeper access to personal data and online services to function effectively. Regulators in Europe are monitoring potential risks as the technology becomes more integrated into daily life.
Lawmakers are examining whether current legislation, such as the AI Act and GDPR, adequately covers agent-based systems. Legal experts highlight challenges around contracts, liability and accountability when AI acts independently.
Despite concerns, many governments remain reluctant to introduce new rules, citing regulatory fatigue. Policymakers may rely on existing frameworks unless major incidents force a reassessment of AI oversight.
Legal pressure is increasing on OpenAI as Encyclopaedia Britannica and Merriam-Webster file a lawsuit accusing the company of large-scale copyright violations.
According to the complaint, nearly 100,000 copyrighted articles were allegedly used without authorisation to train large language models. Publishers also argue that AI-generated outputs can reproduce parts of their content, raising concerns about unauthorised distribution.
Additional claims focus on how AI systems retrieve and present information. The lawsuit argues that retrieval-augmented generation tools may rely on proprietary databases, potentially undermining publishers’ business models by reducing traffic to original sources.
Concerns are also raised about inaccurate outputs attributed to publishers, which could affect trust in established information providers. The case highlights ongoing tensions between AI development and intellectual property protections.
Growing legal disputes involving media organisations, including The New York Times, suggest that courts will play a key role in defining how copyrighted material can be used in AI training.
At the Future of Work Forum, Google introduced ‘AI Works for Europe’, a programme aimed at strengthening digital skills and supporting workforce adaptation to AI across the region.
Funding of $30 million will be directed through Google.org to expand training opportunities, alongside broader access to AI certification programmes designed to help individuals and businesses adopt new technologies in practical contexts.
A central focus involves preparing workers and students for labour market changes. Partnerships with organisations such as INCO are supporting the development of targeted training programmes, particularly in sectors where demand for AI-related skills is increasing, including finance, logistics and marketing.
New educational pathways are also being introduced, including an expanded AI Professional Certificate available in multiple European languages. These initiatives aim to improve AI literacy and provide hands-on experience aligned with employer expectations.
Collaboration with local organisations and institutions remains a key element, reflecting a broader strategy to ensure access to training across different regions and communities.
Efforts to expand AI capabilities across Europe highlight the growing importance of skills development as AI becomes more integrated into economic activity.
Generative AI offers major productivity and growth opportunities, but also brings new risks as organisations move from experiments to full deployment. MIT research highlights key risk areas, including training data, foundation models, user prompts, and system prompts.
Researchers identify two types of risk. Embedded risks come from the technology itself, shaped by model behaviour, data quality and vendor updates, and are mostly outside an organisation’s control. Enacted risks arise from choices in deploying AI, from prompt design to agent permissions, and require strong governance.
Advanced uses such as retrieval-augmented generation (RAG) and autonomous AI agents increase exposure. RAG uses internal data to improve outputs, but may reveal sensitive information or control gaps. AI agents acting across multiple tools can lead to ‘autonomy creep,’ performing tasks without proper oversight.
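The exposure RAG creates can be made concrete with a minimal sketch: retrieved internal text is injected directly into the model's prompt, so whatever the retriever can reach, the model (and the querying user) can see. The documents, the keyword-overlap retriever and the prompt format below are all illustrative assumptions; production systems use vector embeddings and an LLM call.

```python
# Toy RAG pipeline: retrieve internal documents, inject them into the prompt.
# Documents and the keyword-overlap retriever are illustrative assumptions.
from collections import Counter

DOCS = {
    "hr-policy": "Employees may work remotely up to three days per week.",
    "expense-rules": "Travel expenses above 500 EUR require prior approval.",
    "it-security": "Internal documents must not be pasted into external AI tools.",
}

def retrieve(query: str, docs: dict, k: int = 1) -> list:
    """Rank documents by shared word count with the query (toy retriever)."""
    q = Counter(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: sum((q & Counter(kv[1].lower().split())).values()),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(query: str, docs: dict) -> str:
    """Assemble the augmented prompt sent to the model.

    This is the exposure point the article describes: retrieved internal
    text lands in the model's context, so access controls on `docs` must
    match the querying user's permissions.
    """
    context = "\n".join(docs[d] for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Because `build_prompt` concatenates raw document text, a control gap in the document store becomes a disclosure path through every answer the model gives.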
To manage AI risk, organisations should map tools, assign ownership, track outputs, and use separate strategies for embedded and enacted risks. Vendor engagement, governance frameworks, and technical controls are essential for safe AI use.