UK report quantifies rapid advances in frontier AI capabilities

For the first time, the UK has published a detailed, evidence-based assessment of frontier AI capabilities. The Frontier AI Trends Report draws on two years of structured testing across areas including cybersecurity, software engineering, chemistry, and biology.

The findings show rapid progress in technical performance. Success rates on apprentice-level cyber tasks rose from under 9% in 2023 to around 50% in 2025, and models also completed expert-level cyber challenges that would previously have required a decade of professional experience.

Safeguards designed to limit misuse are also improving, according to the report. Red-team testing found that the time required to identify universal jailbreaks increased from minutes to several hours between model generations, representing an estimated forty-fold improvement in resistance.

The analysis highlights advances beyond cybersecurity. AI systems now complete hour-long software engineering tasks more than 40% of the time, while biology and chemistry models outperform PhD-level researchers in controlled knowledge tests and support non-experts in laboratory-style workflows.

While the report avoids policy recommendations, UK officials say it strengthens transparency around advanced AI systems. The government plans to continue investing in evaluation science through the AI Security Institute, supporting independent testing and international collaboration.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Strong AI memory demand boosts Micron outlook into 2026

Micron Technology reported record first-quarter revenue for fiscal 2026, supported by strong pricing, a favourable product mix and operating leverage. The company said tight supply conditions and robust AI-related demand are expected to continue into 2026.

The Boise-based chipmaker generated $13.64 billion in quarterly revenue, led by record sales across DRAM, NAND, high-bandwidth memory and data centres. Chief executive Sanjay Mehrotra said structural shifts are driving rising demand for advanced memory in AI workloads.

Margins expanded sharply, setting Micron apart from peers such as Broadcom and Oracle, which reported margin pressure in recent earnings. Chief financial officer Mark Murphy said gross margin is expected to rise further in the second quarter, supported by higher prices, lower costs and a favourable revenue mix.

Analysts highlighted improving fundamentals and longer-term visibility. Baird said DRAM and NAND pricing could rise sequentially as Micron finalises long-term supply agreements, while capital expenditure plans for fiscal 2026 were viewed as manageable and focused on expanding high-margin HBM capacity.

Retail sentiment also turned strongly positive following the earnings release, with Micron shares jumping around 8 per cent in after-hours trading. The stock is on track to finish the year as the best-performing semiconductor stock in the S&P 500, reinforcing confidence in the company’s AI-driven growth trajectory.

Natural language meets robotics in MIT’s on-demand object creation system

MIT researchers have developed a speech-to-reality system that allows users to create physical objects by describing them aloud, combining generative AI with robotic assembly. The system can produce simple furniture and decorative items in minutes using modular components.

The workflow translates spoken instructions into a digital design using a large language model and 3D generative AI. The design is then broken into voxel-based parts and adapted to real-world fabrication constraints before being assembled by a robotic arm.
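The paper’s code is not part of this summary; purely as an illustration of the staged workflow described above (speech, generative design, voxel decomposition, constraint adaptation, robotic assembly), a minimal sketch might look like this. All names and data shapes here are hypothetical, not MIT’s actual implementation:

```python
from dataclasses import dataclass

# Illustrative sketch of a speech-to-reality pipeline; every stage below is
# a stand-in for the real component named in the comment.

@dataclass
class Design:
    description: str
    voxels: list  # (x, y, z) cells approximating the shape

def transcribe(audio: str) -> str:
    """Stand-in for speech recognition: audio -> text prompt."""
    return audio  # assume the prompt is already text in this sketch

def generate_design(prompt: str) -> Design:
    """Stand-in for the LLM + 3D generative step: text -> rough voxel design."""
    # Trivial placeholder: a 2x2x2 block of modular components.
    voxels = [(x, y, z) for x in range(2) for y in range(2) for z in range(2)]
    return Design(description=prompt, voxels=voxels)

def apply_fabrication_constraints(design: Design, max_parts: int) -> Design:
    """Adapt the design to real-world limits (here: cap the part count)."""
    return Design(design.description, design.voxels[:max_parts])

def assembly_plan(design: Design) -> list:
    """Order parts bottom-up so the arm always places onto existing support."""
    return sorted(design.voxels, key=lambda v: v[2])

plan = assembly_plan(
    apply_fabrication_constraints(
        generate_design(transcribe("a small stool")), max_parts=8
    )
)
print(len(plan))  # 8 placement steps, lowest layer first
```

The key design point the paper’s description implies is the middle stage: the free-form generative output must be reduced to discrete, fabricable modules before any robot motion is planned.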

Researchers have demonstrated the system by producing stools, shelves, chairs, tables and small sculptures. The approach aims to reduce manufacturing complexity by enabling rapid construction without specialised knowledge of 3D modelling or robotics.

Unlike traditional fabrication methods such as 3D printing, which can take hours or days, the modular assembly process operates quickly and allows objects to be disassembled and reused. The team is exploring stronger connection methods and extensions to larger-scale robotic systems.

The research was presented at the ACM Symposium on Computational Fabrication in November. The team said the work points toward more accessible, flexible and sustainable ways to produce physical objects using natural language and AI-driven design.

PwC automates AI governance with Agent Mode

Global professional services network PwC has expanded its Model Edge platform with the launch of Agent Mode, an AI assistant designed to automate governance, compliance and documentation across enterprise AI model lifecycles.

The capability targets the growing administrative burden faced by organisations as AI model portfolios scale and regulatory expectations intensify.

Agent Mode allows users to describe governance tasks in natural language, instead of manually navigating workflows.

The system executes actions directly within Model Edge, generates leadership-ready documentation and supports common document and reporting formats, significantly reducing routine compliance effort.

PwC estimates weekly time savings of between 20 and 50 percent for governance and model risk teams.

Behind the interface, a secure orchestration engine interprets user intent, verifies role-based permissions and selects appropriate large language models based on task complexity. The design ensures governance guardrails remain intact while enabling faster and more consistent oversight.
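PwC has not published implementation details; purely to illustrate the pattern described (intent parsing, permission checks, model routing), a minimal sketch might look like this. All identifiers are hypothetical:

```python
# Illustrative intent -> permission -> model-routing flow, loosely following
# the Agent Mode description above. Not PwC's actual design.

ROLE_PERMISSIONS = {
    "model_risk_analyst": {"generate_report", "summarise_model"},
    "viewer": {"summarise_model"},
}

def parse_intent(request: str) -> str:
    """Naive intent classifier: map a natural-language request to a task."""
    return "generate_report" if "report" in request.lower() else "summarise_model"

def select_model(task: str) -> str:
    """Route complex tasks to a larger model, simple ones to a cheaper one."""
    return "large-llm" if task == "generate_report" else "small-llm"

def handle(request: str, role: str) -> str:
    """Check the caller's role before executing, keeping guardrails intact."""
    task = parse_intent(request)
    if task not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not run {task!r}")
    return f"{task} via {select_model(task)}"

print(handle("Draft the quarterly model risk report", "model_risk_analyst"))
# -> generate_report via large-llm
```

The point of the structure is that the permission check sits between intent parsing and execution, so natural-language convenience never bypasses the governance rules.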

PwC positions Agent Mode as a step towards fully automated, agent-driven AI governance, enabling organisations to focus expert attention on risk assessment and regulatory judgement instead of process management as enterprise AI adoption accelerates.

The limits of raw computing power in AI

As the global race for AI accelerates, a growing number of experts are questioning whether simply adding more computing power still delivers meaningful results. In a recent blog post, digital policy expert Jovan Kurbalija argues that AI development is approaching a critical plateau, where massive investments in hardware produce only marginal gains in performance.

Despite the dominance of advanced GPUs and ever-larger data centres, improvements in accuracy and reasoning among leading models are slowing, exposing what he describes as an emerging ‘AI Pareto paradox’.

According to Kurbalija, the imbalance is striking: around 80% of AI investment is currently spent on computing infrastructure, yet it accounts for only a fraction of real-world impact. As hardware becomes cheaper and more widely available, he suggests it is no longer the decisive factor.

Instead, the next phase of AI progress will depend on how effectively organisations integrate human knowledge, skills, and processes into AI systems.

That shift places people, not machines, at the centre of AI transformation. Kurbalija highlights the limits of traditional training approaches and points to new models of learning that focus on hands-on development and deep understanding of data.

Building a simple AI tool may now take minutes, but turning it into a reliable, high-precision system requires sustained human effort, from refining data to rethinking internal workflows.

Looking ahead to 2026, the message is clear. Success in AI will not be defined by who owns the most powerful chips, but by who invests most wisely in people.

As Kurbalija concludes, organisations that treat AI as a skill to be cultivated, rather than a product to be purchased, are far more likely to see lasting benefits from the technology.

AI reshapes media in North Macedonia with new regulatory guidance

A new analysis examines the impact of AI on North Macedonia’s media sector, offering guidance on ethical standards, human rights, and regulatory approaches.

Prepared in both Macedonian and English, the study benchmarks the country’s practices against European frameworks and provides actionable recommendations for future regulation and self-regulation.

The research, supported by the EU and Council of Europe’s PRO-FREX initiative and prepared in collaboration with the Agency for Audio and Audiovisual Media Services (AVMU), was presented during Media Literacy Days 2025 in Skopje.

It highlights the relevance of EU and Council of Europe guidelines, including the Framework Convention on AI and Human Rights, and guidance on responsible AI in journalism.

AVMU’s involvement underlines its role in ensuring media freedom, fairness, and accountability amid rapid technological change. Participants highlighted the need for careful policymaking to manage AI’s impact, protecting media diversity, journalistic standards, and public trust online.

The analysis forms part of broader efforts under the Council of Europe and the EU’s Horizontal Facility for the Western Balkans and Türkiye, aiming to support North Macedonia in aligning media regulation with European standards while responsibly integrating AI technologies.

AI and security trends shape the internet in 2025

Cloudflare released its sixth annual Year in Review, providing a comprehensive snapshot of global Internet trends in 2025. The report highlights rising digital reliance, AI progress, and evolving security threats across Cloudflare’s network and Radar data.

Global Internet traffic rose 19 percent year-on-year, reflecting increased use for personal and professional activities. A key trend was the move from large-scale AI training to continuous AI inference, alongside rapid growth in generative AI platforms.

Google and Meta remained the most popular services, while ChatGPT led in generative AI usage.

Cybersecurity remained a critical concern. Post-quantum encryption now protects 52 percent of Internet traffic, yet record-breaking DDoS attacks underscored rising cyber risks.

Civil society and non-profit organisations were the most targeted sectors for the first time, while government actions caused nearly half of the major Internet outages.

Connectivity varied by region, with Europe leading in speed and quality and Spain ranking highest globally. The report outlines 2025’s Internet challenges and progress, providing insights for governments, businesses, and users aiming for greater resilience and security.

Crypto theft soars in 2025 with fewer but bigger attacks

Cryptocurrency theft intensified in 2025, with total stolen funds exceeding $3.4 billion despite fewer large-scale incidents. Losses became increasingly concentrated, with a few major breaches driving most of the annual damage and widening the gap between typical hacks and extreme outliers.

North Korea remained the dominant threat actor, stealing at least $2.02 billion in digital assets during the year, a 51% increase compared with 2024.

Larger thefts were achieved through fewer operations, often relying on insider access, executive impersonation, and long-term infiltration of crypto firms rather than frequent attacks.

Laundering activity linked to North Korean actors followed a distinctive and disciplined pattern. Stolen funds moved in smaller tranches through Chinese-language laundering networks, bridges, and mixing services, usually following a structured 45-day cycle.

Individual wallet attacks surged, impacting tens of thousands of victims, while the total value stolen from personal wallets fell. Decentralised finance remained resilient, with hack losses low despite rising locked capital, indicating stronger security practices.

Healthcare faces growing compliance pressure from AI adoption

AI is becoming a practical tool across healthcare as providers face rising patient demand, chronic disease and limited resources.

AI systems increasingly support tasks such as clinical documentation, billing, diagnostics and personalised treatment that once relied solely on manual processes, allowing clinicians to focus more directly on patient care.

At the same time, AI introduces significant compliance and safety risks. Algorithmic bias, opaque decision-making, and outdated training data can affect clinical outcomes, raising questions about accountability when errors occur.

Regulators are signalling that healthcare organisations cannot delegate responsibility to automated systems and must retain meaningful human oversight over AI-assisted decisions.

Regulatory exposure spans federal and state frameworks, including HIPAA privacy rules, FDA oversight of AI-enabled medical devices and enforcement under the False Claims Act.

Healthcare providers are expected to implement robust procurement checks, continuous monitoring, governance structures and patient consent practices as AI regulation evolves towards a more coordinated national approach.

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society in ways that contrast with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.
