New Relic advances AI agents for enterprise observability

New Relic's expansion into enterprise AI centres on a no-code platform that lets companies build and supervise their own observability agents.

The system assembles AI-driven monitors designed to detect bugs and performance problems before they affect users, rather than leaving teams to rely on manual tracking.

It also supports the Model Context Protocol so organisations can link external data sources to the agents and integrate them with existing New Relic tools.

The company stresses that the platform is intended to complement other agent systems rather than replace them.

As AI agent software spreads across the market, enterprises are searching for ways to manage risk when giving automated tools access to internal systems.

Industry players such as Salesforce and OpenAI have already introduced their own agent platforms, and assessments from Gartner describe these frameworks as essential infrastructure for wider AI adoption.

New Relic also introduced new tools for the OpenTelemetry framework to remove friction around observability standards.

Its application performance monitoring agents now support OTel data, allowing enterprises to manage these streams in one place instead of operating separate collectors.

The update aims to reduce fragmentation that has slowed OTel deployment across large organisations and to simplify how engineering teams handle diverse observability pipelines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CrowdStrike warns of faster AI-driven threats

Cyber adversaries increasingly used AI to accelerate attacks and evade detection in 2025, according to CrowdStrike’s 2026 Global Threat Report. The company described the period as the ‘year of the evasive adversary’, marked by subtle and rapid intrusions.

The average breakout time for financially motivated intrusions fell to 29 minutes, with the fastest recorded at 27 seconds. CrowdStrike observed an 89 percent rise in attacks by AI-enabled threat actors compared with 2024.

Attackers also targeted AI systems themselves, exploiting GenAI tools at more than 90 organisations through malicious prompt injection. Supply chain compromises and the abuse of valid credentials enabled intrusions to blend into legitimate activity, with most detections classified as malware-free.

China-linked activity rose by 38 percent across sectors, while North Korea-linked incidents increased by 130 percent. CrowdStrike tracked more than 281 adversaries in total, warning that speed, credential abuse, and AI fluency now define the modern threat landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Justice Secretary pushes expanded AI use in courts to tackle backlogs

In a speech at the Microsoft AI Tour in London, UK Justice Secretary David Lammy outlined a vision for using AI to help address the persistent backlog in the criminal justice system, which currently stands at tens of thousands of unresolved cases, by automating and streamlining court administration and case progression tasks.

He described how pilot tools have already been used in the probation system to transcribe meetings and save over 25,000 hours of administrative time. He said similar AI transcription and summarisation systems are being tested in courts and tribunals to help judges, magistrates and legal advisers handle paperwork more efficiently.

Lammy also announced additional funding for an in-house Justice AI unit to support pilot tools such as J-AI, an intelligent listing assistant that helps schedule and prioritise cases, and to strengthen partnerships with technology firms alongside funding programmes like LawtechUK that support law-tech innovation.

The Ministry of Justice will expand the use of AI tools to assist transcription, case summary generation and legal analysis, aiming to free up human judges and staff to focus on substantive decision-making.

The reforms come amid broader judicial changes, including lifting caps on court sitting days and proposals to reduce the number of jury trials for less serious offences, to alleviate bottlenecks that could otherwise take years to clear.

However, legal industry groups such as the Law Society of England and Wales have expressed reservations, arguing that while AI may help with administrative tasks, it should not replace critical human judgement in decisions with serious consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI preparing kids for careers that don’t exist yet, say education leaders

Education leaders and industry stakeholders in South Africa say the rise of AI is transforming labour-market expectations to the point that tomorrow’s careers may not yet exist.

They argue that traditional curricula, centred on static knowledge and routine tasks, must evolve to prioritise adaptability, problem-solving, creativity, ethical reasoning and digital fluency: competencies that complement AI rather than compete with it.

Speakers at recent education forums emphasised that AI will continue to automate routine cognitive and technical work, pushing demand toward roles that require higher-order thinking and human-centred skills.

They described a growing need to integrate AI literacy and data skills into schooling from an early age to reduce future workforce displacement and prepare students to harness AI as a productive partner.

Experts also highlighted equity concerns: without intentional policy and investment to support under-resourced schools and communities, the ‘AI skills gap’ could exacerbate inequality. Some educators recommended stronger partnerships between government, tech industry and educational institutions to co-develop curricula, teacher training and accessible AI tools.

They underscored that competencies such as empathetic communication, cultural awareness and ethical judgement (areas where AI lacks robust capabilities) will remain crucial.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Romania’s job market faces structural change as AI and automation rise

A Think by ING analysis finds that Romania’s recent macroeconomic slowdown reflects deep structural change rather than merely cyclical weakness.

After years of robust consumption-led expansion, fiscal tightening and weak domestic demand have curbed growth, while firms increasingly invest in automation and AI to boost productivity rather than expand headcount.

Industrial employment has declined; manufacturing jobs fell by around 25,000 in late 2025, for example, and hiring has shifted toward defensive, replacement-only patterns.

Firms are integrating robotics, automated assembly lines and intelligent logistics systems, and service-sector work is also being reshaped by AI tools, even where formal adoption is still emerging.

A recent survey suggests that 68% of people in Romania have used AI tools, and 44% rely on them for work tasks such as administrative support and analysis, signalling rising informal use ahead of widespread enterprise deployment.

While automation and AI can raise productivity and output without proportional employment growth, they also tilt the labour market: high-skill specialised roles (e.g. AI, engineering, advanced management) are expected to remain resilient or grow, while routine roles, including some entry-level tech positions, call-centre jobs and administrative tasks, face stagnation or decline.

The result can be a ‘barbell’ labour market, with growth chiefly at the high and low ends and limited opportunities in mid-skill roles.

Real wage erosion, tight hiring and demographic trends (including a shrinking workforce) add to short-term challenges. In the near term, employment may remain subdued even as economic output recovers modestly by 2027.

Over the longer term, the economy’s shift toward capital-intensive, productivity-driven growth could support stronger output without generating broad employment, underscoring the need for education, reskilling and policy strategies that help workers adapt to AI-driven labour demand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated film removed from cinemas after public backlash

A prize-winning AI-generated short film has been pulled from cinemas following criticism from audiences. Thanksgiving Day, created by filmmaker Igor Alferov, was due to screen in selected theatres before feature presentations.

Concerns emerged after news of the screening spread online, prompting complaints directed at AMC Theatres. The chain stated it had not programmed the film and that pre-show advertising partner Screenvision Media had arranged the placement.

AMC confirmed it would not participate in the initiative, meaning the AI film will no longer appear in its locations. The animated short, produced using Google’s Gemini 3.1 and Nano Banana Pro tools, had recently won an AI film festival award.

The episode comes amid broader debate about artificial intelligence in Hollywood. Industry insiders suggest studios are quietly increasing AI use in production, even as concerns grow over job losses and economic uncertainty within Los Angeles’ entertainment sector.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Commission delays high-risk AI guidance

The European Commission has confirmed it will again delay publishing guidance on high-risk AI systems under the EU AI Act. The guidelines were due by 2 February 2026, but will now follow a revised timeline.

According to Euractiv, the document is intended to clarify which AI systems fall into the high-risk category and therefore face stricter obligations. Officials said more time is needed to incorporate significant stakeholder feedback.

The delay marks the second missed deadline and adds to broader implementation setbacks surrounding the EU AI Act. Several member states have yet to designate national enforcement bodies, complicating oversight preparations.

Brussels is also considering postponing the application of high-risk rules through a digital simplification package. Parliament and Council appear supportive of moving the August deadline back by more than a year, easing pressure on companies awaiting guidance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA drives a new era of industrial AI cybersecurity

AI-driven defences are moving deeper into operational technology as NVIDIA leads a shift toward embedded cybersecurity across critical infrastructure.

The company is partnering with firms such as Akamai Technologies, Forescout, Palo Alto Networks, Siemens and Xage Security to protect energy, manufacturing and transport systems that increasingly operate through cloud-linked environments.

Modernisation has expanded capabilities across these sectors, yet it has widened the gap between evolving threats and ageing industrial defences.

Zero-trust adoption in operational environments is gaining momentum as Forescout and NVIDIA develop real-time verification models tailored to legacy devices and safety-critical processes.

Security workloads run on NVIDIA BlueField hardware to keep protection isolated from industrial systems and avoid any interference with essential operations. That approach enables more precise control over lateral movement across networks without disrupting performance.

Industrial automation is also adapting through Siemens and Palo Alto Networks, which are moving security enforcement closer to workloads at the edge. AI-enabled inspection via BlueField enhances visibility in highly time-sensitive environments, improving reliability and uptime.

Akamai and Xage are extending similar models to energy infrastructure and large-scale operational networks, embedding segmentation and identity-based controls where resilience is most critical.

A coordinated architecture is now emerging in which edge-generated operational data feeds central AI analysis, while enforcement remains local to maintain continuity.

The result is a security model designed to meet the pressures of cyber-physical systems, enabling operators to detect threats faster, reinforce operational stability and protect infrastructure that supports global AI expansion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global privacy regulators warn of rising AI deepfake harms

Privacy regulators from around the world have issued a joint warning about the rise of AI-generated deepfakes, arguing that the spread of non-consensual images poses a global risk rather than a problem confined to individual countries.

Sixty-one authorities endorsed a declaration that draws attention to AI images and videos depicting real people without their knowledge or consent.

The signatories highlight the rapid growth of intimate deepfakes, particularly those targeting children and individuals from vulnerable communities. They note that such material often circulates widely on social platforms and may fuel exploitation or cyberbullying.

The declaration argues that the scale of the threat requires coordinated action rather than isolated national responses.

European authorities, including the European Data Protection Board and the European Data Protection Supervisor, support the effort to build global cooperation.

Regulators say that joint oversight is needed to limit the harms caused by AI systems that generate false depictions and to protect individuals’ privacy as required under frameworks such as the General Data Protection Regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI data centre surge pushes electricity demand in the UK to new heights

The UK faces rising pressure on its electricity system as about 140 new data centre projects could demand more power than the country’s current peak consumption, according to Ofgem.

The regulator said developers are seeking about 50 gigawatts of capacity, a level driven by rapid growth in AI and far beyond earlier forecasts.

Connection requests have surged since late 2024, placing strain on a grid already struggling to support vital renewable projects that are key to national climate targets.

Work needed to connect expanding data centre capacity could delay schemes considered essential for decarbonisation and economic growth, slowing the transition rather than supporting it at the required pace.

The growing electricity footprint of AI infrastructure also threatens the aim of creating a virtually carbon-free power system by 2030, particularly as high costs and slow grid integration continue to hinder progress.

A proposed data centre in Lincolnshire has already raised concerns by projecting emissions greater than those of several international airports combined.

Ofgem now warns that speculative grid applications are blocking more viable projects, including those tied to government AI growth zones.

The regulator is considering more stringent financial requirements and new fees for access to grid connections, arguing that developers may need to build their own routes to the network rather than rely entirely on existing infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!