MIT develops method to detect overconfident AI

Researchers at MIT have introduced a new method to assess the reliability of large language models more accurately. Many LLMs can produce confident yet incorrect responses, posing risks in high-stakes applications such as healthcare or finance.

The team combined self-consistency checks with an ensemble approach, comparing a model’s outputs to those of similar LLMs. The resulting total uncertainty (TU) metric more accurately identifies overconfident predictions and can flag hallucinations that simpler methods may miss.
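The article does not give the TU formula, but the underlying idea can be sketched in a few lines: sample several answers from each model, measure how often each model agrees with itself, measure how often the models agree with one another, and treat low agreement as high uncertainty. Everything below, including the function names and the way the two signals are combined, is an illustrative assumption rather than the researchers' actual method.

```python
from collections import Counter

def self_consistency(answers):
    """Return the majority answer and the fraction of samples agreeing with it."""
    top, n_top = Counter(answers).most_common(1)[0]
    return top, n_top / len(answers)

def total_uncertainty(model_samples):
    """Toy TU-style score: combine within-model self-consistency with
    cross-model (ensemble) agreement. model_samples maps a model name to
    a list of sampled answers to the same prompt."""
    majorities, self_conf = {}, {}
    for name, answers in model_samples.items():
        ans, conf = self_consistency(answers)
        majorities[name] = ans
        self_conf[name] = conf
    # Cross-model agreement: do the models' majority answers coincide?
    _, ensemble_agree = self_consistency(list(majorities.values()))
    # Confidence is high only when both signals are high (simple product
    # heuristic, chosen for illustration); uncertainty is its complement.
    avg_self = sum(self_conf.values()) / len(self_conf)
    return 1.0 - avg_self * ensemble_agree

samples = {
    "model_a": ["Paris", "Paris", "Paris"],
    "model_b": ["Paris", "Paris", "Lyon"],
    "model_c": ["Paris", "Paris", "Paris"],
}
tu = total_uncertainty(samples)  # low score: models largely agree
```

A confident wrong answer from a single model would show high self-consistency but poor cross-model agreement, which is exactly the overconfidence pattern an ensemble check can expose.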

Experiments on ten common tasks, including question answering, translation, summarisation, and mathematical reasoning, showed that TU outperformed individual uncertainty measures.

The ensemble approach relies on models from different developers to ensure diversity and credibility, offering a practical and energy-efficient way to gauge AI confidence.

Researchers suggest TU could also help reinforce correct answers during training, improving overall model performance. Future developments aim to enhance the metric’s accuracy for open-ended tasks and explore additional forms of uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI safety push sees Anthropic and OpenAI recruit explosives specialists

Anthropic and OpenAI are recruiting chemical and explosives experts to strengthen safeguards for their AI systems, reflecting growing concern about the potential misuse of advanced models.

Anthropic is seeking a policy specialist to design and monitor guardrails governing how its systems respond to prompts involving chemical weapons and explosives. The role includes assessing high-risk scenarios and responding to potential escalation signals in real time.

OpenAI is expanding its Preparedness team, hiring researchers and a threat modeller to identify and forecast risks linked to frontier AI systems. The positions focus on evaluating catastrophic risks and aligning technical, policy, and governance responses.

The recruitment drive comes amid heightened scrutiny of AI safety and national security implications. Anthropic is currently challenging a US government designation that labels it a supply-chain risk, while tensions have emerged over restrictions on the military use of AI systems.

At the same time, OpenAI has secured agreements to deploy its technology in classified environments under defined constraints. The parallel developments highlight how AI firms are balancing commercial expansion with increasing pressure to implement robust safety controls.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Data centres drive LG’s integrated AI infrastructure push

AI infrastructure is becoming a central battleground for growth, with LG Group accelerating its push into AI data centres and energy storage systems under its ‘One LG’ strategy.

The initiative brings together key affiliates to deliver integrated solutions for AI data centres. LG Electronics provides cooling systems, LG Energy Solution handles power infrastructure, including ESS and UPS, while LG Uplus and LG CNS oversee design, construction, and operations.

The strategy comes as global demand for AI data centres surges, driven by energy-intensive workloads and rising electricity constraints. Expanding storage capacity has become critical, with the US expected to add over 24 gigawatts of energy storage capacity in 2026 alone.

LG Electronics is focusing on advanced cooling technologies, including large air-cooled chillers and liquid-cooling systems, to manage the intense heat generated by GPU-intensive AI workloads. The company has also expanded into immersion cooling through partnerships, aiming to achieve efficiency gains in next-generation facilities.

Meanwhile, LG Energy Solution is strengthening its role in power infrastructure, scaling ESS production across North America, and securing major contracts. Through integrated battery and software solutions, the company is positioning itself to meet growing demand for stable, high-capacity energy systems supporting AI operations.

On the networking side, LG Uplus is developing low-latency infrastructure and AI-driven data centre management systems to optimise performance and energy use in real time. Together, these efforts highlight LG’s ambition to become a full-stack provider in the rapidly expanding AI data centre ecosystem.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA Isaac powers generalist specialist robots at scale

A new class of robots is emerging, combining broad adaptability with task-specific precision as developers move toward generalist specialist systems. Within this shift, NVIDIA Isaac is enabling integrated workflows that connect data generation, simulation, training, and deployment across robotics pipelines.

NVIDIA Isaac unifies robotics development across these stages, integrating cloud-to-robot workflows that allow developers to build, test, and scale systems more efficiently across both real and simulated environments.

A key driver is the growing reliance on synthetic data, which allows developers to simulate rare or hazardous scenarios that are difficult to capture in the real world. NVIDIA Isaac supports this through tools such as Omniverse-based simulation and teleoperation pipelines, helping convert real-world signals into scalable training datasets and accelerating development cycles.

The platform also enables advanced robot training using reasoning vision-language-action models, which allow machines to perceive, interpret, and act across complex environments. With frameworks like Isaac Lab and integrated physics engines, NVIDIA Isaac enables robots to train across thousands of parallel simulations, significantly reducing time, cost, and risk compared to real-world training.
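The speed-up from training across thousands of parallel simulations comes from stepping every environment instance as one batched operation instead of looping over them. The toy sketch below mirrors that pattern with plain NumPy; it deliberately uses none of the Isaac Lab APIs, and the environment, class name, and reward are invented for illustration.

```python
import numpy as np

class BatchedEnv:
    """N copies of a trivial 1-D 'reach the target' task, stepped together.
    Illustrates the batched-stepping pattern behind large-scale parallel
    simulation, not any actual NVIDIA Isaac interface."""

    def __init__(self, n_envs, target=1.0):
        self.pos = np.zeros(n_envs)   # every environment starts at 0
        self.target = target

    def step(self, actions):
        # One vectorised update advances all environments at once.
        self.pos += actions
        dist = np.abs(self.target - self.pos)
        rewards = -dist               # closer to the target is better
        done = dist < 0.05
        return self.pos.copy(), rewards, done

env = BatchedEnv(n_envs=4096)
obs, rew, done = env.step(np.full(4096, 0.1))  # 4096 transitions in one call
```

On a GPU-backed physics engine the same idea applies at far larger scale: each `step` advances thousands of physically simulated robots, so a policy gathers experience orders of magnitude faster than a single real robot could.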

Once trained, NVIDIA Isaac supports deployment across edge AI systems, including the Jetson platform, while maintaining consistency between simulation and real-world performance. Combined with modular workflows and open frameworks, the platform is positioning itself as a core foundation for scalable, next-generation robotics.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU advances AI simplification effort ahead of further negotiations

A committee within the European Parliament has approved a proposal to simplify aspects of AI regulation, marking a step forward in efforts to refine the implementation of the AI Act.

The initiative seeks to adjust certain requirements to support clearer compliance, particularly for industry stakeholders.

The proposal focuses on technical and procedural elements linked to how AI rules are applied in practice.

Lawmakers aim to ensure that regulatory obligations remain proportionate while maintaining existing safeguards. Part of the discussion includes how specific categories of AI systems should be addressed within the broader framework.

Some elements of the proposal may require further discussion in upcoming negotiations with the Council of the European Union. Areas under consideration include the treatment of sensitive AI applications and the balance between regulatory clarity and enforcement effectiveness.

The development reflects ongoing efforts within the EU to refine its approach to AI governance. As discussions continue, policymakers are expected to assess how adjustments can support innovation while maintaining consistency with existing legal principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta to end Instagram private message encryption after May 8

US tech giant Meta has announced that end-to-end encryption for private messages on Instagram will no longer be supported after 8 May.

The technology ensured that only intended recipients could read messages, preventing even Meta from accessing their contents.

The decision follows concerns from law enforcement and child protection organisations, which argued that encrypted messages can make it harder to identify harmful content involving children.

Meta has stated that the update allows the platform to monitor messages while maintaining standard privacy safeguards.

End-to-end encryption had been the default for several messaging platforms, including WhatsApp, Messenger, and other Meta services.

The company first signalled its intent to expand encryption across Instagram and Messenger in 2019, implementing it in 2023. The plan was met with objections from organisations such as the Internet Watch Foundation and the Virtual Global Taskforce.

These groups highlighted potential risks in preventing the timely detection of harmful content, particularly child sexual abuse material.

Meta’s shift reflects a compromise between privacy, platform security, and online child safety. The company has not provided further details on changes to encryption policies beyond Instagram’s private messaging service.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Workplace adoption of AI varies widely in the EU

Generative AI is becoming increasingly common in Europe, with around a third of people using the tools in 2025. Fewer than half of these users apply AI professionally, leaving workplace adoption at just 15%.

Usage varies greatly across the continent. Norway recorded the highest rate at 35.4%, followed closely by Switzerland at 34.4%. Northern and Western European nations generally lead, while Eastern and Southeastern countries report much lower rates, with Hungary at only 1.3%.

Among the EU’s largest economies, France and Spain have the highest workplace AI use, at 18.4% and 17.9%, respectively, while Germany is slightly above average at 15.8%, and Italy lags at 8%. Experts note that adoption depends on skills, trust, governance, and the structure of national economies.

The gap between personal and professional AI use highlights growth potential. As AI agents continue spreading across workplaces, adoption rates are expected to rise, particularly in industries suited to generative AI, such as ICT, research, media, and knowledge-based sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s metaverse collapses as Horizon Worlds shuts down on Quest

Meta will shut down Horizon Worlds on its Quest headsets, ending its flagship virtual reality (VR) platform and marking a clear retreat from its metaverse ambitions. The app will be removed from the Quest store on 31 March and discontinued in VR by 15 June, continuing only as a mobile service.

Horizon Worlds, launched in 2021, was central to Meta’s rebranding from Facebook and its vision of a fully immersive virtual environment. Despite billions in investment and high-profile partnerships, the platform failed to attract a large user base and struggled with design limitations and weak engagement.

Reality Labs, the division behind the metaverse push, has accumulated nearly $80 billion in losses since 2020, including more than $6 billion in a single quarter. Recent layoffs affecting around 10 percent of the VR workforce, along with the shutdown of related projects, underscore a broader pullback.

Competition and shifting priorities have accelerated the decline. Rival platforms such as VRChat maintained stronger communities, while Meta increasingly redirected resources toward AI and hardware, including its Ray-Ban smart glasses.

Although Meta says it remains committed to VR, the closure of Horizon Worlds signals a strategic reset. The company is repositioning its future around AI-driven products, marking a decisive shift away from its earlier metaverse vision.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google responds to UK digital market rules and CMA proposals

Debate over proposed UK digital market rules is intensifying, with Google outlining its position and emphasising the need to balance competition with user experience and platform integrity. The company said it supports the objectives of the Competition and Markets Authority but warned that some proposals could introduce risks for users.

Google argued that maintaining fair and relevant search results remains a priority, stating that its ranking systems are designed to prioritise quality rather than favour its own services. It cautioned that certain third-party proposals could expose its systems to manipulation, potentially weakening protections against spam and reducing the pace of product improvements.

The company also addressed user choice on Android devices, noting that existing options already allow users to select preferred services. It suggested that adding frequent mandatory choice screens could disrupt user experience, proposing instead a permanent settings-based option to change defaults without repeated prompts.

Regarding publisher relations, Google highlighted efforts to increase control over how content is used, particularly with generative AI features such as AI Overviews. It said new tools are being developed to allow publishers to opt out of specific AI functionalities while maintaining visibility in search results.

Google said it would continue engaging with UK regulators to shape rules that support users, publishers, and businesses, while ensuring that innovation and service quality are not compromised.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Growing investment and energy plans reshape Armenia’s AI future

Armenia’s recent technology announcements are helping to form a clearer national AI strategy with stronger coordination. A memorandum with the US on semiconductors and AI now appears to be moving beyond symbolic commitment into action.

Momentum has accelerated with plans to expand a large-scale AI factory backed by significant investment. The project is estimated at around $4 billion and includes tens of thousands of advanced GPUs to support large-scale development.

The initiative is already entering construction, marking a shift from concept to execution in a short timeframe. Officials have described a broader vision of building a network of AI factories across the country.

Energy planning is becoming central, with discussions around deploying a small modular nuclear reactor to meet demand. Stable and scalable power is considered essential for sustaining long-term AI infrastructure growth.

Efforts are also targeting the wider ecosystem through a Virtual AI Institute and planned GPU access for startups. These steps aim to strengthen research capacity and ensure local participation in the country’s AI expansion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!