AI safety push sees Anthropic and OpenAI recruit explosives specialists

Anthropic and OpenAI are recruiting chemical and explosives experts to strengthen safeguards for their AI systems, reflecting growing concern about the potential misuse of advanced models.

Anthropic is seeking a policy specialist to design and monitor guardrails governing how its systems respond to prompts involving chemical weapons and explosives. The role includes assessing high-risk scenarios and responding to potential escalation signals in real time.

OpenAI is expanding its Preparedness team, hiring researchers and a threat modeller to identify and forecast risks linked to frontier AI systems. The positions focus on evaluating catastrophic risks and aligning technical, policy, and governance responses.

The recruitment drive comes amid heightened scrutiny of AI safety and national security implications. Anthropic is currently challenging a US government designation that labels it a supply-chain risk, while tensions have emerged over restrictions on the military use of AI systems.

At the same time, OpenAI has secured agreements to deploy its technology in classified environments under defined constraints. The parallel developments highlight how AI firms are balancing commercial expansion with increasing pressure to implement robust safety controls.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Data centres drive LG’s integrated AI infrastructure push

AI infrastructure is becoming a central battleground for growth, with LG Group accelerating its push into AI data centres and energy storage systems under its ‘One LG’ strategy.

The initiative brings together key affiliates to deliver integrated solutions for AI data centres. LG Electronics provides cooling systems; LG Energy Solution handles power infrastructure, including ESS and uninterruptible power supply (UPS) systems; and LG Uplus and LG CNS oversee design, construction, and operations.

The strategy comes as global demand for AI data centres surges, driven by energy-intensive workloads and rising electricity constraints. Expanding storage capacity has become critical, with the US expected to add over 24 gigawatts of energy storage capacity in 2026 alone.

LG Electronics is focusing on advanced cooling technologies, including large air-cooled chillers and liquid-cooling systems, to manage the intense heat generated by GPU-intensive AI workloads. The company has also expanded into immersion cooling through partnerships, aiming to achieve efficiency gains in next-generation facilities.

Meanwhile, LG Energy Solution is strengthening its role in power infrastructure, scaling ESS production across North America, and securing major contracts. Through integrated battery and software solutions, the company is positioning itself to meet growing demand for stable, high-capacity energy systems supporting AI operations.

On the networking side, LG Uplus is developing low-latency infrastructure and AI-driven data centre management systems to optimise performance and energy use in real time. Together, these efforts highlight LG’s ambition to become a full-stack provider in the rapidly expanding AI data centre ecosystem.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA Isaac powers generalist specialist robots at scale

A new class of robots is emerging, combining broad adaptability with task-specific precision as developers move toward generalist specialist systems. Within this shift, NVIDIA Isaac is enabling integrated workflows that connect data generation, simulation, training, and deployment across robotics pipelines.

NVIDIA Isaac unifies robotics development across these stages, integrating cloud-to-robot workflows that allow developers to build, test, and scale systems more efficiently across both real and simulated environments.

A key driver is the growing reliance on synthetic data, which allows developers to simulate rare or hazardous scenarios that are difficult to capture in the real world. NVIDIA Isaac supports this through tools such as Omniverse-based simulation and teleoperation pipelines, helping convert real-world signals into scalable training datasets and accelerating development cycles.

The platform also enables advanced robot training using reasoning vision-language-action models, which allow machines to perceive, interpret, and act across complex environments. With frameworks like Isaac Lab and integrated physics engines, NVIDIA Isaac enables robots to train across thousands of parallel simulations, significantly reducing time, cost, and risk compared to real-world training.

Once trained, NVIDIA Isaac supports deployment across edge AI systems, including the Jetson platform, while maintaining consistency between simulation and real-world performance. Combined with modular workflows and open frameworks, the platform is positioning itself as a core foundation for scalable, next-generation robotics.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Parents underestimate how teenagers use AI in daily life

Parents often believe they understand how their children use AI tools in daily life, but recent studies suggest a clear and growing disconnect. Teenagers are using AI more frequently and in more complex ways than most adults realise.

Research indicates that 64% of teens use AI, while only 51% of parents think their children do. A large share of families have never discussed AI, leaving teenagers to navigate its role without guidance.

Teenagers commonly use AI for schoolwork, research and entertainment as part of their routine activities. However, a notable number also rely on it for advice, conversation and even emotional support in personal situations.

Experts warn that this awareness gap can increase risks linked to misuse and emotional dependence on AI tools. Limited parental understanding means many overlook how strongly AI is influencing behaviour and decision-making.

Despite these concerns, many teenagers feel confident using AI and see it as a helpful tool. Specialists emphasise that open conversations are essential to ensure more responsible and balanced use at home.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI standards and regulation struggle to keep pace with global innovation

Global efforts to regulate AI are accelerating, but innovation continues to outpace formal rules. Policymakers and industry leaders are increasingly turning to standards to help bridge compliance gaps.

At the AI Standards Hub Global Summit, experts highlighted how technical standards support responsible AI development. These tools are seen as essential for scaling AI safely while regulatory frameworks continue to evolve.

Differences across regions remain significant, with the EU relying on formal regulation and the US leaning on flexible standards. This fragmented landscape is raising concerns over compliance costs and barriers to cross-border deployment.

Experts stress that standards must evolve alongside AI while aligning with global frameworks and enforcement efforts. Without coordination, inconsistencies could limit innovation and weaken trust in AI systems.

Calls are growing for shared definitions, measurable benchmarks and stronger international cooperation. Stakeholders argue that aligning standards with regulation will be critical for future AI governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU advances AI simplification effort ahead of further negotiations

A committee within the European Parliament has approved a proposal to simplify aspects of AI regulation, marking a step forward in efforts to refine the implementation of the AI Act.

The initiative seeks to adjust certain requirements to support clearer compliance, particularly for industry stakeholders.

The proposal focuses on technical and procedural elements linked to how AI rules are applied in practice.

Lawmakers aim to ensure that regulatory obligations remain proportionate while maintaining existing safeguards. Part of the discussion includes how specific categories of AI systems should be addressed within the broader framework.

Some elements of the proposal may require further discussion in upcoming negotiations with the Council of the European Union. Areas under consideration include the treatment of sensitive AI applications and the balance between regulatory clarity and enforcement effectiveness.

The development reflects ongoing efforts within the EU to refine its approach to AI governance. As discussions continue, policymakers are expected to assess how adjustments can support innovation while maintaining consistency with existing legal principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta to end Instagram private message encryption after 8 May

US tech giant Meta has announced that end-to-end encryption for private messages on Instagram will no longer be supported after 8 May.

Until now, the technology has ensured that only intended recipients can read messages, preventing even Meta from accessing their contents.

The decision follows concerns from law enforcement and child protection organisations, which argued that encrypted messages can make it harder to identify harmful content involving children.

Meta has stated that the update allows the platform to monitor messages while maintaining standard privacy safeguards.

End-to-end encryption had been the default for several messaging platforms, including WhatsApp, Messenger, and other Meta services.

The company first signalled its intent to expand encryption across Instagram and Messenger in 2019, implementing it in 2023. The plan was met with objections from organisations such as the Internet Watch Foundation and the Virtual Global Taskforce.

These groups highlighted potential risks in preventing the timely detection of harmful content, particularly child sexual abuse material.

Meta’s shift reflects a compromise between privacy, platform security, and online child safety. The company has not provided further details on changes to encryption policies beyond Instagram’s private messaging service.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Workplace adoption of AI varies widely in the EU

Generative AI is becoming increasingly common in Europe, with around a third of people using the tools in 2025. Fewer than half of these users apply AI professionally, leaving workplace adoption at just 15%.

Usage varies greatly across the continent. Norway recorded the highest rate at 35.4%, followed closely by Switzerland at 34.4%. Northern and Western European nations generally lead, while Eastern and Southeastern countries report much lower rates, with Hungary at only 1.3%.

Among the EU’s largest economies, France and Spain have the highest workplace AI use, at 18.4% and 17.9%, respectively, while Germany is slightly above average at 15.8%, and Italy lags at 8%. Experts note that adoption depends on skills, trust, governance, and the structure of national economies.

The gap between personal and professional AI use highlights growth potential. As AI agents continue spreading across workplaces, adoption rates are expected to rise, particularly in industries suited to generative AI, such as ICT, research, media, and knowledge-based sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta’s metaverse collapses as Horizon Worlds shuts down on Quest

Meta will shut down Horizon Worlds on its Quest headsets, ending its flagship virtual reality (VR) platform and marking a clear retreat from its metaverse ambitions. The app will be removed from the Quest store on 31 March and discontinued in VR by 15 June, continuing only as a mobile service.

Horizon Worlds, launched in 2021, was central to Meta’s rebranding from Facebook and its vision of a fully immersive virtual environment. Despite billions in investment and high-profile partnerships, the platform failed to attract a large user base and struggled with design limitations and weak engagement.

Reality Labs, the division behind the metaverse push, has accumulated nearly $80 billion in losses since 2020, including more than $6 billion in a single quarter. Recent layoffs affecting around 10 percent of the VR workforce, along with the shutdown of related projects, underscore a broader pullback.

Competition and shifting priorities have accelerated the decline. Rival platforms such as VRChat maintained stronger communities, while Meta increasingly redirected resources toward AI and hardware, including its Ray-Ban smart glasses.

Although Meta says it remains committed to VR, the closure of Horizon Worlds signals a strategic reset. The company is repositioning its future around AI-driven products, marking a decisive shift away from its earlier metaverse vision.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google responds to UK digital market rules and CMA proposals

Debate over proposed UK digital market rules is intensifying, with Google outlining its position and emphasising the need to balance competition with user experience and platform integrity. The company said it supports the objectives of the Competition and Markets Authority but warned that some proposals could introduce risks for users.

Google argued that maintaining fair and relevant search results remains a priority, stating that its ranking systems are designed to prioritise quality rather than favour its own services. It cautioned that certain third-party proposals could expose its systems to manipulation, potentially weakening protections against spam and reducing the pace of product improvements.

The company also addressed user choice on Android devices, noting that existing options already allow users to select preferred services. It suggested that adding frequent mandatory choice screens could disrupt user experience, proposing instead a permanent settings-based option to change defaults without repeated prompts.

Regarding publisher relations, Google highlighted efforts to increase control over how content is used, particularly with generative AI features such as AI Overviews. It said new tools are being developed to allow publishers to opt out of specific AI functionalities while maintaining visibility in search results.

Google said it would continue engaging with UK regulators to shape rules that support users, publishers, and businesses, while ensuring that innovation and service quality are not compromised.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!