UNESCO promotes safe AI use and gender equality in Caribbean workshop

UNESCO has organised a regional workshop in Kingston to explore the relationship between AI, gender equality and online safety, part of wider efforts to support inclusive digital governance across the Caribbean.

Discussions examined the impact of technology-facilitated gender-based violence, including harassment, impersonation and image-based abuse, which continue to affect women and girls disproportionately.

Generative AI was presented as both an opportunity and a risk, with concerns linked to bias, deepfakes, misinformation and non-consensual content.

More than 50 participants from government, civil society and youth organisations engaged in practical sessions aimed at strengthening awareness and digital skills. A participatory approach encouraged peer learning and critical thinking, aligning with UNESCO’s ethical AI principles.

"Technology reflects the hands that build it and the society that feeds it data. If we are not careful, AI will not just mirror our existing inequalities; it will magnify them."

– The Honourable Olivia Grange, Minister of Culture, Gender, Entertainment and Sport of Jamaica

"The pursuit of equality must extend into every space where women live, work, and where they connect and express themselves – including the digital world."

– Eric Falt, Regional Director and Representative of UNESCO

The initiative forms part of broader efforts to ensure that digital transformation supports inclusion rather than reinforcing existing disparities, while equipping stakeholders with tools for safe and responsible AI use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin moves closer to quantum resistance with BIP-360

BTQ Technologies has deployed Bitcoin Improvement Proposal BIP-360 on its Bitcoin Quantum Testnet v0.3.0, marking the first live test of the proposal. The upgrade introduces a quantum-resistant transaction model, Pay-to-Merkle-Root, designed to strengthen Bitcoin’s long-term security.

BIP-360 focuses on mitigating a vulnerability linked to Taproot’s key-path spending mechanism, which can expose public keys on-chain. Such exposure may become a risk if future quantum computers are capable of exploiting cryptographic weaknesses using advanced algorithms.
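The underlying idea can be illustrated with a toy sketch: an output commits only to the Merkle root of its spending conditions, so no public key appears on-chain until the output is actually spent. This is a simplified illustration of hash-based commitment, not the actual BIP-360 construction; the tagged-hash helper and the leaf contents are assumptions for demonstration.

```python
import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    """BIP-340-style tagged hash, widely used in modern Bitcoin proposals."""
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + data).digest()

def merkle_root(leaves):
    """Compute a simple binary Merkle root over leaf hashes (illustrative only)."""
    level = [tagged_hash("Leaf", leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [tagged_hash("Branch", level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# The on-chain output stores only this 32-byte root; the spending scripts
# (including any post-quantum public keys) stay hidden until spend time.
spend_conditions = [b"<post-quantum pubkey script>", b"<fallback script>"]
root = merkle_root(spend_conditions)
print(root.hex())
```

Because the commitment is a plain hash, an attacker with a future quantum computer gains nothing from watching unspent outputs: there is no public key on-chain to derive a private key from.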

The testnet adds new consensus rules, post-quantum signatures, and full transaction lifecycle testing. Faster one-minute block times and adjusted fee structures have been introduced to accommodate larger and more complex signatures.

Growing global attention to quantum threats adds urgency to the development: authorities in the US, the EU, and Canada are already setting migration timelines for post-quantum cryptography to secure critical systems.


EU scrutiny intensifies over Broadcom VMware licensing dispute

Broadcom is facing increased regulatory pressure in the EU following a formal antitrust complaint concerning changes to VMware licensing practices.

The complaint highlights growing tensions between large technology providers and European cloud infrastructure firms.

The filing, submitted by the trade association Cloud Infrastructure Services Providers in Europe (CISPE), raises concerns that revised licensing models could significantly alter market dynamics.

European providers argue that the changes may limit flexibility, increase costs, and affect their ability to compete effectively in the cloud services sector.

At the centre of the dispute lies the broader issue of market concentration and control over critical digital infrastructure.

Industry stakeholders suggest that restrictive licensing conditions could reshape access to essential virtualisation technologies, which underpin a wide range of cloud and enterprise services across the EU.

Regulatory attention is expected to focus on whether such practices comply with EU competition rules, particularly regarding fair access and market neutrality.

The case emerges at a time when European policymakers are intensifying oversight of dominant technology firms and seeking to strengthen digital sovereignty across strategic sectors.


MIT develops method to detect overconfident AI

Researchers at MIT have introduced a new method to assess the reliability of large language models more accurately. Many LLMs can produce confident yet incorrect responses, posing risks in high-stakes applications such as healthcare or finance.

The team combined self-consistency checks with an ensemble approach, comparing a model's outputs with those of similar LLMs. The resulting total uncertainty (TU) metric identifies overconfident predictions more accurately and can flag hallucinations that simpler methods may miss.
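As a rough sketch of how such a score might be computed: self-consistency can be measured as the entropy of one model's repeated answers, and ensemble disagreement as the entropy of answers across different models. The article does not give MIT's exact formula, so the simple additive combination below is an assumption for illustration.

```python
import math
from collections import Counter

def entropy(answers):
    """Shannon entropy (in nats) of an empirical answer distribution."""
    counts = Counter(answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def total_uncertainty(self_samples, ensemble_answers):
    """Illustrative 'total uncertainty': self-consistency entropy plus
    disagreement entropy across an ensemble of different models.
    (A simplified stand-in for the TU metric, not MIT's actual formula.)"""
    return entropy(self_samples) + entropy(ensemble_answers)

# A model that is confidently self-consistent, but the ensemble disagrees:
self_samples = ["Paris"] * 5                          # repeated samples, one model
ensemble = ["Paris", "Lyon", "Paris", "Marseille"]    # answers from other models
print(round(total_uncertainty(self_samples, ensemble), 3))
```

The key property is that a model can score zero on self-consistency (it always gives the same answer) yet still receive a high TU when independent models disagree, which is exactly the overconfident-but-wrong case the method targets.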

Experiments on ten common tasks, including question answering, translation, summarisation, and mathematical reasoning, showed that TU outperformed individual uncertainty measures.

The ensemble approach relies on models from different developers to ensure diversity and credibility, offering a practical and energy-efficient way to gauge AI confidence.

Researchers suggest TU could also help reinforce correct answers during training, improving overall model performance. Future developments aim to enhance the metric’s accuracy for open-ended tasks and explore additional forms of uncertainty.


AI safety push sees Anthropic and OpenAI recruit explosives specialists

Anthropic and OpenAI are recruiting chemical and explosives experts to strengthen safeguards for their AI systems, reflecting growing concern about the potential misuse of advanced models.

Anthropic is seeking a policy specialist to design and monitor guardrails governing how its systems respond to prompts involving chemical weapons and explosives. The role includes assessing high-risk scenarios and responding to potential escalation signals in real time.

OpenAI is expanding its Preparedness team, hiring researchers and a threat modeller to identify and forecast risks linked to frontier AI systems. The positions focus on evaluating catastrophic risks and aligning technical, policy, and governance responses.

The recruitment drive comes amid heightened scrutiny of AI safety and national security implications. Anthropic is currently challenging a US government designation that labels it a supply-chain risk, while tensions have emerged over restrictions on the military use of AI systems.

At the same time, OpenAI has secured agreements to deploy its technology in classified environments under defined constraints. The parallel developments highlight how AI firms are balancing commercial expansion with increasing pressure to implement robust safety controls.


Data centres drive LG’s integrated AI infrastructure push

AI infrastructure is becoming a central battleground for growth, with LG Group accelerating its push into AI data centres and energy storage systems under its ‘One LG’ strategy.

The initiative brings together key affiliates to deliver integrated solutions for AI data centres. LG Electronics provides cooling systems, LG Energy Solution handles power infrastructure, including ESS and UPS, while LG Uplus and LG CNS oversee design, construction, and operations.

The strategy comes as global demand for AI data centres surges, driven by energy-intensive workloads and mounting strain on electricity supply. Expanding storage capacity has become critical, with the US expected to add over 24 gigawatts of energy storage capacity in 2026 alone.

LG Electronics is focusing on advanced cooling technologies, including large air-cooled chillers and liquid-cooling systems, to manage the intense heat generated by GPU-intensive AI workloads. The company has also expanded into immersion cooling through partnerships, aiming to achieve efficiency gains in next-generation facilities.

Meanwhile, LG Energy Solution is strengthening its role in power infrastructure, scaling ESS production across North America, and securing major contracts. Through integrated battery and software solutions, the company is positioning itself to meet growing demand for stable, high-capacity energy systems supporting AI operations.

On the networking side, LG Uplus is developing low-latency infrastructure and AI-driven data centre management systems to optimise performance and energy use in real time. Together, these efforts highlight LG’s ambition to become a full-stack provider in the rapidly expanding AI data centre ecosystem.


NVIDIA Isaac powers generalist specialist robots at scale

A new class of robots is emerging, combining broad adaptability with task-specific precision as developers move toward generalist specialist systems. Within this shift, NVIDIA Isaac is enabling integrated workflows that connect data generation, simulation, training, and deployment across robotics pipelines.

NVIDIA Isaac unifies robotics development across these stages, integrating cloud-to-robot workflows that allow developers to build, test, and scale systems more efficiently across both real and simulated environments.

A key driver is the growing reliance on synthetic data, which allows developers to simulate rare or hazardous scenarios that are difficult to capture in the real world. NVIDIA Isaac supports this through tools such as Omniverse-based simulation and teleoperation pipelines, helping convert real-world signals into scalable training datasets and accelerating development cycles.

The platform also enables advanced robot training using reasoning vision-language-action models, which allow machines to perceive, interpret, and act across complex environments. With frameworks like Isaac Lab and integrated physics engines, NVIDIA Isaac enables robots to train across thousands of parallel simulations, significantly reducing time, cost, and risk compared to real-world training.

Once trained, NVIDIA Isaac supports deployment across edge AI systems, including the Jetson platform, while maintaining consistency between simulation and real-world performance. Combined with modular workflows and open frameworks, the platform is positioning itself as a core foundation for scalable, next-generation robotics.


AI standards and regulation struggle to keep pace with global innovation

Global efforts to regulate AI are accelerating, but innovation continues to outpace formal rules. Policymakers and industry leaders are increasingly turning to standards to help bridge compliance gaps.

At the AI Standards Hub Global Summit, experts highlighted how technical standards support responsible AI development. These tools are seen as essential for scaling AI safely while regulatory frameworks continue to evolve.

Differences across regions remain significant, with the EU relying on formal regulation and the US leaning on flexible standards. This fragmented landscape is raising concerns over compliance costs and barriers to cross-border deployment.

Experts stress that standards must evolve alongside AI while aligning with global frameworks and enforcement efforts. Without coordination, inconsistencies could limit innovation and weaken trust in AI systems.

Calls are growing for shared definitions, measurable benchmarks and stronger international cooperation. Stakeholders argue that aligning standards with regulation will be critical for future AI governance.


Workplace adoption of AI varies widely in the EU

Generative AI is becoming increasingly common in Europe, with around a third of people using the tools in 2025. Fewer than half of these users apply AI professionally, leaving workplace adoption at just 15%.

Usage varies greatly across the continent. Norway recorded the highest rate at 35.4%, followed closely by Switzerland at 34.4%. Northern and Western European nations generally lead, while Eastern and Southeastern countries report much lower rates, with Hungary at only 1.3%.

Among the EU’s largest economies, France and Spain have the highest workplace AI use, at 18.4% and 17.9%, respectively, while Germany is slightly above average at 15.8%, and Italy lags at 8%. Experts note that adoption depends on skills, trust, governance, and the structure of national economies.

The gap between personal and professional AI use highlights growth potential. As AI agents continue spreading across workplaces, adoption rates are expected to rise, particularly in industries suited to generative AI, such as ICT, research, media, and knowledge-based sectors.


Meta’s metaverse collapses as Horizon Worlds shuts down on Quest

Meta will shut down Horizon Worlds on its Quest headsets, ending its flagship virtual reality (VR) platform and marking a clear retreat from its metaverse ambitions. The app will be removed from the Quest store on 31 March and discontinued in VR by 15 June, continuing only as a mobile service.

Horizon Worlds, launched in 2021, was central to Meta’s rebranding from Facebook and its vision of a fully immersive virtual environment. Despite billions in investment and high-profile partnerships, the platform failed to attract a large user base and struggled with design limitations and weak engagement.

Reality Labs, the division behind the metaverse push, has accumulated nearly $80 billion in losses since 2020, including more than $6 billion in a single quarter. Recent layoffs affecting around 10 percent of the VR workforce, along with the shutdown of related projects, underscore a broader pullback.

Competition and shifting priorities have accelerated the decline. Rival platforms such as VRChat maintained stronger communities, while Meta increasingly redirected resources toward AI and hardware, including its Ray-Ban smart glasses.

Although Meta says it remains committed to VR, the closure of Horizon Worlds signals a strategic reset. The company is repositioning its future around AI-driven products, marking a decisive shift away from its earlier metaverse vision.
