UK drops AI copyright opt-out plan amid growing industry divide

The UK Government has abandoned its previous preference for an AI copyright opt-out model, signalling a shift in policy following strong opposition from creative industries.

Ministers now acknowledge that there is no clear consensus on how AI developers should access copyrighted material.

Concerns from writers, artists and rights holders focused on the use of their work in training AI systems without permission.

Liz Kendall confirmed that extensive consultation exposed significant disagreement, prompting the government to step back from its earlier position that would have allowed the use of copyrighted content unless creators opted out.

A joint report from the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport states that further evidence is required before any legislative change.

Policymakers in the UK will assess how copyright frameworks influence AI development, while also examining international regulation, licensing models and ongoing legal disputes.

Government strategy now centres on balancing innovation with fair compensation.

Officials emphasise that creators must retain control over how their work is used, while AI developers require access to high-quality data to remain competitive. Potential measures include labelling AI-generated content to reduce risks linked to disinformation and deepfakes.

No timeline has been set for reform, reflecting the complexity of aligning economic growth with intellectual property protection.

The debate unfolds alongside broader ambitions outlined by Rachel Reeves, who has identified AI as a central driver of future economic expansion, with the UK aiming to lead adoption across the G7.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon upgrades Alexa with AI features

Amazon is rolling out an AI upgrade to its Alexa assistant, aiming to make interactions more conversational and responsive. The new version is designed to follow conversational context and respond more naturally.

The update comes as Amazon seeks to compete with advanced AI chatbots that have gained popularity in recent years. Critics have argued that smart speakers have fallen behind newer AI tools.

Users in the UK are expected to notice more personalised and proactive responses from the upgraded assistant, drawing on customers' personal data. The service will be included with Prime subscriptions or offered as a standalone monthly option.

Analysts say the update could help Amazon gather even more user data and improve engagement by picking up on customers’ habits through conversations. However, questions remain about whether the changes will drive revenue or revive interest in smart speakers.


AI safety push sees Anthropic and OpenAI recruit explosives specialists

Anthropic and OpenAI are recruiting chemical and explosives experts to strengthen safeguards for their AI systems, reflecting growing concern about the potential misuse of advanced models.

Anthropic is seeking a policy specialist to design and monitor guardrails governing how its systems respond to prompts involving chemical weapons and explosives. The role includes assessing high-risk scenarios and responding to potential escalation signals in real time.

OpenAI is expanding its Preparedness team, hiring researchers and a threat modeller to identify and forecast risks linked to frontier AI systems. The positions focus on evaluating catastrophic risks and aligning technical, policy, and governance responses.

The recruitment drive comes amid heightened scrutiny of AI safety and national security implications. Anthropic is currently challenging a US government designation that labels it a supply-chain risk, while tensions have emerged over restrictions on the military use of AI systems.

At the same time, OpenAI has secured agreements to deploy its technology in classified environments under defined constraints. The parallel developments highlight how AI firms are balancing commercial expansion with increasing pressure to implement robust safety controls.


Data centres drive LG’s integrated AI infrastructure push

AI infrastructure is becoming a central battleground for growth, with LG Group accelerating its push into AI data centres and energy storage systems under its ‘One LG’ strategy.

The initiative brings together key affiliates to deliver integrated solutions for AI data centres. LG Electronics provides cooling systems, LG Energy Solution handles power infrastructure, including ESS and UPS, while LG Uplus and LG CNS oversee design, construction, and operations.

The strategy comes as global demand for AI data centres surges, driven by energy-intensive workloads and growing constraints on electricity supply. Expanding storage capacity has become critical, with the US expected to add over 24 gigawatts of energy storage capacity in 2026 alone.

LG Electronics is focusing on advanced cooling technologies, including large air-cooled chillers and liquid-cooling systems, to manage the intense heat generated by GPU-intensive AI workloads. The company has also expanded into immersion cooling through partnerships, aiming to achieve efficiency gains in next-generation facilities.

Meanwhile, LG Energy Solution is strengthening its role in power infrastructure, scaling ESS production across North America, and securing major contracts. Through integrated battery and software solutions, the company is positioning itself to meet growing demand for stable, high-capacity energy systems supporting AI operations.

On the networking side, LG Uplus is developing low-latency infrastructure and AI-driven data centre management systems to optimise performance and energy use in real time. Together, these efforts highlight LG’s ambition to become a full-stack provider in the rapidly expanding AI data centre ecosystem.


NVIDIA Isaac powers generalist specialist robots at scale

A new class of robots is emerging, combining broad adaptability with task-specific precision as developers move toward generalist specialist systems. Within this shift, NVIDIA Isaac is enabling integrated workflows that connect data generation, simulation, training, and deployment across robotics pipelines.

NVIDIA Isaac unifies robotics development across these stages, integrating cloud-to-robot workflows that allow developers to build, test, and scale systems more efficiently across both real and simulated environments.

A key driver is the growing reliance on synthetic data, which allows developers to simulate rare or hazardous scenarios that are difficult to capture in the real world. NVIDIA Isaac supports this through tools such as Omniverse-based simulation and teleoperation pipelines, helping convert real-world signals into scalable training datasets and accelerating development cycles.

The platform also enables advanced robot training using reasoning vision-language-action models, which allow machines to perceive, interpret, and act across complex environments. With frameworks like Isaac Lab and integrated physics engines, NVIDIA Isaac enables robots to train across thousands of parallel simulations, significantly reducing time, cost, and risk compared to real-world training.
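The parallel-simulation idea can be sketched generically: instead of stepping one simulated robot at a time, thousands of environments are advanced together as a single batched array operation. The snippet below is an illustrative NumPy sketch of that pattern, not Isaac Lab's actual API:

```python
import numpy as np

# Illustrative sketch of batched environment stepping: N_ENVS toy
# "robots" are trained simultaneously with array operations, rather
# than looping over environments one by one. Not the Isaac Lab API.
N_ENVS = 4096

rng = np.random.default_rng(0)
positions = np.zeros(N_ENVS)              # one scalar "joint" per environment
targets = rng.uniform(-1.0, 1.0, N_ENVS)  # each environment gets its own goal

for step in range(100):
    error = targets - positions   # batched observation for all environments
    actions = 0.5 * error         # trivial proportional "policy"
    positions += actions          # batched physics step

# after training, every environment has converged toward its own target
max_err = float(np.abs(targets - positions).max())
```

The same structure is why GPU-accelerated simulators scale so well: each line touches all environments at once, so wall-clock cost grows far slower than the number of simulated robots.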

Once trained, NVIDIA Isaac supports deployment across edge AI systems, including the Jetson platform, while maintaining consistency between simulation and real-world performance. Combined with modular workflows and open frameworks, the platform is positioning itself as a core foundation for scalable, next-generation robotics.


AI standards and regulation struggle to keep pace with global innovation

Global efforts to regulate AI are accelerating, but innovation continues to outpace formal rules. Policymakers and industry leaders are increasingly turning to standards to help bridge compliance gaps.

At the AI Standards Hub Global Summit, experts highlighted how technical standards support responsible AI development. These tools are seen as essential for scaling AI safely while regulatory frameworks continue to evolve.

Differences across regions remain significant, with the EU relying on formal regulation and the US leaning on flexible standards. This fragmented landscape is raising concerns over compliance costs and barriers to cross-border deployment.

Experts stress that standards must evolve alongside AI while aligning with global frameworks and enforcement efforts. Without coordination, inconsistencies could limit innovation and weaken trust in AI systems.

Calls are growing for shared definitions, measurable benchmarks and stronger international cooperation. Stakeholders argue that aligning standards with regulation will be critical for future AI governance.


Quantum cryptography pioneers win top computing prize

Two researchers have been awarded the Turing Award for pioneering work in quantum cryptography. Their research laid the foundations for a new form of secure communication based on quantum physics.

The method, developed in the 1980s, enables encryption keys that cannot be copied without detection. Any attempt to intercept the data alters its physical properties, revealing interference.
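The interception-detection property can be illustrated with a toy simulation in the spirit of the BB84 protocol (a simplified sketch, not a faithful implementation of the laureates' scheme): when an eavesdropper measures and re-sends qubits, she sometimes guesses the wrong basis, which introduces detectable errors in the bits the two legitimate parties compare.

```python
import random

def bb84_error_rate(n_bits, eavesdrop, seed=0):
    """Simulate a simplified BB84 exchange; return the error rate on
    bits where sender and receiver happened to use the same basis."""
    rng = random.Random(seed)
    errors = compared = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        send_basis = rng.randint(0, 1)
        state = (bit, send_basis)
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            # Wrong-basis measurement yields a random outcome
            measured = bit if eve_basis == send_basis else rng.randint(0, 1)
            state = (measured, eve_basis)  # Eve re-sends in her own basis
        recv_basis = rng.randint(0, 1)
        result = state[0] if recv_basis == state[1] else rng.randint(0, 1)
        if recv_basis == send_basis:  # bases match: bit should agree
            compared += 1
            errors += result != bit
    return errors / compared

clean = bb84_error_rate(2000, eavesdrop=False)  # no interference: 0.0
tapped = bb84_error_rate(2000, eavesdrop=True)  # roughly 25% errors
```

Without an eavesdropper the compared bits always agree; with one, around a quarter of them disagree, so the parties can detect the tap by sacrificing a sample of their key for comparison.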

Experts say the approach could become vital as quantum computing advances. Traditional encryption methods may become vulnerable as computing power increases.

The award highlights the growing importance of secure data transmission in a digital world. Researchers believe quantum cryptography could play a central role in encrypting and protecting future communications.


Meta data processing ruled unlawful in Germany

A Berlin court has ruled that Meta unlawfully processed personal data through its Facebook platform, including information belonging to non-users. Judges found the ‘Find Friends’ feature lacked a valid legal basis for handling third-party data.

The court determined that Meta acted as a data controller and could not rely on consent, contract or legitimate interests to justify the processing. Non-users had no reasonable expectation that their data would be collected or stored.

The German judges also ruled that personalised advertising based on platform data breached GDPR rules. The processing was not considered necessary for providing a social media service and lacked a lawful basis.

However, the court accepted that sensitive personal data entered by users could be processed with explicit consent under the GDPR. The ruling is under appeal and may shape future enforcement of the EU data protection law.


EU advances AI simplification effort ahead of further negotiations

A committee within the European Parliament has approved a proposal to simplify aspects of AI regulation, marking a step forward in efforts to refine the implementation of the AI Act.

The initiative seeks to adjust certain requirements to support clearer compliance, particularly for industry stakeholders.

The proposal focuses on technical and procedural elements linked to how AI rules are applied in practice.

Lawmakers aim to ensure that regulatory obligations remain proportionate while maintaining existing safeguards. Part of the discussion includes how specific categories of AI systems should be addressed within the broader framework.

Some elements of the proposal may require further discussion in upcoming negotiations with the Council of the European Union. Areas under consideration include the treatment of sensitive AI applications and the balance between regulatory clarity and enforcement effectiveness.

The development reflects ongoing efforts within the EU to refine its approach to AI governance. As discussions continue, policymakers are expected to assess how adjustments can support innovation while maintaining consistency with existing legal principles.


Meta to end Instagram private message encryption after May 8

US tech giant Meta has announced that end-to-end encryption for private messages on Instagram will no longer be supported after 8 May.

The technology ensured that only intended recipients could read messages, preventing even Meta from accessing their contents.
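That guarantee rests on key agreement: the two endpoints derive a shared secret that never crosses the wire, so the platform relaying the messages cannot decrypt them. A toy Diffie-Hellman sketch of the idea (illustrative parameters only; real messengers use vetted elliptic curves and authenticated ciphers, such as those in the Signal protocol):

```python
import secrets

# Toy Diffie-Hellman key agreement. The prime is far too small for real
# security and serves only to illustrate the mechanism.
P = 0xFFFFFFFB  # largest prime below 2**32; NOT cryptographically safe
G = 5

a = secrets.randbelow(P - 2) + 1  # Alice's private key (never transmitted)
b = secrets.randbelow(P - 2) + 1  # Bob's private key (never transmitted)

A = pow(G, a, P)  # public values: the only things sent over the network
B = pow(G, b, P)

# Each side combines its own private key with the other's public value
shared_alice = pow(B, a, P)
shared_bob = pow(A, b, P)
# Both ends now hold the same secret, which the relaying server never saw
```

A server (or anyone intercepting the exchange) sees only `A` and `B`; recovering the shared secret from them is the discrete-logarithm problem, which is what makes end-to-end encryption opaque to the platform itself.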

The decision follows concerns from law enforcement and child protection organisations, which argued that encrypted messages can make it harder to identify harmful content involving children.

Meta has stated that the update allows the platform to monitor messages while maintaining standard privacy safeguards.

End-to-end encryption had been the default for several messaging platforms, including WhatsApp, Messenger, and other Meta services.

The company first signalled its intent to expand encryption across Instagram and Messenger in 2019, implementing it in 2023. The plan was met with objections from organisations such as the Internet Watch Foundation and the Virtual Global Taskforce.

These groups highlighted potential risks in preventing the timely detection of harmful content, particularly child sexual abuse material.

Meta’s shift reflects a compromise between privacy, platform security, and online child safety. The company has not provided further details on changes to encryption policies beyond Instagram’s private messaging service.
