OpenAI loses German copyright lawsuit over song lyrics reproduction

A Munich regional court has ruled that OpenAI infringed copyright in a landmark case brought by the German rights society GEMA. The court held OpenAI liable for reproducing and memorising copyrighted lyrics without authorisation, rejecting its claim to operate as a non-profit research institute.

The judgement found that OpenAI had violated copyright even in a 15-word passage, setting a low threshold for infringement. Additionally, the court dismissed arguments about accidental reproduction and technical errors, emphasising that both reproduction and memorisation require a licence.

It also denied OpenAI’s request for a grace period to make compliance changes, citing negligence.

Judges concluded that the company could not rely on proportionality defences, noting that licences were available and alternative AI models exist.

OpenAI’s claim that EU copyright law failed to foresee large language models was rejected, as the court reaffirmed that European law ensures a high level of protection for intellectual property.

The ruling marks a significant step for copyright enforcement in the age of generative AI and could shape future litigation across Europe. It also challenges technology companies to adapt their training and licensing practices to comply with existing legal frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK strengthens AI safeguards to protect children online

The UK government is introducing landmark legislation to prevent AI from being exploited to generate child sexual abuse material. The new law empowers authorised bodies, such as the Internet Watch Foundation, to test AI models and ensure safeguards prevent misuse.

Reports of AI-generated child abuse imagery have surged, with the IWF recording 426 cases in 2025, more than double the 199 cases reported in 2024. The data also reveals a sharp rise in images depicting infants, increasing from five in 2024 to 92 in 2025.

Officials say the measures will enable experts to identify vulnerabilities within AI systems, making it more difficult for offenders to exploit the technology.

The legislation will also require AI developers to build protections against non-consensual intimate images and extreme content. A group of experts in AI and child safety will be established to oversee secure testing and ensure the well-being of researchers.

Ministers emphasised that child safety must be built into AI systems from the start, not added as an afterthought.

By collaborating with the AI sector and child protection groups, the government aims to make the UK the safest place for children to be online. The approach strikes a balance between innovation and strong protections, thereby reinforcing public trust in AI.

Judges in Asia join UNESCO-led training on ethical AI in justice

Judges and justice officials from 11 countries across Asia are gathering in Bangkok for a regional training focused on AI and the rule of law. The event, held from 12 to 14 November 2025, is jointly organised by UNESCO, UNDP, and the Thailand Institute of Justice.

Participants will examine how AI can enhance judicial efficiency while upholding human rights and ethical standards.

The training, based on UNESCO’s Global Toolkit on AI and the Rule of Law for the Justice Sector, will help participants assess both the benefits and challenges of AI in judicial processes. Officials will address algorithmic bias, transparency, and accountability to ensure AI tools uphold justice.

AI technologies are already transforming case management, legal research, and court administration. However, experts warn that unchecked use may amplify bias or weaken judicial independence.

The workshop aims to strengthen regional cooperation and train officials to assess AI systems against legal and ethical principles. The initiative supports UN SDG 16 and advances UNESCO’s mission to promote ethical, inclusive, and trustworthy governance of AI.

UN calls for safeguards around emerging neuro-technologies

In a recent statement, the UN warned that the growing field of neuro-technology, which encompasses devices and software that can measure, access, or manipulate the nervous system, poses new risks to human rights.

It noted that such technologies could challenge fundamental concepts like ‘mental integrity’, autonomy and personal identity by enabling unprecedented access to brain data.

It warned that without robust regulation, the benefits of neuro-technology may come with costs such as privacy violations, unequal access and intrusive commercial uses.

The concerns align with broader debates about how advanced technologies, such as AI, are reshaping society, ethics, and international governance.

Winning the AI race means winning developers in China, says Huang of Nvidia

Nvidia CEO Jensen Huang said China is ‘nanoseconds’ behind the US in AI and urged Washington to lead by accelerating innovation and courting developers globally. He argued that excluding China would weaken the reach of US technology and risk splintering the ecosystem into incompatible stacks.

Huang’s remarks came amid ongoing export controls that bar Nvidia’s most advanced processors from the Chinese market. He acknowledged national security concerns but cautioned that strict limits can slow the spread of American tools that underpin AI research, deployment, and scaling.

Hardware remains central, Huang said, citing advanced accelerators and data-centre capacity as the substrate for training frontier models. Yet diffusion matters: widespread adoption of US platforms by global developers amplifies influence, reduces fragmentation, and accelerates innovation.

With sales of top-end chips restricted, Huang warned that Chinese firms will continue to innovate on domestic alternatives, increasing the likelihood of parallel systems. He called for policies that enable US leadership while preserving channels to the developer community in China.

Huang framed the objective as keeping America ahead, maintaining the world’s reliance on an American tech stack, and avoiding strategies that would push away half the world’s AI talent.

Japan develops system to measure and share physical and mental pain

Japanese mobile carrier NTT Docomo has developed a system that measures physical and mental pain and translates it into a format others can understand.

The technology utilises brainwave analysis to convert subjective sensations, such as injuries, stomachaches, spiciness, or emotional distress, into quantifiable levels.

The system, created in collaboration with startup Pamela Inc., allows recipients to understand what a specific pain score represents and even experience it through a device.

Docomo sees potential applications in medical diagnosis, rehabilitation, immersive gaming, and support for individuals who have been exposed to psychological or social harm.

Officials said the platform could be introduced for practical use alongside sixth-generation cellular networks, which are expected to be available in the 2030s.

The innovation aims to overcome the challenge of pain being experienced differently by each person, creating a shared understanding of physical and emotional discomfort.

Tech giants offer free premium AI in India

In a move that signals a significant shift in global AI strategy, companies such as OpenAI, Google and Perplexity AI are partnering with Indian telecom operators and service providers to offer premium AI tools, such as advanced chatbot access and large-model features, free to millions of users in India.

The offers are not merely promotional but part of a long-term play to dominate the AI ecosystem.

Market analysts quoted by the BBC note that the objective is to ‘get Indians hooked on to generative AI before asking them to pay for it’. The size of India’s digital ecosystem, with its young, mobile-first population and relatively less restrictive regulation, makes it a key battleground for AI firms aiming for global scale.

However, there are risks: free access raises concerns about privacy and data protection, algorithmic control, and whether users are fully informed about how their data is used and when free offers will convert into paid services.

Salesforce strengthens Agentforce with planned Spindle AI acquisition

Salesforce has signed a definitive agreement to acquire Spindle AI, a company specialising in agentic analytics and machine learning. The deal aims to strengthen Salesforce’s Agentforce platform by integrating Spindle’s advanced data modelling and forecasting technologies.

Spindle AI has developed neuro-symbolic AI agents capable of autonomously generating and optimising scenario models. Its analytics tools enable businesses to simulate and assess complex decisions, from pricing strategies to go-to-market plans, using AI-driven insights.

Salesforce said the acquisition will enhance its focus on Agent Observability and Self-Improvement within Agentforce 360. Executives described Spindle AI’s expertise as critical to building more transparent and reliable agentic systems capable of explaining and refining their own reasoning.

The acquisition, subject to customary closing conditions, is expected to be completed in Salesforce’s fourth fiscal quarter of 2026. Once finalised, Spindle AI will join Agentforce to expand AI-powered analytics, continuous optimisation, and ROI forecasting for enterprise customers worldwide.

Google flags adaptive malware that rewrites itself with AI

Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group. An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.

PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.

Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.

Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.

Defenders are turning to AI as well, using security frameworks and agents such as ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, monitor for model-API abuse, and lock down developer and automation credentials.

Courts signal limits on AI in legal proceedings

A High Court judge has warned that a solicitor who pressed an expert witness to adopt an AI-generated draft breached their duty. Mr Justice Waksman described it as a gross breach, citing a case reported in the latest survey of expert witnesses, which found that 14% of experts would accept such terms, a figure he called unacceptable.

Updated guidance clarifies what limited judicial use of AI is permissible. Judges may use a private ChatGPT 365 service for summaries, with prompts kept confidential. There is no duty to disclose such use, but the judgment must remain the judge’s own.

Waksman cautioned against AI-assisted legal research or analysis, noting that hallucinated authorities and fake citations have already appeared in proceedings. Experts must not let AI answer the questions they are retained to decide.

Survey findings show widening use of AI for drafting and summaries, but Waksman drew a bright line between back-office aids and core professional duties: convenience cannot trump independence, accuracy and accountability.

For practitioners, two rules follow: solicitors must not foist AI-drafted opinions on experts, and experts should refuse them. Within the courts, limited, non-determinative AI may assist, but outcomes must remain human decisions.
