Penguin Random House sues OpenAI for copyright infringement over ‘Coconut the Little Dragon’ series in Germany

Penguin Random House has filed a lawsuit against OpenAI, alleging that its chatbot, ChatGPT, infringed copyright by imitating content from the ‘Coconut the Little Dragon’ series by German author Ingo Siegner. Filed in a Munich court, the complaint targets OpenAI’s European subsidiary, citing the chatbot’s creation of text, a book cover, and a promotional blurb as evidence of unauthorised ‘memorisation’ of Siegner’s work.

The dispute highlights the challenge of distinguishing algorithmic learning from direct copying: large language models (LLMs) such as OpenAI’s can retain extensive portions of their training data and reproduce them, raising legal and ethical dilemmas.

Penguin Random House insists that protecting human creativity is central to its mission. Carina Mathern, a representative, stressed the importance of safeguarding intellectual property, even as the company acknowledges the potential benefits of AI.

That reflects a broader industry tension between embracing technological innovation and protecting authors’ rights. The lawsuit’s implications could set a precedent affecting how AI-generated content is treated under intellectual property laws, posing significant questions for the publishing and creative industries.

The case against OpenAI is not isolated. A Munich court previously ruled against the company for using lyrics from popular musicians without permission, underscoring ongoing legal challenges around AI-generated content in Germany.

Bertelsmann, the parent company of Penguin Random House, had a prior agreement with OpenAI but did not grant access to its media archives, illustrating the difficulty of collaborating on AI while safeguarding proprietary content. OpenAI responded that it is reviewing the allegations, reiterating its respect for creators and its ongoing dialogue with publishers worldwide.

Why does it matter?

The resolution of this lawsuit could mark a pivotal moment in defining AI’s role in creative industries, shaping future regulations and enforcement strategies for AI-driven content creation and its impact on intellectual property rights globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI safety may hinge on missing human body awareness

A study from UCLA Health suggests that modern AI systems lack a fundamental aspect of human cognition linked to bodily experience, a gap that may have implications for safety and alignment with human behaviour.

Researchers attribute the gap to the absence of ‘internal embodiment’, the process by which humans continuously regulate behaviour through bodily signals. While current AI systems can process and describe the physical world, they do not experience internal states such as fatigue, uncertainty, or physical need.

According to the study published in Neuron, this absence limits how AI systems interpret and respond to situations compared with humans, whose cognition is shaped by continuous interaction between brain and body.

The research distinguishes between external interaction and internal self-monitoring, arguing that most AI development focuses only on the former. Without internal regulatory signals, systems may lack natural constraints that guide consistency, caution, and awareness of uncertainty in decision-making.

Researchers propose a ‘dual-embodiment’ framework introducing internal state tracking in AI systems, alongside new benchmarks to assess stability and uncertainty.

The authors argue that AI safety may require more than improved external performance, pointing to internal regulatory mechanisms that could help systems behave more consistently, predictably, and in line with human expectations in real-world use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI chatbots are reshaping classroom debates, raising concerns over homogenised discussion

Generative AI chatbots are becoming embedded in university learning at Yale, students and academics told CNN, not only for essays and homework but also for real-time seminar participation. Students described classmates uploading readings and PDFs into chatbots before class, and even typing a professor’s question into AI during discussion to produce an immediate response to repeat aloud.

While this can make contributions sound more polished and prepared, some students said seminar conversations increasingly stall or feel flatter, with fewer personal interpretations and less exploratory debate. One student, ‘Amanda’, said she has noticed many classmates arriving with slick talking points but then offering near-identical arguments and phrasing, making discussions feel less distinctive than in earlier years.

Students gave several reasons for leaning on AI. ‘Jessica’, a senior, said she uses it daily, particularly in an economics seminar where the professor cold-calls students, both to digest readings quickly and to help her translate ideas into cohesive sentences when she struggles to phrase her comments.

‘Sophia’, a junior, said some students appear to use AI to draft ‘scripts’ for what to say in class, driven by insecurity about gaps in their understanding. She believes this weakens creativity and the ability to make original connections, replacing genuine engagement with impressive-sounding language.

A Yale spokesperson said the university is aware students are experimenting with AI in the classroom and noted a wider faculty trend towards limiting or banning laptops, using print-based materials, and prioritising direct engagement and original thinking.

The article links these observations to a March paper in ‘Trends in Cognitive Sciences’, which argues that large language models can systematically homogenise human expression and thought across language, perspective and reasoning. The paper’s authors say LLMs predict statistically likely next words based on training data that overrepresents dominant languages and ideas, potentially narrowing the ‘conceptual space’ for how people write and argue.
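To illustrate the mechanism the paper describes, here is a minimal, purely hypothetical sketch in Python; the tokens and probabilities are invented for illustration and are not drawn from any real model. It shows how greedily choosing the statistically most likely next word collapses many independent prompts onto the same phrasing.

```python
# Minimal illustrative sketch: greedy next-token selection from a skewed
# distribution. Tokens and probabilities below are hypothetical.
from collections import Counter

# Hypothetical next-token probabilities, skewed towards one dominant phrasing
# because the (imagined) training data over-represents it.
next_token_probs = {
    "therefore": 0.46,
    "arguably": 0.21,
    "conversely": 0.18,
    "paradoxically": 0.15,
}

def greedy_next_token(probs):
    """Return the single most probable token, as greedy decoding does."""
    return max(probs, key=probs.get)

# Ten independent 'users' asking for a continuation all receive the same word,
# illustrating how statistically likely phrasing crowds out alternatives.
outputs = [greedy_next_token(next_token_probs) for _ in range(10)]
print(Counter(outputs))  # Counter({'therefore': 10})
```

Real systems sample rather than always taking the most probable word, but the paper’s point is that the underlying probabilities are themselves skewed towards dominant languages and ideas, so even sampled outputs tend to cluster around similar phrasing.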

They warn that models tend to reproduce ‘WEIRD’ viewpoints (Western, educated, industrialised, rich and democratic) even when prompted otherwise, which may make those styles seem more credible and socially correct while marginalising other perspectives.

Researchers also describe a compounding feedback loop. As AI-generated outputs circulate in human discourse and eventually re-enter training data, sameness can intensify over time. Co-author Morteza Dehghani said offloading reasoning to AI risks intellectual laziness and could have broader social consequences, from weakened innovation to greater susceptibility to persuasion.

Educators quoted described both benefits and risks, and outlined practical responses. Thomas Chatterton Williams, a visiting professor and Bard College fellow, said AI can ‘raise the floor’ of discussion for difficult material but may suppress eccentric or truly original ideas, leaving students without a voice of their own or a sense of authorship.

Former teacher Daniel Buck called AI a ‘supercharged SparkNotes’ that can answer virtually any question, making it harder to detect shortcuts and easier for students to bypass the ‘boring minutiae’ where learning takes hold.

He worries that this also undermines relationships with professors and sustained cognitive work. Yale philosophy professor Sun-Joo Shin said improvements in the models forced her to redesign how she assesses students: problem sets now earn completion credit and feedback, while in-class exams, oral tests and presentations carry more weight.

Williams said he has shifted written assignments towards spontaneous, in-class handwritten work and uses oral exit exams. Students who avoid AI argued that they are still affected by classmates’ reliance on it because it reduces the value and variety of seminar time. Others urged a middle path in which AI is treated as a collaborator, used to critique ideas rather than as a substitute for generating them or doing the reasoning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

South Korea advances energy transition strategy to strengthen resilience and green industry

South Korea has outlined an expansive energy transition strategy aimed at reshaping its national energy system around renewables, electrification and industrial transformation.

The plan responds directly to heightened geopolitical risks and supply vulnerabilities, signalling a shift from import-dependent energy security towards domestic resilience.

Central targets include exceeding a 20% renewable energy share and deploying 100GW of capacity by 2030, alongside accelerating the adoption of electric and hydrogen vehicles across both public and commercial fleets.

South Korea’s strategy reflects structural change, combining large-scale renewable expansion with the phased retirement of the country’s 60 currently operating coal-fired power plants by 2040 and the introduction of a ‘just transition’ framework to mitigate regional and labour impacts.

Industrial policy plays a central role, with support directed towards green manufacturing ecosystems, hydrogen-based steel production, carbon capture technologies and electrified industrial processes.

Rising electricity demand, driven in part by AI infrastructure and data centres, reinforces the need for grid modernisation, including decentralised and bidirectional systems designed to balance regional supply and demand more efficiently.

Governance mechanisms extend beyond infrastructure, incorporating market reforms, green finance instruments and subsidy reallocation away from fossil fuels.

Citizen participation is also embedded through ‘energy income’ models, enabling local investment in renewable projects.

South Korea positions the energy transition not only as a climate objective but also as a broader economic and social restructuring agenda centred on resilience, competitiveness and public engagement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN warns of urgency in shaping responsible AI governance

UN Secretary-General António Guterres has told the inaugural meeting of a newly formed Independent International Scientific Panel on Artificial Intelligence that its members have a major responsibility to help shape how the technology is used “for the benefit of humanity”.

The new 40-member panel brings together experts from different regions and disciplines and is expected to help close what Guterres described as ‘the AI knowledge gap’. Its role is to assess the real impact AI will have across economies and societies so that countries can act with the same “clarity” on a more level playing field.

Addressing the scientists at the panel’s first meeting, Guterres said: “Individually, you come from diverse regions and disciplines, bringing outstanding expertise in AI and related fields. Collectively, you represent something the world has never seen before.”

He stressed that the group would provide scientific assessments independently of governments, companies, and institutions, including the UN itself. “AI is advancing at lightning speed… no country, no company, and no field of research can see the full picture alone,” he said, adding that “the world urgently needs a shared, global understanding of artificial intelligence; grounded not in ideology, but in science.”

Guterres also linked the panel’s work to a much broader global agenda, warning that AI will shape peace and security, human rights, and sustainable development for decades to come. He cautioned that misunderstanding around the technology could deepen political and social divisions, saying: “I have seen how quickly fear can take hold when facts are missing or distorted – how trust breaks down and division deepens.”

At a time when “geopolitical tensions are rising and conflicts are raging,” he said, the need for shared understanding and “safe and responsible AI could not be greater.”

He also framed the panel’s task as urgent, arguing that governance efforts are struggling to keep pace with the speed of technological change. “Never in the future will we move as slow as we are moving now. We are indeed in a high level of acceleration,” he said, while warning that the panel is also “in a race against time.”

Referring to earlier UN work through the High-Level Advisory Body on AI, Guterres said the panel does not “start from zero”, before concluding: “I can think of no more important assignment for our world today.”

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Power hardware shortages are delaying AI data centre expansion, despite record investment

US AI data-centre expansion is increasingly being constrained not by chips, servers or funding, but by the electrical hardware needed to connect new facilities to reliable power, Bloomberg reports. While the US–China trade war has pushed many server makers to move production out of China, the deeper dependency remains in power-delivery equipment.

China is still the world’s largest producer of electrical gear used to build and upgrade power infrastructure, both inside data centres and across the wider grid. Shortages of key components, especially transformers, switchgear and batteries, sourced from China and elsewhere, are now slowing project timelines.

The scale of planned build-outs is colliding with these supply limits. Bloomberg cites forecasts that Alphabet, Amazon, Meta and Microsoft will spend more than $650bn in 2026 to expand AI capacity, yet close to half of the planned US data-centre builds this year are expected to be delayed or cancelled.

The problem extends beyond the data-centre fence line. Companies must also fund and coordinate grid upgrades to supply enough electricity, competing for the same scarce equipment as utilities coping with rising demand from electric vehicles and electrified heating.

Sightline Climate data cited by Bloomberg suggests about 12GW of US data-centre capacity is expected to come online in 2026, but only around a third of that capacity is currently under active construction due to multiple constraints. Electrical infrastructure may represent less than 10% of total data-centre cost, but it is schedule-critical, because delays in any link of the power chain can halt an entire project.

Lead times for high-power transformers, in particular, have deteriorated sharply: typically 24 to 30 months before 2020, they can now stretch to as long as five years, clashing with AI deployment cycles that can run under 18 months.

To cope, developers are turning to global suppliers, with Canada, Mexico and South Korea becoming major sources of high-power transformers. Even so, US imports of Chinese high-power transformers have surged from fewer than 1,500 units in 2022 to more than 8,000 units through October 2025, according to Wood Mackenzie data cited by Bloomberg. China also supplies over 40% of US battery imports and remains near 30% in some transformer and switchgear categories, underscoring continued reliance despite tariffs and security concerns.

Why does it matter?

Bloomberg’s central warning is that without easing bottlenecks in transformers, switchgear and batteries, and expanding US manufacturing capacity, trillions of dollars of AI investment may not translate into delivered AI capacity, because power infrastructure, not compute, is becoming the limiting factor.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft markets Copilot as a productivity boost but warns it is ‘for entertainment purposes only’

Microsoft has spent the past year pushing Copilot as a mainstream productivity tool, baking it into Windows 11 and promoting new hardware such as Copilot+ PCs, yet its own legal language urges caution. In Microsoft’s Copilot Terms of Use, updated in October last year, the company states Copilot is ‘for entertainment purposes only’, may ‘make mistakes’, and ‘may not work as intended’.

The terms warn users not to rely on Copilot for important advice and to ‘use Copilot at your own risk’, a caveat that sits uneasily alongside the product’s business-focused marketing.

The Tom’s Hardware article argues that Microsoft is not unique in issuing such warnings, as similar disclaimers are common across the generative AI industry. It points to xAI’s guidance that AI is ‘probabilistic in nature’ and may produce ‘hallucinations’, generate offensive or objectionable content, or fail to reflect real people, places or facts.

While these limitations are well known to those familiar with large language models, the piece notes that many users still treat AI output as authoritative, even in professional settings where scepticism should be standard.

To underline the risks of overreliance, the text cites reports of Amazon-related incidents allegedly linked to ‘Gen-AI assisted changes’. It says some AWS outages reportedly occurred after engineers let an AI coding bot address an issue without sufficient oversight, and that Amazon’s website experienced ‘high blast radius’ problems that required senior engineers to step in. These examples are used to illustrate how AI-generated errors can propagate quickly in complex systems when humans fail to verify the output.

Why does it matter?

Overall, the article acknowledges that generative AI can boost productivity, but stresses it remains a tool with no accountability for mistakes, making verification essential. It warns that automation bias (people trusting machine outputs over contradictory evidence) can be intensified by AI systems that produce plausible-sounding answers that pass casual inspection.

While such disclaimers help companies limit legal liability, the piece suggests aggressive marketing of AI as a productivity ‘hack’ may downplay real-world risks, particularly as firms seek returns on the billions invested in AI hardware and talent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as writers’ room and revision-heavy processes.

The article uses James Patterson as an example of industrial-scale collaboration in publishing, describing how he supplies collaborators with outlines and treatments and oversees many projects at once, an approach likened to a ‘novel factory’ that some argue distances him from ‘literary fiction’, yet may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on ‘Coral Hart’, a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although not a bestselling author, she reportedly earns ‘six figures’ and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers’ rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed despite platforms such as Amazon asking for it.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

US Supreme Court narrows ISP copyright liability, sharpening focus on intent with potential implications for generative AI

A unanimous 9–0 US Supreme Court ruling this week has narrowed the circumstances under which an internet service provider (ISP) can be held liable for users’ copyright infringement by focusing on a deceptively simple question: intent. Writing for the Court, Justice Clarence Thomas said an ISP is liable only if its service was designed for unlawful activity or if it actively induced infringement; merely providing a service to the public while knowing some users will infringe is not enough.

Applying that standard, the Court found Cox Communications did neither, shielding it from a potential $1bn exposure following a long-running dispute that included a jury verdict later vacated.

The decision is now being read for its possible implications beyond ISPs, particularly in the escalating copyright battle between publishers/authors and generative AI firms. The key distinction raised is that broadband networks function as neutral conduits, whereas large language models are built specifically to produce fluent, human-like writing, including prose, poetry and dialogue, that can resemble the work of human authors.

In the article’s framing, that resemblance is not incidental but central to the product’s purpose: if a subscriber uses broadband to pirate a novel, the ISP did not build its network to enable that outcome, but an AI model prompted to write in a specific author’s style is designed to fulfil that request.

That contrast could open a new line of argument in AI litigation. While major US cases, such as suits brought by the Authors Guild and individual authors against OpenAI, Meta and others, have largely centred on whether training on copyrighted books is itself infringing, the Cox ruling highlights a second front: whether the systems’ purpose and optimisation for author-like output could be characterised as being ‘tailored for’ infringement or as purposeful inducement under an intent-based standard.

Publishers, who are simultaneously watching the lawsuits and negotiating licensing deals with AI companies, have so far been more cautious than the music industry was in its costly fight against Cox, an effort that ultimately produced a Supreme Court ruling that narrowed, rather than expanded, leverage.

Why does it matter?

The broader takeaway is that copyright enforcement may increasingly turn not only on what was copied, but what the copying was for, an approach that could prove consequential for AI companies whose commercial proposition is generating human-quality creative work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Advocates push for transparency rules in student AI systems

Consumer protection advocates have introduced a Student AI Bill of Rights, calling on higher education institutions to formalise safeguards as AI becomes increasingly embedded in academic systems.

The proposal, launched by the National Student Legal Defense Network under its SHAPE AI programme, highlights the growing use of AI across admissions, classroom instruction, and student support services.

The initiative argues that students must not be reduced to data points or treated as subjects for experimental technologies. It warns that while these tools may enable personalised learning, they also introduce risks linked to privacy, bias, and automated decision-making.

The framework sets out five core principles, including transparency in AI use, human oversight for high-stakes decisions, protection of student data and intellectual property, and safeguards against algorithmic bias. It also calls for equitable access to AI tools and education on their use.

Advocates are urging universities to adopt the principles to ensure accountability as AI becomes more deeply integrated into academic environments.

The development reflects a broader shift in higher education, where clear standards are seen as key to building trust, ensuring consistency, and enabling responsible AI integration in academic decision-making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot