China sets standards for AI ethics review and algorithm accountability

The introduction of new AI ethics guidelines by China signals a structured attempt to formalise governance frameworks for rapidly expanding AI systems.

Coordinated by the Ministry of Industry and Information Technology of the People’s Republic of China and multiple state bodies, the policy integrates ethical oversight directly into technological development processes.

A central feature of the framework is the emphasis on operationalising ethical principles such as fairness, accountability, and human well-being through technical review mechanisms.

By focusing on data selection, algorithmic design, and system architecture, the guidelines move towards embedding ethical safeguards at the development stage and protecting intellectual property rights in AI ethics review technologies.

Such an approach reflects a broader shift towards anticipatory governance, where risks such as bias, discrimination, and algorithmic manipulation are addressed before deployment.

The policy also highlights the role of infrastructure in ethical governance, including the development of auditing tools, risk assessment systems, and curated datasets.

Scenario-based evaluation mechanisms indicate an effort to tailor oversight to specific use cases, recognising that AI risks vary significantly across sectors. Instead of relying solely on static compliance rules, the framework promotes adaptive governance aligned with technological complexity.

Ultimately, the outcome is a governance model that seeks to maintain technological competitiveness while addressing societal risks, contributing to wider global debates on how states can regulate AI systems without constraining their development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Penguin Random House sues OpenAI for copyright infringement over ‘Coconut the Little Dragon’ series in Germany

Penguin Random House has filed a lawsuit against OpenAI, alleging that its chatbot, ChatGPT, infringed copyright by imitating content from the ‘Coconut the Little Dragon’ series by German author Ingo Siegner. Filed in a Munich court, the complaint targets OpenAI’s European subsidiary, citing the chatbot’s creation of text, a book cover, and a promotional blurb as evidence of unauthorised ‘memorisation’ of Siegner’s work.

This issue highlights the challenge of distinguishing between algorithmic learning and direct copying, as AI models like OpenAI’s large language model (LLM) can retain extensive portions of their training data and reproduce them, raising legal and ethical dilemmas.

Penguin Random House insists that protecting human creativity is central to its mission. Carina Mathern, a representative, stressed the importance of safeguarding intellectual property, even as the company acknowledges the potential benefits of AI.

That reflects a broader industry tension between embracing technological innovation and protecting authors’ rights. The lawsuit’s implications could set a precedent affecting how AI-generated content is treated under intellectual property laws, posing significant questions for the publishing and creative industries.

The case against OpenAI is not isolated. A Munich court previously ruled against the company for using lyrics from popular musicians without permission, underscoring ongoing legal challenges around AI-generated content in Germany.

Bertelsmann, the parent company of Penguin Random House, had a prior agreement with OpenAI but did not grant access to its media archives, illustrating the complexities of collaborating on AI while safeguarding proprietary content. OpenAI responded that it is reviewing the allegations, reiterating its respect for creators and its ongoing dialogue with publishers worldwide.

Why does it matter?

The resolution of this lawsuit could mark a pivotal moment in defining AI’s role in creative industries, shaping future regulations and enforcement strategies for AI-driven content creation and its impact on intellectual property rights globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as writers’ room and revision-heavy processes.

The article uses James Patterson as an example of industrial-scale collaboration in publishing, describing how he supplies collaborators with outlines and treatments and oversees many projects at once, an approach likened to a ‘novel factory’ that some argue distances him from ‘literary fiction’, yet may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on ‘Coral Hart’, a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although not a bestseller, she reportedly earns ‘six figures’ and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers’ rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed despite platforms such as Amazon asking for it.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital Services Act agreement links European Commission and EUIPO on online IP enforcement

The European Commission and the European Union Intellectual Property Office (EUIPO) have signed a five-year agreement under which the latter will provide technical support and intellectual property expertise for work under the Digital Services Act. The cooperation focuses on online infringements of intellectual property rights, in particular the sale of counterfeit goods and the distribution of pirated content.

The EUIPO will support the European Commission’s oversight of Very Large Online Platforms and Very Large Online Search Engines. That work will include analysing internal reports submitted by those services on how they address online intellectual property infringement.

The agreement also provides for training of national authorities that enforce the Digital Services Act, and supports the European Board for Digital Services by contributing to discussions in its working groups on intellectual property.

The EUIPO will also help build expertise among judicial authorities, intellectual property right holders, and smaller online intermediaries, and contribute to a shared collection of best practices and tools.

The agreement operates within the Digital Services Act framework, under which online intermediaries are required to provide notice-and-action mechanisms for illegal content, and Very Large Online Platforms and Very Large Online Search Engines are subject to additional risk-assessment and mitigation obligations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US Supreme Court narrows ISP copyright liability, sharpening focus on intent with potential implications for generative AI

A unanimous 9–0 US Supreme Court ruling this week has narrowed the circumstances under which an internet service provider (ISP) can be held liable for users’ copyright infringement by focusing on a deceptively simple question: intent. Writing for the Court, Justice Clarence Thomas said an ISP is liable only if its service was designed for unlawful activity or if it actively induced infringement; merely providing a service to the public while knowing some users will infringe is not enough.

Applying that standard, the Court found Cox Communications did neither, shielding it from a potential $1bn exposure following a long-running dispute that included a jury verdict later vacated.

The decision is now being read for its possible implications beyond ISPs, particularly in the escalating copyright battle between publishers/authors and generative AI firms. The key distinction raised is that broadband networks function as neutral conduits, whereas large language models are built specifically to produce fluent, human-like writing, including prose, poetry and dialogue, that can resemble the work of human authors.

In the article’s framing, that resemblance is not incidental but central to the product’s purpose: if a subscriber uses broadband to pirate a novel, the ISP did not build its network to enable that outcome, but an AI model prompted to write in a specific author’s style is designed to fulfil that request.

That contrast could open a new line of argument in AI litigation. While major US cases, such as suits brought by the Authors Guild and individual authors against OpenAI, Meta and others, have largely centred on whether training on copyrighted books is itself infringing, the Cox ruling highlights a second front: whether the systems’ purpose and optimisation for author-like output could be characterised as being ‘tailored for’ infringement or as purposeful inducement under an intent-based standard.

Publishers, who are simultaneously watching the lawsuits and negotiating licensing deals with AI companies, have so far been more cautious than the music industry was in its costly fight against Cox, an effort that ultimately produced a Supreme Court ruling that narrowed, rather than expanded, leverage.

Why does it matter?

The broader takeaway is that copyright enforcement may increasingly turn not only on what was copied, but what the copying was for, an approach that could prove consequential for AI companies whose commercial proposition is generating human-quality creative work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Advocates push for transparency rules in student AI systems

Consumer protection advocates have introduced a Student AI Bill of Rights, calling on higher education institutions to formalise safeguards as AI becomes increasingly embedded in academic systems.

The proposal, launched by the National Student Legal Defense Network under its SHAPE AI programme, highlights the growing use of AI across admissions, classroom instruction, and student support services.

The initiative argues that students must not be reduced to data points or treated as subjects for experimental technologies. It warns that while these tools may enable personalised learning, they also introduce risks linked to privacy, bias, and automated decision-making.

The framework sets out five core principles, including transparency in AI use, human oversight for high-stakes decisions, protection of student data and intellectual property, and safeguards against algorithmic bias. It also calls for equitable access to AI tools and education on their use.

Advocates are urging universities to adopt the principles to ensure accountability as AI becomes more deeply integrated into academic environments.

The development reflects a broader shift in higher education, where clear standards are seen as key to building trust, ensuring consistency, and enabling responsible AI integration in academic decision-making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU strengthens IP enforcement under Digital Services Act

The European Commission has signed an agreement with the European Union Intellectual Property Office to support enforcement of the Digital Services Act in relation to intellectual property rights.

The agreement takes effect immediately and focuses on strengthening the Commission’s enforcement capacity.

Cooperation will target systemic risks linked to very large online platforms and search engines, particularly the spread of intellectual property-infringing content. Such risks include counterfeit goods and online piracy, which fall within the scope of the DSA’s oversight framework.

The EUIPO is expected to expand its activities to support judicial and enforcement authorities, as well as online intermediaries that are not classified as very large platforms. Intellectual property rights holders are also included in the broader effort to address infringement risks.

The Digital Services Act establishes rules aimed at creating a safer and more transparent online environment across the European Union. Cooperation between the EU institutions and specialised bodies is presented as a key element in safeguarding users’ rights, including those linked to intellectual property.

Strengthening enforcement mechanisms in areas such as intellectual property links platform governance with broader policy objectives, including user protection, accountability of online intermediaries, and the functioning of the digital single market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EPO accelerates digital patent shift with paperless system by 2027

The European Patent Office (EPO) is accelerating its transition towards a fully digital patent system, with plans to implement a paperless patent-granting process by 2027.

Discussions at the latest eSACEPO meeting highlighted steady progress and broad stakeholder support for modernising patent workflows.

Electronic filing and communication are set to become the default, with paper-based processes limited to exceptional cases. The shift aims to improve efficiency and accessibility, supported by legal adjustments and the gradual introduction of structured data formats to enhance processing accuracy.

Digital tools continue to evolve, with the MyEPO platform expanding its functionality through interface upgrades, self-service features and new capabilities such as colour drawing submissions.

The rollout of DOCX filing, alongside optional PDF backups, reflects a cautious approach designed to balance innovation with reliability.

AI is increasingly integrated into patent examination processes, supporting tasks such as search and documentation.

However, the EPO maintains a human-centric model, ensuring that decision-making authority remains with patent examiners while AI enhances productivity and consistency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EPO strengthens industry collaboration on European patent innovation

The European Patent Office (EPO) has reinforced cooperation with industry stakeholders through discussions with the German Association of Industry IP Experts, focusing on strengthening the European patent system and supporting innovation.

The meeting brought together representatives from major industrial actors to align priorities and explore future collaboration.

Discussions between the EPO and the stakeholders centred on enhancing technology transfer, empowering startups and fostering economic growth across Europe.

Participants emphasised the importance of inclusive engagement among patent system users instead of fragmented approaches, ensuring that innovation strategies reflect both industrial and societal needs.

The Unitary Patent system was highlighted as gaining traction, particularly among smaller entities such as SMEs, individual inventors and research organisations. Such a trend reflects broader efforts to improve accessibility and scalability within the European innovation ecosystem.

AI also featured prominently, with both sides recognising its growing role in improving efficiency and quality in patent processes.

A human-centric approach remains essential, ensuring that AI deployment supports responsible innovation while maintaining high standards in patent examination and services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wikipedia limits generative AI use in article creation

Wikipedia has strengthened its approach to AI use, introducing new restrictions on the use of generative AI in article creation and editing. The changes reflect growing concerns about accuracy, sourcing and editorial standards.

Guidance issued in January 2026 warned contributors against copying and pasting outputs from generative AI into articles. Editors were advised to avoid using such tools to create new entries, as the content often fails verification against reliable sources.

In March 2026, stricter rules were introduced, prohibiting the use of AI to generate or rewrite article content. Limited exceptions allow AI to copyedit one’s own writing or translate material from other Wikipedia language versions.

The updated framework highlights concerns that AI-generated text may include fabricated references, bias and non-encyclopaedic language. Wikipedia continues to allow AI for support tasks such as identifying gaps and locating sources, while maintaining human oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!