Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as writers’ room and revision-heavy processes.

The article uses James Patterson as an example of industrial-scale collaboration in publishing, describing how he supplies collaborators with outlines and treatments and oversees many projects at once. That approach has been likened to a ‘novel factory’ which, some argue, distances him from ‘literary fiction’, yet it may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on ‘Coral Hart’, a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although not a bestseller, she reportedly earns ‘six figures’ and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers’ rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed despite platforms such as Amazon asking for it.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital Services Act agreement links European Commission and EUIPO on online IP enforcement

The European Commission and the European Union Intellectual Property Office (EUIPO) have signed a five-year agreement under which the latter will provide technical support and intellectual property expertise for work under the Digital Services Act. The cooperation focuses on online infringements of intellectual property rights, in particular the sale of counterfeit goods and the distribution of pirated content.

The EUIPO will support the European Commission’s oversight of Very Large Online Platforms and Very Large Online Search Engines. That work will include analysing internal reports submitted by those services on how they address online intellectual property infringement.

The agreement also includes training for national authorities that enforce the Digital Services Act, and supports the European Board for Digital Services by contributing to discussions in its working groups on intellectual property.

The EUIPO will also help build expertise among judicial authorities, intellectual property right holders, and smaller online intermediaries, and contribute to a shared collection of best practices and tools.

The agreement operates within the Digital Services Act framework, under which online intermediaries are required to provide notice-and-action mechanisms for illegal content, and Very Large Online Platforms and Very Large Online Search Engines are subject to additional risk-assessment and mitigation obligations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US Supreme Court narrows ISP copyright liability, sharpening focus on intent with potential implications for generative AI

A unanimous US Supreme Court ruling this week has narrowed the circumstances under which an internet service provider (ISP) can be held liable for users’ copyright infringement by focusing on a deceptively simple question: intent. Writing for the Court, Justice Clarence Thomas said an ISP is liable only if its service was designed for unlawful activity or if it actively induced infringement; merely providing a service to the public while knowing some users will infringe is not enough.

Applying that standard, the Court found Cox Communications did neither, shielding it from a potential $1bn exposure in a long-running dispute that included a jury verdict that was later vacated.

The decision is now being read for its possible implications beyond ISPs, particularly in the escalating copyright battle between publishers/authors and generative AI firms. The key distinction raised is that broadband networks function as neutral conduits, whereas large language models are built specifically to produce fluent, human-like writing, including prose, poetry and dialogue, that can resemble the work of human authors.

In the article’s framing, that resemblance is not incidental but central to the product’s purpose: if a subscriber uses broadband to pirate a novel, the ISP did not build its network to enable that outcome, but an AI model prompted to write in a specific author’s style is designed to fulfil that request.

That contrast could open a new line of argument in AI litigation. While major US cases, such as suits brought by the Authors Guild and individual authors against OpenAI, Meta and others, have largely centred on whether training on copyrighted books is itself infringing, the Cox ruling highlights a second front: whether the systems’ purpose and optimisation for author-like output could be characterised as being ‘tailored for’ infringement or as purposeful inducement under an intent-based standard.

Publishers, who are simultaneously watching the lawsuits and negotiating licensing deals with AI companies, have so far been more cautious than the music industry was in its costly fight against Cox, an effort that ultimately produced a Supreme Court ruling that narrowed, rather than expanded, rights holders’ leverage.

Why does it matter?

The broader takeaway is that copyright enforcement may increasingly turn not only on what was copied, but what the copying was for, an approach that could prove consequential for AI companies whose commercial proposition is generating human-quality creative work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Advocates push for transparency rules in student AI systems

Consumer protection advocates have introduced a Student AI Bill of Rights, calling on higher education institutions to formalise safeguards as AI becomes increasingly embedded in academic systems.

The proposal, launched by the National Student Legal Defense Network under its SHAPE AI programme, highlights the growing use of AI across admissions, classroom instruction, and student support services.

The initiative argues that students must not be reduced to data points or treated as subjects for experimental technologies. It warns that while these tools may enable personalised learning, they also introduce risks linked to privacy, bias, and automated decision-making.

The framework sets out five core principles, including transparency in AI use, human oversight for high-stakes decisions, protection of student data and intellectual property, and safeguards against algorithmic bias. It also calls for equitable access to AI tools and education on their use.

Advocates are urging universities to adopt the principles to ensure accountability as AI becomes more deeply integrated into academic environments.

The development reflects a broader shift in higher education, where clear standards are seen as key to building trust, ensuring consistency, and enabling responsible AI integration in academic decision-making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU strengthens IP enforcement under Digital Services Act

The European Commission has signed an agreement with the European Union Intellectual Property Office to support enforcement of the Digital Services Act in relation to intellectual property rights.

The agreement takes effect immediately and focuses on strengthening the Commission’s enforcement capacity.

Cooperation will target systemic risks linked to very large online platforms and search engines, particularly the spread of intellectual property-infringing content. Such risks include counterfeit goods and online piracy, which fall within the scope of the DSA’s oversight framework.

The EUIPO is expected to expand its activities to support judicial and enforcement authorities, as well as online intermediaries that are not classified as very large platforms. Intellectual property rights holders are also included in the broader effort to address infringement risks.

The Digital Services Act establishes rules aimed at creating a safer and more transparent online environment across the European Union. Cooperation between the EU institutions and specialised bodies is presented as a key element in safeguarding users’ rights, including those linked to intellectual property.

Strengthening enforcement mechanisms in areas such as intellectual property links platform governance with broader policy objectives, including user protection, accountability of online intermediaries, and the functioning of the digital single market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EPO accelerates digital patent shift with paperless system by 2027

The European Patent Office (EPO) is accelerating its transition towards a fully digital patent system, with plans to implement a paperless patent-granting process by 2027.

Discussions at the latest eSACEPO meeting highlighted steady progress and broad stakeholder support for modernising patent workflows.

Electronic filing and communication are set to become the default, with paper-based processes limited to exceptional cases. The shift aims to improve efficiency and accessibility, supported by legal adjustments and the gradual introduction of structured data formats to enhance processing accuracy.

Digital tools continue to evolve, with the MyEPO platform expanding its functionality through interface upgrades, self-service features and new capabilities such as colour drawing submissions.

The rollout of DOCX filing, alongside optional PDF backups, reflects a cautious approach designed to balance innovation with reliability.

AI is increasingly integrated into patent examination processes, supporting tasks such as search and documentation.

However, the EPO maintains a human-centric model, ensuring that decision-making authority remains with patent examiners while AI enhances productivity and consistency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EPO strengthens industry collaboration on European patent innovation

The European Patent Office (EPO) has reinforced cooperation with industry stakeholders through discussions with the German Association of Industry IP Experts, focusing on strengthening the European patent system and supporting innovation.

The meeting brought together representatives from major industrial actors to align priorities and explore future collaboration.

Discussions between the EPO and the stakeholders centred on enhancing technology transfer, empowering startups and fostering economic growth across Europe.

Participants emphasised the importance of inclusive engagement among patent system users instead of fragmented approaches, ensuring that innovation strategies reflect both industrial and societal needs.

The Unitary Patent system was highlighted as gaining traction, particularly among smaller entities such as SMEs, individual inventors and research organisations. Such a trend reflects broader efforts to improve accessibility and scalability within the European innovation ecosystem.

AI also featured prominently, with both sides recognising its growing role in improving efficiency and quality in patent processes.

A human-centric approach remains essential, ensuring that AI deployment supports responsible innovation while maintaining high standards in patent examination and services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wikipedia limits generative AI use in article creation

Wikipedia has strengthened its approach to AI use, introducing new restrictions on the use of generative AI in article creation and editing. The changes reflect growing concerns about accuracy, sourcing and editorial standards.

Guidance issued in January 2026 warned contributors against copying and pasting outputs from generative AI into articles. Editors were advised to avoid using such tools to create new entries, as the content often fails verification against reliable sources.

In March 2026, stricter rules were introduced, prohibiting the use of AI to generate or rewrite article content. Limited exceptions allow AI to copyedit one’s own writing or translate material from other Wikipedia language versions.

The updated framework highlights concerns that AI-generated text may include fabricated references, bias and non-encyclopaedic language. Wikipedia continues to allow AI for support tasks such as identifying gaps and locating sources, while maintaining human oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India AI governance faces court, privacy and cyber pressures

An opinion article published by the International Association of Privacy Professionals says India’s data protection and AI governance environment is facing growing pressure as compliance work around the Digital Personal Data Protection Act (DPDPA) unfolds, court challenges continue, and regulators widen oversight into new sectors. The piece, published on 26 March, is labelled as an opinion article and includes an editor’s note stating that the IAPP is policy neutral and publishes contributed opinion pieces to reflect a broad spectrum of views.

The article says several legal and regulatory developments are unfolding simultaneously. One example cited is a public interest litigation filed before India’s Supreme Court by journalist Geeta Seshu and the Software Freedom Law Centre, India, challenging parts of the DPDPA on constitutional and rights-related grounds. According to the piece, the Supreme Court later issued a notice to the Government of India on 12 March.

Concerns outlined in the article include the absence of journalistic exemptions, the lack of compensation for data breach victims when penalties are paid to the government, broad state powers to exempt departments from the law, and questions about the independence of the Data Protection Board given the government’s control over appointments. The article notes that similar petitions had already been filed, but says this was the first time the court issued notice to the government.

The article also turns to proceedings before the Kerala High Court involving privacy concerns about biometric and personal data collected through Digi Yatra, an airport passenger-processing system operated by a not-for-profit foundation in India. According to the piece, a public interest litigation filed by C R Neelakandan asked for a temporary restraint on the sharing of collected personal data and its commercial use without proper authorisation.

The article says the Kerala High Court issued notice to the Digi Yatra Foundation and sought clarification from the government on whether the Data Protection Board had been established to oversee such matters.

Alongside the litigation, the opinion piece points to government efforts to show legal preparedness for AI-related risks. It says Electronics and Information Technology Minister Ashwini Vaishnaw outlined existing safeguards during the ongoing parliamentary session, referring to the Information Technology Act, the DPDPA, and subordinate rules, along with published guidelines on AI governance, toy safety, harmful content, awareness-building measures, and cyber safety.

Cybersecurity developments also feature in the article. It says the Indian Computer Emergency Response Team, working with the SatCom Industry Association, issued guidelines on 26 February for the space sector, including satellite communications. According to the piece, the framework is intended to strengthen resilience in India’s space ecosystem.

It applies to covered entities, including government agencies, satellite service providers, ground station operators, terminal equipment vendors, and private space entities. Incident reporting within six hours and annual audits are among the measures described.

A further section of the article draws on Thales’ 2026 Data Threat Report. The piece says 64% of surveyed organisations in India identified AI-driven transformation as their biggest security risk, while 55% said they had to deal with reputational damage caused by AI-generated misinformation. It also says 65% reported deepfake-driven attacks, 35% had a complete view of their data, and 36% could fully classify their data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI details Sora 2 safeguards for likeness, audio, and harmful content

OpenAI has published a new overview of the safety measures built into Sora 2 and the Sora app, setting out how the company says it is approaching provenance, likeness protection, teen safeguards, harmful-content filtering, audio controls, and user reporting tools. The Sora team published the note on 23 March 2026.

OpenAI says every video generated with Sora includes visible and invisible provenance signals, and that all videos also embed C2PA metadata. The company adds that many outputs feature visible moving watermarks that include the creator’s name, while internal reverse-image and audio search tools are used to trace videos back to Sora.

A substantial part of the update focuses on likeness and consent. OpenAI says users can upload images of people to generate videos, but only after attesting that they have consent from the people featured and the right to upload the media. OpenAI also says image-to-video generations involving people are subject to stricter safeguards than Sora Characters, and that images including children and young-looking persons face stricter moderation. Shared videos generated from such images will always carry watermarks, according to the company.

OpenAI also sets out controls linked to its characters feature, which it says is intended to give users stronger control over their likeness, including both appearance and voice. According to the company, users can decide who can use their characters, revoke access at any time, and review, delete, or report videos featuring their characters. OpenAI says it also applies additional restrictions designed to limit major changes to a person’s appearance, avoid embarrassing uses, and maintain broadly consistent identity presentation.

Protections for younger users form another part of the update. OpenAI says teen accounts are subject to stronger limitations on mature output, that age-inappropriate or harmful content is filtered from teen feeds, and that adult users cannot initiate direct messages with teens. Parental controls in ChatGPT can also be used to manage teen messaging permissions and to select a non-personalised feed in the app, while default limits apply to continuous scrolling for teens.

OpenAI says harmful-content controls operate at both creation and distribution stages. Prompt and output checks are used across multiple video frames and audio transcripts to block content including sexual material, terrorist propaganda, and self-harm promotion. OpenAI also says it has tightened policies for video generation compared with image generation because of added realism, motion, and audio, while automated systems and human review are used to monitor feed content against its global usage policies.

Audio generation is treated separately in the note. OpenAI says generated speech transcripts are automatically scanned for possible policy violations, and that prompts intended to imitate living artists or existing works are blocked. The company also says it honours takedown requests from creators who believe an output infringes their work.

User controls and recourse are presented as the final layer. OpenAI says users can choose whether to share videos to the feed, remove published content, and report videos, profiles, direct messages, comments, and characters for abuse. Blocking tools are also available, according to the company, to stop other users from viewing a profile or posts, using a character, or contacting someone through direct message.

OpenAI’s post is framed as a product-safety explanation rather than an independent assessment of the effectiveness of the measures in practice. Much of the note describes controls that the company says it has built into Sora 2, but it does not provide external evaluation data in the published summary.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!