Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as writers’ room and revision-heavy processes.

The article uses James Patterson as an example of industrial-scale collaboration in publishing. He supplies collaborators with outlines and treatments and oversees many projects at once, an approach likened to a ‘novel factory’ that some argue distances him from ‘literary fiction’, yet may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on ‘Coral Hart’, a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although not a bestseller, she reportedly earns ‘six figures’ and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers’ rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed despite platforms such as Amazon asking for it.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Digital Services Act agreement links European Commission and EUIPO on online IP enforcement

The European Commission and the European Union Intellectual Property Office (EUIPO) have signed a five-year agreement under which the latter will provide technical support and intellectual property expertise for work under the Digital Services Act. The cooperation focuses on online infringements of intellectual property rights, in particular the sale of counterfeit goods and the distribution of pirated content.

The EUIPO will support the European Commission’s oversight of Very Large Online Platforms and Very Large Online Search Engines. That work will include analysing internal reports submitted by those services on how they address online intellectual property infringement.

The agreement also includes training for national authorities that enforce the Digital Services Act, and supports the European Board for Digital Services by contributing to the discussions of its working groups on intellectual property.

The EUIPO will also help build expertise among judicial authorities, intellectual property right holders, and smaller online intermediaries, and contribute to a shared collection of best practices and tools.

The agreement sits within the Digital Services Act framework, under which online intermediaries are required to provide notice-and-action mechanisms for illegal content, and Very Large Online Platforms and Very Large Online Search Engines are subject to additional risk-assessment and mitigation obligations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Commission invests in fact-checking to combat disinformation

The European Commission has awarded a €5 million grant to strengthen independent fact-checking capacity across the European Union and associated countries. The initiative will establish a comprehensive support network for fact-checkers working in all EU languages.

The European Fact-Checking Standards Network will lead the project alongside seven partner organisations. The scheme will provide fact-checkers with protection covering legal support, cybersecurity assistance, psychological support and access to an independent European repository of fact-checks.

By expanding Europe’s independent fact-checking community, the initiative will improve the Union’s ability to detect and analyse disinformation threats. The announcement reflects the Commission’s commitment to safeguarding information integrity and democratic resilience across the Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU strengthens IP enforcement under Digital Services Act

The European Commission has signed an agreement with the European Union Intellectual Property Office to support enforcement of the Digital Services Act in relation to intellectual property rights.

The agreement takes effect immediately and focuses on strengthening the Commission’s enforcement capacity.

Cooperation will target systemic risks linked to very large online platforms and search engines, particularly the spread of intellectual property-infringing content. Such risks include counterfeit goods and online piracy, which fall within the scope of the DSA’s oversight framework.

The EUIPO is expected to expand its activities to support judicial and enforcement authorities, as well as online intermediaries that are not classified as very large platforms. Intellectual property rights holders are also included in the broader effort to address infringement risks.

The Digital Services Act establishes rules aimed at creating a safer and more transparent online environment across the European Union. Cooperation between the EU institutions and specialised bodies is presented as a key element in safeguarding users’ rights, including those linked to intellectual property.

Strengthening enforcement mechanisms in areas such as intellectual property links platform governance with broader policy objectives, including user protection, accountability of online intermediaries, and the functioning of the digital single market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK’s Ofcom report reveals evolving online habits and growing AI reliance

New Ofcom research suggests that UK adults are becoming more cautious and passive in their use of social media, even as interest in AI tools grows, pointing to a wider shift in how people experience digital life.

While social media remains widely used, the report indicates that users are participating less actively and becoming more selective about what they share and how visible they are online.

That shift is tied in part to growing unease about digital well-being. Concerns about screen time and the wider effects of online platforms are rising, with fewer adults convinced that the benefits of being online outweigh the risks. Many say they are actively trying to limit their usage, reflecting broader anxieties about the impact of digital media on mental health and everyday life.

At the same time, AI adoption is accelerating, especially among younger users. Ofcom’s findings suggest that people are using AI not only for productivity and creative tasks, but also, in some cases, for conversational and emotional support, pointing to a changing relationship between users and digital tools.

Other findings reinforce the sense of a more fragmented digital environment. Trust in news remains uneven: mainstream sources still hold a central place but face growing scepticism, and confidence in digital skills does not always translate into an ability to identify misinformation, scams, or other online risks.

Taken together, the findings suggest that the UK’s digital habits are not simply expanding but changing in character. Users appear to be growing more wary of social platforms, more alert to digital harms, and more open to new forms of interaction through AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK regulator orders revised safety assessments under Online Safety Act

Ofcom has ordered more than 40 online services to submit revised risk assessments under the UK’s Online Safety Act, increasing pressure on platforms to show how they identify and reduce illegal content and other user harms.

The move marks a tougher phase in the UK’s online safety regime, with the regulator signalling that incomplete or delayed submissions could trigger enforcement action.

Ofcom said earlier reviews had identified weaknesses in several assessments, prompting companies to strengthen their approach and improve safeguards.

The requirement is especially significant for services likely to be accessed by children, which must also examine the risk of exposure to harmful content and demonstrate what protective measures they have in place. In that sense, the regulator is pushing platforms to treat safety not as a reactive moderation issue, but as a design and compliance obligation.

Ofcom has also indicated that major platforms will eventually have to publish summaries of their risk assessments, adding a transparency layer to the regime.

The latest demands suggest that the UK is moving beyond setting out online safety expectations and into a more interventionist stage focused on supervision, accountability, and enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT develops AI framework to test ethics in autonomous systems

Researchers at MIT have introduced a new framework designed to evaluate the ethical impact of autonomous systems used in high-stakes environments. The approach aims to identify cases where AI-driven decisions may be technically efficient but fail to meet fairness expectations.

Growing reliance on AI in areas such as energy distribution and traffic management has raised concerns about unintended bias. Cost-optimised systems can still disadvantage communities, especially when ethical factors are hard to measure.

The framework, known as SEED-SET, separates objective performance metrics from subjective human values. A large language model is used to simulate stakeholder preferences, enabling the system to compare scenarios and detect where outcomes diverge from ethical expectations.

Testing shows the method generates more relevant scenarios while reducing manual analysis. Findings highlight its potential to improve transparency and support more balanced decision-making before AI systems are deployed.
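The mechanism described above, scoring each scenario on an objective performance metric while a language model stands in for stakeholder values and then flagging where the two diverge, can be illustrated with a minimal sketch. This is a hypothetical illustration rather than SEED-SET’s actual code: the field names, the divergence threshold and the fixed-lookup scorer that replaces the LLM are all assumptions made so the example stays self-contained.

```python
# Hypothetical sketch of the idea behind SEED-SET: keep an objective
# performance metric separate from a subjective stakeholder-value score,
# then flag scenarios where the two diverge. In practice the stakeholder
# score would come from a large language model simulating preferences;
# here it is a fixed number so the example is runnable.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    cost_efficiency: float  # objective metric, 0 (worst) to 1 (best)
    fairness: float         # simulated stakeholder preference, 0 to 1

def flag_divergent(scenarios, threshold=0.4):
    """Return names of scenarios that are efficient but fall short on fairness."""
    return [s.name for s in scenarios
            if s.cost_efficiency - s.fairness > threshold]

scenarios = [
    Scenario("grid_plan_a", cost_efficiency=0.92, fairness=0.35),
    Scenario("grid_plan_b", cost_efficiency=0.78, fairness=0.70),
]
print(flag_divergent(scenarios))  # only grid_plan_a is flagged
```

The point of the separation is that the objective column can be optimised as usual, while the subjective column is audited independently before deployment, which is where the claimed transparency benefit comes from.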

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Experts warn YouTube AI slop harms children and demand action

Fairplay and more than 200 experts have urged YouTube to address the spread of ‘AI slop’ targeting children. The letter was sent to Sundar Pichai and Neal Mohan, along with a petition.

The signatories state that AI-generated videos harm children’s development by distorting reality and overwhelming learning processes. They also warn that such content captures attention and is being recommended to young users, including infants and toddlers.

The letter cites findings that 40% of videos recommended after popular shows such as Cocomelon contained AI-generated content. It also states that 21% of Shorts recommendations included similar material, and that misleading science videos were shown to older children.

Fairplay and its partners propose measures, including labelling AI content and banning it from YouTube Kids. They also call for restrictions on recommendations to under-18s and for tools that allow parents to turn off such content.

The initiative was organised by Fairplay and supported by organisations and experts, including Jonathan Haidt. The group says platforms must ensure content is safe and appropriate for children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

UK expands efforts to boost digital inclusion

More than one million people have been helped to get online through a national digital inclusion plan led by the Department for Science, Innovation and Technology. The initiative targets groups including older people, jobseekers and rural communities.

The programme has delivered over 22,000 donated devices and funded more than 80 local projects with £11.9 million. Support includes improved connectivity, access to affordable services and training to build essential digital skills.

Efforts also focus on strengthening long-term capabilities, with the government taking control of the national digital skills framework. Updates will reflect changing needs, such as online safety and the growing role of AI in everyday life.

British officials say the plan is helping people find work, manage finances and access services more easily. Further expansion is expected as authorities work with industry and charities to reach more communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!