Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as writers’ room and revision-heavy processes.

The article uses James Patterson as an example of industrial-scale collaboration in publishing, describing how he supplies collaborators with outlines and treatments and oversees many projects at once, an approach likened to a ‘novel factory’ that some argue distances him from ‘literary fiction’, yet may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on ‘Coral Hart’, a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although not a bestseller, she reportedly earns ‘six figures’ and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers’ rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed despite platforms such as Amazon asking for it.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

US Supreme Court narrows ISP copyright liability, sharpening focus on intent with potential implications for generative AI

A unanimous 9–0 US Supreme Court ruling this week has narrowed the circumstances under which an internet service provider (ISP) can be held liable for users’ copyright infringement by focusing on a deceptively simple question: intent. Writing for the Court, Justice Clarence Thomas said an ISP is liable only if its service was designed for unlawful activity or if it actively induced infringement; merely providing a service to the public while knowing some users will infringe is not enough.

Applying that standard, the Court found Cox Communications did neither, shielding it from a potential $1bn exposure following a long-running dispute that included a jury verdict later vacated.

The decision is now being read for its possible implications beyond ISPs, particularly in the escalating copyright battle between publishers/authors and generative AI firms. The key distinction raised is that broadband networks function as neutral conduits, whereas large language models are built specifically to produce fluent, human-like writing, including prose, poetry and dialogue, that can resemble the work of human authors.

In the article’s framing, that resemblance is not incidental but central to the product’s purpose: if a subscriber uses broadband to pirate a novel, the ISP did not build its network to enable that outcome, but an AI model prompted to write in a specific author’s style is designed to fulfil that request.

That contrast could open a new line of argument in AI litigation. While major US cases, such as suits brought by the Authors Guild and individual authors against OpenAI, Meta and others, have largely centred on whether training on copyrighted books is itself infringing, the Cox ruling highlights a second front: whether the systems’ purpose and optimisation for author-like output could be characterised as being ‘tailored for’ infringement or as purposeful inducement under an intent-based standard.

Publishers, who are simultaneously watching the lawsuits and negotiating licensing deals with AI companies, have so far been more cautious than the music industry was in its costly fight against Cox, an effort that ultimately produced a Supreme Court ruling that narrowed, rather than expanded, leverage.

Why does it matter?

The broader takeaway is that copyright enforcement may increasingly turn not only on what was copied, but what the copying was for, an approach that could prove consequential for AI companies whose commercial proposition is generating human-quality creative work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI and 6G strategy drives South Korea’s digital transformation agenda

South Korea has outlined an ambitious national strategy to position itself among the world’s leading AI powers, linking technological advancement with broader economic and societal transformation.

Instead of isolated innovation efforts, the plan adopts a systemic approach, combining infrastructure development, data governance, and industrial policy to accelerate digital transition.

Central to South Korea’s strategy is the evolution of network infrastructure, with a shift from 5G to next-generation 6G technology targeted by 2030. The emphasis on connectivity and speed is complemented by efforts to strengthen cybersecurity frameworks and establish a national data integration platform.

Such measures aim to create a more resilient and competitive digital environment capable of supporting large-scale AI deployment.

The policy also prioritises the integration of AI across multiple sectors, including healthcare, manufacturing, agriculture, and disaster management.

By embedding intelligent systems into critical industries, South Korean authorities seek to enhance productivity, improve public service delivery, and strengthen national resilience.

Workforce development is positioned as a key pillar, with phased training initiatives designed to build expertise in advanced technologies such as semiconductors and quantum computing.

In parallel, the strategy incorporates digital inclusion measures to ensure broader societal participation. Expansion of AI learning centres and assistive technologies reflects an effort to reduce digital divides while supporting vulnerable groups.

Long-term success will depend on effective coordination across government bodies and on balancing rapid technological deployment with equitable access and robust governance frameworks, rather than pursuing purely growth-driven objectives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

University of South Wales becomes the first in the UK to embed an AI qualification in a degree

The University of South Wales will become the first university in the UK to embed an AI qualification within a Business and Management degree. The programme was developed with the Institute of Enterprise and Entrepreneurs (IOEE) and will begin in September 2026.

Students will receive an IOEE award after their first year and may obtain a diploma upon graduation. The course is the first in the UK to combine both certifications within a single degree.

The qualification includes six units covering AI literacy, prompting, evaluation, application, ethics and reflective practice. These elements are assessed through existing coursework rather than separate examinations.

First-year students will take a module that includes weekly AI sessions linked to building a business. They will use AI for financial projections, marketing strategies, pitch materials and competitor analysis.

Final year students will create digital products using AI, including chatbots and business plans. Liam Newton, course leader for the BA Business and Management programme at the University of South Wales, said the programme aims to support employability and to develop informed use of AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

Kazakhstan positions AI at heart of industrial strategy

Addressing the Digital Qazaqstan 2026 forum on 27 March, Kazakhstan’s Prime Minister Olzhas Bektenov positioned AI as foundational infrastructure comparable to energy and transport networks, with three priorities centring on institutional foundations, digital infrastructure and human capital.

The government plans to develop sector-specific datasets and specialised AI language models for energy, mining, agriculture and logistics industries throughout 2026.

Kazakhstan is establishing a dedicated university focused on AI and rolling out the national AI-Sana programme to build an education ecosystem spanning schools, professional training and tech entrepreneurship.

Prime Minister Bektenov concluded by highlighting Kazakhstan’s competitive advantages, including affordable electricity and low latency for high-performance computing systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Oracle expands AI options for US government agencies

The US government is set to gain expanded AI capabilities through new infrastructure and model deployment options in Oracle Cloud.

These developments aim to improve agencies’ ability to manage critical tasks, from situational awareness to cybersecurity, while maintaining strict security and compliance standards.

High-performance GPUs and AI models will support faster, more reliable inference and training, helping agencies respond more effectively to public needs.

The focus is on enabling secure deployment in environments with sensitive data and complex regulatory requirements, ensuring AI use aligns with public interest and safety.

Such an expansion builds on existing government AI frameworks, offering capabilities for retrieval-augmented generation, secure inference, and operational analytics.

By integrating AI in a controlled, compliant environment, US agencies can improve efficiency, decision-making, and public service delivery without compromising security.

Ultimately, these advancements by Oracle aim to ensure that government AI adoption benefits citizens directly, supporting transparency, accountability, and effective public administration in high-stakes contexts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft commits $10 billion to Japan’s AI future

Microsoft Corporation announced a $10 billion investment in Japan over four years to expand AI infrastructure and strengthen cybersecurity partnerships with the government. The investment aligns with Prime Minister Sanae Takaichi’s strategy for economic growth through advanced technologies.

The company will collaborate with Japanese firms SoftBank and Sakura Internet to develop domestically based AI computing capacity, allowing Japanese businesses and government agencies to store sensitive data locally whilst accessing Microsoft Azure services.

Why does it matter?

Microsoft plans to train 1 million engineers and developers by 2030 as part of the initiative to build Japan’s digital workforce in AI and emerging technologies. The investment addresses Japan’s growing demand for cloud and AI services as part of the company’s Asia-wide expansion strategy.

The announcement, made on 3 April, reflects Microsoft’s commitment to supporting Japanese technological advancement whilst maintaining data security. Sakura Internet’s share price jumped 20 percent following the news, signalling strong market confidence in the partnership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Nova Scotia launches five-person AI team to support government operations

Nova Scotia will recruit a five-person team to help integrate AI into provincial government operations, marking a more structured push to introduce AI tools into public service work across Canada. Jennifer LaPlante, deputy minister of cybersecurity and digital solutions, said the group will develop protocols for staff across departments as the province expands its use of AI.

The team is expected to identify tools that could improve productivity and efficiency in government work, including systems such as Microsoft Copilot for tasks like drafting documents and summarising information. The move suggests that Nova Scotia is shifting from limited experimentation towards a more organised approach to AI adoption in public administration.

Officials say existing rules already govern the use of some AI meeting tools and virtual assistants, while a broader responsible-use policy is still being developed. That places the province’s AI push within a wider effort to balance innovation with security, oversight, and system protection.

Funding will come from a C$4.4 million investment to establish AI capabilities during the current fiscal year. Part of that budget will go towards licences and software, with room for the team to grow over time.

The department has also launched an AI chatbot, Scottie, to answer public questions about government services. According to officials, the tool retrieves information from existing government sources rather than generating new content, suggesting an effort to limit risk while expanding AI use in public-facing services.

Taken together, the measures point to a broader effort to embed AI more formally into provincial government operations, not only through tools and staffing but also through internal rules governing its use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

Amnesty International warns EU tech law reforms could weaken GDPR and AI Act protections

Amnesty International has warned that proposed EU reforms presented as a way to simplify digital regulation and boost competitiveness could weaken core safeguards for privacy and fundamental rights.

At the centre of the concern is the European Commission’s ‘Digital Omnibus’ initiative, which would affect major pieces of legislation, including the General Data Protection Regulation and the AI Act.

Amnesty and other civil society groups argue that the package risks reopening key protections in the EU’s digital rulebook under the banner of regulatory simplification.

Among the most controversial proposals are changes to how personal data is defined, along with exceptions that could make it easier for companies to retain or reuse data for AI systems. Critics say that such changes would weaken safeguards intended to limit excessive data collection and to preserve accountability in how personal information is processed.

Concerns also extend to the AI Act, where proposed adjustments could reduce obligations for high-risk systems. According to Amnesty, companies may be given greater discretion in how they assess and disclose risks, potentially lowering transparency and limiting external scrutiny.

Delays in implementation, the organisation argues, could also allow harmful systems to remain in use without full regulatory oversight.

The broader reform agenda may reach beyond privacy and AI rules. Future ‘fitness checks’ could also affect frameworks such as the Digital Services Act and the Digital Markets Act, raising wider concerns about whether the EU’s digital regulatory model is being softened in the name of competitiveness.

For critics, the cumulative risk is that the balance of the EU digital framework could begin to shift away from rights protection and public accountability, and towards greater corporate flexibility in areas linked to surveillance, discrimination, and market power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK’s Ofcom report reveals evolving online habits and growing AI reliance

New Ofcom research suggests that UK adults are becoming more cautious and passive in their use of social media, even as interest in AI tools grows, pointing to a wider shift in how people experience digital life.

While social media remains widely used, the report indicates that users are participating less actively and becoming more selective about what they share and how visible they are online.

That shift is tied in part to growing unease about digital well-being. Concerns about screen time and the wider effects of online platforms are rising, with fewer adults convinced that the benefits of being online outweigh the risks. Many say they are actively trying to limit their usage, reflecting broader anxieties about the impact of digital media on mental health and everyday life.

At the same time, AI adoption is accelerating, especially among younger users. Ofcom’s findings suggest that people are using AI not only for productivity and creative tasks, but also, in some cases, for conversational and emotional support, pointing to a changing relationship between users and digital tools.

Other findings reinforce the sense of a more fragmented digital environment: trust in news remains uneven, mainstream sources still hold a central place but face growing scepticism, and confidence in digital skills does not always translate into an ability to identify misinformation, scams, or other online risks.

Taken together, the findings suggest that the UK’s digital habits are not simply expanding but changing in character. Users appear to be growing more wary of social platforms, more alert to digital harms, and more open to new forms of interaction through AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot