National Crime Agency to receive CSEA reports under UK Online Safety Act rules

UK regulations under the Online Safety Act 2023 are now in force, requiring certain regulated user-to-user services to register with the National Crime Agency and report detected and unreported child sexual exploitation and abuse content.

Under the Online Safety (CSEA Content Reporting by Regulated User-to-User Service Providers) Regulations 2026, providers subject to the reporting duty, and any third-party providers acting on their behalf, must register with the National Crime Agency through an online portal. They must also appoint an organisation administrator as a point of contact.

Reports submitted to the National Crime Agency must contain specified information, including details about the content, the time it was uploaded, relevant IP addresses, and user account data. The regulations also require providers to classify reports into three priority levels and submit them within the corresponding timeframes.

Record-keeping duties are also set out in the regulations. Providers must retain the report reference number for five years and keep the associated content and user data for one year from the reporting date.
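To make the reporting and retention duties concrete, here is a minimal Python sketch of how a provider might model a report record internally. The class, field names and priority labels are illustrative assumptions rather than the NCA portal's actual schema; only the retention periods are taken from the regulations as described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class Priority(Enum):
    # The regulations define three priority levels; the labels here
    # are placeholders, not the official terminology.
    PRIORITY_1 = 1  # most urgent, shortest submission timeframe
    PRIORITY_2 = 2
    PRIORITY_3 = 3

@dataclass
class CseaReport:
    reference_number: str      # must be retained for five years
    content_details: str       # description of the reported content
    upload_time: datetime      # when the content was uploaded
    ip_addresses: list[str]    # relevant IP addresses
    user_account_data: dict    # associated user account data
    priority: Priority
    reported_at: datetime      # date the report was submitted

    def retention_deadlines(self) -> dict[str, datetime]:
        """Retention windows set out in the regulations, both counted
        from the reporting date (years approximated as 365 days)."""
        return {
            "reference_number": self.reported_at + timedelta(days=5 * 365),
            "content_and_user_data": self.reported_at + timedelta(days=365),
        }
```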

The rules form part of the reporting framework under the Online Safety Act 2023 for child sexual exploitation and abuse content on regulated user-to-user services in the UK. Non-compliance may result in a penalty of up to 10% of qualifying worldwide revenue or £18 million, whichever is greater.
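The penalty cap follows a 'whichever is greater' rule, which a short computation makes concrete; the revenue figures below are invented purely for illustration.

```python
def max_osa_penalty(qualifying_worldwide_revenue_gbp: float) -> float:
    """Maximum penalty under the Online Safety Act: the greater of 10%
    of qualifying worldwide revenue and a flat £18 million."""
    return max(0.10 * qualifying_worldwide_revenue_gbp, 18_000_000)

# A provider with £500m in qualifying revenue faces up to £50m,
# while one with £50m still faces the £18m floor.
assert max_osa_penalty(500_000_000) == 50_000_000
assert max_osa_penalty(50_000_000) == 18_000_000
```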

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots are reshaping classroom debates, raising concerns over homogenised discussion

Generative AI chatbots are becoming embedded in university learning at Yale, students and academics told CNN, not only for essays and homework but also for real-time seminar participation. Students described classmates uploading readings and PDFs into chatbots before class, and even typing a professor’s question into AI during discussion to produce an immediate response to repeat aloud.

While this can make contributions sound more polished and prepared, some students said seminar conversations increasingly stall or feel flatter, with fewer personal interpretations and less exploratory debate. One student, ‘Amanda’, said she has noticed many classmates arriving with slick talking points but then offering near-identical arguments and phrasing, making discussions feel less distinctive than in earlier years.

Students gave several reasons for leaning on AI. ‘Jessica’, a senior, said she uses it daily, particularly in an economics seminar where the professor cold-calls students, both to digest readings quickly and to help her translate ideas into cohesive sentences when she struggles to phrase her comments.

‘Sophia’, a junior, said some students appear to use AI to draft ‘scripts’ for what to say in class, driven by insecurity about gaps in their understanding. She believes this weakens creativity and the ability to make original connections, replacing genuine engagement with impressive-sounding language.

A Yale spokesperson said the university is aware students are experimenting with AI in the classroom and noted a wider faculty trend towards limiting or banning laptops, using print-based materials, and prioritising direct engagement and original thinking.

The article links these observations to a March paper in ‘Trends in Cognitive Sciences’, which argues that large language models can systematically homogenise human expression and thought across language, perspective and reasoning. The paper’s authors say LLMs predict statistically likely next words based on training data that overrepresents dominant languages and ideas, potentially narrowing the ‘conceptual space’ for how people write and argue.
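The sampling mechanism behind that claim can be shown with a toy next-word distribution: the lower the sampling temperature, a setting deployed assistants commonly keep low, the more probability mass collapses onto the statistically dominant continuation. The candidate words and logit values below are invented for illustration; this sketches the general mechanism, not the paper's experiments.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(score / temperature) for score in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word candidates after some prompt, with the
# conventional option dominating the training data.
candidates = ["that", "a", "otherwise", "paradoxically"]
logits = [3.0, 2.0, 0.5, 0.1]

for t in (1.0, 0.7, 0.3):
    probs = softmax(logits, temperature=t)
    print(t, {w: round(p, 3) for w, p in zip(candidates, probs)})
# At temperature 1.0 the top word takes ~66% of the mass; at 0.3 it
# takes ~97%, squeezing out the unusual continuations.
```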

They warn that models tend to reproduce 'WEIRD' viewpoints (Western, educated, industrialised, rich and democratic), even when prompted otherwise, which may make those styles seem more credible and socially correct while marginalising other perspectives.

Researchers also describe a compounding feedback loop. As AI-generated outputs circulate in human discourse and eventually re-enter training data, sameness can intensify over time. Co-author Morteza Dehghani said offloading reasoning to AI risks intellectual laziness and could have broader social consequences, from weakened innovation to greater susceptibility to persuasion.
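A toy simulation conveys the direction of that loop: fit a 'model' to the current data, sample from it with a mild bias toward its most likely outputs, treat the samples as the next generation's data, and the spread contracts each cycle. The Gaussian setup and the 0.9 bias factor are assumptions made for illustration, not the researchers' method.

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # initial 'human' data

for generation in range(1, 6):
    mu = statistics.fmean(data)      # fit the 'model' to current data
    sigma = statistics.stdev(data)
    # Sampling at 90% of the fitted spread stands in for a model's
    # preference for high-probability outputs.
    data = [random.gauss(mu, 0.9 * sigma) for _ in range(10_000)]
    print(f"generation {generation}: stdev = {statistics.stdev(data):.3f}")
# The standard deviation falls roughly 10% per cycle: once narrowed
# outputs re-enter the training pool, the narrowing compounds.
```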

Educators quoted described both benefits and risks, and outlined practical responses. Thomas Chatterton Williams, a visiting professor and Bard College fellow, said AI can ‘raise the floor’ of discussion for difficult material but may suppress eccentric or truly original ideas, leaving students without a voice of their own or a sense of authorship.

Former teacher Daniel Buck called AI a ‘supercharged SparkNotes’ that can answer virtually any question, making it harder to detect shortcuts and easier for students to bypass the ‘boring minutiae’ where learning takes hold.

He worries that this also undermines relationships with professors and sustained cognitive work. Yale philosophy professor Sun-Joo Shin said improvements in the models forced her to redesign her assessments: problem sets now earn completion credit and feedback, while in-class exams, oral tests and presentations carry more weight.

Williams said he has shifted from conventional written assignments to spontaneous, handwritten in-class work and uses oral exit exams. Students who avoid AI argued that they are still affected by classmates' reliance on it, since it reduces the value and variety of seminar time; others urged a middle path in which AI is treated as a collaborator, used to critique ideas rather than to generate them or to do the reasoning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as writers’ room and revision-heavy processes.

The article uses James Patterson as an example of industrial-scale collaboration in publishing, describing how he supplies collaborators with outlines and treatments and oversees many projects at once, an approach likened to a ‘novel factory’ that some argue distances him from ‘literary fiction’, yet may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on 'Coral Hart', a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although none of her titles is a bestseller, she reportedly earns 'six figures' and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers' rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed, despite platforms such as Amazon asking for it.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Digital Services Act agreement links European Commission and EUIPO on online IP enforcement

The European Commission and the European Union Intellectual Property Office (EUIPO) have signed a five-year agreement under which the latter will provide technical support and intellectual property expertise for work under the Digital Services Act. The cooperation focuses on online infringements of intellectual property rights, in particular the sale of counterfeit goods and the distribution of pirated content.

The EUIPO will support the European Commission's oversight of Very Large Online Platforms and Very Large Online Search Engines. That work will include analysing internal reports submitted by those services on how they address online intellectual property infringement.

The agreement also provides for training of national authorities that enforce the Digital Services Act, and supports the European Board for Digital Services through contributions to its working-group discussions on intellectual property.

The EUIPO will also help build expertise among judicial authorities, intellectual property right holders, and smaller online intermediaries, and contribute to a shared collection of best practices and tools.

The agreement sits within the broader Digital Services Act framework, under which online intermediaries must provide notice-and-action mechanisms for illegal content, and Very Large Online Platforms and Very Large Online Search Engines are subject to additional risk-assessment and mitigation obligations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Commission invests in fact-checking to combat disinformation

The European Commission has awarded a €5 million grant to strengthen independent fact-checking capacity across the European Union and associated countries. The initiative will establish a comprehensive support network for fact-checkers working in all the EU languages.

The European Fact-Checking Standards Network will lead the project alongside seven partner organisations. The scheme will provide fact-checkers with legal support, cybersecurity assistance and psychological support, as well as access to an independent European repository of fact-checks.

By expanding Europe's independent fact-checking community, the initiative is expected to improve the Union's ability to detect and analyse disinformation threats. The announcement reflects the Commission's commitment to safeguarding information integrity and democratic resilience across the Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU strengthens IP enforcement under Digital Services Act

The European Commission has signed an agreement with the European Union Intellectual Property Office to support enforcement of the Digital Services Act in relation to intellectual property rights.

The agreement takes effect immediately and focuses on strengthening the Commission’s enforcement capacity.

Cooperation will target systemic risks linked to very large online platforms and search engines, particularly the spread of intellectual property-infringing content. Such risks include counterfeit goods and online piracy, which fall within the scope of the DSA’s oversight framework.

The EUIPO is expected to expand its activities to support judicial and enforcement authorities, as well as online intermediaries that are not classified as very large platforms. Intellectual property rights holders are also included in the broader effort to address infringement risks.

The Digital Services Act establishes rules aimed at creating a safer and more transparent online environment across the European Union. Cooperation between the EU institutions and specialised bodies is presented as a key element in safeguarding users’ rights, including those linked to intellectual property.

Strengthening enforcement mechanisms in areas such as intellectual property links platform governance with broader policy objectives, including user protection, accountability of online intermediaries, and the functioning of the digital single market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK’s Ofcom report reveals evolving online habits and growing AI reliance

New Ofcom research suggests that UK adults are becoming more cautious and passive in their use of social media, even as interest in AI tools grows, pointing to a wider shift in how people experience digital life.

While social media remains widely used, the report indicates that users are participating less actively and becoming more selective about what they share and how visible they are online.

That shift is tied in part to growing unease about digital well-being. Concerns about screen time and the wider effects of online platforms are rising, with fewer adults convinced that the benefits of being online outweigh the risks. Many say they are actively trying to limit their usage, reflecting broader anxieties about the impact of digital media on mental health and everyday life.

At the same time, AI adoption is accelerating, especially among younger users. Ofcom’s findings suggest that people are using AI not only for productivity and creative tasks, but also, in some cases, for conversational and emotional support, pointing to a changing relationship between users and digital tools.

Other findings reinforce the sense of a more fragmented digital environment. Trust in news remains uneven: mainstream sources still hold a central place but face growing scepticism, and confidence in digital skills does not always translate into an ability to identify misinformation, scams or other online risks.

Taken together, the findings suggest that the UK’s digital habits are not simply expanding but changing in character. Users appear to be growing more wary of social platforms, more alert to digital harms, and more open to new forms of interaction through AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK regulator orders revised safety assessments under Online Safety Act

Ofcom has ordered more than 40 online services to submit revised risk assessments under the UK’s Online Safety Act, increasing pressure on platforms to show how they identify and reduce illegal content and other user harms.

The move marks a tougher phase in the UK’s online safety regime, with the regulator signalling that incomplete or delayed submissions could trigger enforcement action.

Ofcom said earlier reviews had identified weaknesses in several assessments, prompting companies to strengthen their approach and improve safeguards.

The requirement is especially significant for services likely to be accessed by children, which must also examine the risk of exposure to harmful content and demonstrate what protective measures they have in place. In that sense, the regulator is pushing platforms to treat safety not as a reactive moderation issue, but as a design and compliance obligation.

Ofcom has also indicated that major platforms will eventually have to publish summaries of their risk assessments, adding a transparency layer to the regime.

The latest demands suggest that the UK is moving beyond setting out online safety expectations and into a more interventionist stage focused on supervision, accountability, and enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT develops AI framework to test ethics in autonomous systems

Researchers at MIT have introduced a new framework designed to evaluate the ethical impact of autonomous systems used in high-stakes environments. The approach aims to identify cases where AI-driven decisions may be technically efficient but fail to meet fairness expectations.

Growing reliance on AI in areas such as energy distribution and traffic management has raised concerns about unintended bias. Cost-optimised systems can still disadvantage communities, especially when ethical factors are hard to measure.

The framework, known as SEED-SET, separates objective performance metrics from subjective human values. A large language model is used to simulate stakeholder preferences, enabling the system to compare scenarios and detect where outcomes diverge from ethical expectations.
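The article describes the framework only at a high level, so the sketch below is a loose illustration of the separation it names: an objective metric scored directly, a simulated stakeholder preference scored separately, and a check for where the two diverge. Every name, data field and scoring rule here is a hypothetical stand-in, including the toy function that replaces the LLM-based preference simulation.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    cost: float                # objective metric (lower is better)
    harm_concentration: float  # share of disruption borne by one community

def objective_score(s: Scenario) -> float:
    # Purely technical efficiency: the cheapest scenario wins.
    return -s.cost

def simulated_stakeholder_score(s: Scenario) -> float:
    # Stand-in for the LLM that simulates stakeholder preferences:
    # penalise scenarios that concentrate harm on one community.
    return -s.harm_concentration

scenarios = [
    Scenario("cheapest_dispatch", cost=1.0, harm_concentration=0.8),
    Scenario("balanced_dispatch", cost=1.2, harm_concentration=0.3),
]

best_by_cost = max(scenarios, key=objective_score)
best_by_values = max(scenarios, key=simulated_stakeholder_score)
if best_by_cost is not best_by_values:
    # The kind of divergence the framework is meant to surface
    # before a system is deployed.
    print(f"Efficiency favours {best_by_cost.name}; "
          f"simulated stakeholders favour {best_by_values.name}.")
```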

Testing shows the method generates more relevant scenarios while reducing manual analysis. Findings highlight its potential to improve transparency and support more balanced decision-making before AI systems are deployed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Experts warn YouTube AI slop harms children and demand action

Fairplay and more than 200 experts have urged YouTube to address the spread of 'AI slop' targeting children. The letter, accompanied by a petition, was sent to Google chief executive Sundar Pichai and YouTube chief executive Neal Mohan.

The signatories state that AI-generated videos harm children’s development by distorting reality and overwhelming learning processes. They also warn that such content captures attention and is being recommended to young users, including infants and toddlers.

The letter cites findings that 40% of videos recommended after shows such as Cocomelon contained AI-generated content. It also states that 21% of Shorts recommendations included similar material, and that misleading science videos were shown to older children.

Fairplay and its partners propose measures, including labelling AI content and banning it from YouTube Kids. They also call for restrictions on recommendations to under-18s and for tools that allow parents to turn off such content.

The initiative was organised by Fairplay and supported by organisations and experts, including social psychologist Jonathan Haidt. The group says platforms must ensure content is safe and appropriate for children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot