Ofcom steps up child safety enforcement with Telegram and chat site investigations

The UK’s online safety regime has entered a more confrontational phase, with Ofcom opening new investigations into Telegram and two chat platforms over suspected failures to protect children from serious harm. The move signals a shift from broad compliance warnings to more direct enforcement against services deemed to pose acute risks under the Online Safety Act.

Ofcom said it is investigating Telegram to determine whether the platform is doing enough to prevent child sexual abuse material from being shared. Separate probes have also been opened into Teen Chat and Chat Avenue, where the regulator says there are concerns that chat functions may be facilitating grooming and other harms to children. According to Ofcom, the providers have not demonstrated sufficient safeguards for UK users despite earlier engagement.

The cases are part of a wider enforcement drive rather than isolated actions. Ofcom has already been pressing file-sharing and file-storage services over child sexual abuse risks, and says some platforms have since introduced automated detection tools, blocked access for UK users, or otherwise changed their systems in response to regulatory pressure. In other cases, investigations have been closed after providers took corrective steps.

That broader context matters. Since the first online safety duties became enforceable, Ofcom has been moving from rule-setting into operational enforcement, testing whether platforms are actually putting in place the systems and processes needed to reduce illegal harms.

In the child safety area, that increasingly means proactive risk management, technical detection measures, and design choices that make it harder for offenders to share abusive material or contact children in the first place.

Ofcom has also made clear that services available in the UK cannot treat these duties as optional. Under the Online Safety Act, companies can face significant financial penalties for failing to comply, and the regulator can ask courts to impose business disruption measures or restrict access where necessary. That gives the current investigations weight beyond the individual platforms involved.

The bigger significance of the latest action is that platform accountability is being judged less on stated policies and more on demonstrable safeguards. The Telegram case in particular shows that even large, globally used platforms are now exposed to direct scrutiny if UK regulators believe child safety risks are not being properly addressed.

Taken together, the investigations suggest that Ofcom is trying to establish a more interventionist model of online safety enforcement, one in which companies are expected to anticipate and reduce harm rather than respond only after it has spread.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Philippines presses Meta for faster action on online disinformation

The Philippine government is intensifying pressure on Meta to act more quickly to address harmful online disinformation, arguing that the company’s current enforcement approach is insufficient to counter rapidly spreading false content that can affect public order, economic confidence, and national security. The latest move comes in the form of a formal response from the Department of Information and Communications Technology, following an earlier joint request involving the Presidential Communications Office and the Department of Justice.

Officials acknowledged Meta’s willingness to engage and its existing moderation policies, but said broad descriptions of enforcement mechanisms fall short of what the situation requires. According to the DICT, the government is seeking clear commitments, faster intervention processes, and measurable outcomes rather than general assurances about existing platform rules.

The pressure campaign is tied to concerns that false and misleading online content can trigger real-world harm, especially during politically and economically sensitive periods. Government statements have linked the problem to panic-inducing disinformation that could affect fuel prices, economic stability, and public trust, and have warned that inadequate action from Meta could lead to legal and regulatory consequences.

The latest DICT response sharpens that message: beyond acknowledging Meta’s engagement, the agency argued that faster enforcement processes, concrete commitments, and measurable results are now needed. The government has tied that position to its wider ‘Kontra Fake News’ campaign, which it says is intended to protect access to accurate information while holding those who deliberately spread falsehoods accountable.

The dispute is also part of a broader institutional shift. The DICT, Presidential Communications Office, and Department of Justice have moved towards a more coordinated response to digital disinformation, including a memorandum of agreement aimed at a whole-of-government approach to false content and related threats such as deepfakes. That makes the Meta case more than a platform-specific complaint: it is becoming part of a wider governance and enforcement strategy.

In the meantime, Philippine officials have tried to draw a line between legitimate expression and harmful manipulation. The government says freedom of expression remains protected, but that protection does not extend to coordinated or deliberately harmful disinformation that can trigger panic or erode confidence in public institutions. That distinction is likely to become more important if talks with Meta fail and the government moves towards tougher intervention.

The broader significance of the case lies in what it says about platform governance. Rather than accepting general assurances about moderation systems, governments are increasingly demanding faster, more transparent, and more locally responsive enforcement from major technology companies. In the Philippine case, that pressure is now being expressed through a formal inter-agency effort that could test how far states are willing to go when platforms are seen as too slow to respond to politically and economically sensitive disinformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube expands AI deepfake detection tools for celebrities

YouTube has announced the expansion of its likeness detection technology to the entertainment industry, extending access beyond content creators to talent agencies, management companies and the individuals they represent.

The move is part of a broader effort by the platform to address the growing misuse of AI to generate misleading or unauthorised videos of public figures. By extending the tool to entertainment industry stakeholders, YouTube is signalling that AI-driven impersonation is no longer treated as a niche creator issue but as a broader identity and rights problem.

The system works in a way broadly comparable to Content ID, allowing eligible users to identify videos that use AI to replicate a person’s face or likeness. Once such content is detected, individuals can request its removal through YouTube’s existing privacy complaint process.
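YouTube has not published how the detection works internally, so the following is only a conceptual sketch of the general technique the description implies: comparing face embeddings extracted from uploaded video frames against an enrolled reference likeness and flagging close matches for review. The function names and the threshold value are illustrative assumptions, not YouTube’s API.

```python
# Conceptual sketch only: not YouTube's actual implementation.
# Embeddings are assumed to come from some off-the-shelf face-recognition
# model that maps a face image to a fixed-length vector.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_candidate_frames(reference: np.ndarray,
                          frame_embeddings: list[np.ndarray],
                          threshold: float = 0.85) -> list[int]:
    """Return indices of frames whose face embedding is close enough to the
    enrolled reference likeness to warrant review."""
    return [
        i for i, frame in enumerate(frame_embeddings)
        if cosine_similarity(reference, frame) >= threshold
    ]
```

In a real system, a flagged match would feed a human-review and takedown workflow, analogous to the privacy complaint process described above, rather than triggering removal automatically.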

The rollout has been developed with input from major industry players, including Creative Artists Agency, United Talent Agency, William Morris Endeavor, and Untitled Management. Those partnerships are intended to help YouTube refine how the system works in practice and ensure it reflects the needs of artists and rights holders dealing with synthetic media.

Importantly, access to the tool is not limited to people who actively run YouTube channels. Celebrities and public figures can use it even without a direct creator presence on the platform, extending its reach across a much broader part of the entertainment ecosystem.

The significance of the update lies in how platforms are beginning to treat AI impersonation as a governance issue rather than merely a content-moderation problem.

As synthetic media tools become easier to use and more convincing, technology companies are under growing pressure to provide faster and more credible mechanisms for detecting misuse, protecting identity rights, and limiting deceptive content.

YouTube’s latest move shows that platform responses are becoming more structured and rights-based, especially in sectors where a person’s likeness is closely tied to reputation, image, and commercial value. The bigger question now is whether such tools will prove effective enough to keep pace with the scale and speed of AI-generated impersonation online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea warns on AI fake news risks

The Korea Herald reports that South Korean Prime Minister Kim Min-seok has warned of the risks of AI-generated fake news ahead of an upcoming election. Authorities are urging greater vigilance as digital content becomes harder to verify.

According to the report, AI technologies are increasingly capable of producing realistic false information, including manipulated images and videos. This raises concerns about their potential impact on public opinion and trust.

The government has called for precautionary measures to limit the spread of misinformation and protect the integrity of democratic processes. This includes encouraging awareness and responsible use of AI tools.

The warning reflects broader concerns about the influence of AI-driven disinformation during election cycles in South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU monitoring highlights platform performance under revised hate speech code

The European Commission has published the first monitoring results under the revised Code of Conduct on Countering Illegal Hate Speech Online+, providing insight into how major platforms handle reported content.

The assessment combines independent monitoring with self-reported data from participating companies.

Findings indicate that most platforms reviewed a majority of notifications within 24 hours, in line with their commitments.

However, a significant share of reported cases was either disputed or classified as erroneous, with inaccuracies partly attributed to monitoring bodies’ misuse of reporting channels.

The monitoring exercise functions as a structured stress test within the framework of the Digital Services Act (DSA), assessing whether platforms meet minimum response thresholds and apply appropriate measures when illegal hate speech is identified under national and EU law.

The publication of these results aims to strengthen transparency and accountability, while informing improvements ahead of the next monitoring cycle.

The Code of Conduct on Countering Illegal Hate Speech Online+ now operates as part of the EU’s co-regulatory approach to platform governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU advances AI copyright safeguards through GPAI taskforce discussions

The European Commission has convened the second meeting of the Signatory Taskforce under the General-Purpose AI (GPAI) Code of Practice, focusing on copyright protection in AI systems.

The discussion brought together signatories to exchange early implementation practices and technical approaches.

Participants examined methods to reduce copyright risks in AI-generated outputs, highlighting measures applied across the model’s lifecycle, including data selection, training, and deployment.

Emphasis was placed on combining technical safeguards with organisational processes to improve transparency and effectiveness.

One approach presented involved training models on licensed content alongside attribution systems that identify similarities between generated outputs and source material. This approach aims to support fair remuneration and strengthen accountability within AI development.
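The taskforce did not publish technical details of that attribution system, so the sketch below only illustrates one simple way such similarity checks can work, using character n-gram overlap between a generated passage and licensed sources. The function names and example texts are assumptions for illustration; production systems would use far more robust fingerprinting or embedding methods.

```python
# Illustrative sketch only, not the system discussed by the taskforce.

def char_ngrams(text: str, n: int = 8) -> set[str]:
    """Overlapping character n-grams of normalised, lowercased text."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(max(len(t) - n + 1, 0))}

def overlap_score(generated: str, source: str, n: int = 8) -> float:
    """Fraction of the generated text's n-grams that also appear in the source."""
    g, s = char_ngrams(generated, n), char_ngrams(source, n)
    return len(g & s) / len(g) if g else 0.0

licensed_sources = {
    "source_a": "The quick brown fox jumps over the lazy dog near the riverbank.",
    "source_b": "A completely unrelated passage about maritime trade routes.",
}
generated_output = "the quick brown fox jumps over the lazy dog"

# Rank licensed sources by overlap; a high score could prompt attribution
# or a remuneration review by the rights holder.
ranking = sorted(
    ((overlap_score(generated_output, text), name)
     for name, text in licensed_sources.items()),
    reverse=True,
)
print(ranking)  # source_a ranks first with a high overlap score
```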

The meeting also addressed mechanisms for handling complaints from rights holders, with participants discussing procedures for accessible and timely responses.

The exchange forms part of ongoing EU efforts to refine governance standards for AI systems and copyright compliance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greece moves to restrict youth social media access with new digital age rules

Greece has announced new measures to protect minors online, introducing a national ‘digital age of majority’ that restricts access to social media for users under 15.

The policy forms part of a broader strategy addressing child safety and digital overuse, with implementation scheduled for January 2027.

The initiative places primary responsibility on platforms, requiring robust age-verification systems and periodic re-verification of existing accounts. Authorities will oversee compliance under the EU’s Digital Services Act framework, with penalties including fines and operational restrictions for violations.

The policy builds on earlier tools such as KidsWallet, an age-verification mechanism already deployed nationally.

Authorities in Greece argue that reliance on parental control alone is insufficient, citing increasing evidence linking excessive platform use to mental health risks, including anxiety, reduced sleep, and social isolation.

The proposal aligns with wider European discussions on youth protection, including efforts to establish a unified digital age threshold across member states.

Greece has also called for stronger EU-wide enforcement mechanisms, positioning the measure as part of a coordinated approach to safeguarding minors in digital environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

National Crime Agency to receive CSEA reports under UK Online Safety Act rules

UK regulations under the Online Safety Act 2023 are now in force, requiring certain regulated user-to-user services to register with the National Crime Agency and report detected and previously unreported child sexual exploitation and abuse (CSEA) content.

Under the Online Safety (CSEA Content Reporting by Regulated User-to-User Service Providers) Regulations 2026, providers subject to the reporting duty, and any third-party providers acting on their behalf, must register with the National Crime Agency through an online portal. They must also appoint an organisation administrator as a point of contact.

Reports submitted to the National Crime Agency must contain specified information, including details about the content, the time it was uploaded, relevant IP addresses, and user account data. The regulations also require providers to classify reports into three priority levels and submit them within the corresponding timeframes.
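As a rough illustration of how a provider’s internal tooling might model such a structured report, the sketch below captures the required categories of information and the three priority levels as a Python dataclass. Every field name, the priority labels, and the example values are hypothetical and do not reflect the NCA portal’s actual schema.

```python
# Hypothetical sketch only: field names, priority labels and values are
# illustrative, not the NCA reporting portal's actual schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Priority(Enum):
    # The regulations require three priority levels, each with its own
    # submission timeframe; the names here are placeholders.
    PRIORITY_1 = 1
    PRIORITY_2 = 2
    PRIORITY_3 = 3

@dataclass
class CSEAReport:
    report_reference: str          # reference number, retained for five years
    content_details: str           # details about the reported content
    uploaded_at: datetime          # time the content was uploaded
    ip_addresses: list[str]        # relevant IP addresses
    account_data: dict[str, str]   # user account data
    priority: Priority             # determines the submission deadline
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

example = CSEAReport(
    report_reference="REF-0001",
    content_details="placeholder description",
    uploaded_at=datetime(2026, 1, 10, 14, 30, tzinfo=timezone.utc),
    ip_addresses=["203.0.113.7"],
    account_data={"user_id": "u-12345"},
    priority=Priority.PRIORITY_1,
)
```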

Record-keeping duties are also set out in the regulations. Providers must retain the report reference number for five years and keep the associated content and user data for one year from the reporting date.

The rules form part of the reporting framework under the Online Safety Act 2023 for child sexual exploitation and abuse content on regulated user-to-user services in the UK. Non-compliance may result in a penalty of up to 10% of qualifying worldwide revenue or £18 million, whichever is greater.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots are reshaping classroom debates, raising concerns over homogenised discussion

Generative AI chatbots are becoming embedded in university learning at Yale, students and academics told CNN, not only for essays and homework but also for real-time seminar participation. Students described classmates uploading readings and PDFs into chatbots before class, and even typing a professor’s question into AI during discussion to produce an immediate response to repeat aloud.

While this can make contributions sound more polished and prepared, some students said seminar conversations increasingly stall or feel flatter, with fewer personal interpretations and less exploratory debate. One student, ‘Amanda’, said she has noticed many classmates arriving with slick talking points but then offering near-identical arguments and phrasing, making discussions feel less distinctive than in earlier years.

Students gave several reasons for leaning on AI. ‘Jessica’, a senior, said she uses it daily, particularly in an economics seminar where the professor cold-calls students, both to digest readings quickly and to help her translate ideas into cohesive sentences when she struggles to phrase her comments.

‘Sophia’, a junior, said some students appear to use AI to draft ‘scripts’ for what to say in class, driven by insecurity about gaps in their understanding. She believes this weakens creativity and the ability to make original connections, replacing genuine engagement with impressive-sounding language.

A Yale spokesperson said the university is aware students are experimenting with AI in the classroom and noted a wider faculty trend towards limiting or banning laptops, using print-based materials, and prioritising direct engagement and original thinking.

The article links these observations to a March paper in ‘Trends in Cognitive Sciences’, which argues that large language models can systematically homogenise human expression and thought across language, perspective and reasoning. The paper’s authors say LLMs predict statistically likely next words based on training data that overrepresents dominant languages and ideas, potentially narrowing the ‘conceptual space’ for how people write and argue.

They warn that models tend to reproduce ‘WEIRD’ viewpoints (Western, educated, industrialised, rich and democratic) even when prompted otherwise, which may make those styles seem more credible and socially correct while marginalising other perspectives.
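The next-word mechanism the authors describe can be illustrated with a toy sketch. This is not a real language model, just a hand-written probability table showing how always choosing the statistically most likely continuation collapses different users’ output onto the same phrasing.

```python
# Toy illustration of the homogenisation argument, not a real language model.
toy_next_word = {
    "the":    {"most": 0.6, "least": 0.2, "only": 0.2},
    "most":   {"likely": 0.7, "common": 0.3},
    "least":  {"likely": 1.0},
    "only":   {"answer": 1.0},
    "likely": {"answer": 0.8, "outcome": 0.2},
    "common": {"answer": 1.0},
}

def greedy_continue(start: str, max_words: int = 10) -> list[str]:
    """Always pick the highest-probability next word (greedy decoding)."""
    words = [start]
    while len(words) < max_words:
        options = toy_next_word.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))
    return words

# Every caller who starts from "the" gets the identical dominant phrasing.
print(greedy_continue("the"))  # ['the', 'most', 'likely', 'answer']
```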

Researchers also describe a compounding feedback loop. As AI-generated outputs circulate in human discourse and eventually re-enter training data, sameness can intensify over time. Co-author Morteza Dehghani said offloading reasoning to AI risks intellectual laziness and could have broader social consequences, from weakened innovation to greater susceptibility to persuasion.

Educators quoted described both benefits and risks, and outlined practical responses. Thomas Chatterton Williams, a visiting professor and Bard College fellow, said AI can ‘raise the floor’ of discussion for difficult material but may suppress eccentric or truly original ideas, leaving students without a voice of their own or a sense of authorship.

Former teacher Daniel Buck called AI a ‘supercharged SparkNotes’ that can answer virtually any question, making it harder to detect shortcuts and easier for students to bypass the ‘boring minutiae’ where learning takes hold.

He worries that this also undermines relationships with professors and sustained cognitive work. Yale philosophy professor Sun-Joo Shin said model improvements forced her to redesign her assessments: problem sets now earn completion credit and feedback, while in-class exams, oral tests and presentations carry more weight.

Williams said he has moved from written assignments to spontaneous, in-class handwritten work and uses oral exit exams. Students who avoid AI argued that they are still affected by classmates’ reliance on it because it reduces the value and variety of seminar time, while others urged a middle path in which AI is treated as a collaborator, used to critique ideas rather than as a substitute for generating them or doing the reasoning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Will AI turn novel-writing into a collaborative process?

The article argues that a novel’s value cannot be judged solely by the quality of its prose, because many readers respond to other elements such as premise, ideas and character. It points to Amazon reviews of ‘Shy Girl’, which holds a four-out-of-five-star rating based on hundreds of reviews, with many praising its hook despite awareness of ‘the controversy’ around it. One reviewer writes, ‘The premise sucked me in.’

The broader point is that plenty of novels are poorly written yet still succeed, because fiction, like music, is forgiving: a song may have an irresistible beat even with a predictable melody, and a book can move readers through suspense, beauty, realism, fantasy, or a protagonist they recognise in themselves.

From that premise, the piece asks whether fiction’s ‘layers’ (premise, plot, style and voice) must all come from a single person. It notes that collaborative creation is already normal in many fields, even if audiences rarely state their expectations explicitly: readers tend to assume a Booker Prize-winning novel is written entirely by the named author, while journalism is understood to be shaped by both writers and editors, and television and film are widely accepted as writers’ room and revision-heavy processes.

The article uses James Patterson as an example of industrial-scale collaboration in publishing, describing how he supplies collaborators with outlines and treatments and oversees many projects at once, an approach likened to a ‘novel factory’ that some argue distances him from ‘literary fiction’, yet may be the only practical way to sustain a decades-long series.

The author suggests AI will make such factories easier to create, citing a New York Times report on ‘Coral Hart’, a pseudonymous romance writer who uses AI to generate drafts in about 45 minutes, then revises them before self-publishing hundreds of books under dozens of names. Although not a bestseller, she reportedly earns ‘six figures’ and teaches others to do the same.

This points to a future in which authors act more like showrunners supervising AI-powered writers’ rooms, while raising a central risk: readers may not know who, or what, produced what they are reading, especially if AI use is not consistently disclosed despite platforms such as Amazon asking for it.

The piece ends by questioning whether AI necessarily implies high-volume, depersonalised production. Using a personal analogy from music-making, the author notes that technology can enable rapid output, but can also serve a more artistic purpose: helping a creator overcome technical limits and ‘realise a vision’.

Why does it matter?

The underlying argument is not that AI guarantees either shallow churn or genuine creativity, but that the most consequential issues may lie in intent, authorial expectations, and honest disclosure to readers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!