Parliamentarians call for stronger platform accountability and human rights protections at IGF 2025

At the 2025 Internet Governance Forum in Lillestrøm, Norway, parliamentarians from around the world gathered to share perspectives on how to regulate harmful online content without infringing on freedom of expression or undermining democratic values. The session, moderated by Sorina Teleanu, Diplo’s Director of Knowledge, highlighted the growing urgency for social media platforms to respond more swiftly and responsibly to harmful content, particularly AI-generated content that can lead to real-world consequences such as harassment, mental health issues, and even suicide.

Pakistan’s Anusha Rahman Ahmad Khan delivered a powerful appeal, pointing to cultural insensitivity and profit-driven resistance by platforms that often ignore urgent content removal requests. Representatives from Argentina, Nepal, Bulgaria, and South Africa echoed the need for effective legal frameworks that uphold safety and fundamental rights.

Argentina’s Franco Metaza, a member of the Mercosur Parliament, cited disturbing content that promotes eating disorders among young girls and detailed the tangible dangers of disinformation, including an assassination attempt linked to online hate. Nepal’s MP Yogesh Bhattarai advocated for regulation without authoritarian control, underscoring the importance of constitutional safeguards for speech.

Tsvetelina Penkova, a Bulgarian Member of the European Parliament, outlined the EU’s multifaceted digital laws, such as the Digital Services Act and the GDPR, which aim to protect users while grappling with implementation challenges across 27 diverse member states.

Youth engagement and digital literacy emerged as key themes, with several speakers emphasising that involving young people in policymaking leads to better, more inclusive policies. Panellists also stressed that education is essential for equipping users with the tools to navigate online spaces safely and critically.

Calls for multistakeholder cooperation rang throughout the session, with consensus on the need for collaboration between governments, tech companies, civil society, and international organisations. A thought-provoking proposal from a Congolese parliamentarian suggested that digital rights be recognised as a new, fourth generation of human rights—akin to civil, economic, and environmental rights already codified in international frameworks.

Other attendees welcomed the idea and agreed that without such recognition, the enforcement of digital protections would remain fragmented. The session concluded on a collaborative and urgent note, with calls for shared responsibility, joint strategies, and stronger international frameworks to create a safer, more just digital future.

MIT study links AI chatbot use to reduced brain activity and learning

A new preprint study from MIT suggests that using AI chatbots for writing tasks significantly reduces brain activity and impairs memory retention.

The research, led by Dr Nataliya Kosmyna at the MIT Media Lab, involved Boston-area students writing essays under three conditions: unaided, using a search engine, or assisted by OpenAI’s GPT-4o. Participants wore EEG headsets to monitor brain activity throughout.

Results indicated that those relying on AI exhibited the weakest neural connectivity, with up to 55% lower cognitive engagement than the unaided group. Those using search engines showed a moderate drop of up to 48%.

The researchers used Dynamic Directed Transfer Function (dDTF) to assess cognitive load and information flow across brain regions. They found that while the unaided group activated broad neural networks, AI users primarily engaged in procedural tasks with shallow encoding of information.
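
The study’s exact analysis pipeline is not reproduced in this summary, but for intuition: dDTF is a variant of the directed transfer function (DTF) that additionally weights connections by partial coherence to emphasise direct influences. A minimal NumPy sketch of plain DTF, the measure dDTF builds on, might look like the following; the channel count, model order, and frequencies are illustrative only.

```python
import numpy as np

def fit_mvar(x, p):
    """Least-squares fit of an order-p multivariate autoregressive (MVAR) model.
    x: array of shape (n_channels, n_samples). Returns A with shape (p, n, n),
    where A[k] holds the coefficients at lag k+1."""
    n, T = x.shape
    Y = x[:, p:]                                                  # targets, (n, T-p)
    Z = np.vstack([x[:, p - k - 1:T - k - 1] for k in range(p)])  # lagged regressors
    B = Y @ np.linalg.pinv(Z)                                     # (n, n*p)
    return B.reshape(n, p, n).transpose(1, 0, 2)

def dtf(A, freqs):
    """Directed transfer function. A: (p, n, n) MVAR coefficients;
    freqs: normalised frequencies in (0, 0.5). Returns (len(freqs), n, n),
    where entry [f, i, j] measures the influence of channel j on channel i."""
    p, n, _ = A.shape
    out = np.empty((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        Af = np.eye(n, dtype=complex)
        for k in range(p):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1))
        H = np.abs(np.linalg.inv(Af))                             # transfer magnitudes
        out[fi] = H / np.sqrt((H ** 2).sum(axis=1, keepdims=True))  # row-normalise
    return out

# Toy usage: 4 'channels' of random data, model order 5.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 2000))
D = dtf(fit_mvar(x, p=5), freqs=np.linspace(0.05, 0.45, 9))
print(D.shape)  # (9, 4, 4)
```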

Participants using GPT-4o also performed worst in recall and perceived ownership of their written work. In follow-up sessions, students previously reliant on AI struggled more when the tool was removed, suggesting diminished internal processing skills.

Meanwhile, those who used their own cognitive skills earlier showed improved performance when later given AI support.

The findings suggest that early AI use in education may hinder deeper learning and critical thinking. Researchers recommend that students first engage in self-driven learning before incorporating AI tools to enhance understanding.

Dr Kosmyna emphasised that while the results are preliminary and not yet peer-reviewed, the study highlights the need for careful consideration of AI’s cognitive impact.

MIT’s team now plans to explore similar effects in coding tasks, studying how AI tools like code generators influence brain function and learning outcomes.

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass its ad review systems to push harmful content, running ads with phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into so-called ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.
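
Meta has not published the details of its updated detection systems, so purely as an illustration of what screening ad copy against an expanded list of flagged terms and emojis could involve, here is a minimal sketch; the term list and emoji below are hypothetical examples, not Meta’s actual signals.

```python
import unicodedata

# Hypothetical examples only; Meta's actual flagged terms and emojis are not public.
FLAGGED_TERMS = {"see anyone naked", "undress photo", "remove clothes"}
FLAGGED_EMOJIS = {"\U0001F51E"}  # U+1F51E 'no one under eighteen', as an example

def normalise(text: str) -> str:
    # NFKC folds many lookalike characters; lowercasing defeats trivial case tricks.
    return unicodedata.normalize("NFKC", text).lower()

def flag_ad(ad_text: str) -> bool:
    """Return True if the ad copy contains any flagged term or emoji."""
    text = normalise(ad_text)
    if any(term in text for term in FLAGGED_TERMS):
        return True
    return any(ch in FLAGGED_EMOJIS for ch in ad_text)

print(flag_ad("See anyone NAKED with one tap!"))  # True
print(flag_ad("Summer sale on sunglasses"))       # False
```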

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.

UK government backs AI to help teachers and reduce admin

The UK government has unveiled new guidance for schools that promotes the use of AI to reduce teacher workloads and increase face-to-face time with pupils.

The Department for Education (DfE) says AI could take over time-consuming administrative tasks such as lesson planning, report writing, and email drafting—allowing educators to focus more on classroom teaching.

The guidance, aimed at schools and colleges in the UK, highlights how AI can assist with formative assessments like quizzes and low-stakes feedback, while stressing that teachers must verify outputs for accuracy and data safety.

It also recommends using only school-approved tools and limiting AI use to tasks that support, rather than replace, teaching expertise.

Education unions welcomed the move but said investment is needed to make it work. Leaders from the NAHT and ASCL praised AI’s potential to ease pressure on staff and help address recruitment issues, but warned that schools require proper infrastructure and training.

The government has pledged £1 million to support AI tool development for marking and feedback.

Education Secretary Bridget Phillipson said the plan will free teachers to deliver more personalised support, adding: ‘We’re putting cutting-edge AI tools into the hands of our brilliant teachers to enhance how our children learn and develop.’

China’s AI tools disabled for gaokao exam

As millions of high school students across China began the rigorous ‘gaokao’ college entrance exam, the country’s leading tech companies took unprecedented action by disabling AI features on their popular platforms.

Apps from Tencent, ByteDance, and Moonshot AI temporarily blocked functionalities like photo recognition and real-time question answering. This move aimed to prevent students from using AI chatbots to cheat during the critical national examination, which largely dictates university admissions in China.

This year, approximately 13.4 million students are participating in the ‘gaokao,’ a multi-day test that serves as a pivotal determinant for social mobility, particularly for those from rural or lower-income backgrounds.

The immense pressure associated with the exam has historically fuelled intense test preparation. Screenshots circulating on the Chinese social media app Rednote showed that AI chatbots, including Tencent’s YuanBao, ByteDance’s Doubao, and Moonshot AI’s Kimi, displayed messages indicating that exam-relevant features had been temporarily closed to ensure fairness.

China’s handling of the ‘gaokao’ highlights a balanced approach to AI: promoting AI education from a young age, with compulsory instruction in Beijing schools from this autumn, while firmly asserting that AI is for learning, not cheating. Regulators draw a clear line, reinforcing that AI should aid development but never compromise academic integrity.

This coordinated action by major tech firms reinforces the message that AI has no place in the examination hall, despite China’s broader push to cultivate an AI-literate generation.

Schools in the EU start adapting to the AI Act

European schools are taking their first concrete steps to integrate AI in line with the EU AI Act, with educators and experts urging a measured, strategic approach to compliance.

At a recent conference on AI in education, school leaders and policymakers explored how to align AI adoption with the incoming regulations.

With key provisions of the EU AI Act already in effect and full enforcement coming by August 2026, the pressure is on schools to ensure their use of AI is transparent, fair, and accountable. The law classifies AI tools by risk level, with those used to evaluate or monitor students subject to stricter oversight.

Matthew Wemyss, author of ‘AI in Education: An EU AI Act Guide,’ laid out a framework for compliance: assess current AI use, scrutinise the impact on students, and demand clear documentation from vendors.

Wemyss stressed that schools remain responsible as deployers, even when using third-party tools, and should appoint governance leads who understand both technical and ethical aspects.

Education consultant Philippa Wraithmell warned schools not to confuse action with strategy. She advocated starting small, prioritising staff confidence, and ensuring every tool aligns with learning goals, data safety, and teacher readiness.

Al Kingsley MBE emphasised the role of strong governance structures and parental transparency, urging school boards to improve their digital literacy to lead effectively.

The conference highlighted a unifying theme: meaningful AI integration in schools requires intentional leadership, community involvement, and long-term planning. With the right mindset, schools can use AI not just to automate, but to enhance learning outcomes responsibly.

Cyber attacks and ransomware rise globally in early 2025

Cyber attacks have surged by 47% globally in the first quarter of 2025, with organisations facing an average of 1,925 attacks each week.

Check Point Software, a cybersecurity firm, warns that attackers are growing more sophisticated and persistent, targeting critical sectors like healthcare, finance, and technology with increasing intensity.

Ransomware activity alone has soared by 126% compared with last year. Attackers are no longer just encrypting files; they now also threaten to leak sensitive data unless paid, a tactic known as double extortion.

Instead of operating as large, centralised gangs, modern ransomware groups are smaller and more agile, often coordinating through dark web forums, making them harder to trace.

The report also notes that cybercriminals are using AI to automate phishing attacks and scan systems for vulnerabilities, allowing them to strike with greater accuracy. Emerging markets remain particularly vulnerable, as they often lack advanced cybersecurity infrastructure.

Check Point urges companies to act decisively by adopting proactive security measures, investing in threat detection and employee training, and implementing real-time monitoring. Waiting for an attack instead of preparing in advance could leave organisations dangerously exposed.

TikTok bans ‘SkinnyTok’ hashtag worldwide

TikTok has globally banned the hashtag ‘SkinnyTok’ after pressure from the French government, which accused the platform of promoting harmful eating habits among young users. The decision comes as part of the platform’s broader effort to improve user safety, particularly around content linked to unhealthy weight loss practices.

The move was hailed as a win by France’s Digital Minister, Clara Chappaz, who led the charge and called it a ‘first collective victory.’ She, along with other top French digital and data protection officials, travelled to Dublin to engage directly with TikTok’s Trust and Safety team. Notably, no representatives from the European Commission were present during these discussions, raising questions about the EU’s role and influence in enforcing digital regulations.

While the European Commission had already opened a broader investigation into TikTok over child protection issues in early 2024 under the Digital Services Act (DSA), it has yet to comment on the SkinnyTok case specifically. Despite this, the Commission says it is still coordinating with French authorities on matters related to DSA enforcement.

The episode has spotlighted national governments’ power in pushing for online safety reforms and the uncertain role of the EU institutions in urgent digital policy actions.

The EU probes porn sites over DSA violations

The European Commission has launched a formal investigation into four major pornographic websites—Pornhub, Stripchat, XNXX, and XVideos—over concerns they may be violating the EU’s Digital Services Act (DSA). The probe centres on whether these platforms provide adequate protection for minors, notably regarding age verification.

According to the Commission, all four currently use simple click-through age checks, which are suspected of failing to meet DSA requirements. Authorities primarily focus on assessing whether the platforms have conducted proper risk assessments and implemented safeguards to protect children’s mental and physical health.

The European Commission emphasised that the investigation is a priority and will include collaboration with the EU member states to monitor smaller adult sites that fall under the 45-million-user threshold. In its statement, the Commission reiterated plans to roll out a standardised EU-wide age verification system by the end of next year.

While Pornhub, XVideos, and Stripchat were previously designated as Very Large Online Platforms (VLOPs), the Commission announced on Tuesday that Stripchat will no longer hold that status moving forward.

Cyber scams use a three-letter trap

Staying safe from cybercriminals can be surprisingly simple. While AI-powered scams grow more realistic, some signs are still painfully obvious.

If you spot the letters ‘.TOP’ in any message link, it’s best to stop reading and hit delete. That single clue is often enough to expose a scam in progress.

Most malicious texts pose as alerts about road tolls, deliveries or account issues, using trusted brand names to lure victims into clicking fake links.

The clearest red flag is the ‘.TOP’ top-level domain (TLD), which has become infamous for its role in phishing and scam operations. Although launched in 2014 for premium business use, its low cost and lax oversight quickly made it a favourite among cyber gangs, especially those based in China.

Today, nearly one-third of all .TOP domains are linked to cybercrime — far surpassing the criminal activity seen on mainstream domains like ‘.com’.

Despite repeated warnings and an unresolved compliance notice from ICANN, the body that oversees the domain name system, abuse linked to .TOP has only worsened.

Experts warn that it is highly unlikely any legitimate Western organisation would ever use a .TOP domain. If one appears in your messages, the safest option is to delete it without clicking.
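
In practice, that advice amounts to checking a link’s top-level domain before clicking. As a minimal sketch of such a check (the URLs below are made-up examples):

```python
from urllib.parse import urlparse

SUSPECT_TLDS = {"top"}  # extend with any other TLDs you choose to distrust

def is_suspect_link(url: str) -> bool:
    """Flag links whose top-level domain is on the distrust list."""
    host = urlparse(url).hostname or ""
    return host.rsplit(".", 1)[-1].lower() in SUSPECT_TLDS

print(is_suspect_link("https://toll-payment.example.top/pay"))  # True
print(is_suspect_link("https://example.com/help"))              # False
```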
