OHCHR seeks inputs on protecting human rights defenders in the digital age

The Office of the UN High Commissioner for Human Rights has issued a call for inputs to support a report on how new and emerging technologies are affecting human rights defenders, including women human rights defenders, in the digital age.

Issued under Human Rights Council resolution 58/23, the call sought submissions by 31 March 2026 and forms part of a wider effort to examine how digital technologies are reshaping the conditions under which defenders work, communicate, and stay safe.

According to the OHCHR, the report will look at how digital and emerging technologies affect the work, privacy, communications, and security of human rights defenders. The call notes that digital tools have transformed both how defenders operate and the threats they face, with consequences for their safety online and offline.

The questions set out in the call are organised into four broad areas: legislative and regulatory measures, digital communications, privacy restrictions, and corporate responses. The OHCHR specifically asks for information on online safety and cybercrime laws, internet shutdowns, platform attacks, content moderation, surveillance tools, biometric surveillance, encryption, AI-related risks, and how companies assess and respond to harms affecting human rights defenders on their services.

The OHCHR invited member states, civil society, industry, and other stakeholders to submit written inputs in English, French, or Spanish. Those submissions will inform online consultations in April and the preparation of a report to the Human Rights Council under resolution 58/23.

Why does it matter?

Because the call treats the digital environment facing human rights defenders as a governance issue in its own right, rather than only as a technical or security concern. It brings together surveillance, platform accountability, encryption, AI, online harassment, and internet shutdowns under a single human rights framework, while signalling that the OHCHR wants evidence not only on state conduct, but also on how private companies shape civic space in the digital age.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MIT develops AI framework to test ethics in autonomous systems

Researchers at MIT have introduced a new framework designed to evaluate the ethical impact of autonomous systems used in high-stakes environments. The approach aims to identify cases where AI-driven decisions may be technically efficient but fail to meet fairness expectations.

Growing reliance on AI in areas such as energy distribution and traffic management has raised concerns about unintended bias. Cost-optimised systems can still disadvantage communities, especially when ethical factors are hard to measure.

The framework, known as SEED-SET, separates objective performance metrics from subjective human values. A large language model is used to simulate stakeholder preferences, enabling the system to compare scenarios and detect where outcomes diverge from ethical expectations.

Testing shows the method generates more relevant scenarios while reducing manual analysis. Findings highlight its potential to improve transparency and support more balanced decision-making before AI systems are deployed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

A proposal that forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

A debate that is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Dutch court bans harmful Grok AI-generated images

A judge in Amsterdam has ordered AI chatbot Grok and platform X to stop generating and distributing explicit deepfake images. The ruling targets so-called ‘undressing’ content and illegal material involving minors.

The case was brought by Offlimits, which argued that safeguards were failing. The Dutch judges found sufficient evidence that harmful images could still be created despite existing restrictions.

The court imposed a penalty of €100,000 per day for violations, with a maximum of €10 million. Access to Grok on X must also be suspended if the system does not comply with the order.

The decision highlights growing legal pressure on AI platforms to control the misuse of generative tools. Regulators and courts are increasingly demanding stronger protections against online abuse and illegal content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU boosts fact-checking with €5 million disinformation resilience plan

The European Commission has committed €5 million to strengthen independent fact-checking networks, reinforcing efforts to counter disinformation across Europe. The initiative seeks to expand verification capacity in all EU languages while improving coordination among key stakeholders.

The programme introduces a comprehensive support system for fact-checkers, covering legal assistance, cybersecurity protection and psychological support.

It also establishes a centralised European repository of verified information, designed to enhance transparency and improve access to reliable content across the EU.

Led by the European Fact-Checking Standards Network, the project builds on existing frameworks such as the European Digital Media Observatory. The initiative forms part of the EU’s broader strategy to strengthen information integrity and safeguard democratic processes.

By reinforcing independent verification ecosystems, the programme reflects a policy-driven effort to address disinformation threats while supporting a more resilient and trustworthy digital environment across Europe.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

UNESCO initiative drives new digital platform governance frameworks in South Asia

South Asia is strengthening digital platform governance through a rights-based approach shaped by regional cooperation and international guidance.

A workshop led by UNESCO brought together policymakers, civil society and academics to align platform regulation with principles of freedom of expression and access to information.

The discussions focused on addressing governance gaps linked to misinformation, platform accountability and transparency. Participants examined national experiences and identified shared regulatory challenges, emphasising the need for coordinated regional responses instead of fragmented national measures.

An initiative that also validated regional toolkits designed for policymakers and civil society, translating global principles into practical guidance. These tools aim to support the implementation of governance frameworks that reflect local contexts while upholding international human rights standards.

The process builds on UNESCO’s Internet for Trust guidelines, reinforcing a human-centred model of digital governance. Continued collaboration across South Asia is expected to strengthen regulatory capacity and ensure that digital platforms operate with greater accountability and public trust.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshitification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Inshitification is a term coined by journalist Cory Doctorow that describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Stanford study warns about the risks of ‘sycophantic’ AI chatbots

A new study from Stanford University has raised concerns about the growing use of AI chatbots for personal advice, highlighting risks linked to a behaviour known as ‘sycophancy’, where systems validate users’ views instead of challenging them.

Researchers argue that such responses are not merely stylistic but have broader consequences for decision-making and social behaviour.

The analysis examined multiple leading models, including ChatGPT, Claude, and Gemini, and found that chatbot responses supported user perspectives far more often than human feedback.

In scenarios involving questionable or harmful actions, systems frequently endorsed behaviour that human evaluators would criticise, raising concerns about reliability in sensitive contexts such as relationships or ethical decisions.

Further experiments involving thousands of participants showed that users tend to prefer and trust sycophantic responses, increasing the likelihood of repeated use.

However, such interactions also appeared to reinforce self-centred thinking and reduce willingness to reconsider or apologise, suggesting a deeper impact on social judgement and interpersonal skills.

Researchers warn that users’ tendency to favour agreeable responses may create incentives for developers to prioritise engagement over accuracy or ethical balance.

The findings highlight the need for oversight and caution, with experts advising against relying on AI systems as substitutes for human guidance in complex personal situations.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Lille proposed as EU customs hub

France has submitted a bid to host the future EU Customs Authority in Lille, positioning itself at the centre of efforts to modernise the customs union. The proposal highlights national expertise and a leading role in shaping recent reforms.

Authorities argue the new body will strengthen internal market security, improve oversight of e-commerce and enhance cooperation between member states. France has supported initiatives to tackle illicit trade and improve risk management.

Officials also point to strong operational experience, including international customs networks and the use of AI tools to screen postal shipments. Such capabilities are presented as key to supporting the authority from its launch, but questions are raised concerning the use of AI and its biases.

Lille is promoted as a strategic logistics hub with strong transport links and access to skilled workers. Its location near major European trade routes is expected to support recruitment and coordination across the bloc.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Human creativity outperforms AI in new research findings

New research challenges assumptions about AI creativity, concluding that human imagination remains significantly more advanced than generative systems.

The study, published in Advanced Science, examined how AI models perform in visual creative tasks compared with both professional artists and non-artists.

Researchers developed an experimental method to assess creativity using abstract visual tasks, comparing human and AI outputs under different conditions.

Results showed a clear hierarchy, with visual artists achieving the highest creativity scores, followed by the general population, while AI models ranked lower, especially when operating without human guidance.

These findings indicate that even when trained on human-created material, AI struggles to replicate originality and imaginative depth.

The study argues that creativity should be analysed as a process rather than judged solely by final outputs. By examining stages from idea generation to execution, researchers found that AI systems rely heavily on human input throughout development and use.

Removing human assistance significantly reduced the quality and originality of AI-generated results, reinforcing the limitations of current generative models.

Overall, the research highlights a persistent gap between human and artificial creativity, suggesting that AI operates more as a tool guided by human direction than as an independent creative agent.

The findings contribute to broader debates in cognitive science and AI, emphasising the continued importance of human involvement in creative processes.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!