Spain expands digital oversight of online hate

Spain has launched a digital system designed to track hate speech and disinformation across social media platforms. Prime Minister Pedro Sánchez presented the tool in Madrid as part of a wider effort to improve oversight of online platforms.

The platform, known as HODIO, will analyse public posts and measure the spread and reach of hateful content. Authorities say the project will publish regular reports examining how platforms respond to harmful material.

The monitoring initiative is managed by Spain’s Observatory on Racism and Xenophobia. Officials say the data will help citizens understand the scale of online hate and assess how social networks address abusive content.

The initiative forms part of a broader Spanish digital policy agenda that also includes measures to protect minors online. Policymakers have discussed proposals such as restrictions on social media use by children under 16.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI and quantum computing reshape the global cybersecurity landscape

Cybersecurity risks are increasing as digital connectivity expands across governments, businesses and households.

According to Thales Group, the growing number of connected devices and digital services has significantly expanded the potential entry points for cyberattacks.

AI is reshaping the cybersecurity landscape by enabling attackers to identify vulnerabilities at unprecedented speed.

Security specialists increasingly describe the environment as a contest in which defensive systems must deploy AI to counter adversaries using similar technologies to exploit weaknesses in digital infrastructure.

Security concerns also extend beyond large institutions. Connected devices in homes, including smart cameras and speakers, often lack robust security protections, increasing exposure for individuals and networks.

Policymakers in Europe are responding through measures such as the Cyber Resilience Act, which will introduce mandatory security requirements for connected products sold in the EU.

Long-term risks are also emerging from advances in quantum computing.

Experts warn that powerful future machines could eventually break widely used encryption systems that currently protect communications, financial data and government networks, prompting organisations to adopt quantum-resistant security methods.


EU updates voluntary code for labelling AI-generated content

The European Commission has released a second draft of its voluntary Code of Practice on marking and labelling AI-generated content, designed to support compliance with transparency rules under the Artificial Intelligence Act.

Published on 5 March, the updated draft reflects feedback from hundreds of stakeholders, including industry groups, academic researchers, policymakers, and civil society organisations.

Revisions follow consultations held in early 2026 as part of the broader rollout of the EU’s AI regulatory framework.

The proposed code outlines technical approaches for identifying AI-generated material. A two-layered system using secure metadata and digital watermarking is recommended, with optional fingerprinting, logging, and verification to improve detection.
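The draft code itself defines the actual technical specifications. Purely as an illustrative sketch of how a two-layer scheme can work in principle (the field names, key handling and use of HMAC here are assumptions for demonstration, not the Code’s own mechanism): a first layer records machine-readable provenance metadata, and a second layer makes that metadata tamper-evident.

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # stand-in for a provider-held signing key

def attach_provenance(content: bytes) -> dict:
    """Layer 1: metadata binding a content hash to its stated origin."""
    meta = {
        "generator": "example-model",  # hypothetical generator name
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    # Layer 2: a tamper-evident tag over the metadata, standing in for
    # signed manifests or cryptographic watermarking.
    payload = json.dumps(meta, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"meta": meta, "tag": tag}

def verify(content: bytes, record: dict) -> bool:
    """Check both layers: content hash matches and metadata is untampered."""
    meta = record["meta"]
    hash_ok = meta["sha256"] == hashlib.sha256(content).hexdigest()
    payload = json.dumps(meta, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hash_ok and hmac.compare_digest(expected, record["tag"])

record = attach_provenance(b"AI-generated image bytes")
print(verify(b"AI-generated image bytes", record))  # True
print(verify(b"edited bytes", record))              # False: content altered
```

The point of the layering is that editing the content invalidates the hash, while editing the metadata invalidates the tag, so either form of tampering is detectable.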

Guidelines also address how platforms and publishers should label deepfakes and AI-generated text related to matters of public interest. Public feedback is open until 30 March, with the final code expected in early June before transparency rules take effect on 2 August 2026.


EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure that creative industries retain transparency and fair treatment as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.


MIT scientists develop AI system to improve robot planning

Researchers at MIT have developed a hybrid AI framework designed to improve how robots plan and perform complex visual tasks. The approach combines generative AI with classical planning software, allowing machines to analyse images, simulate actions, and generate reliable plans to reach a goal.

The system relies on two specialised vision-language models. One model analyses an image, describes the environment, and simulates possible actions, while a second model converts those simulations into a formal programming language used for planning.

Generated files are then processed by established planning software to produce a step-by-step strategy.
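The article does not name the models or the planner involved, so the following is a hypothetical, heavily stubbed sketch of the described pipeline: one model describes the scene, a second converts the description into formal planning steps, and an established planner (stubbed here) produces the final strategy. All function names and outputs are invented for illustration.

```python
# Stubbed sketch of the hybrid pipeline: image -> scene description ->
# formal planning steps. Real systems would call two vision-language
# models and feed generated files (e.g. PDDL) to a classical planner.

def describe_scene(image_id: str) -> str:
    """Stub for the first vision-language model: describe the environment
    and the goal visible in the image."""
    return f"block A on table; block B on A; goal: B on table (scene {image_id})"

def to_planning_steps(description: str) -> list[str]:
    """Stub for the second model plus planner: translate the description
    into a formal, step-by-step plan (PDDL-style action strings)."""
    if "B on A" in description and "goal: B on table" in description:
        return ["(unstack B A)", "(putdown B)"]
    return []

def plan(image_id: str) -> list[str]:
    """End-to-end: analyse the scene, then generate an executable plan."""
    return to_planning_steps(describe_scene(image_id))

print(plan("demo"))  # ['(unstack B A)', '(putdown B)']
```

The design point the sketch illustrates is the division of labour: the generative models handle open-ended perception and translation, while the deterministic planner guarantees that the emitted steps form a valid plan.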

Testing showed a significant improvement compared with existing techniques. The framework achieved an average success rate of about 70 percent, while many baseline methods reached roughly 30 percent.

Performance remained strong in unfamiliar scenarios, demonstrating the system’s ability to adapt to changing conditions.

The method could support applications such as robot navigation, autonomous driving, and multi-robot assembly systems. Continued development aims to handle more complex environments and reduce errors caused by AI model hallucinations.


UNESCO and African network advance AI in justice

AI is increasingly shaping Africa’s courts, from translation tools to legal search engines. As AI becomes more integrated, judicial actors face new questions around transparency, accountability, and human rights.

Thirty-one members of the African Network of Judicial Trainers (ANJT) gathered in Maputo for a regional workshop on AI, Justice, and Human Rights.

Participants included judicial directors, Supreme Court justices and senior magistrates who shared strategies for responsibly integrating AI into courts. UNESCO highlighted the importance of keeping justice human-centred amid technological change.

Discussions examined the benefits of AI-assisted translation and data analysis, alongside risks such as bias, discrimination, and opacity.

UNESCO introduced practical resources, including the Guidelines for the Use of AI in Courts and Tribunals and AI Essentials for Judges, to help judicial professionals implement ethical practices.

Workshop participants committed to adapting these materials into national training curricula, aiming to multiply knowledge across African judicial systems. ANJT and UNESCO emphasised that AI adoption should enhance efficiency without compromising fairness or the rule of law.


Dutch firms rank among EU leaders in sustainable ICT

Businesses in the Netherlands rank among the leading adopters of sustainable ICT practices in the EU, according to data from Statistics Netherlands and Eurostat. Around one quarter of companies use digital tools to reduce material consumption and improve resource efficiency.

The Netherlands ranked fourth in the EU for the use of technology to reduce waste and improve sustainability. Sectors including energy, water and waste management showed the strongest adoption of these ICT solutions.

Sustainable disposal of electronic equipment is also widespread among businesses in the Netherlands. About 9 in 10 companies recycle or return obsolete ICT equipment through approved e-waste collection systems.

Across the EU, more than three-quarters of businesses now dispose of outdated technology in environmentally responsible ways. Analysts say the progress highlights growing corporate efforts to integrate sustainable e-waste handling into digital operations.


Lawmakers urged to rethink rules on private messaging

Policymakers are being urged to rethink the regulation of private messaging platforms as disinformation campaigns increasingly spread through closed digital networks. Researchers say messaging apps now play a major role in political communication and crisis information flows.

Evidence from elections and conflicts highlights the challenge. During Brazil’s 2024 municipal elections, manipulated political content spread widely through WhatsApp groups, while authorities in Ukraine reported Telegram being used for both emergency communication and disinformation.

Experts argue that current laws often fail to address messaging platforms, such as Telegram, because regulation typically targets public social media spaces. Analysts say modern messaging services combine private chats with broadcast channels and other features that allow content to reach large audiences.

Policy specialists propose regulating specific platform features rather than entire services. Governments and technology companies are also encouraged to protect encryption while expanding transparency tools, media literacy programmes and user safeguards.


Writers publish protest book to challenge AI use of copyrighted works

Thousands of writers have joined a symbolic protest against AI companies by publishing a book that contains no traditional content.

The work, titled “Don’t Steal This Book,” lists only the names of roughly 10,000 contributors who oppose the use of their writing to train AI systems without their permission.

The initiative was organised by composer and campaigner Ed Newton-Rex, and the book was distributed during the London Book Fair. Contributors include prominent authors such as Kazuo Ishiguro, Philippa Gregory and Richard Osman, along with thousands of other writers and creative professionals.

Campaigners argue that generative AI systems are trained on vast collections of copyrighted material gathered from the internet without authorisation or compensation.

According to organisers, such practices allow AI tools to compete with the creators whose works were used to develop them.

The protest arrives as the UK Government prepares an economic assessment of potential copyright reforms related to AI. Proposals under discussion include allowing AI developers to use copyrighted material unless rights holders explicitly opt out.

Many writers and artists oppose that approach and demand stronger copyright protections. In parallel, the publishing sector is preparing a licensing initiative through Publishers’ Licensing Services to provide AI developers with legal access to books while ensuring authors receive compensation.

The dispute reflects a growing global debate over how copyright law should apply to generative AI systems that rely on massive datasets to develop chatbots and other digital tools.
