Dutch firms rank among EU leaders in sustainable ICT

Businesses in the Netherlands rank among the leading adopters of sustainable ICT practices in the EU, according to data from Statistics Netherlands and Eurostat. Around one quarter of companies use digital tools to reduce material consumption and improve resource efficiency.

The Netherlands ranked fourth in the EU for the use of technology to reduce waste and improve sustainability. Sectors including energy, water and waste management showed the strongest adoption of these ICT solutions.

Sustainable disposal of electronic equipment is also widespread among businesses in the Netherlands. About 9 in 10 companies recycle or return obsolete ICT equipment through approved e-waste collection systems.

Across the EU, more than three-quarters of businesses now dispose of outdated technology in environmentally responsible ways. Analysts say this progress reflects growing corporate efforts to integrate e-waste sustainability into digital operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lawmakers urged to rethink rules on private messaging

Policymakers are being urged to rethink the regulation of private messaging platforms as disinformation campaigns increasingly spread through closed digital networks. Researchers say messaging apps now play a major role in political communication and crisis information flows.

Evidence from elections and conflicts highlights the challenge. During Brazil’s 2024 municipal elections, manipulated political content spread widely through WhatsApp groups, while authorities in Ukraine reported Telegram being used for both emergency communication and disinformation.

Experts argue that current laws often fail to address messaging platforms, such as Telegram, because regulation typically targets public social media spaces. Analysts say modern messaging services combine private chats with broadcast channels and other features that allow content to reach large audiences.

Policy specialists propose regulating specific platform features rather than entire services. Governments and technology companies are also encouraged to protect encryption while expanding transparency tools, media literacy programmes and user safeguards.

Writers publish protest book to challenge AI use of copyrighted works

Thousands of writers have joined a symbolic protest against AI companies by publishing a book that contains no traditional content.

The work, titled “Don’t Steal This Book,” lists only the names of roughly 10,000 contributors who oppose the use of their writing to train AI systems without their permission.

The initiative was organised by composer and campaigner Ed Newton-Rex, and the book was distributed during the London Book Fair. Contributors include prominent authors such as Kazuo Ishiguro, Philippa Gregory and Richard Osman, along with thousands of other writers and creative professionals.

Campaigners argue that generative AI systems are trained on vast collections of copyrighted material gathered from the internet without authorisation or compensation.

According to organisers, such practices allow AI tools to compete with the creators whose works were used to develop them.

The protest arrives as the UK Government prepares an economic assessment of potential copyright reforms related to AI. Proposals under discussion include allowing AI developers to use copyrighted material unless rights holders explicitly opt out.

Many writers and artists oppose that approach and demand stronger copyright protections. In parallel, the publishing sector is preparing a licensing initiative through Publishers’ Licensing Services to provide AI developers with legal access to books while ensuring authors receive compensation.

The dispute reflects a growing global debate over how copyright law should apply to generative AI systems that rely on massive datasets to develop chatbots and other digital tools.

AI deepfake detection expands on YouTube for politicians and journalists

YouTube is expanding its likeness-detection technology designed to identify AI-generated deepfakes, extending access to a pilot group of government officials, political candidates, and journalists.

The tool allows participants to detect unauthorised AI-generated videos that simulate their faces and request removal if the content violates YouTube policies. The system builds on technology launched last year for around four million creators in the YouTube Partner Program.

Similar to YouTube’s Content ID system, which detects copyrighted material in uploaded videos, the likeness detection feature scans for AI-generated faces created with deepfake tools. Such technologies are increasingly used to spread misinformation or manipulate public perception by making prominent figures appear to say or do things they never did.
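Likeness-detection systems of this kind are commonly built on face embeddings compared by cosine similarity: a match above a threshold flags the video for review. The sketch below illustrates that general idea only; it is not YouTube's actual pipeline, and the vectors are toy stand-ins for the output of a real face-embedding model.

```python
# Illustrative sketch of embedding-based likeness matching (NOT YouTube's
# actual system). Embeddings here are random toy vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_likeness_match(reference: np.ndarray, candidate: np.ndarray,
                      threshold: float = 0.85) -> bool:
    # In a real system, a match above the threshold would queue the
    # video for human review rather than trigger automatic removal.
    return cosine_similarity(reference, candidate) >= threshold

rng = np.random.default_rng(0)
ref = rng.normal(size=128)                       # enrolled face embedding
same = ref + rng.normal(scale=0.1, size=128)     # slightly perturbed copy
other = rng.normal(size=128)                     # unrelated face

print(is_likeness_match(ref, same))   # True
print(is_likeness_match(ref, other))  # False
```

The threshold value is a hypothetical parameter; production systems tune it to trade false positives against missed impersonations.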

According to YouTube, the pilot programme aims to balance free expression with safeguards against AI impersonation, particularly in sensitive civic contexts.

‘This expansion is really about the integrity of the public conversation,’ said Leslie Miller, YouTube’s vice president of Government Affairs and Public Policy. ‘We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it.’

Removal requests will be assessed individually under YouTube’s privacy policy rules to determine whether the content constitutes parody or political critique, which remain protected forms of expression. Participants must verify their identity by uploading a selfie and a government-issued ID before accessing the tool. Once verified, they can review detected matches and submit removal requests for content they believe violates policy.

YouTube also said it supports the proposed NO FAKES Act in the United States, which aims to regulate the unauthorised use of an individual’s voice or visual likeness in AI-generated media. AI-generated videos on the platform are already labelled, though label placement varies depending on the topic’s sensitivity.

‘There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,’ said Amjad Hanif, YouTube’s vice president of Creator Products. The company said it plans to expand the technology over time to detect AI-generated voices and other intellectual property.

Sustainable AI discussed by UNESCO and Saudi leaders under Vision 2030

Leaders from government, academia, and industry gathered to emphasise that sustainable AI must shape efficient, inclusive, and environmentally responsible systems. The discussion focused on embedding sustainability, ethics, and human-centred principles throughout the AI lifecycle by adopting a sustainable-by-design approach.

The workshop built on Saudi Arabia’s expanding role in AI and digital transformation through the Saudi Data & AI Authority (SDAIA) and the National Strategy for Data and AI (NSDAI). These efforts are supported by significant investments in cloud infrastructure and data centres under the Kingdom’s Vision 2030 programme. Participants highlighted that sustainable AI must become a core principle in the development of emerging digital infrastructure and AI-powered services.

Abdulrahman Habib, Director of the International Centre for Artificial Intelligence Research and Ethics (ICAIRE), highlighted Saudi Arabia’s growing leadership in AI ethics and governance. With national AI Ethics Principles and a maturing regulatory landscape, the Kingdom is positioning itself as a global contributor to responsible AI dialogue, translating principles into operational governance systems rather than just policy statements.

Leona Verdadero of UNESCO highlighted two core concepts: Greening with AI, which uses AI to accelerate sustainability, and Greening of AI, which ensures systems are energy-efficient, ethical, and human-centred. She stressed that effective AI governance requires collaboration and industry leadership at every stage of development.

Per Ola Kristensson from the University of Cambridge urged action beyond rhetoric, stressing that true AI sustainability means developing technology to augment, not replace, human potential. Industry presentations reinforced that sustainable AI drives real-world progress: RECYCLEE optimises resource recovery, Remedium reduces environmental impacts in healthcare and infrastructure, and IDOM strengthens sustainability reporting through AI-enhanced design.

UNESCO supports Saudi Arabia’s drive for inclusive, ethical, and sustainable AI ecosystems, framing sustainable AI as critical in the global transition to green digital transformation.

Faisal Al Azib, Executive Director of the UN Global Compact Network Saudi Arabia, stated: ‘As the Kingdom advances its digital transformation under Vision 2030, we have a responsibility to ensure that innovation advances hand in hand with sustainability and human dignity.’

Al Azib concluded: ‘Sustainable AI is central to building resilient, future-ready businesses. Through partnerships with UNESCO and our local ecosystem, we aim to equip companies with the governance tools to embed responsible, energy-efficient, and human-centred AI into their core strategies.’

EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

The proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.

Dutch court increases pressure on Meta over non-profiling social media feeds

A court in the Netherlands has increased potential penalties against Meta after ruling that changes to social media timelines must be implemented urgently.

The decision raises the potential fine for non-compliance from €5 million to €10 million if required adjustments are not applied to Facebook and Instagram feeds.

Judges at the Amsterdam Court of Appeal said users must be able to select a timeline that does not rely on profiling-based recommendations.

The ruling follows a legal challenge from the digital rights organisation Bits of Freedom, which argued that users who switched away from algorithmic feeds were automatically returned to them after navigating the platform or reopening the application.

The court concluded that the automatic resetting mechanism represents a deceptive design practice known as a ‘dark pattern’.

Such practices are prohibited under the EU’s Digital Services Act, which requires large online platforms to provide greater transparency and user control over recommendation systems.

Judges acknowledged that Meta had already introduced several technical changes, although not all required measures were fully implemented. The company must ensure that the non-profiling timeline option remains active once selected, rather than reverting to algorithmic recommendations.
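The behaviour the court required, a feed choice that persists rather than silently reverting to the algorithmic default, can be sketched as a simple preference store. All names below are illustrative; this is not Meta's implementation.

```python
# Hedged sketch of a persistent feed preference, the behaviour the court
# required. A "dark pattern" design would fall back to the algorithmic
# default whenever the session resets; a compliant design does not.

class FeedPreferences:
    """Stores each user's explicit feed choice (illustrative only)."""

    def __init__(self) -> None:
        self._choice: dict[str, str] = {}  # user_id -> feed type

    def set_feed(self, user_id: str, feed: str) -> None:
        self._choice[user_id] = feed

    def get_feed(self, user_id: str) -> str:
        # Returns the stored choice on every page load or app restart,
        # defaulting to the algorithmic feed only if no choice was made.
        return self._choice.get(user_id, "algorithmic")

prefs = FeedPreferences()
prefs.set_feed("alice", "chronological")
# Simulate reopening the app: the explicit choice must survive.
print(prefs.get_feed("alice"))  # chronological
```

The court's objection was precisely to designs where `get_feed` would, in effect, ignore the stored choice after navigation or a restart.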

The dispute also highlights regulatory tensions within the European framework. Before turning to the courts, Bits of Freedom submitted a complaint to Coimisiún na Meán, the Irish regulator responsible for overseeing Meta’s compliance with EU rules.

According to the organisation, the lack of progress from regulators encouraged legal action in Dutch courts.

Meta indicated that the company intends to challenge the decision and pursue further legal proceedings. The case could become an important test of how the Digital Services Act is enforced against major online platforms across Europe.

Digital sovereignty in Asia moves beyond US versus non-US cloud debate

AI, cloud computing, and cross-border data flows have made questions about control and jurisdiction increasingly important for governments and businesses. In Asia, the debate around digital sovereignty often focuses on ‘US versus non-US cloud’ providers or data localisation.

Such simplifications miss the practical challenges organisations face when choosing hosting locations or training AI models while navigating diverse regulatory regimes.

At the same time, Asia’s digital economy is building its own regulatory foundations. In Vietnam and Indonesia, new rules such as Vietnam’s Decree 53 and Indonesia’s data protection framework show how governments are shaping data governance while still relying on global cloud and AI platforms. Most organisations across the region continue to operate using a mix of local, regional, and international providers.

Organisations must address key questions about data jurisdiction and workload mobility when risks change. They must also control who can access sensitive systems during incidents. Digital sovereignty is clearer when seen through three pillars: data sovereignty, technical sovereignty, and operational sovereignty.

Data sovereignty is about jurisdiction, not just data storage. As AI regulation expands, businesses need to know which authorities can access their data and how it may be used. Technical sovereignty is the ability to move or redesign systems as regulations or geopolitics shift. Multi-cloud and hybrid strategies help organisations remain adaptable.

Operational sovereignty focuses on governance and control. It addresses who can access systems, from where, and under what safeguards, thus linking sovereignty directly to cybersecurity and incident response.

For Asia-Pacific organisations, digital sovereignty should not be a simple procurement checklist. Instead, it should guide cloud and AI strategies from the start, ensuring legal clarity, technical flexibility, and operational trust as the digital landscape evolves.

Tycoon 2FA phishing service disrupted in global cybercrime crackdown

Authorities have disrupted the Tycoon 2FA phishing-as-a-service (PhaaS) platform, which sent millions of phishing emails to organisations worldwide.

The operation, led by Microsoft, Europol, and several industry partners, targeted the infrastructure behind Tycoon 2FA, which enabled large-scale phishing campaigns against more than 500,000 organisations each month.

By mid-2025, Tycoon 2FA accounted for 62% of the phishing attempts blocked by Microsoft, with over 30 million malicious emails blocked in a single month. Experts link the platform to around 96,000 global victims since 2023, including 55,000 Microsoft customers.

Researchers from Resecurity found cybercriminals widely used the platform to impersonate legitimate users and gain unauthorised access to accounts such as Microsoft 365, Outlook and Gmail. The service relied on techniques such as URL rotation using open redirect vulnerabilities and the misuse of Cloudflare Workers to hide malicious infrastructure.
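One common defensive heuristic against the open-redirect technique described above is to flag URLs whose query string embeds another absolute URL, a frequent marker of redirect abuse. The sketch below illustrates that heuristic only; it is not Resecurity's or Microsoft's actual detection logic, and real detectors combine many more signals.

```python
# Illustrative heuristic: flag URLs whose query parameters contain an
# embedded absolute URL, a common sign of open-redirect abuse.
from urllib.parse import urlparse, parse_qs

def has_embedded_redirect(url: str) -> bool:
    params = parse_qs(urlparse(url).query)
    for values in params.values():
        for value in values:
            if value.startswith(("http://", "https://")):
                return True
    return False

# Hypothetical domains for illustration.
print(has_embedded_redirect(
    "https://trusted.example/redirect?next=https://phish.example/login"))  # True
print(has_embedded_redirect("https://trusted.example/page?id=42"))         # False
```

Attackers routinely URL-encode or split the embedded destination to evade exactly this kind of check, which is why rotation through many redirect hosts made Tycoon 2FA hard to block.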

‘The author of Tycoon 2FA is actively updating the tool with regular kit updates,’ reads the report published by Resecurity. ‘What makes Tycoon 2FA so special is that the kit effectively combines multiple methods to deliver phishing at scale—from PDF attachments to QR codes.’

Authorities say taking the infrastructure offline disrupts a key pathway for account takeover attacks and prevents additional threats, such as data theft, ransomware, business email compromise, and financial fraud.

ChatGPT Edu launches at Clemson University for students and faculty

Clemson University has introduced ChatGPT Edu to its students, faculty, and staff, providing them with free access to the secure, institutionally managed version of the AI platform.

The rollout is part of Clemson’s partnership with OpenAI. It forms part of the university’s broader AI Initiative, which aims to develop a human-centred approach to AI across education, research, and operations.

University officials said the ChatGPT Edu environment will expand access to generative AI tools while ensuring institutional data remains protected and is not used to train external AI systems.

Members of the Clemson community who want to use the platform must request access through a ChatGPT Edu account request form. Once approved, accounts are automatically created, and users can sign in through Clemson’s single sign-on system.

Even if students or staff members already have a ChatGPT account linked to their Clemson email, they will still need to request access to ChatGPT Edu. After approval, they can merge their current account or download their chat history before creating a new one.

The university said the launch reflects its view that access to emerging technologies should be paired with clear guidance and responsible use. Users are advised to review Clemson’s updated AI guidelines before using the system.
