Data for Change: The PARIS21 Foundation

The Data for Change Foundation is a Geneva-based non-profit foundation with a global reach that promotes more, better, and more equitable data to enable evidence-based decisions and ensure no one is left behind. By fostering partnerships, empowering stakeholders, and leveraging technology, we aim to create a world where data enhances accountability and drives impactful, inclusive change. In close collaboration with PARIS21 (Partnership in Statistics for Development in the 21st Century), we strengthen national statistical systems (NSSs) to produce and use high-quality data for policymaking and for monitoring progress. Our joint work helps countries build resilient, inclusive statistical capacities that adapt to evolving global data needs while ensuring all voices are represented.

Digital activities

One of our flagship initiatives, the SME Data Boost, supports small and medium enterprises (SMEs) in Sub-Saharan Africa in building a robust data footprint. The project addresses the risk of SMEs being excluded from global trade due to missing or inadequate data, ensuring they can meet reporting requirements, remain competitive, and retain their place in global value chains. By equipping SMEs with essential tools and capabilities, the initiative fosters accountability and resilience within regional economies, helping them thrive in an increasingly data-driven world.

The Gender Data Lab (GDL) in Rwanda, launched in collaboration with the National Institute of Statistics of Rwanda (NISR), PARIS21, and the Gender Monitoring Office (GMO), is another example of our commitment to digital transformation. The GDL seeks to revolutionise the collection, analysis, and use of gender-disaggregated data to bridge existing gaps and inform evidence-based policymaking. By consolidating data sources and applying advanced data science techniques, the GDL equips policymakers with actionable insights to design gender-responsive policies and programmes. This initiative represents a critical step toward achieving accountability and progress on gender equality targets, such as the Sustainable Development Goals (SDGs) and Rwanda’s Vision 2050. It also underscores Rwanda’s leadership in ensuring that decisions at all levels are informed by accurate, accessible gender data. Through its work, the GDL fosters an environment where interventions are tailored to address the unique challenges faced by women and men, driving inclusive and sustainable development.

Both the SME Data Boost and the GDL exemplify how our digital activities leverage technology and innovation to enhance access to critical data. These initiatives not only strengthen statistical capacities but also promote equitable access to the tools and insights needed to ensure that no one is left behind in the digital age.

Digital policy issues

Artificial intelligence

AI regulation & AI acts in LMICs

  • Addressing regulatory challenges and governance of artificial intelligence (AI) in low- and middle-income countries (LMICs) to ensure ethical, transparent, and inclusive adoption of AI technologies.
  • Advocating for context-specific AI policies that balance innovation and accountability, ensuring that LMICs can leverage AI for development while safeguarding against risks such as bias, misinformation, and data privacy concerns.
  • Supporting the integration of AI governance frameworks that align with global AI acts and responsible AI principles, ensuring that developing regions are not left behind in digital policy discussions.

Sustainable development

Closing SDG data gaps through digital innovation

  • Promoting citizen-generated data (CGD) as a complementary source to official statistics, enabling more inclusive and granular data for monitoring SDG progress.
  • Advocating for the integration of digital and AI-driven tools into NSSs to improve data collection, processing, and utilisation in policymaking.
  • Addressing issues of data ownership, privacy, and trust in the use of digital tools for SDG monitoring, particularly in LMICs.

Digital tools

Citizen-generated data platforms (in planning)

In collaboration with partners in Africa, we are developing digital platforms that empower citizens to contribute real-time, localised data to close critical SDG data gaps.

SME Data Boost

A workstream designed to help SMEs in Sub-Saharan Africa establish a strong data footprint, enabling them to participate in global trade, meet reporting requirements, and stay competitive in digital economies.

Gender Data Lab (GDL)

An initiative that leverages advanced data science techniques to improve gender-disaggregated data collection and analysis, supporting evidence-based gender policies in Rwanda.

Social media channels

LinkedIn @Dataforchange:theparis21foundation

YouTube @DataForChange

Contact info@dataforchange.net

Google tests AI Mode on Search page

Google is experimenting with a redesigned version of its iconic Search homepage, swapping the familiar ‘I’m Feeling Lucky’ button for a new option called ‘AI Mode.’

The fresh feature, which began rolling out in limited tests earlier in May, is part of the company’s push to integrate more AI-driven capabilities into everyday search experiences.

According to a Google spokesperson, the change is currently being tested through the company’s experimental Labs platform, though there’s no guarantee it will become a permanent fixture.

The timing is notable, arriving just before Google I/O, the company’s annual developer conference where more AI-focused updates are expected.

Such changes to Google’s main Search page are rare, but the company may feel growing pressure to adapt. Just last week, an Apple executive revealed that Google searches on Safari had declined for the first time, linking the drop to the growing popularity of AI tools like ChatGPT.

By testing ‘AI Mode,’ Google appears to be responding to this shift, exploring ways to stay ahead in an increasingly AI-driven search landscape instead of sticking to its traditional layout.

Amazon to invest in Saudi AI Zone

Amazon has announced a new partnership with Humain, an AI company launched by Saudi Arabia’s Crown Prince Mohammed bin Salman, to invest over $5 billion in creating an ‘AI Zone’ in the kingdom.

The project will feature Amazon Web Services (AWS) infrastructure, including servers, networks, and training programmes, while Humain will develop AI tools using AWS and support Saudi startups with access to resources.

The move adds Amazon to a growing list of tech firms, including Nvidia and AMD, that are working with Humain, which is backed by Saudi Arabia’s Public Investment Fund (PIF). American companies like Google and Salesforce have also recently turned to the PIF for funding and AI collaborations.

Under a new initiative supported by US President Donald Trump, US tech firms can now pursue deals with Saudi-based partners more freely.

Instead of relying on foreign data centres, Saudi Arabia has required AI providers to store data locally, prompting companies like Google, Oracle, and now Amazon to expand operations within the region.

Amazon has already committed $5.3 billion to build an AWS region in Saudi Arabia by 2026, and says the AI Zone partnership is a separate, additional investment.

TikTok unveils AI video feature

TikTok has launched ‘AI Alive,’ its first image-to-video feature that allows users to transform static photos into animated short videos within TikTok Stories.

Accessible only through the Story Camera, the tool applies AI-driven movement and effects—like shifting skies, drifting clouds, or expressive animations—to bring photos to life.

Unlike text-to-image tools found on Instagram and Snapchat, TikTok’s latest feature takes visual storytelling further by enabling full video generation from single images. Although Snapchat plans to introduce a similar function, TikTok has moved ahead with this innovation.

All AI Alive videos will carry an AI-generated label and include C2PA metadata to ensure transparency, even when shared beyond the platform.

TikTok emphasises safety, noting that every AI Alive video undergoes several moderation checks before it appears to creators.

Uploaded photos, prompts, and generated videos are reviewed to prevent rule-breaking content. Users can report violations, and final safety reviews are conducted before public sharing.

Harvey adds Google and Anthropic AI

Harvey, the fast-growing legal AI startup backed early by the OpenAI Startup Fund, is now embracing foundation models from Google and Anthropic instead of relying solely on OpenAI’s.

In a recent blog post, the company said it would expand its AI model options after internal benchmarks showed that different tools excel at different legal tasks.

The shift marks a notable win for OpenAI’s competitors, even though Harvey insists it’s not abandoning OpenAI. Its in-house benchmark, BigLaw, revealed that several non-OpenAI models now outperform Harvey’s original system on specific legal functions.

For instance, Google’s Gemini 2.5 Pro performs well at legal drafting, while OpenAI’s o3 and Anthropic’s Claude 3.7 Sonnet are better suited for complex pre-trial work.

Instead of building its own models, Harvey now aims to fine-tune top-tier offerings from multiple vendors, including through Amazon’s cloud. The company also plans to launch a public legal benchmark leaderboard, combining expert legal reviews with technical metrics.

While OpenAI remains a close partner and investor, Harvey’s broader strategy signals growing competition in the race to serve the legal industry with AI.

Masked cybercrime groups rise as attacks escalate worldwide

Cybercrime is thriving like never before, with hackers launching attacks ranging from absurd ransomware demands of $1 trillion to large-scale theft of personal data. Despite efforts from Microsoft, Google and even the FBI, these threat actors continue to outpace defences.

A new report by Group-IB has analysed over 1,500 cybercrime investigations to uncover the most active and dangerous hacker groups operating today.

Rather than fading away after arrests or infighting, many cybercriminal gangs are re-emerging stronger than before.

Group-IB’s May 2025 report highlights a troubling increase in key attack types across 2024: phishing rose by 22%, ransomware leak sites by 10%, and APT (advanced persistent threat) attacks by 58%. The United States was the country most affected by ransomware activity.

At the top of the cybercriminal hierarchy now sits RansomHub, a ransomware-as-a-service group that emerged from the collapsed ALPHV group and has already overtaken long-established players in attack numbers.

Behind it is GoldFactory, which developed the first iOS banking trojan and exploited facial recognition data. Lazarus, a well-known North Korean state-linked group, also remains highly active under multiple aliases.

Meanwhile, politically driven hacktivist group NoName057(16) has been targeting European institutions using denial-of-service attacks.

With jurisdictional gaps allowing cybercriminals to flourish, these masked hackers remain a growing concern for global cybersecurity, especially as new threat actors emerge from the shadows instead of disappearing for good.

US scraps Biden AI chip export rule

The US Department of Commerce has scrapped the Biden administration’s Artificial Intelligence Diffusion Rule just days before it was due to come into force.

Introduced in January, the rule would have restricted the export of US-made AI chips to many countries for the first time, while reinforcing existing controls.

Rather than enforcing broad restrictions, the Department now intends to pursue direct negotiations with individual countries.

The original rule divided the world into three tiers, with countries like Japan and South Korea spared restrictions, middle-tier countries such as Mexico and Portugal facing new limits, and nations like China and Russia subject to tighter controls.

According to Bloomberg, a replacement rule is expected at a later date.

Instead of issuing immediate new regulations, officials released industry guidance warning companies against using Huawei’s Ascend AI chips and highlighting the risks of allowing US chips to be used to train AI models in China.

Under Secretary of Commerce Jeffrey Kessler criticised the Biden-era policy, promising a ‘bold, inclusive’ AI strategy that works with allies while limiting access for adversaries.

EU prolongs sanctions for cyberattackers until 2026

The Council of the EU has extended its sanctions against cyberattacks until May 18, 2026, and prolonged the legal framework for enforcing these measures until 2028. The sanctions target individuals and institutions involved in cyberattacks that pose a significant threat to the EU and its member states.

The extended measures will allow the EU to impose restrictions on those responsible for cyberattacks, including freezing assets and blocking access to financial resources.

These actions may also apply to attacks against third countries or international organisations, if necessary for EU foreign and security policy objectives.

At present, sanctions are in place against 17 individuals and four institutions. The EU’s decision highlights its ongoing commitment to safeguarding its digital infrastructure and maintaining its foreign policy goals through legal actions against cyber threats.

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training methods transform content enough to justify legal protection, and whether they harm creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose, the nature of the work, the amount used, and the impact on the original market.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office days before the report’s release.

Meanwhile, creators continue urging the government not to permit fair use in AI training, arguing it threatens the value of original work. The debate is now expected to unfold further in courtrooms instead of regulatory offices.

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after AI-generated content falsely depicted her endorsing Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers despite her refusal to participate in a project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Instead of waiting for further harm, lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the NO FAKES Act seek to hold tech platforms accountable, possibly fining them thousands of dollars per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.
