OpenAI launches AI safety hub

OpenAI has launched a public online hub to share internal safety evaluations of its AI models, aiming to increase transparency around harmful content, jailbreaks, and hallucination risks. The hub will be updated after major model changes, allowing the public to track progress in safety and reliability over time.

The move follows growing criticism about the company’s testing methods, especially after inappropriate ChatGPT responses surfaced in late 2023. Instead of waiting for backlash, OpenAI is now introducing an optional alpha testing phase, letting users provide feedback before wider model releases.

The hub also marks a departure from the company’s earlier stance on secrecy. In 2019, OpenAI withheld GPT-2 over misuse concerns. Since then, it has shifted towards transparency by forming safety-focused teams and responding to calls for open safety metrics.

OpenAI’s approach appears timely, as several countries are building AI Safety Institutes to evaluate models before launch. Instead of relying on private sector efforts alone, the global landscape now reflects a multi-stakeholder push to create stronger safety standards and governance for advanced AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok adds AI tool to animate photos with realistic effects

TikTok has launched a new feature called AI Alive, allowing users to turn still images into dynamic, short videos. Instead of needing advanced editing skills, creators can now use AI to generate movement and effects with a few taps.

By accessing the Story Camera and selecting a static photo, users can simply type how they want the image to change — such as making the subject smile, dance, or tilt forward. AI Alive then animates the photo, using creative effects to produce a more engaging story.

TikTok says its moderation systems review the original image, the AI prompt, and the final video before it’s shown to the user. A second check occurs before a post is shared publicly, and every video made with AI Alive will include an ‘AI-generated’ label and C2PA metadata to ensure transparency.

The feature stands out as one of the first built-in AI image-to-video tools on a major platform. Snapchat and Instagram already offer AI image generation from text, and Snapchat is reportedly developing a similar image-to-video feature.

Meanwhile, TikTok is also said to be working on adding support for sending photos and voice messages via direct message — something rival apps have long supported.

NatWest hit by 100 million cyber attacks every month

NatWest is defending itself against an average of 100 million cyber attacks each month, according to the bank’s head of cybersecurity.

Speaking to Holyrood’s Criminal Justice Committee, Chris Ulliott outlined the ‘staggering’ scale of digital threats targeting the bank’s systems. Around a third of all incoming emails are blocked before they reach staff because they are suspected of being the opening stage of an attack.

Instead of relying on basic filters, NatWest analyses every email for malicious content and has a cybersecurity team of hundreds, supported by a multi-million-pound budget.

Mr Ulliott also warned of the growing use of AI by cyber criminals to make scams more convincing—such as altering their appearance during video calls to build trust with victims.

Police Scotland reported that cybercrime has more than doubled since 2020, with recorded incidents rising from 7,710 that year to 18,280 in 2024. Officials highlighted the threat posed by groups like Scattered Spider, believed to consist of young hackers sharing techniques online.

MSP Rona Mackay called the figures ‘absolutely staggering,’ while Ben Macpherson said he had even been impersonated by fraudsters.

Law enforcement agencies, including the FBI, are now working together to tackle online crime. Meanwhile, Age Scotland warned that many older people lack confidence online, making them especially vulnerable to scams that can lead to financial ruin and emotional distress.

Seattle startup ElastixAI raises $16 million for AI inference tech

A stealthy new AI startup in Seattle, ElastixAI, has raised $16 million to build technology that aims to reduce the cost and complexity of running large language models.

Rather than focusing on training, the company is developing an AI inference platform to optimise how these models operate, whether on cloud servers or edge devices. The funding round is led by Bellevue-based venture capital firm FUSE, with support from several others.

ElastixAI is led by CEO Mohammad Rastegari, formerly CTO of Xnor, a startup acquired by Apple in 2020. He co-founded the company with Saman Naderiparizi, also ex-Apple and Xnor, and Mahyar Najibi, who worked at both Apple and Waymo.

The team’s background in AI hardware and software gives them an edge in tackling inference, the stage at which a trained model generates responses to queries.

Instead of building a one-size-fits-all solution, the startup’s platform is designed for flexibility, allowing customers to fine-tune infrastructure to specific needs. ‘We saw a gap in delivering scalable and low-cost inference,’ said Rastegari.

The company remains in stealth but says its platform could serve both hyperscalers and enterprises looking to integrate AI into everyday operations.

With other players like Nvidia and Fireworks.ai competing in the inference space, ElastixAI may even count some of them as future customers.

Rastegari and Naderiparizi are also affiliate assistant professors at the University of Washington, and their startup reflects Seattle’s growing reputation as a hub for advanced AI development — a trend Apple has helped shape with several acquisitions in the region.

Valve denies Steam data breach

Valve has confirmed that a cache of old Steam two-factor authentication codes and phone numbers, recently circulated by a hacker known as ‘Machine1337’, is indeed real, but insists it did not suffer a data breach.

Instead of pointing to its own systems, Valve explained that the leak involves outdated SMS messages, which are typically sent unencrypted and routed through multiple providers. These codes, once valid for only 15 minutes, were not linked to specific Steam accounts, passwords, or payment information.

The leaked data sparked early speculation that third-party messaging provider Twilio was the source of the breach, especially after its name appeared in the dataset. However, both Valve and Twilio denied any direct involvement, with Valve stating it does not even use Twilio’s services.

The true origin of the breach remains uncertain, and Valve acknowledged that tracing it may be difficult, as SMS messages often pass through several intermediaries before reaching users.

While the leaked information may not immediately endanger Steam accounts, Valve advised users to remain cautious. Phone numbers, when combined with other data, could still be used for phishing attacks.

Instead of relying on SMS for security, users are encouraged to activate the Steam Mobile Authenticator, which offers a more secure alternative for account verification.

Despite the uncertainty surrounding the source of the breach, Valve reassured users there’s no need to change passwords or phone numbers. Still, it urged vigilance, recommending that users routinely review their security settings and remain wary of any unsolicited account notifications.

Hackers use fake PayPal email to seize bank access

A man from Virginia fell victim to a sophisticated PayPal scam that allowed hackers to gain remote control of his computer and access his bank accounts.

After receiving a fake email about a laptop purchase, he called the number listed in the message, believing it to be legitimate. The person on the other end instructed him to enter a code into his browser, which unknowingly installed a program giving the scammer full access to his system.

Files were scanned, and money was transferred between his accounts—all while he was urged to stay on the line and visit the bank, without informing anyone.

The scam, known as a remote access attack, starts with a convincing email that appears to come from a trusted source. Instead of fixing any problem, the real aim is to deceive victims into granting hackers full control.

Once inside, scammers can steal personal data, access bank accounts, and install malware that remains even after the immediate threat ends. These attacks often unfold in minutes, using fear and urgency to manipulate targets into acting quickly and irrationally.

Quick action helped limit the damage in this case. The victim shut down his computer, contacted his bank and changed his passwords—steps that likely prevented more extensive losses. However, many people aren’t as fortunate.

Experts warn that scammers increasingly rely on psychological tricks instead of just technical ones, isolating their victims and urging secrecy during the attack.

To avoid falling for similar scams, it’s safer to verify emails by using official websites instead of clicking any embedded links or calling suspicious numbers.

Remote control should never be granted to unsolicited support calls, and all devices should have up-to-date antivirus protection and multifactor authentication enabled. Online safety now depends just as much on caution and awareness as it does on technology.

DeepMind unveils AlphaEvolve for scientific breakthroughs

Google DeepMind has unveiled AlphaEvolve, a new AI system designed to help solve complex scientific and mathematical problems by improving how algorithms are developed.

Rather than acting like a standard chatbot, AlphaEvolve blends large language models from the Gemini family with an evolutionary approach, enabling it to generate, assess, and refine multiple solutions at once.

Instead of relying on a single output, AlphaEvolve allows researchers to submit a problem and potential directions. The system then uses both Gemini Flash and Gemini Pro to create various solutions, which are automatically evaluated.

The best results are selected and enhanced through an iterative process, improving accuracy and reducing hallucinations—a common issue with AI-generated content.
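The generate, evaluate, and select cycle described above can be sketched as a generic evolutionary loop. The snippet below is illustrative only: the `propose` and `evaluate` callables are placeholders standing in for the Gemini-based proposal step and DeepMind’s automated evaluators, not AlphaEvolve’s actual machinery.

```python
def evolve(seeds, propose, evaluate, generations=20, variants=8, keep=3):
    """Generic evolutionary-search loop: propose variants of the current
    best solutions, score them automatically, keep the top performers,
    and iterate until the budget of generations is exhausted."""
    pool = list(seeds)
    for _ in range(generations):
        # Each surviving solution spawns several candidate variants.
        candidates = pool + [propose(p) for p in pool for _ in range(variants)]
        # Automatic evaluation ranks candidates; only the best survive.
        candidates.sort(key=evaluate, reverse=True)
        pool = candidates[:keep]
    return pool[0]
```

With a toy objective such as `evaluate = lambda x: -abs(x - 10)` and a trivial proposer `lambda x: x + 1`, the loop climbs from a seed of 0 to the optimum 10, which is the same select-and-refine dynamic at miniature scale.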

Unlike earlier DeepMind tools such as AlphaFold, which focused on narrow domains, AlphaEvolve is a general-purpose AI for coding and algorithmic tasks.

It has already shown its value by optimising Google’s own Borg data centre management system, delivering a 0.7% efficiency gain—significant given Google’s global scale.

The AI also devised a new method for multiplying complex matrices, outperforming a decades-old technique and even beating DeepMind’s specialised AlphaTensor model.

AlphaEvolve has also contributed to improvements in Google’s hardware design by optimising Verilog code for upcoming Tensor chips.

Though not publicly available yet due to its complexity, AlphaEvolve’s evaluation-based framework could eventually be adapted for smaller AI tools used by researchers elsewhere.

Data for Change: The PARIS21 Foundation

The Data for Change Foundation is a Geneva-based non-profit foundation with global ties to promote more, better, and equal data to enable evidence-based decisions and ensure no one is left behind. By fostering partnerships, empowering stakeholders, and leveraging technology, we aim to create a world where data enhances accountability and drives impactful, inclusive change. In close collaboration with PARIS21 (Partnership in Statistics for Development in the 21st Century), we strengthen national statistical systems (NSSs) to produce and use high-quality data for policymaking and monitoring progress. Our joint work helps countries build resilient, inclusive statistical capacities that adapt to evolving global data needs while ensuring all voices are represented.

Digital activities

One of our flagship initiatives, the SME Data Boost, supports small and medium enterprises (SMEs) in Sub-Saharan Africa to build a robust data footprint. This project addresses the risk of SMEs being excluded from global trade due to missing or inadequate data, ensuring they can meet reporting requirements, remain competitive, and retain their place in global value chains. By equipping SMEs with essential tools and capabilities, the initiative fosters accountability and resilience within regional economies, helping them thrive in an increasingly data-driven world.

The Gender Data Lab (GDL) in Rwanda, launched in collaboration with the National Institute of Statistics of Rwanda (NISR), PARIS21, and the Gender Monitoring Office (GMO), is another example of our commitment to digital transformation. The GDL seeks to revolutionise the collection, analysis, and use of gender-disaggregated data to bridge existing gaps and inform evidence-based policymaking. By consolidating data sources and applying advanced data science techniques, the GDL equips policymakers with actionable insights to design gender-responsive policies and programmes. This initiative represents a critical step toward achieving accountability and progress on gender equality targets, such as the Sustainable Development Goals (SDGs) and Rwanda’s Vision 2050. It also underscores Rwanda’s leadership in ensuring that accurate, accessible gender data informs decisions at all levels. Through its work, the GDL fosters an environment where interventions are tailored to address the unique challenges faced by women and men, driving inclusive and sustainable development.

Both the SME Data Boost and the GDL exemplify how our digital activities leverage technology and innovation to enhance access to critical data. These initiatives not only strengthen statistical capacities but also promote equitable access to the tools and insights needed to ensure that no one is left behind in the digital age.

Digital policy issues

Artificial intelligence

AI regulation & AI acts in LMICs

  • Addressing regulatory challenges and governance of artificial intelligence (AI) in low- and middle-income countries (LMICs) to ensure ethical, transparent, and inclusive adoption of AI technologies.
  • Advocating for context-specific AI policies that balance innovation and accountability, ensuring that LMICs can leverage AI for development while safeguarding against risks such as bias, misinformation, and data privacy concerns.
  • Supporting the integration of AI governance frameworks that align with global AI acts and responsible AI principles, ensuring that developing regions are not left behind in digital policy discussions.

Sustainable development

Closing SDG data gaps through digital innovation

  • Promoting citizen-generated data (CGD) as a complementary source to official statistics, enabling more inclusive and granular data for monitoring SDG progress.
  • Advocating for the integration of digital and AI-driven tools into NSSs to improve data collection, processing, and utilisation in policymaking.
  • Addressing issues of data ownership, privacy, and trust in the use of digital tools for SDG monitoring, particularly in LMICs.

Digital tools

Citizen-generated data platforms (in planning)

In collaboration with partners in Africa, we are developing digital platforms that empower citizens to contribute real-time, localised data to close critical SDG data gaps.

SME Data Boost

A workstream designed to help SMEs in Sub-Saharan Africa establish a strong data footprint, enabling them to participate in global trade, meet reporting requirements, and stay competitive in digital economies.

Gender Data Lab (GDL)

An initiative that leverages advanced data science techniques to improve gender-disaggregated data collection and analysis, supporting evidence-based gender policies in Rwanda.

Social media channels

LinkedIn @Dataforchange:theparis21foundation

YouTube @DataForChange

Contact info@dataforchange.net

Google tests AI mode on Search page

Google is experimenting with a redesigned version of its iconic Search homepage, swapping the familiar ‘I’m Feeling Lucky’ button for a new option called ‘AI Mode.’

The fresh feature, which began rolling out in limited tests earlier in May, is part of the company’s push to integrate more AI-driven capabilities into everyday search experiences.

According to a Google spokesperson, the change is currently being tested through the company’s experimental Labs platform, though there’s no guarantee it will become a permanent fixture.

The timing is notable, arriving just before Google I/O, the company’s annual developer conference where more AI-focused updates are expected.

Such changes to Google’s main Search page are rare, but the company may feel growing pressure to adapt. Just last week, an Apple executive revealed that Google searches on Safari had declined for the first time, linking the drop to the growing popularity of AI tools like ChatGPT.

By testing ‘AI Mode,’ Google appears to be responding to this shift, exploring ways to stay ahead in an increasingly AI-driven search landscape instead of sticking to its traditional layout.

Amazon to invest in Saudi AI Zone

Amazon has announced a new partnership with Humain, an AI company launched by Saudi Arabia’s Crown Prince Mohammed bin Salman, to invest over $5 billion in creating an ‘AI Zone’ in the kingdom.

The project will feature Amazon Web Services (AWS) infrastructure, including servers, networks, and training programmes, while Humain will develop AI tools using AWS and support Saudi startups with access to resources.

A move like this adds Amazon to a growing list of tech firms—such as Nvidia and AMD—that are working with Humain, which is backed by Saudi Arabia’s Public Investment Fund. American companies like Google and Salesforce have also recently turned to the PIF for funding and AI collaborations.

Under a new initiative supported by US President Donald Trump, US tech firms can now pursue deals with Saudi-based partners more freely.

Instead of relying on foreign data centres, Saudi Arabia has required AI providers to store data locally, prompting companies like Google, Oracle, and now Amazon to expand operations within the region.

Amazon has already committed $5.3 billion to build an AWS region in Saudi Arabia by 2026, and says the AI Zone partnership is a separate, additional investment.
