Secret passwords could fight deepfake scams

As AI-generated images grow increasingly lifelike, a cyber security expert has warned that families should create secret passwords to guard against deepfake scams.

Cody Barrow, chief executive of EclecticIQ and a former US government adviser, says AI is making it far easier for criminals to impersonate others using fabricated videos or images.

Mr Barrow and his wife now use a private code to confirm each other’s identity if either receives a suspicious message or video.

He believes this precaution, simple enough for anyone regardless of age or digital skills, could soon become essential. ‘It may sound dramatic here in May 2025,’ he said, ‘but I’m quite confident that in a few years, if not months, people will say: I should have done that.’
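The precaution Barrow describes is a spoken passphrase, but the same idea underlies digital authentication: proving you know a shared secret without revealing it. A minimal sketch of such a challenge-response check, using Python's standard `hmac` module (the names and the secret below are purely illustrative, not anything Barrow proposes):

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret, agreed in person and never sent over the wire.
SHARED_SECRET = b"family passphrase chosen offline"

def make_challenge() -> bytes:
    """Whoever receives a suspicious call or video sends a fresh random challenge."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes = SHARED_SECRET) -> str:
    """The caller proves knowledge of the secret by keying an HMAC over the challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """Constant-time comparison against the expected response."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
genuine = verify(challenge, respond(challenge))                  # True
impostor = verify(challenge, respond(challenge, b"wrong guess")) # False
```

Because each challenge is random, a deepfaked clip recorded earlier cannot replay a valid answer, which is the same property that makes a never-written-down family code word effective.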

The warning comes the same week Google launched Veo 3, its AI video generator capable of producing hyper-realistic footage and lifelike dialogue. Its public release has raised concerns about how easily deepfakes could be misused for scams or manipulation.

Meanwhile, President Trump signed the ‘Take It Down Act’ into law, making the creation of deepfake pornography a criminal offence. The bipartisan measure will see prison terms for anyone producing or uploading such content, with First Lady Melania Trump stating it will ‘prioritise people over politics’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Texas considers statewide social media ban for minors

Texas is considering a bill that would ban social media use for anyone under 18. The proposal, which recently advanced past the state Senate committee, is expected to be voted on before the legislative session ends June 2.

If passed, the bill would require platforms to verify the age of all users and allow parents to delete their child’s account. Platforms would have 10 days to comply or face penalties from the state attorney general.

This follows similar efforts in other states. Florida recently enacted a law banning social media use for children under 14 and requiring parental consent for those aged 14 to 15. The Texas bill, however, proposes broader restrictions.

At the federal level, a Senate bill introduced in 2024 aims to bar children under 13 from using social media. While it remains stalled in committee, comments from Senators Brian Schatz and Ted Cruz suggest a renewed push may be underway.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telegram founder Durov to address Oslo Freedom Forum remotely amid legal dispute

Telegram founder Pavel Durov will deliver a livestreamed keynote at the Oslo Freedom Forum, following a French court decision barring him from international travel. The Human Rights Foundation (HRF), which organises the annual event, expressed disappointment at the court’s ruling.

Durov, currently under investigation in France, was arrested in August 2024 on charges related to child sexual abuse material (CSAM) distribution and failure to assist law enforcement.

He was released on €5 million bail but ordered to remain in the country and report to police twice a week. Durov maintains the charges are unfounded and says Telegram complies with law enforcement when possible.

Recently, Durov accused French intelligence chief Nicolas Lerner of pressuring him to censor political voices ahead of elections in Romania. France’s DGSE denies the allegation, saying meetings with Durov focused solely on national security threats.

The claim has sparked international debate, with figures like Elon Musk and Edward Snowden defending Durov’s stance on free speech.

Supporters say the legal action against Durov may be politically motivated and warn it could set a dangerous precedent for holding tech executives accountable for user content. Critics argue Telegram must do more to moderate harmful material.

Despite legal restrictions, HRF says Durov’s remote participation is vital for ongoing discussions around internet freedom and digital rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Authorities strike down cybercriminal servers

Authorities across Europe and North America have dismantled a major global malware network by taking down over 300 servers and seizing millions in cryptocurrency. The operation, coordinated by Eurojust, marks a significant phase of the ongoing Operation Endgame.

Law enforcement agencies from Germany, France, the Netherlands, Denmark, the UK, the US and Canada collaborated to target some of the world’s most dangerous malware variants and the cybercriminals responsible for them.

The takedown also resulted in international arrest warrants for 20 suspects and the identification of more than 36 individuals involved.

The latest move follows similar action in May 2024, which had been the largest coordinated effort against botnets. Since the start of the operation, over €21 million has been seized, including €3.5 million in cryptocurrency.

The malware disrupted in this crackdown, known as ‘initial access malware’, is used to gain a foothold in victims’ systems before further attacks like ransomware are launched.

Authorities have warned that Operation Endgame will continue, with further actions announced through the coalition’s website. Eighteen prime suspects will be added to the EU Most Wanted list.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Banks push to scrap SEC cyber reporting rule

Five major US banking groups have asked the Securities and Exchange Commission (SEC) to drop its cyber security disclosure rule. The rule requires public companies to report material incidents, such as data breaches, within four business days of determining their materiality.

The American Bankers Association and others said in a letter that the rule conflicts with systems built to protect critical infrastructure. They warned it may hurt law enforcement and cause market confusion.

The rule, introduced in July 2023, also affects crypto firms such as Coinbase. The exchange recently reported a breach in which hackers bribed staff for user data; Coinbase rejected a $20 million ransom and now faces at least seven lawsuits.

Banking groups want the SEC to remove Item 1.05 from Form 8-K rules. They argue investors would still be protected under existing rules for material information, without the risks of rushed public reporting.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oracle and OpenAI target AI leadership with massive chip project

Oracle has reportedly acquired around 400,000 Nvidia GB200 AI chips valued at approximately $40 billion for deployment at a data centre in Abilene, Texas.

The location will be the first site of the Stargate project—a $500 billion AI infrastructure initiative backed by OpenAI, Oracle, SoftBank, and Abu Dhabi’s MGX fund, which President Trump announced earlier this year.

Once completed, the Abilene facility is expected to provide up to 1.2 gigawatts of computing power, rivalling Elon Musk’s Colossus project in Memphis.

Although Oracle will operate from the site, the land is owned by AI infrastructure firm Crusoe and US investment company Blue Owl Capital, which have collectively invested more than $15 billion through financing.

Oracle will lease the campus for 15 years, using the chips to offer computing power to OpenAI for training its next-generation AI models.

Previously dependent solely on Microsoft’s data centres, OpenAI faced bottlenecks due to limited capacity, prompting it to end the exclusivity agreement and look elsewhere.

While individual investors have committed funds, the Stargate project has not officially financed any facility yet. In parallel, OpenAI has announced Stargate UAE—a 5-gigawatt site in Abu Dhabi using over 2 million Nvidia chips, built in partnership with G42.

A surging demand for AI infrastructure has significantly boosted Nvidia’s market value, with the company reclaiming its top global ranking in late 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia recovers as DeepSeek fears fade

Earlier this year, Nvidia shares declined following concerns over DeepSeek and the possibility that tech giants might reduce AI-related spending. Worries over export restrictions added to investor unease.

However, Wedbush Securities’ managing director Matt Bryson believes the DeepSeek issue is now firmly behind the company. According to Bryson, DeepSeek — mostly a China-based phenomenon — unexpectedly boosted demand for AI servers, which ultimately benefited Nvidia instead of hurting it.

Another key development is Oracle’s plan to spend around $40 billion on Nvidia’s GB200 chips to power OpenAI’s new data centre.

Bryson suggested this is part of a broader trend among hyperscalers like Oracle and Crusoe, which recently secured funding to build new facilities. He expects this spending to appear in Nvidia’s earnings as early as Q2 or Q3, instead of being delayed until the next chip generation, the GB300.

Looking ahead, investors remain focused on whether major tech firms will sustain their AI investment. Bryson pointed out that recent earnings reports from companies like Microsoft, Alphabet, and Meta show they remain committed to high capital expenditures.

Instead of retreating, Big Tech appears set to continue driving demand for AI infrastructure, which supports Nvidia’s long-term prospects.

Bryson also noted a significant new factor in AI growth: sovereign deals from countries such as Saudi Arabia and the UAE. He emphasised that the UAE’s expected chip purchases may even surpass Oracle’s.

The new demand, combined with increasing investments in AI-powered edge products — such as those hinted at by OpenAI’s collaboration with Jony Ive — signals that AI spending beyond 2025 will remain strong instead of slowing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan aims to become global crypto and AI leader

Pakistan has set aside 2,000 megawatts of electricity in a major push to power Bitcoin mining and AI data centres, marking the start of a wider national digital strategy.

Led by the Pakistan Crypto Council (PCC), a body under the Ministry of Finance, this initiative aims to monetise surplus energy instead of wasting it, while attracting foreign investment, creating jobs, and generating much-needed revenue.

Bilal Bin Saqib, CEO of the PCC, stated that with proper regulation and transparency, Pakistan can transform into a global powerhouse for crypto and AI.

By redirecting underused power capacity, particularly from plants operating below potential, Pakistan seeks to convert a longstanding liability into a high-value asset, earning foreign currency through digital services and even storing Bitcoin in a national wallet.

Global firms have already shown interest, following recent visits from international miners and data centre operators.

Pakistan’s location — bridging Asia, the Middle East, and Europe — coupled with low energy costs and ample land, positions it as a competitive alternative to regional tech hubs like India and Singapore.

The arrival of the 2Africa subsea cable has further boosted digital connectivity and resilience, strengthening the case for domestic AI infrastructure.

The allocation is just the beginning of a multi-stage rollout. Plans include using renewable energy sources like wind, solar, and hydropower, while tax incentives and strategic partnerships are expected to follow.

With over 40 million crypto users and increasing digital literacy, Pakistan aims to emerge not just as a destination for digital infrastructure but as a sovereign leader in Web3, AI, and blockchain innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft allegedly blocked the email of the Chief Prosecutor of the International Criminal Court

Microsoft has come under scrutiny after the Associated Press reported that the company blocked the email account of Karim Khan, Chief Prosecutor of the International Criminal Court (ICC), in compliance with US sanctions imposed by the Trump administration. 

While the ban has been widely reported, Microsoft, according to DataNews, strongly denied taking this action, arguing that the ICC itself moved Khan’s email to the Proton service. So far, there has been no response from the ICC.

Legal and sovereignty implications

The incident highlights tensions between US sanctions regimes and global digital governance. Section 2713 of the 2018 CLOUD Act requires US-based tech firms to provide data under their ‘possession, custody, or control,’ even if stored abroad or legally covered by a foreign jurisdiction – a provision critics argue undermines foreign data sovereignty.

That clash resurfaces as Microsoft campaigns to be a trusted partner for developing the EU digital and AI infrastructure, pledging alignment with European regulations as outlined in the company’s EU strategy.

Broader impact on AI and digital governance

The controversy emerges amid a global race among US tech giants to secure data for AI development. Initiatives like OpenAI’s ‘OpenAI for Countries’ programme, which offers tailored AI services in exchange for data access, now face heightened scrutiny. European governments and international bodies are increasingly wary of entrusting critical digital infrastructure to firms bound by US laws, fearing legal overreach could compromise sovereignty.

Why does it matter?

The ‘Khan email’ controversy makes the question of digital vulnerabilities more tangible. It also brings into focus the question of data and digital sovereignty and the risks of exposure to foreign cloud and tech providers.

DataNews reports that the fallout may accelerate Europe’s push for sovereign cloud solutions and stricter oversight of foreign tech collaborations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SynthID Detector aims to boost transparency in AI content

Google has launched SynthID Detector, a verification portal designed to identify whether content was created using its AI models. The tool scans for SynthID, Google’s watermarking technology, which invisibly marks text, images, audio, and video generated by tools such as Gemini, Imagen, Lyria, and Veo.

The Detector highlights which parts of the content likely contain SynthID watermarks. These watermarks are invisible and do not affect the quality of the media. According to Google, over 10 billion pieces of AI-generated content have already been marked using SynthID.

Users can upload files to the SynthID Detector web portal, which then checks for the presence of watermarks. For example, the tool can identify specific segments in an audio file or regions in an image where watermarks are embedded.

Initially rolled out to early testers, the tool will become more widely available in the coming weeks. Google has also open sourced SynthID’s text watermarking technology to allow broader integration by developers.
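To illustrate the general idea behind statistical text watermark detection, here is a toy ‘green-list’ detector of the kind common in the research literature. It is explicitly not Google’s SynthID algorithm (SynthID uses its own watermarking scheme); the function names and scoring are illustrative assumptions only:

```python
import hashlib
from math import sqrt

def in_green_list(prev_token: str, token: str, fraction: float = 0.5) -> bool:
    """Pseudorandomly assign `token` to the 'green' half of the vocabulary,
    keyed on the previous token (a stand-in for a secret watermark key)."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * fraction

def detect(tokens: list[str], fraction: float = 0.5) -> float:
    """Return a z-score for how far the green-token count exceeds chance.
    A watermarking generator biases its sampling toward green tokens,
    so a high z-score suggests the text carries the watermark."""
    hits = sum(in_green_list(p, t, fraction) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * fraction
    stdev = sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / stdev

# Ordinary, unwatermarked text should score close to zero.
score = detect("the quick brown fox jumps over the lazy dog".split())
```

A real detector like SynthID operates on model token IDs with a secret key and calibrated thresholds; the sketch above only conveys why watermark checks can localise which segments of a file appear marked.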

The company says SynthID is part of a broader effort to address misinformation and improve transparency around AI-generated content. Google emphasised the importance of working with the AI community to support content authenticity as AI tools become more widespread.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!