Valve denies Steam data breach

Valve has confirmed that a cache of old Steam two-factor authentication codes and phone numbers, recently circulated by a hacker known as ‘Machine1337’, is indeed real, but insists it did not suffer a data breach.

Rather than implicating its own systems, Valve said the leak consists of older SMS messages, which are typically sent unencrypted and routed through multiple providers. Each code was valid for only 15 minutes and was not linked to a specific Steam account, password, or payment information.

The leaked data sparked early speculation that third-party messaging provider Twilio was the source of the breach, especially after its name appeared in the dataset. However, both Valve and Twilio denied any direct involvement, with Valve stating it does not even use Twilio’s services.

The true origin of the leak remains uncertain, and Valve acknowledged that tracing it may be difficult, as SMS messages often pass through several intermediaries before reaching users.

While the leaked information may not immediately endanger Steam accounts, Valve advised users to remain cautious. Phone numbers, when combined with other data, could still be used for phishing attacks.

Instead of relying on SMS for security, users are encouraged to activate the Steam Mobile Authenticator, which offers a more secure alternative for account verification.
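
Steam’s Mobile Authenticator is understood to use a proprietary variant of the standard time-based one-time password (TOTP) scheme defined in RFC 6238, which derives each code from a shared secret and a 30-second clock window rather than from an interceptable text message. A minimal sketch of generic TOTP using only Python’s standard library (the base32 secret here is a placeholder, not a real key):

```python
import base64
import hmac
import struct
import time


def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Generate a time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # elapsed 30-second steps
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Placeholder secret for demonstration only -- never hard-code a real one.
print(totp("JBSWY3DPEHPK3PXP"))  # six digits, changes every 30 seconds
```

Because the secret never travels over the carrier network, codes generated this way cannot leak the way SMS messages can.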

Despite the uncertainty surrounding the source of the leak, Valve reassured users there’s no need to change passwords or phone numbers. Still, it urged vigilance, recommending that users routinely review their security settings and remain wary of any unsolicited account notifications.

Hackers use fake PayPal email to seize bank access

A man from Virginia fell victim to a sophisticated PayPal scam that allowed hackers to gain remote control of his computer and access his bank accounts.

After receiving a fake email about a laptop purchase, he called the number listed in the message, believing it to be legitimate. The person on the other end instructed him to enter a code into his browser, which, unknown to him, installed a program giving the scammer full access to his system.

Files were scanned and money was moved between his accounts, all while the scammer urged him to stay on the line and visit his bank without telling anyone.

The scam, known as a remote-access attack, starts with a convincing email that appears to come from a trusted source. The supposed problem it reports is a pretext; the real aim is to trick victims into granting hackers full control.

Once inside, scammers can steal personal data, access bank accounts, and install malware that remains even after the immediate threat ends. These attacks often unfold in minutes, using fear and urgency to manipulate targets into acting quickly and irrationally.

Quick action helped limit the damage in this case. The victim shut down his computer, contacted his bank and changed his passwords—steps that likely prevented more extensive losses. However, many people aren’t as fortunate.

Experts warn that scammers increasingly rely on psychological tricks instead of just technical ones, isolating their victims and urging secrecy during the attack.

To avoid falling for similar scams, verify unexpected emails by going directly to the official website rather than clicking embedded links or calling numbers listed in the message.
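
As a rough illustration of why typing the official address yourself beats trusting an embedded link, the sketch below checks whether a link’s hostname actually belongs to the domain you expect; the lookalike domains are invented examples:

```python
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "paypal.com"


def looks_official(link: str) -> bool:
    """True only if the link's host is the official domain or a subdomain of it."""
    host = (urlparse(link).hostname or "").lower()
    return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)


# Invented examples of common phishing tricks:
print(looks_official("https://www.paypal.com/signin"))        # True
print(looks_official("https://paypal.com.account-help.net"))  # False: host is account-help.net
print(looks_official("https://paypa1.com/signin"))            # False: digit 1, not letter l
```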

Never grant remote control to an unsolicited support caller, and keep every device protected with up-to-date antivirus software and multifactor authentication. Online safety now depends as much on caution and awareness as it does on technology.

DeepMind unveils AlphaEvolve for scientific breakthroughs

Google DeepMind has unveiled AlphaEvolve, a new AI system designed to help solve complex scientific and mathematical problems by improving how algorithms are developed.

Rather than acting like a standard chatbot, AlphaEvolve blends large language models from the Gemini family with an evolutionary approach, enabling it to generate, assess, and refine multiple solutions at once.

Instead of relying on a single output, AlphaEvolve lets researchers submit a problem along with suggested directions for solving it. The system then uses both Gemini Flash and Gemini Pro to create various solutions, which are automatically evaluated.

The best results are selected and enhanced through an iterative process, improving accuracy and reducing hallucinations—a common issue with AI-generated content.
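
DeepMind has not released AlphaEvolve’s internals, but the generate-evaluate-select loop it describes can be sketched as follows. The toy below evolves a string toward a known target; in the real system, the mutation step would be a Gemini model proposing new candidate programs, and the scorer would be an automated evaluator such as a test suite or a runtime measurement:

```python
import random
import string

TARGET = "sorted"  # toy stand-in for "a better algorithm"


def evaluate(candidate: str) -> int:
    """Automated scorer. Here it counts matching characters against a known
    target; in AlphaEvolve the score comes from real checks such as tests."""
    return sum(a == b for a, b in zip(candidate, TARGET))


def mutate(parent: str) -> str:
    """Stand-in for an LLM proposing a variation of a parent solution."""
    i = random.randrange(len(parent))
    return parent[:i] + random.choice(string.ascii_lowercase) + parent[i + 1:]


def evolve(pop_size: int = 20, keep: int = 5, generations: int = 200) -> str:
    population = ["".join(random.choices(string.ascii_lowercase, k=len(TARGET)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select the best-scoring candidates...
        survivors = sorted(population, key=evaluate, reverse=True)[:keep]
        # ...then refine them into the next generation.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - keep)]
    return max(population, key=evaluate)


print(evolve())  # typically converges on 'sorted'
```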

Unlike earlier DeepMind tools such as AlphaFold, which focused on narrow domains, AlphaEvolve is a general-purpose AI for coding and algorithmic tasks.

It has already shown its value by optimising Google’s own Borg data centre management system, delivering a 0.7% efficiency gain—significant given Google’s global scale.

The AI also devised a new method for multiplying 4×4 complex-valued matrices in 48 scalar multiplications, outperforming a decades-old technique and even beating DeepMind’s specialised AlphaTensor model on this problem.
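
For context, the decades-old technique in question is Strassen’s 1969 scheme, which multiplies 2×2 matrices with seven scalar multiplications instead of the naive eight; applied recursively to the 4×4 case it needs 49, where AlphaEvolve’s new procedure needs 48. A sketch of the 2×2 building block:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969)
    instead of the naive 8. Works for real or complex entries."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]


print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```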

AlphaEvolve has also contributed to improvements in Google’s hardware design by optimising Verilog code for upcoming Tensor chips.

Though not publicly available yet due to its complexity, AlphaEvolve’s evaluation-based framework could eventually be adapted for smaller AI tools used by researchers elsewhere.

Kick faces investigation after ignoring Ofcom risk assessment request

Ofcom has launched two investigations into Kick Online Entertainment, the provider of a pornography website, over potential breaches of the Online Safety Act.

The regulator said the company failed to respond to a statutory request for a risk assessment related to illegal content appearing on the platform.

As a result, Ofcom is investigating whether Kick failed to meet its legal duty to complete and retain a record of such a risk assessment, and whether it failed to respond to the regulator’s information request.

Ofcom confirmed it had received complaints about potentially illegal material on the site, including child sexual abuse content and extreme pornography.

It is also considering a third investigation into whether the platform has implemented adequate safety measures to protect users from such material—another requirement under the Act.

Under the Online Safety Act, firms found in breach can face fines of up to £18 million or 10% of their global revenue, whichever is higher. In the most severe cases, Ofcom can pursue court orders to block UK access to the website or compel payment providers and advertisers to cut ties with the platform.
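
To make the ‘whichever is higher’ rule concrete, a quick illustration with invented revenue figures shows that the 10% arm only overtakes the flat £18 million above £180 million in global revenue:

```python
def maximum_fine(global_revenue_gbp: float) -> float:
    """Online Safety Act ceiling: £18m or 10% of global revenue, whichever is higher."""
    return max(18_000_000, 0.10 * global_revenue_gbp)


# Invented revenue figures for illustration:
print(f"£{maximum_fine(50_000_000):,.0f}")     # £18,000,000 (flat floor applies)
print(f"£{maximum_fine(2_000_000_000):,.0f}")  # £200,000,000 (10% arm applies)
```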

Meta targets critics as FTC case unfolds

Long-standing friction between Big Tech and the media resurfaced during Meta’s antitrust trial with the Federal Trade Commission this week. In a heated courtroom exchange, Meta’s legal team used critical commentary from prominent tech journalists to cast doubt on the FTC’s case.

Meta’s lead attorney, Mark Hansen, questioned the credibility of FTC expert Scott Hemphill by referencing a 2019 antitrust pitch Hemphill co-authored with Facebook co-founder Chris Hughes and former White House advisor Tim Wu.

The presentation cited public statements from reporters Kara Swisher and Om Malik as evidence of Meta’s dominance and aggressive acquisitions.

Hansen dismissed Malik as a ‘failed blogger’ with personal bias and accused Swisher of similar hostility, projecting a headline in which she described Mark Zuckerberg as a ‘small little creature with a shriveled soul.’

He also attempted to discredit a cited New York Post article by invoking the tabloid’s notorious ‘Headless Body in Topless Bar’ cover.

These moments highlight Meta’s growing resentment toward the press, which has intensified alongside rising criticism of its business practices. Once seen as scrappy disruptors, Facebook and other tech giants now face regular scrutiny—and appear eager to push back.

Swisher and Malik have both openly criticised Meta in the past. Swisher famously challenged Zuckerberg over content moderation and political speech, while Malik has questioned the company’s global expansion strategies.

Their inclusion in a legal document presented in court underscores how media commentary is influencing regulatory narratives. Meta has previously blamed critical press for damaging user sentiment in the wake of scandals like Cambridge Analytica.

The FTC argues that consistent engagement levels despite bad press prove Meta’s monopoly power—users feel they have no real alternatives to Facebook and Instagram. As the trial continues, so too does Meta’s public battle—not just with regulators, but with the journalists documenting its rise and reckoning.

Instagram calls for EU-wide teen protection rules

Instagram is calling on the European Union to introduce new regulations requiring app stores to implement age verification and parental approval systems.

The platform argues that such protections, applied consistently across all apps, are essential to safeguarding teenagers from harmful content online.

‘The EU needs consistent standards for all apps, to help keep teens safe, empower parents and preserve privacy,’ Instagram said in a blog post.

The company believes the most effective way to achieve this is by introducing protections at the source—before teenagers download apps from the Apple App Store or Google Play Store.

Instagram is proposing that app stores verify users’ ages and require parental approval for teen app downloads. The social media platform cites new research from Morning Consult showing that three in four parents support such legislation.

Most parents also view app stores, rather than individual apps, as the safer and more manageable point for controlling what their teens can access.

To reinforce its position, Instagram points to its own safety efforts, such as the introduction of Teen Accounts. These private-by-default profiles limit teen exposure to messages and content from unknown users, and apply stricter filters to reduce exposure to sensitive material.

Instagram says it is working with civil society groups, industry partners, and European policymakers to push for rules that protect young users across platforms. With teen safety a growing concern, the company insists that industry-wide, enforceable solutions are urgently needed.

Republicans seek to block state AI laws for a decade

Republican lawmakers in the US have introduced a proposal that would block states from regulating artificial intelligence for the next ten years. Critics argue the move is a handout to Big Tech and could stall protections already passed in states like California, Utah, and Colorado.

The measure, embedded in a budget reconciliation bill, would prevent states from enforcing rules on a wide range of automated systems, from AI chatbots to algorithms used in health and justice sectors.

Over 500 AI-related bills have been proposed this year at the state level, and many of them would be nullified if the federal ban succeeds. Supporters of the bill claim AI oversight should happen at the national level to avoid a confusing patchwork of state laws.

Opponents, including US Democrats and tech accountability groups, warn the ban could allow unchecked algorithmic discrimination, weaken privacy, and leave the public vulnerable to AI-driven harms.

TikTok unveils AI video feature

TikTok has launched ‘AI Alive,’ its first image-to-video feature that allows users to transform static photos into animated short videos within TikTok Stories.

Accessible only through the Story Camera, the tool applies AI-driven movement and effects—like shifting skies, drifting clouds, or expressive animations—to bring photos to life.

Unlike text-to-image tools found on Instagram and Snapchat, TikTok’s latest feature takes visual storytelling further by enabling full video generation from single images. Although Snapchat plans to introduce a similar function, TikTok has moved ahead with this innovation.

All AI Alive videos will carry an AI-generated label and include C2PA metadata to ensure transparency, even when shared beyond the platform.

TikTok emphasises safety, noting that every AI Alive video undergoes several moderation checks before it appears to creators.

Uploaded photos, prompts, and generated videos are reviewed to prevent rule-breaking content. Users can report violations, and final safety reviews are conducted before public sharing.

Harvey adds Google and Anthropic AI

Harvey, the fast-growing legal AI startup backed early by the OpenAI Startup Fund, is now embracing foundation models from Google and Anthropic instead of relying solely on OpenAI’s.

In a recent blog post, the company said it would expand its AI model options after internal benchmarks showed that different tools excel at different legal tasks.

The shift marks a notable win for OpenAI’s competitors, even though Harvey insists it’s not abandoning OpenAI. Its in-house benchmark, BigLaw Bench, revealed that several non-OpenAI models now outperform Harvey’s original system on specific legal functions.

For instance, Google’s Gemini 2.5 Pro performs well at legal drafting, while OpenAI’s o3 and Anthropic’s Claude 3.7 Sonnet are better suited for complex pre-trial work.
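
Harvey has not published how it dispatches work between vendors, but its finding that different models win at different tasks implies a routing layer along these lines; the task categories and dispatch mechanism below are assumptions for illustration, with the model assignments taken from the examples above:

```python
# Illustrative routing table; the task categories and dispatch mechanism are
# assumptions, not Harvey's published architecture. Model IDs are informal.
TASK_TO_MODEL = {
    "drafting": "gemini-2.5-pro",   # reported strongest at legal drafting
    "pre_trial": "o3",              # o3 (or Claude 3.7 Sonnet) for pre-trial work
}


def route(task_type: str, default: str = "baseline-model") -> str:
    """Return the benchmark-preferred model for a task type."""
    return TASK_TO_MODEL.get(task_type, default)


print(route("drafting"))   # gemini-2.5-pro
print(route("diligence"))  # baseline-model (no benchmark winner recorded)
```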

Instead of building its own models, Harvey now aims to fine-tune top-tier offerings from multiple vendors, including through Amazon’s cloud. The company also plans to launch a public legal benchmark leaderboard, combining expert legal reviews with technical metrics.

While OpenAI remains a close partner and investor, Harvey’s broader strategy signals growing competition in the race to serve the legal industry with AI.

Masked cybercrime groups rise as attacks escalate worldwide

Cybercrime is thriving like never before, with hackers launching attacks ranging from absurd ransomware demands of $1 trillion to large-scale theft of personal data. Despite efforts from Microsoft, Google and even the FBI, these threat actors continue to outpace defences.

A new report by Group-IB has analysed over 1,500 cybercrime investigations to uncover the most active and dangerous hacker groups operating today.

Rather than fading away after arrests or infighting, many cybercriminal gangs are re-emerging stronger than before.

Group-IB’s May 2025 report highlights a troubling increase in key attack types across 2024 — phishing rose by 22%, ransomware leak sites by 10%, and APT (advanced persistent threat) attacks by 58%. The United States was the most affected country by ransomware activity.

At the top of the cybercriminal hierarchy now sits RansomHub, a ransomware-as-a-service group that rose from the collapse of the ALPHV (BlackCat) operation and has already overtaken long-established players in attack volume.

Behind it is GoldFactory, which developed the first iOS banking trojan, designed to harvest facial recognition data. Lazarus, a well-known North Korean state-linked group, also remains highly active under multiple aliases.

Meanwhile, the politically driven hacktivist group NoName057(16) has been targeting European institutions with distributed denial-of-service (DDoS) attacks.

With jurisdictional gaps allowing cybercriminals to flourish, these masked hackers remain a growing concern for global cybersecurity, especially as new threat actors emerge from the shadows instead of disappearing for good.
