EU to propose new rules and app to protect children online

The European Commission is taking significant steps to create a safer online environment for children by introducing draft guidelines under the Digital Services Act. These guidelines aim to ensure that online platforms accessible to minors maintain a high level of privacy, safety, and security.

The draft guidelines propose several key measures to safeguard minors online. These include verifying users’ ages to restrict access where appropriate, improving content recommendation systems to reduce children’s exposure to harmful or inappropriate material, and setting children’s accounts to private by default.

Additionally, the guidelines recommend best practices for child-safe content moderation, as well as providing child-friendly reporting channels and user support. They also offer guidance on how platforms should govern themselves internally to maintain a child-safe environment.

These guidelines will apply to all online platforms that minors can access, except micro and small enterprises, and will also cover very large online platforms with more than 45 million monthly active users in the EU. The European Commission has involved a wide range of stakeholders in developing the guidelines, including Better Internet for Kids (BIK+) Youth ambassadors, children, parents, guardians, national authorities, online platform providers, and experts.

The inclusive consultation process helps ensure the guidelines are practical and comprehensive. The guidelines are open for feedback until June 10, 2025, with adoption expected by summer.

Meanwhile, the Commission is creating an open-source age-verification app to confirm users’ age without risking privacy, as a temporary measure before the EU Digital Identity Wallet launches in 2026.
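
The Commission has not published the app's technical design, but the general idea behind privacy-preserving age checks is that a trusted issuer attests to a single yes/no claim, so the platform never sees a birthdate or identity document. The Python sketch below illustrates that idea with a signed boolean claim; all names are illustrative assumptions, and real systems such as the EU Digital Identity Wallet use standardised credential formats and selective-disclosure techniques rather than this simplified scheme.

```python
# Illustrative sketch only: an issuer signs a bare "over 18" claim, so the
# platform that verifies it never learns the user's name or date of birth.
# Real deployments add nonces and expiry to prevent replay; omitted here.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

issuer_key = ed25519.Ed25519PrivateKey.generate()    # held by the issuer
issuer_public_key = issuer_key.public_key()          # known to platforms

def issue_age_proof(user_is_over_18: bool) -> bytes | None:
    """Issuer checks the birthdate locally, then signs only the boolean claim."""
    if not user_is_over_18:
        return None
    claim = json.dumps({"age_over_18": True}).encode()
    return claim + issuer_key.sign(claim)            # Ed25519 sigs are 64 bytes

def verify_age_proof(proof: bytes) -> bool:
    """Platform verifies the signature without learning anything else."""
    claim, signature = proof[:-64], proof[-64:]
    try:
        issuer_public_key.verify(signature, claim)
    except InvalidSignature:
        return False
    return json.loads(claim).get("age_over_18", False)

print(verify_age_proof(issue_age_proof(True)))       # True
```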

OpenAI launches AI safety hub

OpenAI has launched a public online hub to share internal safety evaluations of its AI models, aiming to increase transparency around harmful content, jailbreaks, and hallucination risks. The hub will be updated after major model changes, allowing the public to track progress in safety and reliability over time.

The move follows growing criticism about the company’s testing methods, especially after inappropriate ChatGPT responses surfaced in late 2023. Instead of waiting for backlash, OpenAI is now introducing an optional alpha testing phase, letting users provide feedback before wider model releases.

The hub also marks a departure from the company’s earlier stance on secrecy. In 2019, OpenAI withheld GPT-2 over misuse concerns. Since then, it has shifted towards transparency by forming safety-focused teams and responding to calls for open safety metrics.

OpenAI’s approach appears timely, as several countries are building AI Safety Institutes to evaluate models before launch. Instead of relying on private sector efforts alone, the global landscape now reflects a multi-stakeholder push to create stronger safety standards and governance for advanced AI.

TikTok adds AI tool to animate photos with realistic effects

TikTok has launched a new feature called AI Alive, allowing users to turn still images into dynamic, short videos. Instead of needing advanced editing skills, creators can now use AI to generate movement and effects with a few taps.

By accessing the Story Camera and selecting a static photo, users can simply type how they want the image to change — such as making the subject smile, dance, or tilt forward. AI Alive then animates the photo, using creative effects to produce a more engaging story.

TikTok says its moderation systems review the original image, the AI prompt, and the final video before it’s shown to the user. A second check occurs before a post is shared publicly, and every video made with AI Alive will include an ‘AI-generated’ label and C2PA metadata to ensure transparency.
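
TikTok has not released implementation details, but the flow it describes maps onto a simple two-checkpoint pipeline. The sketch below is a hypothetical illustration; every function name is a placeholder assumption, not a TikTok API.

```python
# Hypothetical sketch of the two-checkpoint flow described above; all
# functions here are stand-ins, not TikTok APIs.
def animate(image: bytes, prompt: str) -> bytes:
    return b"video"                      # placeholder for the AI Alive model

def passes_moderation(image: bytes, prompt: str, video: bytes) -> bool:
    return True                          # placeholder for automated review

def attach_provenance(video: bytes) -> dict:
    # Visible label plus C2PA Content Credentials metadata.
    return {"video": video, "label": "AI-generated", "c2pa_manifest": "..."}

def create_ai_alive_story(image: bytes, prompt: str) -> dict | None:
    video = animate(image, prompt)
    # Checkpoint 1: image, prompt, and output reviewed before the creator sees it.
    if not passes_moderation(image, prompt, video):
        return None
    draft = attach_provenance(video)
    # Checkpoint 2: a second review runs before the story is shared publicly.
    if not passes_moderation(image, prompt, video):
        return None
    return draft                         # published with label and metadata
```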

The feature stands out as one of the first built-in AI image-to-video tools on a major platform. Snapchat and Instagram already offer AI image generation from text, and Snapchat is reportedly developing a similar image-to-video feature.

Meanwhile, TikTok is also said to be working on adding support for sending photos and voice messages via direct message — something rival apps have long supported.

NatWest hit by 100 million cyber attacks every month

NatWest is defending itself against an average of 100 million cyber attacks each month, according to the bank’s head of cybersecurity.

Speaking to Holyrood’s Criminal Justice Committee, Chris Ulliott outlined the ‘staggering’ scale of digital threats targeting the bank’s systems. Around a third of all incoming emails are blocked before reaching staff, as they are suspected to be the start of an attack.

Instead of relying on basic filters, NatWest analyses every email for malicious content and has a cybersecurity team of hundreds, supported by a multi-million-pound budget.

Mr Ulliott also warned of the growing use of AI by cyber criminals to make scams more convincing—such as altering their appearance during video calls to build trust with victims.

Police Scotland reported that cybercrime has more than doubled since 2020, with recorded incidents rising from 7,710 that year to 18,280 in 2024. Officials highlighted the threat posed by groups like Scattered Spider, believed to consist of young hackers sharing techniques online.

MSP Rona Mackay called the figures ‘absolutely staggering,’ while Ben Macpherson said he had even been impersonated by fraudsters.

Law enforcement agencies, including the FBI, are now working together to tackle online crime. Meanwhile, Age Scotland warned that many older people lack confidence online, making them especially vulnerable to scams that can lead to financial ruin and emotional distress.

Valve denies Steam data breach

Valve has confirmed that a cache of old Steam two-factor authentication codes and phone numbers, recently circulated by a hacker known as ‘Machine1337’, is indeed real, but insists it did not suffer a data breach.

Instead of pointing to its own systems, Valve explained that the leak involves outdated SMS messages, which are typically sent unencrypted and routed through multiple providers. These codes, once valid for only 15 minutes, were not linked to specific Steam accounts, passwords, or payment information.

The leaked data sparked early speculation that third-party messaging provider Twilio was the source of the breach, especially after its name appeared in the dataset. However, both Valve and Twilio denied any direct involvement, with Valve stating it does not even use Twilio’s services.

The true origin of the breach remains uncertain, and Valve acknowledged that tracing it may be difficult, as SMS messages often pass through several intermediaries before reaching users.

While the leaked information may not immediately endanger Steam accounts, Valve advised users to remain cautious. Phone numbers, when combined with other data, could still be used for phishing attacks.

Instead of relying on SMS for security, users are encouraged to activate the Steam Mobile Authenticator, which offers a more secure alternative for account verification.
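
The security difference is that an authenticator app derives codes on the device from a secret provisioned once, so no code ever crosses a carrier network at login time. Steam’s authenticator uses its own variant with five-character alphanumeric codes, but standard TOTP (RFC 6238) shows the principle; the sketch below is illustrative, not Steam’s implementation.

```python
# Standard TOTP (RFC 6238): device and server derive the same short-lived
# code from a shared secret, so nothing is sent over SMS at login time.
import hmac, hashlib, struct, time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", int(time.time()) // interval)  # time step
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                    # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = b"provisioned-once-at-enrolment"   # never re-sent afterwards
print(totp(shared_secret))                          # e.g. '492039'
```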

Despite the uncertainty surrounding the source of the breach, Valve reassured users there’s no need to change passwords or phone numbers. Still, it urged vigilance, recommending that users routinely review their security settings and remain wary of any unsolicited account notifications.

Hackers use fake PayPal email to seize bank access

A man from Virginia fell victim to a sophisticated PayPal scam that allowed hackers to gain remote control of his computer and access his bank accounts.

After receiving a fake email about a laptop purchase, he called the number listed in the message, believing it to be legitimate. The person on the other end instructed him to enter a code into his browser; without realising it, he had installed a program that gave the scammer full access to his system.

Files were scanned and money was moved between his accounts, all while he was urged to stay on the line, visit the bank in person, and tell no one.

The scam, known as a remote access attack, starts with a convincing email that appears to come from a trusted source. Instead of fixing any problem, the real aim is to deceive victims into granting hackers full control.

Once inside, scammers can steal personal data, access bank accounts, and install malware that remains even after the immediate threat ends. These attacks often unfold in minutes, using fear and urgency to manipulate targets into acting quickly and irrationally.

Quick action helped limit the damage in this case. The victim shut down his computer, contacted his bank and changed his passwords—steps that likely prevented more extensive losses. However, many people aren’t as fortunate.

Experts warn that scammers increasingly rely on psychological tricks instead of just technical ones, isolating their victims and urging secrecy during the attack.

To avoid falling for similar scams, it’s safer to verify emails by using official websites instead of clicking any embedded links or calling suspicious numbers.

Remote control should never be granted to unsolicited support calls, and all devices should have up-to-date antivirus protection and multifactor authentication enabled. Online safety now depends just as much on caution and awareness as it does on technology.

DeepMind unveils AlphaEvolve for scientific breakthroughs

Google DeepMind has unveiled AlphaEvolve, a new AI system designed to help solve complex scientific and mathematical problems by improving how algorithms are developed.

Rather than acting like a standard chatbot, AlphaEvolve blends large language models from the Gemini family with an evolutionary approach, enabling it to generate, assess, and refine multiple solutions at once.

Instead of relying on a single output, AlphaEvolve allows researchers to submit a problem and potential directions. The system then uses both Gemini Flash and Gemini Pro to create various solutions, which are automatically evaluated.

The best results are selected and enhanced through an iterative process, improving accuracy and reducing hallucinations—a common issue with AI-generated content.
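
DeepMind has not released AlphaEvolve, but the loop it describes (generate candidates, score them automatically, keep the best, and mutate again) is a classic evolutionary search. A minimal Python sketch of the general technique, with the Gemini calls stubbed out by a hypothetical `propose_variants` and the evaluator by a user-supplied `score` function:

```python
# Minimal sketch of a generate-evaluate-select loop; `propose_variants`
# stands in for the LLM calls and `score` for the automated evaluator.
# This illustrates the general technique, not DeepMind's implementation.
def evolve(seed: str, propose_variants, score, generations: int = 20,
           population: int = 8) -> str:
    best, best_score = seed, score(seed)
    for _ in range(generations):
        for candidate in propose_variants(best, population):
            s = score(candidate)                  # e.g. run tests or benchmarks
            if s is not None and s > best_score:
                best, best_score = candidate, s   # keep measurable improvements
    return best
```

The crucial property is that a candidate survives only by passing a concrete automated evaluation, which is what grounds the claim above about improving accuracy and reducing hallucinations.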

Unlike earlier DeepMind tools such as AlphaFold, which focused on narrow domains, AlphaEvolve is a general-purpose AI for coding and algorithmic tasks.

It has already shown its value by optimising Google’s own Borg data centre management system, delivering a 0.7% efficiency gain—significant given Google’s global scale.

The AI also devised a new procedure for multiplying 4×4 complex-valued matrices using 48 scalar multiplications, improving on Strassen’s 1969 algorithm and even beating DeepMind’s specialised AlphaTensor model on this case.
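
To make concrete what beating such an algorithm means: Strassen’s original trick computes a 2×2 matrix product with seven scalar multiplications instead of the naive eight, and applied recursively that saving compounds. The snippet below shows the standard textbook identities, not AlphaEvolve’s 4×4 scheme.

```python
# Strassen's 1969 trick: 7 scalar multiplications for a 2x2 product instead
# of the naive 8. AlphaEvolve's result is in the same spirit, but found
# automatically and for a harder case (4x4 complex-valued matrices).
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```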

AlphaEvolve has also contributed to improvements in Google’s hardware design by optimising Verilog code for upcoming Tensor chips.

Though not publicly available yet due to its complexity, AlphaEvolve’s evaluation-based framework could eventually be adapted for smaller AI tools used by researchers elsewhere.

Kick faces investigation after ignoring Ofcom risk assessment request

Ofcom has launched two investigations into Kick Online Entertainment, the provider of a pornography website, over potential breaches of the Online Safety Act.

The regulator said the company failed to respond to a statutory request for a risk assessment related to illegal content appearing on the platform.

As a result, Ofcom is investigating whether Kick failed to meet its legal duty to complete and retain a record of such a risk assessment, and whether it failed to respond to the regulator’s information request.

Ofcom confirmed it had received complaints about potentially illegal material on the site, including child sexual abuse content and extreme pornography.

It is also considering a third investigation into whether the platform has implemented adequate safety measures to protect users from such material—another requirement under the Act.

Under the Online Safety Act, firms found in breach can face fines of up to £18 million or 10% of their global revenue, whichever is higher. In the most severe cases, Ofcom can pursue court orders to block UK access to the website or compel payment providers and advertisers to cut ties with the platform.
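
In concrete terms, the 10% arm of the cap only bites above £180 million of global revenue; below that, the £18 million floor applies. A quick worked check:

```python
# Penalty cap under the Online Safety Act: the higher of GBP 18m or 10% of
# global revenue.
def max_osa_fine(global_revenue_gbp: float) -> float:
    return max(18_000_000, 0.10 * global_revenue_gbp)

print(max_osa_fine(50_000_000))      # 18000000    (the floor applies)
print(max_osa_fine(1_000_000_000))   # 100000000.0 (10% exceeds the floor)
```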

Meta targets critics as FTC case unfolds

Long-standing friction between Big Tech and the media resurfaced during Meta’s antitrust trial with the Federal Trade Commission this week. In a heated courtroom exchange, Meta’s legal team used critical commentary from prominent tech journalists to cast doubt on the FTC’s case.

Meta’s lead attorney, Mark Hansen, questioned the credibility of FTC expert Scott Hemphill by referencing a 2019 antitrust pitch Hemphill co-authored with Facebook co-founder Chris Hughes and former White House advisor Tim Wu.

The presentation cited public statements from reporters Kara Swisher and Om Malik as evidence of Meta’s dominance and aggressive acquisitions.

Hansen dismissed Malik as a ‘failed blogger’ with personal bias and accused Swisher of similar hostility, projecting a headline where she described Mark Zuckerberg as a ‘small little creature with a shriveled soul.’

He also attempted to discredit a cited New York Post article by invoking the tabloid’s notorious ‘Headless Body in Topless Bar’ cover.

These moments highlight Meta’s growing resentment toward the press, which has intensified alongside rising criticism of its business practices. Once seen as scrappy disruptors, Facebook and other tech giants now face regular scrutiny—and appear eager to push back.

Swisher and Malik have both openly criticised Meta in the past. Swisher famously challenged Zuckerberg over content moderation and political speech, while Malik has questioned the company’s global expansion strategies.

Their inclusion in a legal document presented in court underscores how media commentary is influencing regulatory narratives. Meta has previously blamed critical press for damaging user sentiment in the wake of scandals like Cambridge Analytica.

The FTC argues that consistent engagement levels despite bad press prove Meta’s monopoly power—users feel they have no real alternatives to Facebook and Instagram. As the trial continues, so too does Meta’s public battle—not just with regulators, but with the journalists documenting its rise and reckoning.

Instagram calls for EU-wide teen protection rules

Instagram is calling on the European Union to introduce new regulations requiring app stores to implement age verification and parental approval systems.

The platform argues that such protections, applied consistently across all apps, are essential to safeguarding teenagers from harmful content online.

‘The EU needs consistent standards for all apps, to help keep teens safe, empower parents and preserve privacy,’ Instagram said in a blog post.

The company believes the most effective way to achieve this is by introducing protections at the source—before teenagers download apps from the Apple App Store or Google Play Store.

Instagram is proposing that app stores verify users’ ages and require parental approval for teen app downloads. The social media platform cites new research from Morning Consult showing that three in four parents support such legislation.

Most parents also view app stores, rather than individual apps, as the safer and more manageable point for controlling what their teens can access.

To reinforce its position, Instagram points to its own safety efforts, such as the introduction of Teen Accounts. These private-by-default profiles limit teen exposure to messages and content from unknown users, and apply stricter filters to reduce exposure to sensitive material.

Instagram says it is working with civil society groups, industry partners, and European policymakers to push for rules that protect young users across platforms. With teen safety a growing concern, the company insists that industry-wide, enforceable solutions are urgently needed.
