EU to propose new rules and app to protect children online

The European Commission is taking significant steps to create a safer online environment for children by introducing draft guidelines under the Digital Services Act. These guidelines aim to ensure that online platforms accessible to minors maintain a high level of privacy, safety, and security.

The draft guidelines propose several key measures to safeguard minors online. These include verifying users’ ages to restrict access where appropriate, improving content recommendation systems to reduce children’s exposure to harmful or inappropriate material, and setting children’s accounts to private by default.

Additionally, the guidelines recommend best practices for child-safe content moderation, as well as providing child-friendly reporting channels and user support. They also offer guidance on how platforms should govern themselves internally to maintain a child-safe environment.

The guidelines will apply to all online platforms that minors can access, except very small enterprises, and extend to very large online platforms with more than 45 million monthly active users in the EU. The European Commission has involved a wide range of stakeholders in developing them, including Better Internet for Kids (BIK+) Youth ambassadors, children, parents, guardians, national authorities, online platform providers, and experts.

The inclusive consultation process helps ensure the guidelines are practical and comprehensive. The guidelines are open for feedback until June 10, 2025, with adoption expected by summer.

Meanwhile, the Commission is building an open-source age-verification app to confirm users’ ages without compromising privacy, as a temporary measure before the EU Digital Identity Wallet launches in 2026.
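The app’s technical design has not been detailed publicly, but privacy-preserving age verification generally rests on a simple idea: a trusted issuer signs a minimal attestation (for instance, ‘over 18’), and the platform verifies the signature without ever seeing the user’s name or birthdate. The Python sketch below illustrates that pattern with Ed25519 signatures; the claim format and field names are hypothetical, not the Commission’s actual design.

```python
# Sketch of privacy-preserving age attestation (hypothetical format, not the
# Commission's actual design). A trusted issuer signs a minimal boolean claim,
# so the verifying platform learns "over 18: yes/no" and nothing else.
import json
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: an identity provider signs the claim after checking a real ID.
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"over_18": True, "nonce": secrets.token_hex(8)}).encode()
signature = issuer_key.sign(claim)  # no name or birthdate leaves the issuer

# Platform side: verify the signature with the issuer's public key.
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, claim)
    print("age attestation accepted:", json.loads(claim)["over_18"])
except InvalidSignature:
    print("age attestation rejected")
```

A production system would also need replay protection and unlinkability across sites, for example via zero-knowledge proofs, but the core property is the same: the platform checks a signature, not a person.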

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use fake PayPal email to seize bank access

A man from Virginia fell victim to a sophisticated PayPal scam that allowed hackers to gain remote control of his computer and access his bank accounts.

After receiving a fake email about a laptop purchase, he called the number listed in the message, believing it to be legitimate. The person on the other end instructed him to enter a code into his browser, which, without his knowledge, installed a program giving the scammer full access to his system.

Files were scanned and money was transferred between his accounts, all while he was urged to stay on the line, visit his bank in person, and tell no one what was happening.

The scam, known as a remote access attack, starts with a convincing email that appears to come from a trusted source. There is no real problem to fix; the aim is to deceive victims into granting hackers full control of their machines.

Once inside, scammers can steal personal data, access bank accounts, and install malware that remains even after the immediate threat ends. These attacks often unfold in minutes, using fear and urgency to manipulate targets into acting quickly and irrationally.

Quick action helped limit the damage in this case. The victim shut down his computer, contacted his bank and changed his passwords—steps that likely prevented more extensive losses. However, many people aren’t as fortunate.

Experts warn that scammers increasingly rely on psychological tricks instead of just technical ones, isolating their victims and urging secrecy during the attack.

To avoid falling for similar scams, verify any unexpected email by going directly to the company’s official website rather than clicking embedded links or calling numbers listed in the message, as the sketch below illustrates.
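As a concrete version of that advice, the harmless-looking part of a phishing link is the brand name; the part that matters is the registered domain the hostname actually ends in. The Python sketch below shows a minimal check; the allowlist is illustrative, not an exhaustive or official list.

```python
# Illustrative link check: does the link's real hostname belong to an
# official domain? Matching must be on the full domain suffix, because
# scammers use hosts like "paypal.com.secure-login.example".
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"paypal.com"}  # illustrative allowlist, not exhaustive

def looks_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Exact domain or a genuine subdomain of it; substring matches don't count.
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(looks_official("https://www.paypal.com/activity"))            # True
print(looks_official("https://paypal.com.secure-login.example/x"))  # False
```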

Remote access should never be granted in response to an unsolicited support call, and all devices should have up-to-date antivirus protection and multifactor authentication enabled. Online safety now depends as much on caution and awareness as it does on technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kick faces investigation after ignoring Ofcom risk assessment request

Ofcom has launched two investigations into Kick Online Entertainment, the provider of a pornography website, over potential breaches of the Online Safety Act.

The regulator said the company failed to respond to a statutory request for a risk assessment related to illegal content appearing on the platform.

As a result, Ofcom is investigating whether Kick has failed to meet its legal obligations to complete and retain a record of such a risk assessment, and whether it failed to respond to the regulator’s information request.

Ofcom confirmed it had received complaints about potentially illegal material on the site, including child sexual abuse content and extreme pornography.

It is also considering a third investigation into whether the platform has implemented adequate safety measures to protect users from such material—another requirement under the Act.

Under the Online Safety Act, firms found in breach can face fines of up to £18 million or 10% of their global revenue, whichever is higher. In the most severe cases, Ofcom can pursue court orders to block UK access to the website or compel payment providers and advertisers to cut ties with the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instagram calls for EU-wide teen protection rules

Instagram is calling on the European Union to introduce new regulations requiring app stores to implement age verification and parental approval systems.

The platform argues that such protections, applied consistently across all apps, are essential to safeguarding teenagers from harmful content online.

‘The EU needs consistent standards for all apps, to help keep teens safe, empower parents and preserve privacy,’ Instagram said in a blog post.

The company believes the most effective way to achieve this is by introducing protections at the source—before teenagers download apps from the Apple App Store or Google Play Store.

Instagram is proposing that app stores verify users’ ages and require parental approval for teen app downloads. The social media platform cites new research from Morning Consult showing that three in four parents support such legislation.

Most parents also view app stores, rather than individual apps, as the safer and more manageable point for controlling what their teens can access.

To reinforce its position, Instagram points to its own safety efforts, such as the introduction of Teen Accounts. These private-by-default profiles limit teen exposure to messages and content from unknown users, and apply stricter filters to reduce exposure to sensitive material.

Instagram says it is working with civil society groups, industry partners, and European policymakers to push for rules that protect young users across platforms. With teen safety a growing concern, the company insists that industry-wide, enforceable solutions are urgently needed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybercriminals trick users with fake AI apps

Cybercriminals are tricking users into downloading a dangerous new malware called Noodlophile by disguising it as AI software. Rather than using typical phishing tactics, attackers create convincing fake platforms that appear to offer AI-powered tools for editing videos or images.

These are promoted through realistic-looking Facebook groups and viral social media posts, some of which have received over 62,000 views.

Users are lured with promises of AI-generated content and are directed to bogus sites, one of which pretends to be CapCut AI, offering video editing features. Once users upload prompts and attempt to download the content, they unknowingly receive a malicious ZIP file.

Inside is a disguised program that kicks off a chain of infections, eventually installing the Noodlophile malware. Once running, the software can steal browser credentials, crypto wallet details, and other sensitive data.

The malware is linked to a Vietnamese developer who identifies themselves as a ‘passionate Malware Developer’ on GitHub. Vietnam has a known history of cybercrime activity targeting social media platforms like Facebook.

In some cases, the Noodlophile Stealer has been bundled with remote access tools like XWorm, which allow attackers to maintain long-term control over victims’ systems.

This isn’t the first time attackers have used public interest in AI for malicious purposes. Meta removed over 1,000 dangerous links in 2023 that exploited ChatGPT’s popularity to spread malware.

Meanwhile, cybersecurity experts at CYFIRMA have reported another threat: a new, simple yet effective malware called PupkinStealer, which secretly sends stolen information to hackers using Telegram bots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google to roll out Gemini AI for kids under 13

Google has announced plans to introduce its Gemini AI platform to children under 13, a move that has sparked mixed reactions.

Parents recently received notifications about the rollout, with Google stating that children will be able to use Gemini for tasks such as homework help, answering general questions, and even bedtime stories.

The announcement has triggered concern among some organisations due to the risks associated with young users interacting with AI.

Critics point out that AI models have previously struggled to maintain child-appropriate safeguards and worry that children may not fully grasp the implications of engaging with such technology. Despite these concerns, others have applauded Google’s decision to keep parents closely involved.

Taylor Barkley, Director of Public Policy at the Abundance Institute, praised Google for prioritising parental involvement. He noted that while risks exist, the best approach is not to impose strict bans but to work collaboratively with parents and caregivers to manage children’s AI usage.

‘Google should be applauded for proactively notifying parents,’ Barkley said in a statement. ‘When it comes to new technologies, parents come first.’

To ensure parental oversight, Google will require children’s access to Gemini to be managed through Family Link, its parental control platform. Family Link allows parents to monitor device usage, manage privacy settings, share location, and establish healthy digital habits for their families.

As AI continues to permeate everyday life, Google’s decision highlights the delicate balance between offering educational opportunities and ensuring the safe and responsible use of technology among younger users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini AI now accessible to kids via Family Link

Google has announced that children under the age of 13 will soon be able to access its Gemini AI chatbot through Family Link accounts. The service will allow parents to monitor their child’s use, set screen time limits, and disable access if desired.

Gemini, designed to assist with tasks like homework and storytelling, includes safeguards to prevent inappropriate content and protect child users. Google acknowledged the possibility of errors in the AI’s responses and urged parental oversight.

Google emphasised that data collected from child users will not be used to train AI models. Parents will be notified when their child first uses Gemini and are advised to encourage critical thinking and remind children not to share personal information with the chatbot.

Despite these precautions, child safety advocates have voiced concerns. Organisations such as Fairplay argue that allowing young children to interact with AI chatbots could expose them to risks, citing previous incidents involving other AI platforms.

International bodies, including UNICEF, have also highlighted the need for stringent regulations to safeguard children’s rights in an increasingly digital world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US lawmakers push for app store age checks

A new bill introduced by US lawmakers could force app stores like Apple’s App Store and Google Play to verify the age of all users, in a move aimed at increasing online safety for minors.

Known as the App Store Accountability Act, the legislation would require age categorisation and parental consent before minors can download apps or make in-app purchases. If passed, the law would apply to platforms with at least five million users and would come into effect one year after approval.

The bill proposes dividing users into age brackets — from ‘young child’ to ‘adult’ — and holding app stores accountable for enforcing access restrictions.
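The bill leaves implementation to the stores, but the gating logic it describes is mechanically simple, as the Python sketch below shows. The bracket cut-offs and function names here are hypothetical; the bill itself only names the categories.

```python
# Sketch of the bracket-and-consent gate the bill describes (hypothetical
# thresholds and names; the legislation only defines the categories).
from enum import Enum

class AgeBracket(Enum):
    YOUNG_CHILD = "young child"
    CHILD = "child"
    TEENAGER = "teenager"
    ADULT = "adult"

def classify(age: int) -> AgeBracket:
    if age < 13:
        return AgeBracket.YOUNG_CHILD  # illustrative cut-offs
    if age < 16:
        return AgeBracket.CHILD
    if age < 18:
        return AgeBracket.TEENAGER
    return AgeBracket.ADULT

def may_download(age: int, parental_consent: bool) -> bool:
    # Minors need verified parental approval before downloads or purchases.
    return classify(age) is AgeBracket.ADULT or parental_consent

print(may_download(14, parental_consent=False))  # False: consent required
print(may_download(14, parental_consent=True))   # True
print(may_download(30, parental_consent=False))  # True: adults are not gated
```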

Lawmakers behind the bill, Republican Senator Mike Lee and Representative John James, argue that Big Tech companies must take responsibility for limiting children’s exposure to harmful content. They believe app stores are the right gatekeepers for verifying age and protecting minors online.

Privacy advocates and tech companies have voiced concern about the bill’s implications. Legal experts warn that verifying users’ ages may require sensitive personal data, such as ID documents or facial recognition scans, raising the risk of data misuse.

Apple said such verification would apply to all users, not just children, and criticised the idea as counterproductive for privacy.

The proposal has widened a rift between app store operators and social media platforms. While Meta, X, and Snap back centralised age checks at the app store level, Apple and Google accuse them of shifting the burden of responsibility.

Both tech giants emphasise the importance of shared responsibility and continue to engage with lawmakers on crafting practical and privacy-conscious solutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK refuses to include Online Safety Act in US trade talks

The UK government has ruled out watering down the Online Safety Act as part of any trade negotiations with the US, despite pressure from American tech giants.

Speaking to MPs on the Science, Innovation and Technology Committee, Baroness Jones of Whitchurch, the parliamentary under-secretary for online safety, stated unequivocally that the legislation was ‘not up for negotiation’.

‘There have been clear instructions from the Prime Minister,’ she said. ‘The Online Safety Act is not part of the trade deal discussions. It’s a piece of legislation — it can’t just be negotiated away.’

Reports had suggested that President Donald Trump’s administration might seek to make loosening the UK’s online safety rules a condition of a post-Brexit trade agreement, following lobbying from large US-based technology firms.

However, Baroness Jones said the legislation was well into its implementation phase and that ministers were ‘happy to reassure everybody’ that the government is sticking to it.

The Online Safety Act will require tech platforms that host user-generated content, such as social media firms, to take active steps to protect users — especially children — from harmful and illegal content.

Non-compliant companies may face fines of up to £18 million or 10% of global turnover, whichever is greater. In extreme cases, platforms could be blocked from operating in the UK.

Mark Bunting, a representative of Ofcom, which is overseeing enforcement of the new rules, said the regulator would have taken action had the legislation been in force during last summer’s riots in Southport, which were exacerbated by online misinformation.

His comments contrasted with tech firms including Meta, TikTok and X, which claimed in earlier hearings that little would have changed under the new rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake victims gain new rights with House-approved bill

The US House of Representatives has passed the ‘Take It Down’ Act with overwhelming bipartisan support, aiming to protect Americans from the spread of deepfake and revenge pornography.

The bill, approved by a 409-2 vote, criminalises the distribution of non-consensual intimate imagery—including AI-generated content—and now heads to President Donald Trump for his signature.

First Lady Melania Trump, who returned to public advocacy earlier this year, played a key role in supporting the legislation. She lobbied lawmakers last month and celebrated the bill’s passage, saying she was honoured to help guide it through Congress.

The White House confirmed she will attend the signing ceremony.

The law requires social media platforms and similar websites to remove such harmful content at a victim’s request, rather than leaving it to spread unchecked.

Victims of deepfake pornography have included both public figures such as Taylor Swift and Alexandria Ocasio-Cortez, and private individuals like high school students.

Introduced by Republican Senator Ted Cruz and backed by Democratic lawmakers including Amy Klobuchar and Madeleine Dean, the bill reflects growing concern across party lines about online abuse.

Melania Trump, echoing her earlier ‘Be Best’ initiative, stressed the need to ensure young people—especially girls—can navigate the internet safely instead of being left vulnerable to digital exploitation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!