Kick faces investigation after ignoring Ofcom risk assessment request

Ofcom has launched two investigations into Kick Online Entertainment, the provider of a pornography website, over potential breaches of the Online Safety Act.

The regulator said the company failed to respond to a statutory request for a risk assessment related to illegal content appearing on the platform.

As a result, Ofcom is investigating whether Kick failed to meet its legal duty to complete and keep a record of such a risk assessment, and whether it breached its duty to respond to the regulator’s information request.

Ofcom confirmed it had received complaints about potentially illegal material on the site, including child sexual abuse content and extreme pornography.

It is also considering a third investigation into whether the platform has implemented adequate safety measures to protect users from such material—another requirement under the Act.

Under the Online Safety Act, firms found in breach can face fines of up to £18 million or 10% of their global revenue, whichever is higher. In the most severe cases, Ofcom can pursue court orders to block UK access to the website or compel payment providers and advertisers to cut ties with the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instagram calls for EU-wide teen protection rules

Instagram is calling on the European Union to introduce new regulations requiring app stores to implement age verification and parental approval systems.

The platform argues that such protections, applied consistently across all apps, are essential to safeguarding teenagers from harmful content online.

‘The EU needs consistent standards for all apps, to help keep teens safe, empower parents and preserve privacy,’ Instagram said in a blog post.

The company believes the most effective way to achieve this is by introducing protections at the source—before teenagers download apps from the Apple App Store or Google Play Store.

Instagram is proposing that app stores verify users’ ages and require parental approval for teen app downloads. The social media platform cites new research from Morning Consult showing that three in four parents support such legislation.

Most parents also view app stores, rather than individual apps, as the safer and more manageable point for controlling what their teens can access.

To reinforce its position, Instagram points to its own safety efforts, such as the introduction of Teen Accounts. These private-by-default profiles restrict messages and content from unknown users, and apply stricter filters to reduce exposure to sensitive material.

Instagram says it is working with civil society groups, industry partners, and European policymakers to push for rules that protect young users across platforms. With teen safety a growing concern, the company insists that industry-wide, enforceable solutions are urgently needed.

Cybercriminals trick users with fake AI apps

Cybercriminals are tricking users into downloading a dangerous new malware called Noodlophile by disguising it as AI software. Rather than using typical phishing tactics, attackers create convincing fake platforms that appear to offer AI-powered tools for editing videos or images.

These are promoted through realistic-looking Facebook groups and viral social media posts, some of which have received over 62,000 views.

Users are lured with promises of AI-generated content and are directed to bogus sites, one of which pretends to be CapCut AI, offering video editing features. Once users upload prompts and attempt to download the content, they unknowingly receive a malicious ZIP file.

Inside is a disguised program that kicks off a chain of infections, eventually installing the Noodlophile malware. Once running, the software can steal browser credentials, crypto wallet details, and other sensitive data.

The malware is linked to a Vietnamese developer who identifies themselves as a ‘passionate Malware Developer’ on GitHub. Vietnam has a known history of cybercrime activity targeting social media platforms like Facebook.

In some cases, the Noodlophile Stealer has been bundled with remote access tools like XWorm, which allow attackers to maintain long-term control over victims’ systems.

This isn’t the first time attackers have used public interest in AI for malicious purposes. Meta removed over 1,000 dangerous links in 2023 that exploited ChatGPT’s popularity to spread malware.

Meanwhile, cybersecurity experts at CYFIRMA have reported another threat: a new, simple yet effective malware called PupkinStealer, which secretly sends stolen information to hackers using Telegram bots.

Google to roll out Gemini AI for kids under 13

Google has announced plans to introduce its Gemini AI platform to children under 13, a move that has sparked mixed reactions.

Parents recently received notifications about the rollout, with Google stating that children will be able to use Gemini for tasks such as homework help, answering general questions, and even bedtime stories.

The announcement has triggered concern among some organisations due to the risks associated with young users interacting with AI.

Critics point out that AI models have previously struggled to maintain child-appropriate safeguards and worry that children may not fully grasp the implications of engaging with such technology. Despite these issues, others have applauded Google’s decision to keep parents closely involved.

Taylor Barkley, Director of Public Policy at the Abundance Institute, praised Google for prioritising parental involvement. He noted that while risks exist, the best approach is not to impose strict bans but to work collaboratively with parents and caregivers to manage children’s AI usage.

‘Google should be applauded for proactively notifying parents,’ Barkley said in a statement. ‘When it comes to new technologies, parents come first.’

To ensure parental oversight, Google will require children’s access to Gemini to be managed through Family Link, its parental control platform. Family Link allows parents to monitor device usage, manage privacy settings, share location, and establish healthy digital habits for their families.

As AI continues to permeate everyday life, Google’s decision highlights the delicate balance between offering educational opportunities and ensuring the safe and responsible use of technology among younger users.

Gemini AI now accessible to kids via Family Link

Google has announced that children under the age of 13 will soon be able to access its Gemini AI chatbot through Family Link accounts. The service will allow parents to monitor their child’s use, set screen time limits, and disable access if desired.

Gemini, designed to assist with tasks like homework and storytelling, includes safeguards to prevent inappropriate content and protect child users. Google acknowledged the possibility of errors in the AI’s responses and urged parental oversight.

Google emphasised that data collected from child users will not be used to train AI models. Parents will be notified when their child first uses Gemini and are advised to encourage critical thinking and remind children not to share personal information with the chatbot.

Despite these precautions, child safety advocates have voiced concerns. Organisations such as Fairplay argue that allowing young children to interact with AI chatbots could expose them to risks, citing previous incidents involving other AI platforms.

International bodies, including UNICEF, have also highlighted the need for stringent regulations to safeguard children’s rights in an increasingly digital world.

US lawmakers push for app store age checks

A new bill introduced by US lawmakers could force app stores like Apple’s App Store and Google Play to verify the age of all users, in a move aimed at increasing online safety for minors.

Known as the App Store Accountability Act, the legislation would require age categorisation and parental consent before minors can download apps or make in-app purchases. If passed, the law would apply to platforms with at least five million users and would come into effect one year after approval.

The bill proposes dividing users into age brackets — from ‘young child’ to ‘adult’ — and holding app stores accountable for enforcing access restrictions.

Lawmakers behind the bill, Republican Senator Mike Lee and Representative John James, argue that Big Tech companies must take responsibility for limiting children’s exposure to harmful content. They believe app stores are the right gatekeepers for verifying age and protecting minors online.

Privacy advocates and tech companies have voiced concern about the bill’s implications. Legal experts warn that verifying users’ ages may require sensitive personal data, such as ID documents or facial recognition scans, raising the risk of data misuse.

Apple said such verification would apply to all users, not just children, and criticised the idea as counterproductive to privacy.

The proposal has widened a rift between app store operators and social media platforms. While Meta, X, and Snap back centralised age checks at the app store level, Apple and Google accuse them of shifting the burden of responsibility.

Both tech giants emphasise the importance of shared responsibility and continue to engage with lawmakers on crafting practical and privacy-conscious solutions.

UK refuses to include Online Safety Act in US trade talks

The UK government has ruled out watering down the Online Safety Act as part of any trade negotiations with the US, despite pressure from American tech giants.

Speaking to MPs on the Science, Innovation and Technology Committee, Baroness Jones of Whitchurch, the parliamentary under-secretary for online safety, stated unequivocally that the legislation was ‘not up for negotiation’.

‘There have been clear instructions from the Prime Minister,’ she said. ‘The Online Safety Act is not part of the trade deal discussions. It’s a piece of legislation — it can’t just be negotiated away.’

Reports had suggested that President Donald Trump’s administration might seek to make loosening the UK’s online safety rules a condition of a post-Brexit trade agreement, following lobbying from large US-based technology firms.

However, Baroness Jones said the legislation was well into its implementation phase and that ministers were ‘happy to reassure everybody’ that the government is sticking to it.

The Online Safety Act will require tech platforms that host user-generated content, such as social media firms, to take active steps to protect users — especially children — from harmful and illegal content.

Non-compliant companies may face fines of up to £18 million or 10% of global turnover, whichever is greater. In extreme cases, platforms could be blocked from operating in the UK.

Mark Bunting, a representative of Ofcom, which is overseeing enforcement of the new rules, said the regulator would have taken action had the legislation been in force during last summer’s riots in Southport, which were exacerbated by online misinformation.

His comments contrasted with tech firms including Meta, TikTok and X, which claimed in earlier hearings that little would have changed under the new rules.

Deepfake victims gain new rights with House-approved bill

The US House of Representatives has passed the ‘Take It Down’ Act with overwhelming bipartisan support, aiming to protect Americans from the spread of deepfake and revenge pornography.

The bill, approved by a 409-2 vote, criminalises the distribution of non-consensual intimate imagery—including AI-generated content—and now heads to President Donald Trump for his signature.

First Lady Melania Trump, who returned to public advocacy earlier this year, played a key role in supporting the legislation. She lobbied lawmakers last month and celebrated the bill’s passage, saying she was honoured to help guide it through Congress.

The White House confirmed she will attend the signing ceremony.

The law requires social media platforms and similar websites to remove such harmful content upon request from victims, instead of allowing it to remain unchecked.

Victims of deepfake pornography have included both public figures such as Taylor Swift and Alexandria Ocasio-Cortez, and private individuals like high school students.

Introduced by Republican Senator Ted Cruz and backed by Democratic lawmakers including Amy Klobuchar and Madeleine Dean, the bill reflects growing concern across party lines about online abuse.

Melania Trump, echoing her earlier ‘Be Best’ initiative, stressed the need to ensure young people—especially girls—can navigate the internet safely instead of being left vulnerable to digital exploitation.

UK government urged to outlaw apps creating deepfake abuse images

The Children’s Commissioner, Dame Rachel de Souza, has urged the UK Government to ban AI apps that create sexually explicit images through ‘nudification’ technology. AI tools capable of manipulating real photos to make people appear naked are being used to target children.

Concerns in the UK are growing as these apps are now widely accessible online, often through social media and search platforms. In a newly published report, Dame Rachel warned that children, particularly girls, are altering their online behaviour out of fear of becoming victims of such technologies.

She stressed that while AI holds great potential, it also poses serious risks to children’s safety. The report also recommends stronger legal duties for AI developers and improved systems to remove explicit deepfake content from the internet.

UK introduces landmark online safety rules to protect children

The UK’s regulator, Ofcom, has unveiled new online safety rules to give children stronger protections. Platforms must adjust their algorithms, implement stricter age checks and swiftly tackle harmful content by 25 July, or face hefty fines. These measures target sites hosting pornography or content promoting self-harm, suicide, and eating disorders, demanding more robust efforts to shield young users.

Ofcom chief Dame Melanie Dawes called the regulations a ‘gamechanger,’ emphasising that platforms must adapt if they wish to serve under-18s in the UK. While supporters like former Facebook safety officer Prof Victoria Baines see this as a positive step, critics argue the rules don’t go far enough, with campaigners expressing disappointment over perceived gaps, particularly in addressing encrypted private messaging.

The rules, issued under the Online Safety Act and pending parliamentary approval, include over 40 obligations such as clearer terms of service for children, annual risk reviews, and dedicated accountability for child safety. The NSPCC welcomed the move but urged Ofcom to tighten oversight, especially where hidden online risks remain unchecked.
