Denmark moves to ban social media for under-15s amid child safety concerns

Joining a wider European trend, Denmark plans to ban children under 15 from using social media, Prime Minister Mette Frederiksen announced in her address to parliament on Tuesday.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement follows a broader debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy passes Europe’s first national AI law

Italy has become the first EU country to pass a national AI law, introducing detailed rules to govern the development and use of AI technologies across key sectors such as health, work, and justice.

The law, approved by the Senate on 17 September and entering into force on 10 October, designates national authorities responsible for oversight, including the Agency for Digital Italy and the National Cybersecurity Agency. Both bodies will supervise compliance, security, and the responsible use of AI systems.

In healthcare, the law simplifies data-sharing for scientific research by allowing the secondary use of anonymised or pseudonymised patient data. New rules also require transparency and parental consent when AI is used by minors under 14.

The law introduces criminal penalties for those who use AI-generated images or videos to cause harm or deception. The Italian approach combines regulation with innovation, seeking to protect citizens while promoting responsible growth in AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Thousands affected by AI-linked data breach in New South Wales

A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.

Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.

The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.

While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.

Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.

The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FRA presents rights framework at EU Innovation Hub AI Cluster workshop in Tallinn

The EU Innovation Hub for Internal Security’s AI Cluster gathered in Tallinn on 25–26 September for a workshop focused on AI and its implications for security and rights.

The European Union Agency for Fundamental Rights (FRA) played a central role, presenting its Fundamental Rights Impact Assessment framework under the AI Act and highlighting its ongoing project on assessing high-risk AI.

The workshop also provided an opportunity for FRA to update participants on its internal and external work in the AI field, reflecting the growing need to balance technological innovation with rights-based safeguards.

AI-driven systems in security and policing are increasingly under scrutiny, with regulators and agencies seeking to ensure compliance with EU rules on privacy, transparency and accountability.

In collaboration with Europol, FRA also introduced plans for a panel discussion on ‘The right to explanation of AI-driven individual decision-making’. Scheduled for 19 November in Brussels, the session will form part of the Annual Event of the EU Innovation Hub for Internal Security.

It is expected to draw policymakers, law enforcement representatives and rights advocates into dialogue about transparency obligations in AI use for security contexts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybercriminals abandon Kido extortion attempt amid public backlash

Hackers who stole data and images of children from Kido Schools have removed the material from the darknet and claimed to delete it. The group, calling itself Radiant, had demanded a £600,000 Bitcoin ransom, but Kido did not pay.

Radiant initially blurred the photos but kept the data online before later removing all content and issuing an apology. Experts remain sceptical, warning that cybercriminals often claim to delete stolen data while secretly keeping or selling it.

The breach exposed details of around 8,000 children and their families, sparking widespread outrage. Cybersecurity experts described the extortion attempt as a ‘new low’ for hackers and said Radiant likely backtracked due to public pressure.

Radiant said it accessed Kido’s systems by buying entry from an ‘initial access broker’ and then stealing data from accounts linked to Famly, an early years education platform. Famly told the BBC its own infrastructure was not compromised.

Kido confirmed the incident and said it is working with external specialists and the authorities. With no ransom paid and Radiant abandoning its attempt, the hackers appear to have lost money on the operation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Calls for regulation grow as OpenAI and Meta adjust chatbots for teen mental health

OpenAI and Meta are adjusting how their chatbots handle conversations with teenagers showing signs of distress or asking about suicide. OpenAI plans to launch new parental controls this fall, enabling parents to link accounts, restrict features, and receive alerts if their child appears to be in acute distress.

The company says its chatbots will also route sensitive conversations to more capable models, aiming to improve responses to vulnerable users. The announcement follows a lawsuit alleging that ChatGPT encouraged a California teenager to take his own life earlier this year.

Meta, the parent company of Instagram and Facebook, is also tightening its restrictions. Its chatbots will no longer engage teens on self-harm, suicide, eating disorders, or inappropriate topics, instead redirecting them towards expert resources. Meta already offers parental controls across teen accounts.

The moves come amid growing scrutiny of chatbot safety. A RAND Corporation study found inconsistent responses from ChatGPT, Google’s Gemini, and Anthropic’s Claude when asked about suicide, suggesting the tools require further refinement before being relied upon in high-risk situations.

Lead author Ryan McBain welcomed the updates but called them only incremental. Without safety benchmarks and enforceable standards, he argued, companies remain self-regulating in an area where risks to teenagers are uniquely high.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK users lose access to Imgur amid watchdog probe

Imgur has cut off access for UK users after regulators warned its parent company, MediaLab AI, of a potential fine over child data protection.

Visitors to the platform since 30 September have been met with a notice saying that content is unavailable in their region, with embedded Imgur images on other sites also no longer visible.

The UK’s Information Commissioner’s Office (ICO) began investigating the platform in March, questioning whether it complied with data laws and the Children’s Code.

The regulator said it had issued MediaLab with a notice of intent to fine the company following provisional findings. Officials also emphasised that leaving the UK would not shield Imgur from responsibility for any past breaches.

Some users speculated that the withdrawal was tied to new duties under the Online Safety Act, which requires platforms to check whether visitors are over 18 before allowing access to harmful content.

However, both the ICO and Ofcom said Imgur’s withdrawal was a commercial decision. Other MediaLab services, such as Kik Messenger, continue to operate in the UK with age verification measures in place.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NSW expands secure AI platform NSWEduChat across schools

Following successful school trials, the New South Wales Department of Education has confirmed the broader rollout of its in-house generative AI platform, NSWEduChat.

The tool, developed within the department’s Sydney-based cloud environment, prioritises privacy, security, and equity while tailoring content to the state’s educational context. It is aligned with the NSW AI Assessment Framework.

The trial began in 16 schools in Term 1, 2024, and then expanded to 50 schools in Term 2. Teachers reported efficiency gains, and students showed strong engagement. Access was extended to all staff in Term 4, 2024, with Years 5–12 students due to follow in Term 4, 2025.

Key features include a privacy-first design, built-in safeguards, and a student mode that encourages critical thinking by offering guided prompts rather than direct answers. Staff can switch between staff and student modes for lesson planning and preparation.

All data is stored in Australia under departmental control. NSWEduChat is free and billed as the most cost-effective AI tool for schools. Other systems are accessible but not endorsed; staff must follow safety rules, while students are limited to approved tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Greece considers social media ban for under-16s, says Mitsotakis

Greek Prime Minister Kyriakos Mitsotakis has signalled that Greece may consider banning social media use for children under 16.

He raised the issue during a UN event in New York, hosted by Australia, titled ‘Protecting Children in the Digital Age’, held as part of the 80th UN General Assembly.

Mitsotakis emphasised that any restrictions would be coordinated with international partners, warning that the world is carrying out the largest uncontrolled experiment on children’s minds through unchecked social media exposure.

He cautioned that the long-term effects are uncertain but unlikely to be positive.

The prime minister pointed to new national initiatives, such as the ban on mobile phone use in schools, which he said has transformed the educational experience.

He also highlighted the recent launch of parco.gov.gr, which provides age verification and parental control tools to support families in protecting children online.

Mitsotakis stressed that difficulties enforcing such measures cannot serve as an excuse for inaction, urging global cooperation to address the growing risks children face in the digital age.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!