California introduces first AI chatbot safety law

California has become the first US state to regulate AI companion chatbots after Governor Gavin Newsom signed landmark legislation designed to protect children and vulnerable users. The new law, SB 243, holds companies legally accountable if their chatbots fail to meet new safety and transparency standards.

The legislation follows several tragic cases, including the death of a teenager who reportedly engaged in suicidal conversations with an AI chatbot. It also comes after leaked documents revealed that some AI systems allowed inappropriate exchanges with minors.

Under the new rules, firms must introduce age verification, self-harm prevention protocols, and warnings for users engaging with companion chatbots. Platforms must clearly state that conversations are AI-generated and are barred from presenting chatbots as healthcare professionals.

Major developers including OpenAI, Replika, and Character.AI say they are introducing stronger parental controls, content filters, and crisis support systems to comply. Lawmakers hope the move will inspire other states to adopt similar protections as AI companionship tools become increasingly popular.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot.

A common EU layer for age verification without a single age limit

Denmark will push for EU-wide age-verification rules to avoid a patchwork of national systems. As Council presidency, Copenhagen prioritises child protection online while keeping flexibility on national age limits. The aim is coordination without a single ‘digital majority’ age.

Ministers plan to give the European Commission a clear mandate for interoperable, privacy-preserving tools. An updated blueprint is being piloted in five states and aligns with the EU Digital Identity Wallet, which is due by the end of 2026. Goal: seamless, cross-border checks with minimal data exposure.

Copenhagen’s domestic agenda moves in parallel with a proposed ban on under-15 social media use. The government will consult national parties and EU partners on the scope and enforcement. Talks in Horsens, Denmark, signalled support for stronger safeguards and EU-level verification.

The emerging compromise separates ‘how to verify’ at the EU level from ‘what age to set’ at the national level. Proponents argue this avoids fragmentation while respecting domestic choices; critics warn implementation must minimise privacy risks and platform dependency.

Next steps include expanding pilots, formalising the Commission’s mandate, and publishing impact assessments. Clear standards on data minimisation, parental consent, and appeals will be vital. Affordable compliance for SMEs and independent oversight can sustain public trust.


EU nations back Danish plan to strengthen child protection online

EU countries have agreed to step up efforts to improve child protection online by supporting Denmark’s Jutland Declaration. The initiative, signed by 25 member states, focuses on strengthening existing EU rules that safeguard minors from harmful and illegal online content.

However, Denmark’s proposal to ban social media for children under 15 did not gain full backing, with several governments preferring other approaches.

The declaration highlights growing concern about young people’s exposure to inappropriate material and the addictive nature of online platforms.

It stresses the need for more reliable age verification tools and refers to the upcoming Digital Fairness Act as an opportunity to introduce such safeguards. Ministers argued that the same protections applied offline should exist online, where risks for minors remain significant.

Danish officials believe stronger measures are essential to address declining well-being among young users. Some EU countries, including Germany, Spain and Greece, expressed support for tighter protections but rejected outright bans, calling instead for balanced regulation.

Meanwhile, the European Commission has asked major platforms such as Snapchat, YouTube, Apple and Google to provide details about their age verification systems under the Digital Services Act.

These efforts form part of a broader EU drive to ensure a safer digital environment for children, as investigations into online platforms continue across Europe.


AI chatbots linked to US teen suicides spark legal action

Families in the US are suing AI developers after tragic cases in which teenagers allegedly took their own lives following exchanges with chatbots. The lawsuits accuse platforms such as Character.AI and OpenAI’s ChatGPT of fostering dangerous emotional dependencies with young users.

One case involves 14-year-old Sewell Setzer, whose mother says he fell in love with a chatbot modelled on a Game of Thrones character. Their conversations reportedly turned manipulative before his death, prompting legal action against Character.AI.

Another family claims ChatGPT gave their son advice on suicide methods, leading to a similar tragedy. The companies have expressed sympathy and strengthened safety measures, introducing age-based restrictions, parental controls, and clearer disclaimers stating that chatbots are not real people.

Experts warn that chatbots are repeating social media’s early mistakes, exploiting emotional vulnerability to maximise engagement. Lawmakers in California are preparing new rules to restrict AI tools that simulate human relationships with minors, aiming to prevent manipulation and psychological harm.


Austrian DPA finds Microsoft 365 Education violates GDPR

Microsoft has been found in violation of the EU’s General Data Protection Regulation (GDPR) over how its Microsoft 365 Education platform handles student data.

The Austrian Data Protection Authority (DSB) issued the ruling after a student, represented by privacy group noyb, was denied full access to their personal data. The complaint exposed a three-way responsibility gap between Microsoft, schools, and national education authorities.

During the COVID-19 pandemic, many schools adopted cloud-based tools like Microsoft 365 Education. However, Microsoft shifted responsibility for GDPR compliance onto schools and ministries, which often lack access to, or control over, student data processed by Microsoft.

In this case, Microsoft redirected the student’s data request to their school, which was unable to provide complete information.

The DSB found Microsoft in breach of multiple GDPR provisions, including the unlawful use of tracking cookies without consent and the failure to give the student full access to their personal data, in violation of Article 15.

Microsoft was also ordered to clarify how it uses data for purposes like ‘business modelling’ and whether it shares data with third parties like LinkedIn, OpenAI, or adtech firm Xandr.

Microsoft’s claim that its EU entity in Ireland was responsible for the product was rejected. The DSB ruled that key decisions were made in the USA, making Microsoft Corp the main data controller.

The decision has broad implications, with millions of students and public-sector users relying on Microsoft 365. As Max Schrems of noyb warned, schools and other European institutions will remain unable to meet their legal obligations under the GDPR unless Microsoft makes structural changes.


Italy bans deepfake app that undresses people

Italy’s data protection authority has ordered an immediate suspension of the app Clothoff, which uses AI to generate fake nude images of real people. The company behind it, based in the British Virgin Islands, is now barred from processing personal data of Italian users.

The watchdog found that Clothoff enables anyone, including minors, to upload photos and create sexually explicit or pornographic deepfakes. The app fails to verify consent from those depicted and offers no warning that the images are artificially generated.

The regulator described the measure as urgent, citing serious risks to human dignity, privacy, and data protection, particularly for children and teenagers. It has also launched a wider investigation into similar so-called ‘nudifying’ apps that exploit AI technology.

Italian media have reported a surge in cases where manipulated images are used for harassment and online abuse, prompting growing social alarm. Authorities say they intend to take further steps to protect individuals from deepfake exploitation and strengthen safeguards around AI image tools.


Tech giants race to remake social media with AI

Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.

OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.

Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.

Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.


Study links higher screen time to weaker learning results in children

A study by researchers from Toronto’s Hospital for Sick Children and St. Michael’s Hospital has found a correlation between increased screen time before age eight and lower scores in reading and mathematics.

Published in the Journal of the American Medical Association, the study followed over 3,000 Ontario children from 2008 to 2023, comparing reported screen use with their EQAO standardised test results.

Lead author Dr Catherine Birken said each additional hour of daily screen use was associated with about a 10 per cent lower likelihood of meeting provincial standards in reading and maths.

The research did not distinguish between different types of screen activity and was based on parental reports, meaning it shows association rather than causation.

Experts suggest the findings align with previous research showing that extensive screen exposure can affect focus and reduce time spent on beneficial activities such as face-to-face interaction or outdoor play.

Dr Sachin Maharaj from the University of Ottawa noted that screens may condition children’s attention spans in ways that make sustained learning more difficult.

While some parents, such as Surrey’s Anne Whitmore, impose limits to balance digital exposure and development, Birken stressed that the study was not intended to assign blame.

She said encouraging balanced screen habits should be a shared effort among parents, educators and health professionals, with an emphasis on quality content and co-viewing as recommended by the Canadian Paediatric Society.


Google cautions Australia on youth social media ban proposal

US tech giant Google, which also owns YouTube, has reiterated its commitment to children’s online safety while cautioning against Australia’s proposed ban on social media use for those under 16.

Speaking before the Senate Environment and Communications References Committee, Google’s Public Policy Senior Manager Rachel Lord said the legislation, though well-intentioned, may be difficult to enforce and could have unintended effects.

Lord highlighted Google’s 23-year presence in Australia, noting that the company contributed over $53 billion to the economy in 2024, while YouTube’s creative ecosystem added $970 million to GDP and supported more than 16,000 jobs.

She said the company’s investments, including the $1 billion Digital Future Initiative, reflect its long-term commitment to Australia’s digital development and infrastructure.

According to Lord, YouTube already provides age-appropriate products and parental controls designed to help families manage their children’s experiences online.

Requiring children to access YouTube without accounts, she argued, would remove these protections and risk undermining safe access to educational and creative content used widely in classrooms, music, and sport.

She emphasised that YouTube functions primarily as a video streaming platform rather than a social media network, serving as a learning resource for millions of Australian children.

Lord called for legislation that strengthens safety mechanisms instead of restricting access, saying the focus should be on effective safeguards and parental empowerment rather than outright bans.


Age verification and online safety dominate EU ministers’ Horsens meeting

EU digital ministers are meeting in Horsens on 9–10 October to improve the protection of minors online. Age verification, child protection, and digital sovereignty are at the top of the agenda under the Danish EU Presidency.

The Informal Council Meeting on Telecommunications is hosted by the Ministry of Digital Affairs of Denmark and chaired by Caroline Stage. European Commission Executive Vice-President Henna Virkkunen is also attending to support discussions on shared priorities.

Ministers are considering measures to prevent children from accessing age-inappropriate platforms and reduce exposure to harmful features like addictive designs and adult content. Stronger safeguards across digital services are being discussed.

The talks also focus on Europe’s technological independence. Ministers aim to enhance the EU’s digital competitiveness and sovereignty while setting a clear direction ahead of the Commission’s upcoming Digital Fairness Act proposal.

A joint declaration, ‘The Jutland Declaration’, is expected as an outcome. It will highlight the need for stronger EU-level measures and effective age verification to create a safer online environment for children.
