OpenAI sets new rules for teen safety in AI use

OpenAI has outlined a new framework for balancing safety, privacy and freedom in its AI systems, with a strong focus on teenagers.

The company stressed that conversations with AI often involve sensitive personal information, which should be treated with the same level of protection as communications with doctors or lawyers.

At the same time, it aims to grant adult users broad freedom to direct AI responses, provided safety boundaries are respected.

The situation changes for younger users. Teenagers are seen as requiring stricter safeguards, with safety taking priority over privacy and freedom. OpenAI is developing age-prediction tools to identify users under 18, and where uncertainty exists, it will assume the user is a teenager.
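The "assume a teenager when uncertain" rule described above amounts to a conservative decision threshold on a classifier's output. As a purely hypothetical sketch (not OpenAI's actual system — the probability input and the threshold value are illustrative assumptions), such a gate could look like this:

```python
def access_mode(p_adult: float, adult_threshold: float = 0.9) -> str:
    """Conservative age gate: apply teen safeguards unless an
    age-prediction model is highly confident the user is an adult.

    p_adult: model's estimated probability that the user is 18+
    adult_threshold: illustrative confidence bar (assumed value)
    """
    return "adult" if p_adult >= adult_threshold else "teen-safeguards"

# Uncertain predictions fall through to the safer default.
print(access_mode(0.55))  # -> teen-safeguards
print(access_mode(0.97))  # -> adult
```

The design choice is deliberate asymmetry: a misclassified adult loses some freedom, while a misclassified teen loses protection, so the default favours safeguards.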

In some regions, identity verification may also be required to confirm age, a step the company admits reduces privacy but argues is essential for protecting minors.

Teen users will face tighter restrictions on certain types of content. ChatGPT will be trained not to engage in flirtatious exchanges, and sensitive issues such as self-harm will be carefully managed.

If signs of suicidal thoughts appear, the company says it will first try to alert parents. Where there is imminent risk and parents cannot be reached, OpenAI is prepared to notify the authorities.

The new approach raises questions about privacy trade-offs, the accuracy of age prediction, and the handling of false classifications.

Critics may also question whether restrictions on creative content hinder expression. OpenAI acknowledges these tensions but argues the risks faced by young people online require stronger protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia outlines guidelines for social media age ban

Australia has released its regulatory guidance for the incoming social media age restriction law, which takes effect on December 10. Users under 16 will be barred from holding accounts on most major platforms, including Instagram, TikTok, and Facebook.

The new guidance details what are considered ‘reasonable steps’ for compliance. Platforms must detect and remove underage accounts, communicating clearly with affected users. It remains uncertain whether removed accounts will have their content deleted or if they can be reactivated once the user turns 16.

Platforms are also expected to block attempts to re-register, including the use of VPNs or other workarounds. Companies are encouraged to implement a multi-step age verification process and provide users with a range of options, rather than relying solely on government-issued identification.

Blanket age verification won’t be required, nor will platforms need to store personal data from verification processes. Instead, companies must demonstrate effectiveness through system-level records. Existing data, such as an account’s creation date, may be used to estimate age.
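The account-creation-date heuristic mentioned above can only ever yield a lower bound on a user's age. A minimal sketch of that inference, assuming the platform historically enforced a minimum signup age of 13 (both the function and the constant are illustrative, not part of the official guidance):

```python
from datetime import date

MIN_SIGNUP_AGE = 13  # assumed historical minimum age at account creation

def min_possible_age(account_created: date, today: date) -> int:
    """Lower bound on a user's current age, inferred only from how
    long the account has existed plus the signup age floor."""
    years_held = (today - account_created).days // 365
    return MIN_SIGNUP_AGE + years_held

# An account opened in mid-2015 implies the holder is at least
# 13 + 10 = 23 years old by mid-2025.
print(min_possible_age(date(2015, 6, 1), date(2025, 6, 1)))  # -> 23
```

A bound like this can clear long-held accounts without any identity check, but says nothing about recently created accounts, which is why the guidance expects multi-step verification rather than a single signal.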

Under-16s will still be able to view content without logging in, for example, watching YouTube videos in a browser. However, shared access to adult accounts on family devices could present enforcement challenges.

Communications Minister Anika Wells stated that there is ‘no excuse for non-compliance.’ Each platform must now develop its own strategy to meet the law’s requirements ahead of the fast-approaching deadline.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EdChat AI app set for South Australian schools amid calls for careful use

South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.

Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.

Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.

While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.

RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and utilise the technology as a tool, rather than a substitute for learning.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces memory feature to Claude AI for workplace productivity

The AI startup Anthropic has added a memory feature to its Claude AI, designed to automatically recall details from earlier conversations, such as project information and team preferences.

Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.

Anthropic presents the tool as a way to improve workplace efficiency by sparing users from repeating instructions. Enterprise administrators have additional controls, including turning memory off entirely.

Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.

Analysts view the step as an effort to catch up with competitors like ChatGPT and Gemini, which already offer similar functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.

Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France pushes for nighttime social media curfews for teens

French lawmakers are calling for stricter regulations on teen social media use, including mandatory nighttime curfews, following a parliamentary report examining TikTok’s psychological impact on minors.

The 324-page report, published Thursday by a National Assembly Inquiry Commission, proposes that social media accounts for 15- to 18-year-olds be automatically disabled between 10 p.m. and 8 a.m. to help combat mental health issues.

The report contains 43 recommendations, including greater funding for youth mental health services, awareness campaigns in schools, and a national ban on social media access for those under 15. Platforms with algorithmic recommendation systems, like TikTok, are specifically targeted.

Arthur Delaporte, the lead rapporteur and a socialist MP, also announced plans to refer TikTok to the Paris Public Prosecutor, accusing the platform of knowingly exposing minors to harmful content.

The report follows a December 2024 lawsuit filed by seven families who claim TikTok’s content contributed to their children’s suicides.

TikTok rejected the accusations, calling the report “misleading” and highlighting its safety features for minors.

The report urges France not to wait for EU-level legislation and instead to lead on national regulation. President Emmanuel Macron previously demanded an EU-wide ban on social media for under-15s.

However, the European Commission has said cultural differences make such a bloc-wide approach unfeasible.

Looking ahead, the report supports stronger obligations in the upcoming Digital Fairness Act, such as giving users greater control over content feeds and limiting algorithmic manipulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC opens inquiry into AI chatbots and child safety

The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.

Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.

Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.

The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.

Other companies receiving orders include Character.AI and Elon Musk’s xAI.

The probe follows growing public concern over the psychological effects of generative AI on young people.

Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers social media restrictions for minors

European Commission President Ursula von der Leyen announced that the EU is considering tighter restrictions on children’s access to social media platforms.

During her annual State of the Union address, von der Leyen said the Commission is closely monitoring Australia’s approach, where individuals under 16 are banned from using platforms like TikTok, Instagram, and Snapchat.

‘I am watching the implementation of their policy closely,’ von der Leyen said, adding that a panel of experts will advise her on the best path forward for Europe by the end of 2025.

Currently, social media age limits are handled at the national level across the EU, with platforms generally setting a minimum age of 13. France, however, is moving toward a national ban for those under 15 unless an EU-wide measure is introduced.

Several EU countries, including the Netherlands, have already warned against children under 15 using social media, citing health risks.

In June, the European Commission issued child protection guidelines under the Digital Services Act, and began working with five member states on age verification tools, highlighting growing concern over digital safety for minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing concern over AI fatigue among students and teachers

Experts say growing exposure to AI is leaving many people exhausted, a phenomenon increasingly described as ‘AI fatigue’.

Educators and policymakers note that AI adoption surged before society had time to thoroughly weigh its ethical or social effects. The technology now underpins tasks from homework writing to digital art, leaving some feeling overwhelmed or displaced.

University students are among those most affected, with many relying heavily on AI for assignments. Teachers say it has become challenging to identify AI-generated work, as detection tools often produce inconsistent results.

Some educators are experimenting with low-tech classrooms, banning phones and requiring handwritten work. They report deeper conversations and stronger engagement when distractions are removed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

2025 State of the Union: Tech sovereignty amid geopolitical pressure

The European Commission President, Ursula von der Leyen, delivered her 2025 State of the Union address to the European Parliament in Strasbourg. The speech set out priorities for the coming year and was framed by growing geopolitical tensions and the push for a more self-reliant Europe.

Von der Leyen highlighted that global dynamics have shifted.

‘Battlelines for a new world order based on power are being drawn right now,’ she said.

In this context, Europe must take a more assertive role in defending its own security and advancing the technologies that will underpin its economic future. The President characterised this moment as a turning point for European independence.

Digital policy appeared less prominently than expected in the address. Von der Leyen often used ‘technology sovereignty’ to encompass not only digital technologies but also the other technologies needed for the green transition and for energy autonomy. Even so, some specific references to digital policy are worth highlighting.

  • Europe’s right to regulate. Von der Leyen defended Europe’s right to set its own standards and regulations. The assertion came right after her defence of the US-EU trade deal, making it a direct response to mounting pressure and tariff threats from US President Donald Trump’s administration.
  • Regulatory simplification. A specific regulatory package (omnibus) on digital matters was promised, inspired by the Draghi report on EU competitiveness.
  • Investment in digital technology. Startups in key areas, such as quantum and AI, could receive particular attention, in order to enhance the availability of European capital and strengthen European sovereignty in these areas. According to her, the Commission ‘will partner with private investors on a multi-billion euro Scaleup Europe Fund’. No concrete figures were provided, however.
  • Artificial intelligence as key to European independence. To support this sector, von der Leyen highlighted the importance of initiatives such as the Cloud and AI Development Act and the European AI Gigafactories. She praised the commitment of CEOs from leading European companies to invest in digital in the recently launched AI and Tech Declaration.
  • Mainstreaming information integrity. According to von der Leyen, Europe’s democracy is under attack, with the rise of information manipulation and disinformation. She proposed to create a new European Centre for Democratic Resilience, which will bring together all the expertise and capacity across member states and neighbouring countries. A new Media Resilience Programme aimed at supporting independent journalism and media literacy was also announced.
  • Limits to the use of social media by young people. The President of the Commission raised concerns about the impact of social media on children’s mental health and safety. She committed to convening a panel of experts to consider restrictions on social media access, referencing measures already in place in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teens turn to AI chatbots for support, raising mental health concerns

Mental health experts in Iowa have warned that teenagers are increasingly turning to AI chatbots instead of seeking human connection, raising concerns about misinformation and harmful advice.

The issue comes into focus on National Suicide Prevention Day, shortly after a lawsuit was filed against OpenAI over ChatGPT’s alleged role in a teenager’s suicide.

Jessica Bartz, a therapy supervisor at Vera French Duck Creek, said young people are at a vulnerable stage of identity formation while family communication often breaks down.

She noted that some teens use chatbot tools like ChatGPT, Genius and Copilot to self-diagnose, which can reinforce inaccurate or damaging ideas.

‘Sometimes AI can validate the wrong things,’ Bartz said, stressing that algorithms only reflect the limited information users provide.

Without human guidance, young people risk misinterpreting results and worsening their struggles.

Experts recommend that parents and trusted adults engage directly with teenagers, offering empathy and open communication instead of leaving them dependent on technology.

Bartz emphasised that nothing can replace a caring person noticing warning signs and intervening to protect a child’s well-being.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!