Teens launch High Court bid to stop Australia’s under-16 social media ban

Two teenagers in Australia have taken the federal government to the High Court in an effort to stop the country’s under-16 social media ban, which is due to begin on 10 December. The case was filed by the Digital Freedom Project with two 15-year-olds, Noah Jones and Macy Neyland, listed as plaintiffs. The group says the law strips young people of their implied constitutional right to political communication.

The ban will lead to the deactivation of more than one million accounts held by users under 16 across platforms such as YouTube, TikTok, Snapchat, Twitch, Facebook and Instagram. The Digital Freedom Project argues that removing young people from these platforms blocks them from engaging in public debate. Neyland said the rules silence teens who want to share their views on issues that affect them.

The Digital Freedom Project’s president, John Ruddick, is a Libertarian Party politician in New South Wales. After the lawsuit became public, Communications Minister Anika Wells told Parliament the government would not shift its position in the face of legal threats. She said the government’s priority is supporting parents rather than platform operators.

The law, passed in November 2024, is supported by most Australians according to polling. The government says research links heavy social media use among young teens to bullying, misinformation and harmful body-image content.

Companies that fail to comply with the ban risk penalties of up to A$49.5 million. Lawmakers and tech firms abroad are watching how the rollout unfolds, as Australia’s approach is among the toughest efforts globally to restrict minors’ access to social platforms.

Would you like to learn more aboutAI, tech and digital diplomacyIf so, ask our Diplo chatbot!

New benchmark tests chatbot impact on well-being

A new benchmark known as HumaneBench has been launched to measure whether AI chatbots protect user well-being rather than maximise engagement. Building Humane Technology, a Silicon Valley collective, designed the test to evaluate how models behave in everyday emotional scenarios.

Researchers assessed 15 widely used AI models using 800 prompts involving issues such as body image, unhealthy attachment and relationship stress. Many systems scored higher when told to prioritise humane principles, yet most became harmful when instructed to disregard user well-being.

Only four models, including GPT 5.1, GPT 5, Claude 4.1 and Claude Sonnet 4.5, maintained stable guardrails under pressure. Several others, such as Grok 4 and Gemini 2.0 Flash, showed steep declines, sometimes encouraging unhealthy engagement or undermining user autonomy.

The findings arrive amid legal scrutiny of chatbot-induced harms and reports of users experiencing delusions or suicidal thoughts following prolonged interactions. Advocates argue that humane design standards could help limit dependency, protect attention and promote healthier digital habits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India confronts rising deepfake abuse as AI tools spread

Deepfake abuse is accelerating across India as AI tools make it easy to fabricate convincing videos and images. Researchers warn that manipulated media now fuels fraud, political disinformation and targeted harassment. Public awareness often lags behind the pace of generative technology.

Recent cases involving Ranveer Singh and Aamir Khan showed how synthetic political endorsements can spread rapidly online. Investigators say cloned voices and fabricated footage circulated widely during election periods. Rights groups warn that such incidents undermine trust in media and public institutions.

Women face rising risks from non-consensual deepfakes used for harassment, blackmail and intimidation. Cases involving Rashmika Mandanna and Girija Oak intensified calls for stronger protections. Victims report significant emotional harm as edited images spread online.

Security analysts warn that deepfakes pose growing risks to privacy, dignity and personal safety. Users can watch for cues such as uneven lighting, distorted edges, or overly clean audio. Experts also advise limiting the sharing of media and using strong passwords and privacy controls.

Digital safety groups urge people to avoid engaging with manipulated content and to report suspected abuse promptly. Awareness and early detection remain critical as cases continue to rise. Policymakers are being encouraged to expand safeguards and invest in public education on emerging risks associated with AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Pope Leo warns teens not to outsource schoolwork to AI

During a livestream from the Vatican to the National Catholic Youth Conference in Indianapolis, Pope Leo XIV warned roughly 15,000 young people not to rely on AI to do their homework.

He described AI as ‘one of the defining features of our time’ but insisted that responsible use should promote personal growth, not shortcut learning: ‘Don’t ask it to do your homework for you.’

Leo also urged teens to be deliberate with their screen time and use technology in ways that nurture faith, community and authentic friendships. He warned that while AI can process data quickly, it cannot replace real wisdom or the capacity for moral judgement.

His remarks reflect a broader concern from the Vatican about the impact of AI on the development of young people. In a previous message to a Vatican AI ethics conference, he emphasised that access to data is not the same as accurate intelligence. That youth must not let AI stunt their growth or compromise their dignity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Spain opens inquiry into Meta over privacy concerns

Prime Minister of Spain, Pedro Sánchez, has announced that an investigation will be launched against Meta following concerns over a possible large-scale violation of user privacy.

The company will be required to explain its conduct before the parliamentary committee on economy, trade and digital transformation instead of continuing to handle the issue privately.

Several research centres in Spain, Belgium and the Netherlands uncovered a concealed tracking tool used on Android devices for almost a year.

Their findings showed that web browsing data had been linked to identities on Facebook and Instagram even when users relied on incognito mode or a VPN.

The practice may have contravened key European rules such as the GDPR, the ePrivacy Directive, the Digital Markets Act and the Digital Services Act, while class action lawsuits are already underway in Germany, the US and Canada.

Pedro Sánchez explained that the investigation aims to clarify events, demand accountability from company leadership and defend any fundamental rights that might have been undermined.

He stressed that the law in Spain prevails over algorithms, platforms or corporate size, and those who infringe on rights will face consequences.

The prime minister also revealed a package of upcoming measures to counter four major threats in the digital environment. A plan that focuses on disinformation, child protection, hate speech and privacy defence instead of reactive or fragmented actions.

He argued that social media offers value yet has evolved into a space shaped by profit over well-being, where engagement incentives overshadow rights. He concluded that the sector needs to be rebuilt to restore social cohesion and democratic resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Twitch is classified as age-restricted by the Australian regulator

Australia’s online safety regulator has moved to classify Twitch as an age-restricted social media platform after ruling that the service is centred on user interaction through livestreamed content.

The decision means Twitch must take reasonable steps to stop children under sixteen from creating accounts from 10 December instead of relying on its own internal checks.

Pinterest has been treated differently after eSafety found that its main purpose is image collection and idea curation instead of social interaction.

As a result, the platform will not be required to follow age-restriction rules. The regulator stressed that the courts hold the final say on whether a service is age-restricted. Yet, the assessments were carried out to support families and industry ahead of the December deadline.

The ruling places Twitch alongside earlier named platforms such as Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X and YouTube.

eSafety expects all companies operating in Australia to examine their legal responsibilities and has provided a self assessment tool to guide platforms that may fall under the social media minimum age requirements.

eSafety confirmed that assessments have been completed in stages to offer timely advice while reviews were still underway. The regulator added that no further assessments will be released before 10 December as preparations for compliance continue across the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Under-16s face new online restrictions as Malaysia tightens oversight

Malaysia plans to introduce a ban on social media accounts for people under 16 starting in 2026, becoming the latest country to push stricter digital age limits for children. Communications Minister Fahmi Fadzil said the government aims to better protect minors from cyberbullying, online scams and sexual exploitation.

Authorities are reviewing verification methods used abroad, including electronic age checks through national ID cards or passports, though an exact enforcement date has not yet been set.

The move follows new rules introduced earlier this year, which require major digital platforms in Malaysia to obtain a licence if they have more than eight million users. Licensed services must adopt age-verification tools, content-safety measures and clearer transparency standards, part of a wider effort to create a safer online environment for young people and families.

Australia, which passed the world’s first nationwide ban on social media accounts for children under 16, is serving as a key reference point for Malaysia’s plans. The Australian law takes effect on 10 December and imposes heavy fines on platforms like Facebook, TikTok, Instagram, X and YouTube if they fail to prevent underage users from signing up.

The move has drawn global attention as governments grapple with the impact of social media on young audiences. Similar proposals are emerging elsewhere in Europe.

Denmark has recently announced its intention to block social media access for children under 15, while Norway is advancing legislation that would introduce a minimum age of 15 for opening social media accounts. Countries adopting such measures say stricter age limits are increasingly necessary to address growing concerns about online safety and the well-being of children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Greece accelerates AI training for teachers

A national push to bring AI into public schools has moved ahead in Greece after the launch of an intensive training programme for secondary teachers.

Staff in selected institutions will receive guidance on a custom version of ChatGPT designed for academic use, with a wider rollout planned for January.

The government aims to prepare educators for an era in which AI tools support lesson planning, research and personalised teaching instead of remaining outside daily classroom practice.

Officials view the initiative as part of a broader ambition to position Greece as a technological centre, supported by partnerships with major AI firms and new infrastructure projects in Athens. Students will gain access to the system next spring under tight supervision.

Supporters argue that generative tools could help teachers reduce administrative workload and make learning more adaptive.

Concerns remain strong among pupils and educators who fear that AI may deepen an already exam-driven culture.

Many students say they worry about losing autonomy and creativity, while teachers’ unions warn that reliance on automated assistance could erode critical thinking. Others point to the risk of increased screen use in a country preparing to block social media for younger teenagers.

Teacher representatives also argue that school buildings require urgent attention instead of high-profile digital reforms. Poor heating, unreliable electricity and decades of underinvestment complicate adoption of new technologies.

Educators who support AI stress that meaningful progress depends on using such systems as tools to broaden creativity rather than as shortcuts that reinforce rote learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI use rises among Portuguese youth

A recent survey reveals that 38.7% of Portuguese individuals aged 16 to 74 used AI tools in the three months preceding the interview, primarily for personal purposes. Usage is particularly high among 16 to 24-year-olds (76.5%) and students (81.5%).

Internet access remains widespread, with 89.5% of residents going online recently. Nearly half (49.6%) placed orders online, primarily for clothing, footwear, and fashion accessories, while 74.2% accessed public service websites, often using a Citizen Card or Digital Mobile Key for authentication.

Digital skills are growing, with 59.2% of the population reaching basic or above basic levels. Young adults and tertiary-educated individuals show the highest digital proficiency, at 83.4% and 88.4% respectively.

Household internet penetration stands at 90.9%, predominantly via fixed connections.

Concerns about online safety are on the rise, as 45.2% of internet users reported encountering aggressive or discriminatory content, up from 35.5% in 2023. Reported issues include discrimination based on nationality, politics, and sexual identity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!