Two teenagers in Australia have taken the federal government to the High Court in an effort to stop the country’s under-16 social media ban, which is due to begin on 10 December. The case was filed by the Digital Freedom Project with two 15-year-olds, Noah Jones and Macy Neyland, listed as plaintiffs. The group says the law strips young people of their implied constitutional freedom of political communication.
The ban will lead to the deactivation of more than one million accounts held by users under 16 across platforms such as YouTube, TikTok, Snapchat, Twitch, Facebook and Instagram. The Digital Freedom Project argues that removing young people from these platforms blocks them from engaging in public debate. Neyland said the rules silence teens who want to share their views on issues that affect them.
The Digital Freedom Project’s president, John Ruddick, is a Libertarian Party politician in New South Wales. After the lawsuit became public, Communications Minister Anika Wells told Parliament the government would not shift its position in the face of legal threats. She said the government’s priority is supporting parents rather than platform operators.
The law, passed in November 2024, is supported by most Australians according to polling. The government says research links heavy social media use among young teens to bullying, misinformation and harmful body-image content.
Companies that fail to comply with the ban risk penalties of up to A$49.5 million. Lawmakers and tech firms abroad are watching how the rollout unfolds, as Australia’s approach is among the toughest efforts globally to restrict minors’ access to social platforms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission has launched a confidential tool enabling insiders at AI developers to report suspected rule breaches. The channel forms part of wider efforts to prepare for enforcement of the EU AI Act, which will introduce strict obligations for model providers.
Legal protections for users of the tool will only apply from August 2026, leaving early whistleblowers exposed to employer retaliation until the Act’s relevant provisions take effect. The Commission acknowledges the gap and stresses strong encryption to safeguard identities.
Advocates say the channel still offers meaningful progress. Karl Koch, founder of the AI whistleblower initiative, argues that existing EU whistleblowing rules on product safety may already cover certain AI-related concerns, potentially offering partial protection.
Koch also notes parallels with US practice, where regulators accept overseas tips despite limited powers to shield informants. The Commission’s transparency about current limitations has been welcomed by experts who view the tool as an important foundation for long-term AI oversight.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Macquarie Dictionary has named ‘AI slop’ its 2025 Word of the Year, reflecting widespread concern about the flood of low-quality, AI-generated content circulating online. The selection committee noted that the term captures a major shift in how people search for and evaluate information, stating that users now need to act as ‘prompt engineers’ to navigate the growing sea of meaningless material.
‘AI slop’ topped a shortlist packed with culturally resonant expressions, including ‘Ozempic face’, ‘blind box’, ‘ate (and left no crumbs)’ and ‘Roman Empire’. Honourable mentions went to emerging technology-related words such as ‘clankers’, referring to AI-powered robots, and ‘medical misogyny’.
The public vote aligned with the experts, also choosing ‘AI slop’ as its top pick.
The rise of the term reflects the explosive growth of AI over the past year, from social media content shared by figures like Donald Trump to deepfake-driven misinformation flagged by the Australian Electoral Commission. Language specialist David Astle compared AI slop to the modern equivalent of spam, noting its adaptability into new hybrid terms.
Asked about the title, ChatGPT said the win suggests people are becoming more critical of AI output, which is a reminder, it added, of the standard it must uphold.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A national push to bring AI into public schools has moved ahead in Greece after the launch of an intensive training programme for secondary teachers.
Staff in selected institutions will receive guidance on a custom version of ChatGPT designed for academic use, with a wider rollout planned for January.
The government aims to prepare educators for an era in which AI tools support lesson planning, research and personalised teaching instead of remaining outside daily classroom practice.
Officials view the initiative as part of a broader ambition to position Greece as a technological centre, supported by partnerships with major AI firms and new infrastructure projects in Athens. Students will gain access to the system next spring under tight supervision.
Supporters argue that generative tools could help teachers reduce administrative workload and make learning more adaptive.
Concerns remain strong among pupils and educators who fear that AI may deepen an already exam-driven culture.
Many students say they worry about losing autonomy and creativity, while teachers’ unions warn that reliance on automated assistance could erode critical thinking. Others point to the risk of increased screen use in a country preparing to block social media for younger teenagers.
Teacher representatives also argue that school buildings require urgent attention instead of high-profile digital reforms. Poor heating, unreliable electricity and decades of underinvestment complicate adoption of new technologies.
Educators who support AI stress that meaningful progress depends on using such systems as tools to broaden creativity rather than as shortcuts that reinforce rote learning.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
As of yesterday, OpenAI has rolled out group chats worldwide to all ChatGPT users on the Free, Go, Plus and Pro plans, rather than limiting access to small trial regions.
The upgrade follows a pilot in Japan and New Zealand and marks a turning point in how the company wants people to use AI in everyday communication.
Group chats enable up to twenty participants to collaborate in a shared space, where they can plan trips, co-write documents, or settle disagreements through collective decision-making.
ChatGPT remains available as a partner that contributes when tagged, reacts with emojis and references profile photos instead of taking over the conversation. Each participant keeps private settings and memory, which prevents personal information from being shared across the group.
Users start a session by tapping the people icon and inviting others directly or through a link. Adding someone later creates a new chat, rather than altering the original, which preserves previous discussions intact.
OpenAI presents the feature as a way to turn the assistant into a social environment rather than a solitary tool.
The announcement arrives shortly after the release of GPT-5.1 and follows the introduction of Sora, a social app that encourages users to create videos with friends.
OpenAI views group chats as the first step toward a more active role for AI in real human exchanges where people plan, create and make decisions together.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Northamptonshire Police will roll out live facial recognition cameras in three town centres. Deployments are scheduled in Northampton on 28 November and 5 December, in Kettering on 29 November, and in Wellingborough on 6 December.
The initiative uses a van on loan from Bedfordshire Police, and the watchlists include high-risk sex offenders and people wanted for arrest. Facial and biometric data for non-alerts are deleted immediately; data for alerts are held for no more than 24 hours.
Police emphasise that the AI-based technology is ‘very much in its infancy’ but expect to acquire dedicated equipment in future. A coordinator post is being created to manage the LFR programme in-house.
British campaigners express concern the biometric tool may erode privacy or resemble mass surveillance. Police assert that appropriate signage and open policy documents will be in place to maintain public confidence.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google CEO Sundar Pichai advises people not to trust AI tools unquestioningly, warning that current models remain prone to errors. He told the BBC that users should rely on a broader information ecosystem rather than treat AI as a single source of truth.
Pichai said generative systems can produce inaccuracies and stressed that people must learn what the tools are good at. The remarks follow criticism of Google’s own AI Overviews feature, which attracted attention for erratic and misleading responses during its rollout.
Experts say the risk grows when users depend on chatbots for health, science, or news. BBC research found major AI assistants misrepresented news stories in nearly half of the tests this year, underscoring concerns about factual reliability and the limits of current models.
Google is launching Gemini 3.0, which it claims offers stronger multimodal understanding and reasoning. The company says its new AI Mode in search marks a shift in how users interact with online information, as it seeks to defend market share against ChatGPT and other rivals.
Pichai says Google is increasing its investment in AI security and releasing tools to detect AI-generated images. He maintains that no single company should control such powerful technology and argues that the industry remains far from a scenario in which one firm dominates AI development.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
US House Republicans are mounting a new effort to block individual states from regulating AI, reviving a proposal that the Senate overwhelmingly rejected just four months ago. Their push aligns with President Donald Trump’s call for a single federal AI standard, which he argues is necessary to avoid a ‘patchwork’ of state-level rules that he claims hinder economic growth and fuel what he described as ‘woke AI.’
House Majority Leader Steve Scalise is now attempting to insert the measure into the National Defense Authorization Act, a must-pass annual defence policy bill expected to be finalised in the coming weeks. If successful, the move would place a moratorium on state-level AI regulation, effectively ending the states’ current role as the primary rule-setters on issues ranging from child safety and algorithmic fairness to workforce impacts.
The proposal faces significant resistance, including from within the Republican Party. Lawmakers who blocked the earlier attempt in July warned that stripping states of their authority could weaken protections in areas such as copyright, child safety, and political speech.
Critics, such as Senator Marsha Blackburn and Florida Governor Ron DeSantis, argue that the measure would amount to a handout to Big Tech and leave states unable to guard against the use of predatory or intrusive AI.
Congressional leaders hope to reach a deal before the Thanksgiving recess, but the ultimate fate of the measure remains uncertain. Any version of the moratorium would still need bipartisan support in the Senate, where most legislation requires 60 votes to advance.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
TikTok has announced new tools to help users shape and understand AI-generated content (AIGC) in their feeds. A new ‘Manage Topics’ control will let users adjust how much AI content appears in their For You feeds alongside keyword filters and the ‘not interested’ option.
The aim is to personalise content rather than remove it entirely.
To strengthen transparency, TikTok is testing ‘invisible watermarking’ for AI-generated content created with TikTok tools or uploaded using C2PA Content Credentials. Combined with creator labels and AI detection, these watermarks help track and identify content even if edited or re-uploaded.
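TikTok has not published the details of its watermarking scheme, and production AIGC watermarks (including those aligned with C2PA Content Credentials) are cryptographically signed and designed to survive editing. Purely as a toy illustration of the general idea of an invisible watermark, the sketch below hides a short identifier in the least-significant bits of pixel values and recovers it later; every name in it is hypothetical.

```python
# Illustrative least-significant-bit (LSB) watermark: hide a short ID
# in the low bits of 8-bit pixel values. This is only a toy showing the
# embed/extract round trip; real AIGC watermarks are far more robust.

def embed(pixels: list[int], tag: bytes) -> list[int]:
    # Flatten the tag into individual bits, LSB-first within each byte.
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the lowest bit only
    return out

def extract(pixels: list[int], n_bytes: int) -> bytes:
    # Read the low bit of each pixel back into bytes, LSB-first.
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes)
    )

# Round trip on a fake 8-bit greyscale "image".
image = list(range(256))
marked = embed(image, b"AIGC")
assert extract(marked, 4) == b"AIGC"
```

Because only the lowest bit of each pixel changes, the mark is imperceptible; the trade-off, and the reason real systems do something much stronger, is that trivial re-encoding destroys it.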
The platform has launched a $2 million AI literacy fund to support global experts in creating educational content on responsible AI. TikTok collaborates with industry partners and non-profits like Partnership on AI to promote transparency, research, and best practices.
Investments in AI extend beyond moderation and labelling. TikTok is developing innovative features such as Smart Split and AI Outline to enhance creativity and discovery, while using AI to protect user safety and improve the well-being of its trust and safety teams.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Public opinion surveys face a growing threat as AI becomes capable of producing highly convincing fake responses. New research from Dartmouth shows that AI-generated answers can pass every quality check, imitate real human behaviour and alter poll predictions without leaving evidence.
In several major polls conducted before the 2024 US election, inserting only a few dozen synthetic responses would have reversed expected outcomes.
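The arithmetic behind that claim is easy to check: in a poll with a realistic sample size, a narrow lead comes down to a handful of responses, so a few dozen one-sided synthetic answers can flip the headline result. The sample size and margin below are illustrative, not figures from the Dartmouth study.

```python
# Toy demonstration of how few fake responses flip a close poll.
# Sample size and margin are illustrative, not from the study.

def leader(votes_a: int, votes_b: int) -> str:
    """Return which candidate leads the raw tally (ties go to B)."""
    return "A" if votes_a > votes_b else "B"

n = 1500                       # typical national poll sample
votes_a = 755                  # candidate A leads 50.3% to 49.7%
votes_b = n - votes_a

print(leader(votes_a, votes_b))           # prints "A": honest poll

fake = 40                                 # a few dozen synthetic responses
votes_b_poisoned = votes_b + fake         # all fakes back candidate B
print(leader(votes_a, votes_b_poisoned))  # prints "B": lead reversed
```

A 10-vote lead out of 1,500 responses is well inside the attack budget, which is why undetectable synthetic respondents are so corrosive to polling.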
The study reveals how easily malicious actors could influence democratic processes. The AI models can be operated in any language yet deliver flawless English answers, allowing foreign groups to bypass detection.
An autonomous synthetic respondent created for the study passed nearly all attention tests, avoided errors in logic puzzles and adjusted its tone to match assigned demographic profiles instead of exposing its artificial nature.
The potential consequences extend far beyond electoral polling. Many scientific disciplines rely heavily on survey data to track public health risks, measure consumer behaviour or study mental wellbeing.
If AI-generated answers infiltrate such datasets, the reliability of thousands of studies could be compromised, weakening evidence used to shape policy and guide academic research.
Financial incentives further raise the risk. Human participants earn modest fees, while AI can produce survey responses at almost no cost. Existing detection methods failed to identify the synthetic respondent at any stage.
The researcher urges survey companies to adopt new verification systems that confirm the human identity of participants, arguing that stronger safeguards are essential to protect democratic accountability and the wider research ecosystem.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!