A major provider of three widely used nudify services has cut off Australian access after enforcement action from eSafety.
The company received an official warning in September for allowing its tools to be used to produce AI-generated material that harmed children.
The withdrawal follows concerns about incidents involving school students and repeated reminders that online services must meet Australia’s mandatory safety standards.
eSafety stated that Australia’s codes and standards are encouraging companies to adopt stronger safeguards.
The Commissioner noted that preventing the misuse of consumer tools remains central to reducing the risk of harm and that more precise boundaries can lower the likelihood of abuse affecting young people.
Attention has also turned to underlying models and the hosting platforms that distribute them.
Hugging Face has updated its terms to require users to take steps to mitigate the risks associated with uploaded models, including preventing misuse for generating harmful content. The company is required to act when reports or internal checks reveal breaches of its policies.
eSafety indicated that failure to comply with industry codes or standards can lead to enforcement measures, including significant financial penalties.
The agency is working with the government on further reforms intended to restrict access to nudify tools and strengthen protections across the technology stack.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Lawmakers in Virginia are preparing fresh efforts to regulate AI as concerns grow over its influence on minors and vulnerable users.
Legislators will return in January with a set of proposals focused on limiting the capabilities of chatbots, curbing deepfakes and restricting automated ticket-buying systems. The push follows a series of failed attempts last year to define high-risk AI systems and expand protections for consumers.
Delegate Michelle Maldonado aims to introduce measures that restrict what conversational agents can say in therapeutic interactions, preventing them from mimicking emotional support.
Her plans follow the well-publicised case of a sixteen-year-old who discussed suicidal thoughts with a chatbot before taking his own life. She argues that young people rely heavily on these tools and need stronger safeguards that recognise dangerous language and redirect users towards human help.
Maldonado will also revive a previous bill on high-risk AI, refining it to address particular sectors rather than broad categories.
Delegate Cliff Hayes is preparing legislation to require labels for synthetic media and to block AI systems from buying event tickets in bulk, a practice that distorts prices.
Hayes already secured a law preventing predictions from AI tools from being the sole basis for criminal justice decisions. He warns that the technology has advanced too quickly for policy to remain passive and urges a balance between innovation and protection.
The proposals come as the state continues to evaluate its regulatory environment under an executive order issued by Governor Glenn Youngkin.
The order directs the use of AI systems to scan the state code for unnecessary or conflicting rules, encouraging streamlined governance rather than strict statutory frameworks. Observers argue that human oversight remains essential as legislators search for common ground on how far to extend regulatory control.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Yesterday, Australia entered a new phase of its online safety framework after the introduction of the Social Media Minimum Age policy.
eSafety has established a new Parent Advisory Group to support families as the country transitions to enhanced safeguards for young people. The group held its first meeting, with the Commissioner underlining the need for practical and accessible guidance for carers.
The initiative brings together twelve organisations representing a broad cross-section of communities in Australia, including First Nations families, culturally diverse groups, parents of children with disability and households in regional areas.
Their role is to help eSafety refine its approach, so parents can navigate social platforms with greater confidence, rather than feeling unsupported during rapid regulatory change.
The group will advise on parent engagement, offer evidence-informed insights and test updated resources such as the redeveloped Online Safety Parent Guide.
Their advice will aim to ensure materials remain relevant, inclusive and able to reach priority communities that often miss out on official communications.
Members will serve voluntarily until June 2026 and will work with eSafety to improve distribution networks and strengthen the national conversation on digital literacy. Their collective expertise is expected to shape guidance that reflects real family experiences instead of abstract policy expectations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A US federal judge has condemned immigration agents in Chicago for using AI to draft use-of-force reports, warning that the practice undermines credibility. Judge Sara Ellis noted that one agent fed a short description and images into ChatGPT before submitting the report.
Body camera footage cited in the ruling showed discrepancies between events recorded and the written narrative. Experts say AI-generated accounts risk inaccuracies in situations where courts rely on an officer’s personal recollection to assess reasonableness.
Researchers argue that poorly supervised AI use could erode public trust and compromise privacy. Some warn that uploading images into public tools relinquishes control of sensitive material, exposing it to misuse.
Police departments across the US are still developing policies for safe deployment of generative tools. Several states now require officers to label AI-assisted reports, while specialists call for stronger guardrails before the technology is applied in high-stakes legal settings.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Two teenagers in Australia have taken the federal government to the High Court in an effort to stop the country’s under-16 social media ban, which is due to begin on 10 December. The case was filed by the Digital Freedom Project with two 15-year-olds, Noah Jones and Macy Neyland, listed as plaintiffs. The group says the law strips young people of their implied constitutional right to political communication.
The ban will lead to the deactivation of more than one million accounts held by users under 16 across platforms such as YouTube, TikTok, Snapchat, Twitch, Facebook and Instagram. The Digital Freedom Project argues that removing young people from these platforms blocks them from engaging in public debate. Neyland said the rules silence teens who want to share their views on issues that affect them.
The Digital Freedom Project’s president, John Ruddick, is a Libertarian Party politician in New South Wales. After the lawsuit became public, Communications Minister Anika Wells told Parliament the government would not shift its position in the face of legal threats. She said the government’s priority is supporting parents rather than platform operators.
The law, passed in November 2024, is supported by most Australians according to polling. The government says research links heavy social media use among young teens to bullying, misinformation and harmful body-image content.
Companies that fail to comply with the ban risk penalties of up to A$49.5 million. Lawmakers and tech firms abroad are watching how the rollout unfolds, as Australia’s approach is among the toughest efforts globally to restrict minors’ access to social platforms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission has launched a confidential tool enabling insiders at AI developers to report suspected rule breaches. The channel forms part of wider efforts to prepare for enforcement of the EU AI Act, which will introduce strict obligations for model providers.
Legal protections for users of the tool will only apply from August 2026, leaving early whistleblowers exposed to employer retaliation until the Act’s relevant provisions take effect. The Commission acknowledges the gap and stresses strong encryption to safeguard identities.
Advocates say the channel still offers meaningful progress. Karl Koch, founder of the AI whistleblower initiative, argues that existing EU whistleblowing rules on product safety may already cover certain AI-related concerns, potentially offering partial protection.
Koch also notes parallels with US practice, where regulators accept overseas tips despite limited powers to shield informants. The Commission’s transparency about current limitations has been welcomed by experts who view the tool as an important foundation for long-term AI oversight.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Macquarie Dictionary has named ‘AI slop’ its 2025 Word of the Year, reflecting widespread concern about the flood of low-quality, AI-generated content circulating online. The selection committee noted that the term captures a major shift in how people search for and evaluate information, stating that users now need to act as ‘prompt engineers’ to navigate the growing sea of meaningless material.
‘AI slop’ topped a shortlist packed with culturally resonant expressions, including ‘Ozempic face’, ‘blind box’, ‘ate (and left no crumbs)’ and ‘Roman Empire’. Honourable mentions went to emerging technology-related words such as ‘clankers’, referring to AI-powered robots, and ‘medical misogyny’.
The public vote aligned with the experts, also choosing ‘AI slop’ as its top pick.
The rise of the term reflects the explosive growth of AI over the past year, from social media content shared by figures like Donald Trump to deepfake-driven misinformation flagged by the Australian Electoral Commission. Language specialist David Astle compared AI slop to the modern equivalent of spam, noting its adaptability into new hybrid terms.
Asked about the title, ChatGPT said the win suggests people are becoming more critical of AI output, which is a reminder, it added, of the standard it must uphold.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A national push to bring AI into public schools has moved ahead in Greece after the launch of an intensive training programme for secondary teachers.
Staff in selected institutions will receive guidance on a custom version of ChatGPT designed for academic use, with a wider rollout planned for January.
The government aims to prepare educators for an era in which AI tools support lesson planning, research and personalised teaching instead of remaining outside daily classroom practice.
Officials view the initiative as part of a broader ambition to position Greece as a technological centre, supported by partnerships with major AI firms and new infrastructure projects in Athens. Students will gain access to the system next spring under tight supervision.
Supporters argue that generative tools could help teachers reduce administrative workload and make learning more adaptive.
Concerns remain strong among pupils and educators who fear that AI may deepen an already exam-driven culture.
Many students say they worry about losing autonomy and creativity, while teachers’ unions warn that reliance on automated assistance could erode critical thinking. Others point to the risk of increased screen use in a country preparing to block social media for younger teenagers.
Teacher representatives also argue that school buildings require urgent attention instead of high-profile digital reforms. Poor heating, unreliable electricity and decades of underinvestment complicate adoption of new technologies.
Educators who support AI stress that meaningful progress depends on using such systems as tools to broaden creativity rather than as shortcuts that reinforce rote learning.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Yesterday, OpenAI launched group chats worldwide for all ChatGPT users on the Free, Go, Plus and Pro plans, no longer limiting access to a handful of trial regions.
The upgrade follows a pilot in Japan and New Zealand and marks a turning point in how the company wants people to use AI in everyday communication.
Group chats enable up to twenty participants to collaborate in a shared space, where they can plan trips, co-write documents, or settle disagreements through collective decision-making.
ChatGPT remains available as a partner that contributes when tagged, reacts with emojis and references profile photos instead of taking over the conversation. Each participant keeps private settings and memory, which prevents personal information from being shared across the group.
Users start a session by tapping the people icon and inviting others directly or through a link. Adding someone later creates a new chat, rather than altering the original, which preserves previous discussions intact.
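The forking behaviour described above can be modelled as a small sketch. This is purely illustrative (the class and method names are hypothetical, not OpenAI’s implementation): adding a member produces a new chat object while the original conversation is left untouched.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class GroupChat:
    """Immutable snapshot of a group conversation."""
    members: frozenset
    messages: tuple = ()

    def post(self, author: str, text: str) -> "GroupChat":
        # Posting returns a new snapshot with the message appended.
        return GroupChat(self.members, self.messages + ((author, text),))

    def add_member(self, new_member: str) -> "GroupChat":
        # Adding someone later forks a NEW chat; the original chat
        # and its history remain intact. In this sketch the existing
        # messages are carried into the fork.
        return GroupChat(self.members | {new_member}, self.messages)


original = GroupChat(frozenset({"Ana", "Ben"})).post("Ana", "Trip ideas?")
forked = original.add_member("Caro")

assert "Caro" not in original.members        # original unchanged
assert "Caro" in forked.members              # fork has the new member
assert forked.messages == original.messages  # history preserved
```

Making the snapshot immutable (`frozen=True`) is one simple way to guarantee the rule the article describes: no operation can alter an existing chat in place.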
OpenAI presents the feature as a way to turn the assistant into a social environment rather than a solitary tool.
The announcement arrives shortly after the release of GPT-5.1 and follows the introduction of Sora, a social app that encourages users to create videos with friends.
OpenAI views group chats as the first step toward a more active role for AI in real human exchanges where people plan, create and make decisions together.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Northamptonshire Police will roll out live facial recognition cameras in three town centres. Deployments are scheduled in Northampton on 28 November and 5 December, in Kettering on 29 November, and in Wellingborough on 6 December.
The initiative uses a van on loan from Bedfordshire Police, and the watch-lists include high-risk sex offenders and people wanted for arrest. Facial and biometric data for non-alerts are deleted immediately; alert records are held for no more than 24 hours.
Police emphasise that the AI-based technology is ‘very much in its infancy’ but expect to acquire dedicated equipment in future. A coordinator post is being created to manage the LFR programme in-house.
British campaigners are concerned that the biometric tool may erode privacy or amount to mass surveillance. Police say appropriate signage and open policy documents will be in place to maintain public confidence.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!