Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.
Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.
Since the obligation took effect on 10 December, regulatory focus has shifted from preparation to monitoring and enforcement, targeting services assessed as age-restricted.
Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.
eSafety has emphasised the need for continuous improvement in age-assurance accuracy, alongside industry's responsibility to prevent circumvention.
Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.
Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.
Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Wikipedia marked its 25th anniversary by showcasing the rapid expansion of Wikimedia Enterprise and its growing tech partnerships. The milestone reflects Wikipedia’s evolution into one of the most trusted and widely used knowledge sources in the digital economy.
Amazon, Meta, Microsoft, Mistral AI, and Perplexity have joined the partner roster for the first time, alongside Google, Ecosia, and several other companies already working with Wikimedia Enterprise.
These organisations integrate human-curated Wikipedia content into search engines, AI models, voice assistants, and data platforms, helping deliver verified knowledge to billions of users worldwide.
Wikipedia remains one of the top ten most visited websites globally and the only one in that group operated by a non-profit organisation. With over 65 million articles in 300+ languages, the platform is a key dataset for training large language models.
Wikimedia Enterprise provides structured, high-speed access to this content through on-demand, snapshot, and real-time APIs, allowing companies to use Wikipedia data at scale while supporting its long-term sustainability.
As Wikipedia continues to expand into new languages and subject areas, its value for AI development, search, and specialised knowledge applications is expected to grow further.
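As a rough illustration of the on-demand access mode described above, the helper below builds a request for a single article. The base URL, the `v2/articles` resource name, and bearer-token authentication are assumptions for illustration based on Wikimedia Enterprise's public documentation, not details stated in this article.

```python
from urllib.parse import quote

# Hypothetical base URL for Wikimedia Enterprise's on-demand API
# (an assumption for illustration; check the official docs for the
# current version and resource names).
BASE_URL = "https://api.enterprise.wikimedia.com/v2"

def build_article_request(title: str, token: str) -> tuple[str, dict]:
    """Return the URL and headers for an on-demand article lookup.

    The API is assumed to use bearer-token authentication and
    URL-encoded article titles.
    """
    url = f"{BASE_URL}/articles/{quote(title, safe='')}"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_article_request("Digital diplomacy", "YOUR_API_TOKEN")
# The actual call requires a valid Enterprise account, e.g.:
#   import requests
#   resp = requests.get(url, headers=headers, timeout=10)
print(url)
```

The snapshot and real-time APIs mentioned above follow the same authenticated-HTTP pattern but deliver bulk dumps and change streams rather than single-article lookups.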
A new suite of open translation models, TranslateGemma, has been launched, bringing advanced multilingual capabilities to users worldwide. Built on the Gemma 3 architecture, the models support 55 languages and come in 4B, 12B, and 27B parameter sizes.
The release aims to make high-quality translation accessible across devices without compromising efficiency.
TranslateGemma delivers impressive performance gains, with the 12B model surpassing the 27B Gemma 3 baseline on WMT24++ benchmarks. The models achieve higher accuracy while requiring fewer parameters, enabling faster translations with lower latency.
The 4B model also performs on par with larger models, making it ideal for mobile deployment.
The development combines supervised fine-tuning on diverse parallel datasets with reinforcement learning guided by advanced metrics. TranslateGemma performs well in high- and low-resource languages and supports accurate text translation within images.
Designed for flexible deployment, the models cater to mobile devices, consumer laptops, and cloud environments. Researchers and developers can use TranslateGemma to build customised translation solutions and improve coverage for low-resource languages.
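The deployment trade-off described above can be sketched as a simple sizing helper that picks the largest TranslateGemma variant fitting a device's memory budget. The gigabyte figures are rough assumptions (about 2 bytes per parameter for half-precision weights), not numbers from the release.

```python
# Approximate half-precision weight footprints for the three released
# checkpoint sizes (~2 bytes per parameter). These are rough
# assumptions for illustration, not official figures.
MODEL_SIZES_GB = {"4B": 8, "12B": 24, "27B": 54}

def pick_model_size(available_gb: float) -> str:
    """Return the largest TranslateGemma variant that fits the budget,
    falling back to the smallest (4B) for constrained devices."""
    fitting = [name for name, gb in MODEL_SIZES_GB.items() if gb <= available_gb]
    return max(fitting, key=lambda n: MODEL_SIZES_GB[n]) if fitting else "4B"

print(pick_model_size(6))    # constrained mobile device
print(pick_model_size(32))   # consumer laptop
print(pick_model_size(80))   # cloud GPU node
```

In practice, quantisation and runtime overhead shift these thresholds, but the pattern matches the article's point: the 4B model targets mobile, while the 12B and 27B variants suit laptops and cloud environments.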
Children and young adults across South Asia are increasingly turning to AI tools for emotional reassurance, schoolwork and everyday advice, even while acknowledging their shortcomings.
Easy access to smartphones, cheap data and social pressures have made chatbots a constant presence, often filling gaps left by limited human interaction.
Researchers and child safety experts warn that growing reliance on AI risks weakening critical thinking, reducing social trust and exposing young users to privacy and bias-related harms.
Studies show that many children understand AI can mislead or oversimplify, yet receive little guidance at school or home on how to question outputs or assess risks.
Rather than banning AI outright, experts argue for child-centred regulation, stronger safeguards and digital literacy that involves parents, educators and communities.
Without broader social support systems and clear accountability from technology companies, AI risks becoming a substitute for human connection instead of a tool that genuinely supports learning and wellbeing.
Elon Musk’s X has limited the image editing functions of its Grok AI tool after criticism over the creation of sexualised images of real people.
The platform said technological safeguards have been introduced to block such content in regions where it is illegal, following growing concern from governments and regulators.
UK officials described the move as a positive step, although regulatory scrutiny remains ongoing.
Authorities are examining whether X complied with existing laws, while similar investigations have been launched in the US amid broader concerns over the misuse of AI-generated imagery.
International pressure has continued to build, with some countries banning Grok entirely instead of waiting for platform-led restrictions.
Policy experts have welcomed stronger controls but questioned how effectively X can identify real individuals and enforce its updated rules across different jurisdictions.
At General Wolfe School and other Winnipeg classrooms, students are using AI tools to help with tasks such as translating language and understanding complex terms, with teachers guiding them on how to verify AI-generated information against reliable sources.
Teachers are cautious but optimistic, developing a thinking framework that prioritises critical thinking and human judgement alongside AI use rather than rigid policies as the technology evolves.
Educators in the Winnipeg School Division are adapting teaching methods to incorporate AI while discouraging over-reliance, stressing that students should use AI as an aid rather than a substitute for learning.
This reflects broader discussions in education about how to balance innovation with foundational skills as AI becomes more commonplace in school environments.
Rising concern surrounds the growing number of people seeking help after becoming victims of AI-generated intimate deepfakes in Guernsey, a British Crown Dependency. Support services report a steady increase in cases.
Existing law criminalises sharing intimate images without consent, but creating AI-generated intimate imagery remains legal. Proposed reforms aim to close this gap and strengthen victim protection.
Police and support charities warn that deepfakes cause severe emotional harm and are challenging to prosecute. Cross-border platforms and anonymous perpetrators complicate enforcement and reporting.
Rising use of AI is transforming cyberattacks in the UAE, enabling deepfakes, automated phishing and rapid data theft. Expanding digital services increase exposure for businesses and residents.
Criminals deploy autonomous AI tools to scan networks, exploit weaknesses and steal information faster than humans. Shorter detection windows raise risks of breaches, disruption and financial loss.
High-value sectors such as government, finance and healthcare face sustained targeting amid skills shortages. Protection relies on cautious users, stronger governance and secure-by-design systems across smart infrastructure.
UK lawmaker Jess Asato said an AI-altered image depicting her in a bikini circulated online. The incident follows wider reports of sexualised deepfake abuse targeting women on social media.
Platforms hosted thousands of comments, including further manipulated images, heightening distress. Victims describe the content as realistic, dehumanising and violating personal consent.
UK government ministers have pledged to ban nudification tools and criminalise non-consensual intimate images. Technology firms face pressure to remove content, suspend accounts, and follow Ofcom guidance to maintain a safe online environment.
A Grok-powered AI support tool has been added to Starlink’s website, expanding automated help for broadband users. The chatbot builds on a similar service already available through the company’s mobile app.
Users can access the chatbot via the checkout support page, receiving a link by email. Responses are limited to Starlink services and usually appear within several seconds.
The system is designed to streamline support for millions of users worldwide, including rural UK customers. Public opinion remains divided over the growing reliance on AI instead of human support staff.