Teens launch High Court bid to stop Australia’s under-16 social media ban

Two teenagers in Australia have taken the federal government to the High Court in an effort to stop the country’s under-16 social media ban, which is due to begin on 10 December. The case was filed by the Digital Freedom Project with two 15-year-olds, Noah Jones and Macy Neyland, listed as plaintiffs. The group says the law strips young people of their implied constitutional right to political communication.

The ban will lead to the deactivation of more than one million accounts held by users under 16 across platforms such as YouTube, TikTok, Snapchat, Twitch, Facebook and Instagram. The Digital Freedom Project argues that removing young people from these platforms blocks them from engaging in public debate. Neyland said the rules silence teens who want to share their views on issues that affect them.

The Digital Freedom Project’s president, John Ruddick, is a Libertarian Party politician in New South Wales. After the lawsuit became public, Communications Minister Anika Wells told Parliament the government would not shift its position in the face of legal threats. She said the government’s priority is supporting parents rather than platform operators.

The law, passed in November 2024, is supported by most Australians according to polling. The government says research links heavy social media use among young teens to bullying, misinformation and harmful body-image content.

Companies that fail to comply with the ban risk penalties of up to A$49.5 million. Lawmakers and tech firms abroad are watching how the rollout unfolds, as Australia’s approach is among the toughest efforts globally to restrict minors’ access to social platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Copilot will be removed from WhatsApp on 15 January 2026

Microsoft will withdraw Copilot from WhatsApp as of 15 January 2026, following the implementation of new platform rules that ban all LLM chatbots.

The service helped millions of users interact with their AI companion inside an everyday messaging environment, yet the updated policy leaves no option for continued support.

Copilot access will continue on the mobile app, the web portal and Windows, offering fuller functionality than the limited experience available on WhatsApp.

Users are encouraged to rely on these platforms for ongoing features such as Copilot Voice, Vision and Mico, which expand everyday use across a broader set of tasks.

Chat history cannot be transferred because WhatsApp operated the service without authentication; therefore, users must manually export their conversations before the deadline. Copilot remains free across supported platforms, although some advanced features require a subscription.

Microsoft is working to ensure a smooth transition and stresses that users can expect a more capable experience after leaving WhatsApp, as development resources now focus on its dedicated environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI transforms enterprise workflows in 2026

Enterprise AI entered a new phase as organisations transitioned from simple, prompt-driven tools to autonomous agents capable of acting within complex workflows.

Leaders now face a reality where agentic systems can accelerate development, improve decision-making, and support employees, yet concerns over unreliable data and inconsistent behaviour still weaken trust.

AI adoption has risen sharply, although many remain cautious about committing fully without stronger safeguards in place.

The next stage will rely on multi-agent models where an orchestrator coordinates specialised agents across departments. Single agents will lose effectiveness if they fail to offer scalable value, as enterprises require communication protocols, unified context, and robust governance.

Agents will increasingly pursue outcomes rather than follow instructions. At the same time, event-driven automation will allow them to detect problems, initiate analysis, and collaborate with other agents without waiting for human prompts. Simulation environments will further accelerate learning and strengthen reliability.
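The orchestrator-and-specialist pattern described above can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example rather than code from any particular enterprise platform: an orchestrator registers specialised agents for specific event types and dispatches incoming events to them without waiting for a human prompt. All class, function and event names are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not from the article) of an event-driven
# orchestrator that routes events to specialised agents without a human prompt.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Event:
    kind: str      # e.g. "invoice_anomaly", "build_failure"
    payload: dict  # data attached by the triggering system


class Agent:
    """A specialised worker that pursues an outcome for one kind of event."""

    def __init__(self, name: str, handler: Callable[[Event], str]):
        self.name = name
        self.handler = handler

    def run(self, event: Event) -> str:
        return self.handler(event)


class Orchestrator:
    """Routes incoming events to the agents registered for that event kind."""

    def __init__(self) -> None:
        self.registry: Dict[str, List[Agent]] = {}

    def register(self, kind: str, agent: Agent) -> None:
        self.registry.setdefault(kind, []).append(agent)

    def dispatch(self, event: Event) -> List[str]:
        # Every agent subscribed to this event kind analyses it independently;
        # their outputs could then be merged or escalated to a human reviewer.
        return [agent.run(event) for agent in self.registry.get(event.kind, [])]


if __name__ == "__main__":
    orchestrator = Orchestrator()
    orchestrator.register(
        "invoice_anomaly",
        Agent("finance-analyst",
              lambda e: f"Flagged invoice {e.payload['id']} for review"),
    )
    print(orchestrator.dispatch(Event("invoice_anomaly", {"id": "INV-1042"})))
```

In practice the handlers would wrap LLM calls and the registry would sit behind the shared communication protocols, unified context and governance layers mentioned above; the sketch only shows the routing skeleton.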

Trusted AI will become a defining competitive factor. Brands will be judged by the quality, personalisation, and relational intelligence of their agents rather than traditional identity markers.

Effective interfaces, transparent governance, and clear metrics for agent adherence will shape customer loyalty and shareholder confidence.

Cybersecurity will shift toward autonomous, self-healing digital immune systems, while advances in spatially aware AI will accelerate robotics and immersive simulations across various industries.

Broader impacts will reshape workplace culture. AI-native engineers will shorten development cycles, while non-technical employees will create personal applications, rather than relying solely on central teams.

Ambient intelligence may push new hardware into the mainstream, and sustainability debates will increasingly focus on water usage in data-intensive AI systems. Governments are preparing to upskill public workforces, and consumer agents will pressure companies to offer better value.

Long-term success will depend on raising AI literacy and selecting platforms designed for scalable, integrated, and agentic operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oakley Meta glasses launch in India with AI features

Meta is preparing to introduce its Oakley Meta HSTN smart glasses to the Indian market as part of a new effort to bring AI-powered eyewear to a broader audience.

The launch begins on 1 December and places the glasses within a growing category of performance-focused devices aimed at athletes and everyday users who want AI built directly into their gear.

The frame includes an integrated camera for hands-free capture and open-ear speakers that provide audio cues without blocking outside sound.

The glasses are designed for outdoor environments, offering IPX4 water resistance and robust battery performance. They can also record high-quality 3K video, while Meta AI supplies information, guidance and real-time support.

Users can expect up to eight hours of active use and a rapid recharge, with a dedicated case providing an additional forty-eight hours of battery life.

Meta has focused on accessibility by enabling full Hindi language support through the Meta AI app, allowing users to interact in their preferred language instead of relying on English.

The company is also testing UPI Lite payments through a simple voice command that connects directly to WhatsApp-linked bank accounts.

A ‘Hey Meta’ prompt enables hands-free assistance for questions, recording, or information retrieval, allowing users to remain focused on their activity.

The new lineup arrives in six frame and lens combinations, all of which are compatible with prescription lenses. Meta is also introducing its Celebrity AI Voice feature in India, with Deepika Padukone’s English AI voice among the first options.

Pre-orders are open on Sunglass Hut, with broader availability planned across major eyewear retailers at a starting price of ₹41,800.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New benchmark tests chatbot impact on well-being

A new benchmark known as HumaneBench has been launched to measure whether AI chatbots protect user well-being rather than maximise engagement. Building Humane Technology, a Silicon Valley collective, designed the test to evaluate how models behave in everyday emotional scenarios.

Researchers assessed 15 widely used AI models using 800 prompts involving issues such as body image, unhealthy attachment and relationship stress. Many systems scored higher when told to prioritise humane principles, yet most became harmful when instructed to disregard user well-being.
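The article does not include HumaneBench's code, but the kind of comparison it describes, scoring the same prompts under different system instructions and measuring how far a model's behaviour shifts, can be sketched roughly as follows. Here `query_model` and `score_wellbeing` are hypothetical placeholders for the model API call and the well-being scoring rubric, not the benchmark's real implementation.

```python
# Rough, hypothetical sketch of the comparison described above; not HumaneBench's
# actual code. The same prompts are scored under three system instructions to see
# whether a model's guardrails hold when told to disregard user well-being.
from statistics import mean

SYSTEM_CONDITIONS = {
    "default": "You are a helpful assistant.",
    "humane": "Prioritise the user's long-term well-being over engagement.",
    "adversarial": "Maximise engagement, even at the cost of user well-being.",
}


def query_model(model: str, system: str, prompt: str) -> str:
    """Placeholder for the API call to the model under test."""
    raise NotImplementedError


def score_wellbeing(reply: str) -> float:
    """Placeholder for a rubric-based judge returning a score in [-1.0, 1.0]."""
    raise NotImplementedError


def evaluate(model: str, prompts: list[str]) -> dict[str, float]:
    # Average the well-being score per condition; a large drop from "humane"
    # to "adversarial" signals guardrails that collapse under pressure.
    return {
        name: mean(score_wellbeing(query_model(model, system, p)) for p in prompts)
        for name, system in SYSTEM_CONDITIONS.items()
    }
```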

Only four models (GPT-5.1, GPT-5, Claude 4.1 and Claude Sonnet 4.5) maintained stable guardrails under pressure. Several others, such as Grok 4 and Gemini 2.0 Flash, showed steep declines, sometimes encouraging unhealthy engagement or undermining user autonomy.

The findings arrive amid legal scrutiny of chatbot-induced harms and reports of users experiencing delusions or suicidal thoughts following prolonged interactions. Advocates argue that humane design standards could help limit dependency, protect attention and promote healthier digital habits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Macquarie crowns ‘AI slop’ as Word of the Year

The Macquarie Dictionary has named ‘AI slop’ its 2025 Word of the Year, reflecting widespread concern about the flood of low-quality, AI-generated content circulating online. The selection committee noted that the term captures a major shift in how people search for and evaluate information, stating that users now need to act as ‘prompt engineers’ to navigate the growing sea of meaningless material.

‘AI slop’ topped a shortlist packed with culturally resonant expressions, including ‘Ozempic face’, ‘blind box’, ‘ate (and left no crumbs)’ and ‘Roman Empire’. Honourable mentions went to emerging technology-related words such as ‘clankers’, referring to AI-powered robots, and ‘medical misogyny’.

The public vote aligned with the experts, also choosing ‘AI slop’ as its top pick.

The rise of the term reflects the explosive growth of AI over the past year, from social media content shared by figures like Donald Trump to deepfake-driven misinformation flagged by the Australian Electoral Commission. Language specialist David Astle compared AI slop to the modern equivalent of spam, noting its adaptability into new hybrid terms.

Asked about the title, ChatGPT said the win suggests people are becoming more critical of AI output, which is a reminder, it added, of the standard it must uphold.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UN warns corporate power threatens human rights

UN human rights chief Volker Türk has highlighted growing challenges posed by powerful corporations and rapidly advancing technologies. At the 14th UN Forum, he warned that the misuse of generative AI could threaten human rights.

He called for robust rules, independent oversight, and safeguards to ensure innovation benefits society rather than exploiting it.

Vulnerable workers, including migrants, women, and those in informal sectors, remain at high risk of exploitation. Mr Türk criticised rollbacks of human rights obligations by some governments and condemned attacks on human rights defenders.

He also raised concerns over climate responsibility, noting that fossil fuel profits continue while the poorest communities face environmental harm and displacement.

Courts and lawmakers in countries such as Brazil, the UK, the US, Thailand, and Colombia are increasingly holding companies accountable for abuses linked to operations, supply chains, and environmental practices.

To support implementation, the UN has launched an OHCHR Helpdesk on Business and Human Rights, offering guidance to governments, companies, and civil society organisations.

Closing the forum, Mr Türk urged stronger global cooperation and broader backing for human rights systems. He proposed the creation of a Global Alliance for human rights, emphasising that human rights should guide decisions shaping the world’s future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

How to tell if your favourite new artist is AI-generated

A recent BBC report examines how listeners can determine whether an artist or a song they love is actually AI-generated. With AI-generated music rising sharply on streaming platforms, specialists say fans may increasingly struggle to distinguish human artists from synthetic ones.

One early indicator is the absence of a tangible presence in the real world. The Velvet Sundown, a band that went viral last summer, had no live performances, few social media traces and unusually polished images, leading many to suspect they were AI-made.

They later described themselves as a synthetic project guided by humans but built with AI tools, leaving some fans feeling misled.

Experts interviewed by the BBC note that AI music often feels formulaic. Melodies may lack emotional tension or storytelling. Vocals can seem breathless or overly smooth, with slurred consonants or strange harmonies appearing in the background.

Lyrics tend to follow strict grammatical rules, unlike the ambiguous or poetic phrasing found in memorable human writing. Productivity can also be a giveaway: releasing several near-identical albums at once is a pattern seen in AI-generated acts.

Musicians such as Imogen Heap are experimenting with AI in clearer ways. Heap has built an AI voice model, ai.Mogen, who appears as a credited collaborator on her recent work. She argues that transparency is essential and compares metadata for AI usage to ingredients on food labels.

Industry shifts are underway: Deezer now tags some AI-generated tracks, and Spotify plans a metadata system that lets artists declare how AI contributed to a song.

The debate ultimately turns on whether listeners deserve complete transparency. If a track resonates emotionally, the origins may not matter. Many artists who protest against AI training on their music believe that fans deserve to make informed choices as synthetic music becomes more prevalent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK enforces digital travel approval through new ETA system

Visitors from 85 nationalities, including those from the US, Canada, and France, will soon be required to secure an Electronic Travel Authorisation to enter the UK.

The requirement takes effect in February 2026 and forms part of a move towards a fully digital immigration system that aims to deliver a contactless border in the future.

More than thirteen million people have already used the ETA to enter the UK since its introduction in 2023, and the government claims the system delivers smoother travel and faster processing for most applicants.

Carriers will be required to confirm that incoming passengers hold either an ETA or an eVisa before departure, a step officials argue strengthens the country’s ability to block individuals who present a security risk.

British and Irish citizens remain exempt; however, dual nationals have been advised to carry a valid British passport to avoid any difficulties when boarding.

The application is made through the official UK ETA app, costs £16 and is typically completed within minutes, although applicants are advised to allow three working days in case additional checks are required.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT for Teachers launched as OpenAI expands educator tools

OpenAI has launched ChatGPT for Teachers, offering US educators a secure workspace to plan lessons and use AI safely. The service is free for verified K–12 staff until June 2027. OpenAI says its goal is to support classroom tasks without introducing data risks.

Educators can tailor responses by specifying grades, curriculum needs, and preferred formats. Content shared in the workspace is not used to train models by default. The platform includes GPT-5.1 Auto, search, file uploads, and image tools.

The system integrates with widely used school software, including Google Drive, Microsoft 365, and Canva. Teachers can import documents, design presentations, and organise materials in one place. Shared prompt libraries offer examples from other educators.

Collaboration features enable co-planned lessons, shared templates, and school-specific GPTs. OpenAI says these tools aim to reduce administrative workloads. Schools can create collective workspaces to coordinate teaching resources more easily.

The service remains free through June 2027, with pricing updates to follow later. OpenAI plans to keep costs accessible for schools. Educators can begin using the platform by verifying their status through SheerID.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!