Social media platforms ordered to enforce minimum age rules in Australia

Australia’s eSafety Commissioner has formally notified major social media platforms, including Facebook, Instagram, TikTok, Snapchat, and YouTube, that they must comply with new minimum age restrictions from 10 December.

The rule will require these services to prevent users under 16 from creating accounts.

eSafety determined that nine popular services currently meet the definition of age-restricted platforms since their main purpose is to enable online social interaction. Platforms that fail to take reasonable steps to block underage users may face enforcement measures, including fines of up to 49.5 million Australian dollars.

The agency clarified that the list of age-restricted platforms will not remain static, as new services will be reviewed and reassessed over time. Others, such as Discord, Google Classroom, and WhatsApp, are excluded for now as they do not meet the same criteria.

Commissioner Julie Inman Grant said the new framework aims to delay children’s exposure to social media and limit harmful design features such as infinite scroll and opaque algorithms.

She emphasised that age limits are only part of a broader effort to build safer, more age-appropriate online environments supported by education, prevention, and digital resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI introduces IndQA to test AI on Indian languages and culture

US AI company OpenAI has introduced IndQA, a new benchmark designed to test how well AI systems understand and reason across Indian languages and cultural contexts. The benchmark covers 2,278 questions in 12 languages and 10 cultural domains, from literature and food to law and spirituality.

Developed with input from 261 Indian experts, IndQA evaluates AI models through rubric-based grading that assesses accuracy, cultural understanding, and reasoning depth. Questions were created to challenge leading OpenAI models, including GPT-4o and GPT-5, ensuring space for future improvement.

India was chosen as the first region for the initiative, reflecting its linguistic diversity and its position as ChatGPT’s second-largest market.

OpenAI aims to expand the approach globally, using IndQA as a model for building culturally aware benchmarks that help measure real progress in multilingual AI performance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Facebook update lets admins make private groups public safely

Meta has introduced a new Facebook update allowing group administrators to change their private groups to public while keeping members’ privacy protected. The company said the feature gives admins more flexibility to grow their communities without exposing existing private content.

All posts, comments, and reactions shared before the change will remain visible only to previous members, admins, and moderators. The member list will also stay private. Once converted, any new posts will be visible to everyone, including non-Facebook users, which helps discussions reach a broader audience.

Admins have three days to review and cancel the conversion before it becomes permanent. Members will be notified when a group changes its status, and a globe icon will appear when posting in public groups as a reminder of visibility settings.

Groups can be switched back to private at any time, restoring member-only access.

Meta said the feature supports community growth and deeper engagement while maintaining privacy safeguards. Group admins can also utilise anonymous or nickname-based participation options, providing users with greater control over their engagement in public discussions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google removes Gemma AI model following defamation claims

Google has removed its Gemma AI model from AI Studio after US Senator Marsha Blackburn accused it of producing false sexual misconduct claims about her. The senator said Gemma fabricated an incident supposedly dating from her 1987 campaign, citing nonexistent news links to support the claim.

Blackburn described the AI’s response as defamatory and demanded action from Google.

The controversy follows a similar case involving conservative activist Robby Starbuck, who claims Google’s AI tools made false accusations about him. Google acknowledged that AI ‘hallucinations’ are a known issue but insisted it is working to mitigate such errors.

Blackburn argued these fabrications go beyond harmless mistakes and represent real defamation from a company-owned AI model.

Google stated that Gemma was never intended as a consumer-facing tool, noting that some non-developers misused it to ask factual questions. The company confirmed it would remove the model from AI Studio while keeping it accessible via API for developers.

The incident has reignited debates over AI bias and accountability. Blackburn highlighted what she sees as a consistent pattern of conservative figures being targeted by AI systems, amid wider political scrutiny over misinformation and AI regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian police create AI tool to decode predators’ slang

Australian police are developing an AI tool with Microsoft to decode slang and emojis used by online predators. The technology is designed to interpret coded messages in digital conversations to help investigators detect harmful intent more quickly.

Federal Police Commissioner Krissy Barrett said social media has become a breeding ground for exploitation, bullying, and radicalisation. The AI-based prototype, she explained, could allow officers to identify threats earlier and rescue children before abuse occurs.

Barrett also warned about the rise of so-called ‘crimefluencers’, offenders using social media trends to lure young victims, many of whom are pre-teen or teenage girls. Australian authorities believe understanding modern online language is key to disrupting their methods.

The initiative follows Australia’s new under-16 social media ban, due to take effect in December. Regulators worldwide are monitoring the country’s approach as governments struggle to balance online safety with privacy and digital rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft leaders envision AI as an invisible partner in work and play

AI, gaming and work were at the heart of the discussion during the Paley International Council Summit, where three Microsoft executives explored how technology is reshaping human experience and industry structures.

Mustafa Suleyman, Phil Spencer and Ryan Roslansky offered perspectives on the next phase of digital transformation, from personalised AI companions to the evolution of entertainment and the changing nature of work.

Mustafa Suleyman, CEO of Microsoft AI, described a future where AI becomes an invisible companion that quietly assists users. He explained that AI is moving beyond standalone apps to integrate directly into systems and browsers, performing tasks through natural language rather than manual navigation.

With features like Copilot on Windows and Edge, users can let AI automate everyday functions, creating a seamless experience where technology anticipates rather than responds.

Phil Spencer, CEO of Microsoft Gaming, underlined gaming’s cultural impact, noting that the industry now surpasses film, books and music combined. He emphasised that gaming’s interactive nature offers lessons for all media, where creativity, participation and community define success.

For Spencer, the future of entertainment lies in blending audience engagement with technology, allowing fans and creators to shape experiences together.

Ryan Roslansky, CEO of LinkedIn, discussed how AI is transforming skills and workforce dynamics. He highlighted that required job skills are changing faster than ever, with adaptability, AI literacy and human-centred leadership becoming essential.

Roslansky urged companies to focus on potential and continuous learning instead of static job descriptions, suggesting that the most successful organisations will be those that evolve with technology and cultivate resilience through education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers classifying ChatGPT as a search engine under the DSA. What are the implications?

The European Commission is considering whether OpenAI’s ChatGPT should be designated as a ‘Very Large Online Search Engine’ (VLOSE) under the Digital Services Act (DSA), a move that could reshape how generative AI tools are regulated across Europe.

OpenAI recently reported that ChatGPT’s search feature reached 120.4 million monthly users in the EU over the past six months, well above the 45 million threshold that triggers stricter obligations for major online platforms and search engines. The Commission confirmed it is reviewing the figures and assessing whether ChatGPT meets the criteria for designation.

The key question is whether ChatGPT’s live search function should be treated as an independent service or as part of the chatbot as a whole. Legal experts note that the DSA applies to intermediary services such as hosting platforms or search engines, categories that do not neatly encompass generative AI systems.

Implications for OpenAI

If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks.

‘As part of mitigation measures, OpenAI may need to adapt ChatGPT’s design, features, and functionality,’ said Laureline Lemoine of AWO. ‘Compliance could also slow the rollout of new tools in Europe if risk assessments aren’t planned in advance.’

The company could also face new data-sharing obligations under Article 40 of the DSA, allowing vetted researchers to request information about systemic risks and mitigation efforts, potentially extending to model data or training processes.

A test case for AI oversight

Legal scholars say the decision could set a precedent for generative AI regulation across the EU. ‘Classifying ChatGPT as a VLOSE will expand scrutiny beyond what’s currently covered under the AI Act,’ said Natali Helberger, professor of information law at the University of Amsterdam.

Experts warn the DSA would shift OpenAI from voluntary AI-safety frameworks and self-defined benchmarks to binding obligations, moving beyond narrow ‘bias tests’ to audited systemic-risk assessments, transparency and mitigation duties. ‘The DSA’s due diligence regime will be a tough reality check,’ said Mathias Vermeulen, public policy director at AWO.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NYPD sued over Microsoft-linked surveillance system

The New York Police Department is facing a lawsuit from the Surveillance Technology Oversight Project (S.T.O.P.), which accuses it of running an invasive citywide surveillance network built with Microsoft technology.

The system, known as the Domain Awareness System (DAS), has operated since 2012 and connects more than a dozen surveillance tools, including video cameras, biometric scanners, license plate readers, and financial analytics, into one centralised network. According to court filings, the system collects location data, social media activity, vehicle information, and even banking details to create ‘digital profiles’ of millions of residents.

S.T.O.P. argues that the network captures and stores data on all New Yorkers, including those never suspected of a crime, amounting to a ‘web of surveillance’ that violates constitutional rights. The group says newly obtained records show that DAS integrates citywide cameras, 911 and 311 call logs, police databases, and feeds from drones and helicopters into a single monitoring platform.

Calling DAS ‘an unprecedented violation of American life’, the organisation has asked the US District Court for the Southern District of New York to declare the city’s surveillance practices unconstitutional.

This is not the first time Microsoft’s technology has drawn scrutiny this year over data tracking and storage; its recently announced ‘Recall’ feature also raised alarm over potential privacy issues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

National internet shutdown grips Tanzania during contested vote

Tanzania is facing a nationwide internet shutdown that began as citizens headed to the polls in a tense general election. Connectivity across the country has been severely disrupted, with platforms like X (formerly Twitter), WhatsApp, and Instagram rendered inaccessible.

The blackout, confirmed by monitoring group NetBlocks, has left journalists, election observers, and citizens in Tanzania struggling to share updates as reports of protests and unrest spread. The government has reportedly deployed the army, deepening concerns over efforts to control information during this volatile period.

The move mirrors a growing global pattern where authorities restrict internet access during elections and political crises to curb dissent and manage narratives. Amnesty International has condemned the shutdown, warning that it risks escalating tensions and violating citizens’ right to information.

‘Authorities must ensure full internet access and allow free reporting before, during, and after the elections,’ said Tigere Chagutah, Amnesty’s Regional Director for East and Southern Africa.

Tanzania’s blackout follows similar crackdowns elsewhere, such as Afghanistan’s total internet shutdown, which left citizens completely cut off from the world.

These incidents underscore the fragility of digital freedoms in times of political turmoil. When governments ‘pull the plug,’ societies lose not only communication but also trust, transparency, and the fundamental ability to hold power to account.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US Internet Bill of Rights unveiled as response to global safety laws

A proposed US Internet Bill of Rights aims to protect digital freedoms as governments expand online censorship laws. The framework, developed by privacy advocates, calls for stronger guarantees of free expression, privacy, and access to information in the digital era.

Supporters argue that recent legislation such as the UK’s Online Safety Act, the EU’s Digital Services Act, and US proposals like KOSA and the STOP HATE Act have eroded civil liberties. They claim these measures empower governments and private firms to control online speech under the guise of safety.

The proposed US bill sets out rights including privacy in digital communications, platform transparency, protection against government surveillance, and fair access to the internet. It also calls for judicial oversight of censorship requests, open algorithms, and the protection of anonymous speech.

Advocates say the framework would enshrine digital freedoms through federal law or constitutional amendment, ensuring equal access and privacy worldwide. They argue that safeguarding free and open internet access is vital to preserve democracy and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!