Roblox faces Dutch investigation over child welfare concerns

Dutch officials will study how the gaming platform affects young users, focusing on safety, mental health, and privacy. The assessment aims to identify both the benefits and risks of Roblox. Authorities say the findings will help guide new policies and support parents in protecting their children online.

Roblox has faced mounting criticism over unsafe content and the presence of online predators. Reports of games containing violent or sexual material have raised alarms among parents and child protection groups.

The US state of Louisiana recently sued Roblox, alleging that it enabled systemic child exploitation through negligence. Dutch experts argue that similar concerns justify a thorough review in the Netherlands.

Previous Dutch investigations have examined platforms such as Instagram, TikTok, and Snapchat under similar children’s rights frameworks. Policymakers hope the Roblox review will set clearer standards for digital child safety across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT to exit WhatsApp after Meta policy change

OpenAI says ChatGPT will leave WhatsApp on 15 January 2026 after Meta’s new rules banning general-purpose AI chatbots on the platform. ChatGPT will remain available on iOS, Android, and the web, the company said.

Users are urged to link their WhatsApp number to a ChatGPT account to preserve history, as WhatsApp does not support exporting these chats. OpenAI will also let users unlink their phone numbers after linking.

Until now, users could message ChatGPT on WhatsApp to ask questions, search the web, generate images, or talk to the assistant. Similar third-party bots offered comparable features.

Meta quietly updated WhatsApp’s business API to prohibit AI providers from accessing or using it, directly or indirectly. The change effectively forces ChatGPT, Perplexity, Luzia, Poke, and others to shut down their WhatsApp bots.

The move highlights platform risk for AI assistants and shifts demand toward native apps and web. Businesses relying on WhatsApp AI automations will need alternatives that comply with Meta’s policies.

Innovation versus risk shapes Australia’s AI debate

At the AI Leadership Summit in Brisbane, Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.

The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed direction but pressed for certainty to unlock stalled investment.

Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.

The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.

CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.

AI still struggles to mimic natural human conversation

A recent study reveals that large language models such as ChatGPT-4, Claude, Vicuna, and Wayfarer still struggle to replicate natural human conversation. Researchers found AI over-imitates, misuses filler words, and struggles with natural openings and closings, revealing its artificial nature.

The research, led by Eric Mayor with contributions from Lucas Bietti and Adrian Bangerter, compared transcripts of human phone conversations with AI-generated ones. AI can speak correctly, but subtle social cues like timing, phrasing, and discourse markers remain hard to mimic.

Misplaced words such as ‘so’ or ‘well’ and awkward conversation transitions make AI dialogue recognisably non-human. Openings and endings also pose a challenge. Humans naturally engage in small talk or closing phrases such as ‘see you soon’ or ‘alright, then,’ which AI systems often fail to reproduce convincingly.

These gaps in social nuance, researchers argue, prevent large language models from consistently fooling people in conversation tests.

Despite rapid progress, experts caution that AI may never fully capture all elements of human interaction, such as empathy and social timing. Advances may narrow the gap, but key differences will likely remain, keeping AI speech subtly distinguishable from real human dialogue.

PAHO issues new guide on designing AI prompts for public health

The Pan American Health Organization (PAHO) has released a guide with practical advice on creating effective AI prompts for public health. The guide, ‘AI prompt design for public health’, helps professionals use AI responsibly to generate accurate and culturally appropriate content.

PAHO says generative AI aids in public health alerts, reports, and educational materials, but its effectiveness depends on clear instructions. The guide highlights that well-crafted prompts enable AI systems to generate meaningful content efficiently, reducing review time while maintaining quality.

The organisation advises health institutions to treat prompts as ‘living protocols’ that can be tested and refined to suit different audiences and languages. It also recommends developing prompt libraries to improve consistency across public health operations.
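The ‘prompt library’ idea can be pictured as a small registry of versioned, reusable templates that teams test and refine over time. The sketch below is purely illustrative: the `PromptTemplate` class, the template wording, and the library keys are assumptions for demonstration, not material from the PAHO guide.

```python
# Minimal sketch of a prompt library treated as a set of 'living protocols':
# versioned templates that are tested, annotated, and reused across teams.
# All names and template text here are hypothetical, not PAHO's own wording.
from dataclasses import dataclass


@dataclass
class PromptTemplate:
    """A versioned prompt with placeholders filled per audience and language."""
    name: str
    version: str
    template: str
    notes: str = ""  # review comments accumulated from test runs

    def render(self, **values: str) -> str:
        # Fill the placeholders (e.g. {topic}, {audience}, {language}).
        return self.template.format(**values)


# A hypothetical library keyed by task, so teams reuse vetted prompts consistently.
LIBRARY = {
    "health_alert": PromptTemplate(
        name="health_alert",
        version="1.2",
        template=(
            "Write a public health alert about {topic} for {audience}, "
            "in {language}, at a plain-language reading level. "
            "Keep it under 150 words and include one clear call to action."
        ),
        notes="v1.2: added word limit after review found outputs too long.",
    ),
}

prompt = LIBRARY["health_alert"].render(
    topic="dengue prevention", audience="caregivers", language="Spanish"
)
print(prompt)
```

Versioning and the `notes` field capture the ‘tested and refined’ loop the guide describes: each revision records why the prompt changed, and downstream teams always render from the current vetted version.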

Human oversight remains crucial, especially when AI-generated content could influence public behaviour or policy decisions.

The initiative forms part of PAHO’s broader Digital Literacy Programme, which seeks to strengthen the digital skills of health professionals throughout the Americas. Better prompt design aims to boost communication, accelerate decision-making, and advance digital transformation in healthcare.

Samsung unveils AI-powered redesign of its corporate Newsroom

South Korean firm Samsung Electronics has redesigned its official Newsroom, transforming it into a multimedia platform built around visuals, video and AI-driven features.

The revamped site aligns with the growing dominance of visual communication, aiming to make corporate storytelling more intuitive, engaging and accessible.

The updated homepage features an expanded horizontal carousel showcasing videos, graphics and feature stories with hover-based summaries for quick insight. Users can browse by theme, play videos directly and enjoy a seamless experience across all Samsung devices.

The redesign also introduces an integrated media hub with improved press tools, content filters and high-resolution downloads. Journalists can now save full articles, videos and images in one click, simplifying access to media materials.

AI integration adds smart summaries and upgraded search capabilities, including tag- and image-based discovery. These tools enhance relevance and retrieval speed, while flexible sorting and keyword highlighting refine user experience.

As Samsung celebrates a decade since launching its Newsroom, such a transformation marks a step toward a more dynamic, interactive communication model designed for both consumers and media professionals in the AI era.

Meta changes WhatsApp terms to block third-party AI assistants

Meta-owned WhatsApp has updated the terms of its Business API to forbid general-purpose AI chatbots from being hosted or distributed via its platform. The change will take effect on 15 January 2026.

Under the revised terms, WhatsApp will not allow providers of AI or machine-learning technologies, including large language models, generative AI platforms, or general-purpose AI assistants, to use the WhatsApp Business Solution when such technologies are the primary functionality being provided.

Meta says the Business API was designed for companies to communicate with their customers, not as a distribution channel for standalone AI assistants. The company emphasises that this update does not affect businesses using AI for defined functions like customer support, reservations or order tracking.

The move is significant for the AI ecosystem. Several startups and major players had offered their assistants via WhatsApp, including OpenAI (ChatGPT) and Perplexity AI. These will now have to rethink how they integrate or distribute on WhatsApp.

Meta also notes that the volume of messages from these chatbots imposed strain on WhatsApp’s infrastructure and deviated from the intended business-to-customer messaging model. Furthermore, by limiting such usage, Meta retains stronger control over how its platform is monetised.

For third-party AI providers, the implication is clear: WhatsApp will no longer serve as a platform for generic assistants but rather for business workflows or task-specific bots. This redefinition realigns the platform’s strategy and draws a clearer boundary between enterprise usage and public-facing AI services.

UK actors’ union demands rights as AI uses performers’ likenesses without consent

The British performers’ union Equity has warned of coordinated mass action against technology companies and entertainment producers that use its members’ images, voices or likenesses in artificial-intelligence-generated content without proper consent.

Equity’s general secretary, Paul W Fleming, announced plans to mobilise tens of thousands of actors through subject access requests under data-protection law, compelling companies to disclose whether they have used performers’ data in AI content.

The move follows a growing number of complaints from actors about alleged misuse of their likenesses or voices in AI material. One prominent case involves Scottish actor Briony Monroe, who claims her facial features and mannerisms were used to create the synthetic performer ‘Tilly Norwood’. The AI studio behind the character denies the allegations.

Equity says the strategy is intended to ‘make it so hard for tech companies and producers to not enter into collective rights deals’. It argues that existing legislation is being circumvented as foundational AI models are trained using data from actors, but with little transparency or compensation.

The trade body Pact, which represents studios and producers, acknowledges the importance of performer rights but counters that firms which forgo new AI tools may fall behind commercially. It also complains about the lack of transparency from AI companies on what data is used to train their systems.

In essence, the standoff reflects deeper tensions in the creative industries: how to balance innovation, performer rights and transparency in an era when digital likenesses and synthetic ‘actors’ are emerging rapidly.

Tech giants fund teacher AI training amid classroom chatbot push

Major technology companies are shifting strategic emphasis toward education by funding teacher training in artificial intelligence. Companies such as Microsoft, OpenAI and Anthropic have pledged millions of dollars to train educators and bring chatbots into classrooms.

Under a deal with the American Federation of Teachers (AFT) in the United States, Microsoft will contribute $12.5 million over five years, OpenAI will provide $8 million plus $2 million in technical resources, and Anthropic has pledged $500,000. The AFT plans to build AI training hubs, including one in New York, and aims to train around 400,000 teachers over five years.

At a workshop in San Antonio, dozens of teachers used AI tools such as ChatGPT, Google’s Gemini and Microsoft Copilot to generate lesson plans, podcasts and bilingual flashcards. One teacher noted how quickly AI could generate materials: ‘It can save you so much time.’

However, the initiative raises critical questions. Educators expressed concerns about being replaced by AI, while unions emphasise that teachers must lead training content and maintain control over learning. Technology companies see this as a way to expand into education, but also face scrutiny over influence and the implications for teaching practice.

As schools increasingly adopt AI tools, experts say training must go beyond technical skills to cover ethical use, student data protection and critical thinking. The reforms reflect a broader push to prepare both teachers and students for a future defined by AI.

Civil groups question independence of Irish privacy watchdog

More than 40 civil society organisations have asked the European Commission to investigate Ireland’s privacy regulator. Their letter questions whether the Irish Data Protection Commission (DPC) remains independent following the appointment of a former Meta lobbyist as Commissioner.

Niamh Sweeney, previously Facebook’s head of public policy for Ireland, became the DPC’s third commissioner in September. Her appointment has triggered concern among digital rights groups, given the DPC’s role in overseeing compliance with the EU’s General Data Protection Regulation.

The letter calls for a formal work programme to ensure that data protection rules are enforced consistently and free from political or corporate influence. Civil society groups argue that effective oversight is essential to preserve citizens’ trust and uphold the GDPR’s credibility.

The DPC, headquartered in Dublin, supervises major tech firms such as Meta, Apple, and Google under the EU’s privacy regime. Critics have long accused it of being too lenient toward large companies operating in Ireland’s digital sector.
