Cloudflare calls for UK action on Google’s AI crawlers

Cloudflare’s chief executive Matthew Prince has urged the UK regulator to curb Google’s AI practices. He met with the Competition and Markets Authority (CMA) in London to argue that Google’s bundled crawlers give it excessive power.

Prince said Google uses the same web crawler to gather data for both search and AI products. Blocking the crawler, he added, can also disrupt advertising systems, leaving websites financially exposed.

Cloudflare, which supplies network services to most major AI companies, has proposed separating Google’s AI and search crawlers. Prince believes the change would create fairer access to online content for smaller AI developers.

He also provided data to the CMA showing why rivals cannot easily replicate Google’s infrastructure. Media groups have echoed his concerns, warning that Google’s dominance risks deepening inequalities across the AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Meta strengthens protection for older adults against online scams

Meta, the US tech giant, has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

Meta said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Dutch watchdog warns AI chatbots threaten election integrity

The Dutch data protection authority (Autoriteit Persoonsgegevens, AP) has warned that AI chatbots are biased and unreliable sources of voting advice ahead of the national elections. An AP investigation found that chatbots often steered users towards the same two parties, regardless of their actual preferences.

In over half of the tests, the bots suggested either Geert Wilders’ far-right Freedom Party (PVV) or the left-wing GroenLinks-PvdA alliance led by Frans Timmermans. Other parties, such as the centre-right CDA, were rarely mentioned, even when users’ answers closely matched their platforms.

AP deputy head Monique Verdier said that voters were being steered towards parties that did not necessarily reflect their political views, warning that this undermines the integrity of free and fair elections.

The report comes ahead of the 29 October election, in which the PVV currently leads the polls. However, the race remains tight, with GroenLinks-PvdA and CDA still in contention and many voters undecided.

Although the AP noted that the bias was not intentional, it attributed the problem to the way AI chatbots function, highlighting the risks of relying on opaque systems for democratic decisions.

Teachers become intelligence coaches in AI-driven learning

AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.

Experts at an international conference hosted in Greece to celebrate Athens College’s centennial discussed how AI personalises learning and demands a redefined teaching role.

Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.

Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.

Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.

The event, held under the auspices of Greek President Konstantinos Tasoulas, also urged caution when experimenting with AI on minors due to potential long-term risks.

Meta’s ‘Vibes’ feed lets users scroll and remix entirely AI-generated videos

Meta Platforms has introduced Vibes, a new short-form video feed built entirely around AI-generated content, available within its Meta AI app and on the meta.ai website.

The feed allows users to browse videos generated by creators and communities, start videos from scratch via text prompts or upload visual elements, and remix existing videos by adding music or changing styles. Users can then publish these clips to the Vibes feed or cross-post to Instagram Stories, Facebook, and Reels.

Meta says the goal is to make the Meta AI app a hub for creative video generation: ‘You can bring your ideas to life … or remix a video from the feed to make it your own.’ While Meta noted the feature is launching as a preview, it also pointed to broader ambitions in generative video as part of its AI strategy.

However, media commentary has already turned sceptical. Early feedback has labelled some of the feed’s output ‘AI slop’: mass-produced synthetic videos that lack authentic human creativity, fuelling questions about quality and user demand.

Meta’s timing comes amid heavy investment in its AI efforts and a drive to monetise generative video content and new creator tools. The company sees Vibes as more than an experiment: potentially a new vector for engagement and distribution inside its social ecosystem.

OpenAI strengthens controls after Bryan Cranston deepfake incident

Bryan Cranston is grateful that OpenAI tightened safeguards on its video platform Sora 2. The Breaking Bad actor raised concerns after users generated videos using his voice and image without permission.

Reports surfaced earlier this month showing Sora 2 users creating deepfakes of Cranston and other public figures. Several Hollywood agencies criticised OpenAI for requiring individuals to opt out of replication instead of opting in.

Major talent agencies, including UTA and CAA, co-signed a joint statement with OpenAI and industry unions. They pledged to collaborate on ethical standards for AI-generated media and ensure artists can decide how they are represented.

The incident underscores growing tension between entertainment professionals and AI developers. As generative video tools evolve, performers and studios are demanding clear boundaries around consent and digital replication.

Roblox faces Dutch investigation over child welfare concerns

Dutch officials will study how the gaming platform Roblox affects young users, focusing on safety, mental health, and privacy. The assessment aims to identify both the benefits and risks of the platform. Authorities say the findings will help guide new policies and support parents in protecting their children online.

Roblox has faced mounting criticism over unsafe content and the presence of online predators. Reports of games containing violent or sexual material have raised alarms among parents and child protection groups.

The US state of Louisiana recently sued Roblox, alleging that it enabled systemic child exploitation through negligence. Dutch experts argue that similar concerns justify a thorough review in the Netherlands.

Previous Dutch investigations have examined platforms such as Instagram, TikTok, and Snapchat under similar children’s rights frameworks. Policymakers hope the Roblox review will set clearer standards for digital child safety across Europe.

ChatGPT to exit WhatsApp after Meta policy change

OpenAI says ChatGPT will leave WhatsApp on 15 January 2026, after Meta introduced new rules banning general-purpose AI chatbots from the platform. ChatGPT will remain available on iOS, Android, and the web, the company said.

Users are urged to link their WhatsApp number to a ChatGPT account to preserve history, as WhatsApp doesn’t support chat exports. OpenAI will also let users unlink their phone numbers after linking.

Until now, users could message ChatGPT on WhatsApp to ask questions, search the web, generate images, or talk to the assistant. Similar third-party bots offered comparable features.

Meta quietly updated WhatsApp’s business API to prohibit AI providers from accessing or using it, directly or indirectly. The change effectively forces ChatGPT, Perplexity, Luzia, Poke, and others to shut down their WhatsApp bots.

The move highlights platform risk for AI assistants and shifts demand toward native apps and web. Businesses relying on WhatsApp AI automations will need alternatives that comply with Meta’s policies.

Innovation versus risk shapes Australia’s AI debate

Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules, at the AI Leadership Summit in Brisbane. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.

The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed direction but pressed for certainty to unlock stalled investment.

Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.

The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.

CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.
