YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands and China in talks to resolve Nexperia dispute

The Dutch Economy Minister has spoken with his Chinese counterpart to ease tensions following the Netherlands’ recent seizure of Nexperia, a major Dutch semiconductor firm.

China, where most of Nexperia’s chips are produced and sold, reacted by blocking exports, creating concern among European carmakers reliant on its components.

Vincent Karremans said he had discussed ‘further steps towards reaching a solution’ with Chinese Minister of Commerce Wang Wentao.

Both sides emphasised the importance of finding an outcome that benefits Nexperia, as well as the Chinese and European economies.

Meanwhile, Nexperia’s China division has begun asserting its independence, telling employees they may reject ‘external instructions’.

The firm remains a subsidiary of Shanghai-listed Wingtech, which has faced growing scrutiny from European regulators over national security and strategic technology supply chains.

Meta strengthens protection for older adults against online scams

US tech giant Meta has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

Additionally, Meta said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Teachers become intelligence coaches in AI-driven learning

AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.

Experts at an international conference hosted in Greece to celebrate Athens College's centennial discussed how AI personalises learning and demands a redefined teaching role.

Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.

Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.

Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.

The event, held under Greek President Konstantinos Tasoulas’ auspices, also urged caution when experimenting with AI on minors due to potential long-term risks.

ChatGPT to exit WhatsApp after Meta policy change

OpenAI says ChatGPT will leave WhatsApp on 15 January 2026 after Meta’s new rules banning general-purpose AI chatbots on the platform. ChatGPT will remain available on iOS, Android, and the web, the company said.

Users are urged to link their WhatsApp number to a ChatGPT account to preserve history, as WhatsApp doesn’t support chat exports. OpenAI will also let users unlink their phone numbers after linking.

Until now, users could message ChatGPT on WhatsApp to ask questions, search the web, generate images, or talk to the assistant. Similar third-party bots offered comparable features.

Meta quietly updated WhatsApp’s business API to prohibit AI providers from accessing or using it, directly or indirectly. The change effectively forces ChatGPT, Perplexity, Luzia, Poke, and others to shut down their WhatsApp bots.

The move highlights platform risk for AI assistants and shifts demand toward native apps and web. Businesses relying on WhatsApp AI automations will need alternatives that comply with Meta’s policies.

Innovation versus risk shapes Australia’s AI debate

Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules, at the AI Leadership Summit in Brisbane. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.

The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed direction but pressed for certainty to unlock stalled investment.

Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.

The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.

CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.

AI chats with ‘Jesus’ spark curiosity and criticism

Text With Jesus, an AI chatbot from Catloaf Software, lets users message figures like ‘Jesus’ and ‘Moses’ for scripture-quoting replies. CEO Stéphane Peter says curiosity is driving rapid growth despite accusations of blasphemy and worries about tech intruding on faith.

Built on OpenAI’s ChatGPT, the app now includes AI pastors and counsellors for questions on scripture, ethics, and everyday dilemmas. Peter, who describes himself as not particularly religious, says the aim is access and engagement, not replacing ministry or community.

Examples range from ‘Do not be anxious…’ (Philippians 4:6) to the Golden Rule (Matthew 7:12), with answers framed in familiar verse. Fans call it a safe, approachable way to explore belief; critics argue only scripture itself should speak.

Faith leaders and commentators have cautioned against mistaking AI outputs for wisdom. The Vatican has stressed that AI is a tool, not truth, and that young people need guidance, not substitution, in spiritual formation.

Reception is sharply split online. Supporters praise its convenience and ability to spark curiosity; detractors cite theological drift, emoji-laden replies, and a 'Satan' mode they find chilling. The app holds a 4.7 rating on the Apple App Store from more than 2,700 reviews.

AWS outage turned a mundane DNS slip into global chaos

Cloudflare’s boss summed up the mood after Monday’s chaos, relieved his firm wasn’t to blame as outages rippled across more than 1,000 companies. Snapchat, Reddit, Roblox, Fortnite, banks, and government portals faltered together, exposing how much of the web leans on Amazon Web Services.

AWS is the backbone for a vast slice of the internet, renting compute, storage, and databases so firms avoid running their own stacks. However, a mundane Domain Name System error in its Northern Virginia region scrambled routing, leaving services online yet unreachable as traffic lost its map.

Engineers call it a classic failure mode: ‘It’s always DNS.’ Misconfigurations, maintenance slips, or server faults can cascade quickly across shared platforms. AWS says teams moved to mitigate, but the episode showed how a small mistake at scale becomes a global headache in minutes.
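That failure mode can be illustrated with a short, hypothetical Python sketch (the hostname, port, and `reach` helper below are illustrative assumptions, not details from the outage): a service can be running and healthy, yet unreachable, because name resolution fails before any connection is even attempted.

```python
import socket

def reach(hostname: str, port: int = 443) -> str:
    """Classify why a host is unreachable: DNS level vs. connection level."""
    try:
        # Step 1: DNS resolution -- the layer that failed in the outage.
        infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        # The server may be up, but without a name-to-address mapping
        # clients cannot even attempt a connection: traffic has lost its map.
        return "dns-failure"
    host_ip = infos[0][4][0]
    try:
        # Step 2: only reached if DNS succeeded.
        with socket.create_connection((host_ip, port), timeout=5):
            return "reachable"
    except OSError:
        return "connect-failure"

# ".invalid" is a reserved top-level domain that never resolves,
# so this always reproduces the DNS-level failure:
print(reach("service.invalid"))  # -> dns-failure
```

From the client's point of view the two failures look alike (the page does not load), but the sketch shows why a DNS slip is so disruptive: every dependent service fails at step 1 simultaneously, regardless of whether its servers are healthy.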

Experts warned of concentration risk: when one hyperscaler stumbles, many fall. Yet few true alternatives exist at AWS’s scale beyond Microsoft Azure and Google Cloud, with smaller rivals from IBM to Alibaba, and fledgling European plays, far behind.

Calls for UK and EU cloud sovereignty are growing, but timelines and costs are steep. Monday's outage is a reminder that resilience needs multi-region and multi-cloud designs, tested failovers, and clear incident comms, not just faith in a single provider.

Anthropic unveils Claude for Life Sciences to transform research efficiency

Anthropic has unveiled Claude for Life Sciences, its first major launch in the biotechnology sector.

The new platform integrates Anthropic’s AI models with leading scientific tools such as Benchling, PubMed, 10x Genomics and Synapse.org, offering researchers an intelligent assistant throughout the discovery process.

The system supports tasks from literature reviews and hypothesis development to data analysis and drafting regulatory submissions. According to Anthropic, what once took days of validation and manual compilation can now be completed in minutes, giving scientists more time to focus on innovation.

The initiative follows the company's appointment of Eric Kauderer-Abrams as head of biology and life sciences. He described the move as a 'threshold moment', signalling Anthropic's ambition to make Claude a key player in global life science research, much like its role in coding.

Built on the newly released Claude Sonnet 4.5 model, which excels at interpreting lab protocols, the platform connects with partners including AWS, Google Cloud, KPMG and Deloitte.

While Anthropic recognises that AI cannot accelerate physical trials, it aims to transform time-consuming processes and promote responsible digital transformation across the life sciences.

China leads global generative AI adoption with 515 million users

In China, the use of generative AI has expanded at an unprecedented pace, reaching 515 million users in the first half of 2025.

The figure, released by the China Internet Network Information Centre, shows more than double the number recorded in December and represents an adoption rate of 36.5 per cent.

Such growth is driven by strong digital infrastructure and the state’s determination to make AI a central tool of national development.

The country’s ‘AI Plus’ strategy aims to integrate AI across all sectors of society and the economy. The majority of users rely on domestic platforms such as DeepSeek, Alibaba Cloud’s Qwen and ByteDance’s Doubao, as access to leading Western models remains restricted.

Young and well-educated citizens dominate the user base, underlining the government’s success in promoting AI literacy among key demographics.

Microsoft's recent research confirms that China has the world's largest AI market, surpassing the US in total users. While US adoption has remained steady, China's domestic ecosystem continues to accelerate, fuelled by policy support and public enthusiasm for generative tools.

China also leads the world in AI-related intellectual property, with over 1.5 million patent applications accounting for nearly 39 per cent of the global total.

The rapid adoption of home-grown AI technologies reflects a strategic drive for technological self-reliance and positions China at the forefront of global digital transformation.
