OpenAI launches ChatGPT Atlas web browser

OpenAI has launched ChatGPT Atlas, a web browser built around ChatGPT to help users work and explore online more efficiently. The browser lets ChatGPT operate directly on webpages, using past conversations and browsing context to assist with tasks without copying and pasting.

Early testers say it streamlines research, study, and productivity by providing instant AI support alongside the content they are viewing.

Atlas introduces browser memories, letting ChatGPT recall context from visited sites to improve responses and automate tasks. Users stay in control, with the ability to view, archive, or delete memories. 

Agent mode allows ChatGPT to perform tasks such as researching, summarising, or planning events while browsing. Safety is a priority, with safeguards to prevent unauthorised actions and options to operate in logged-out mode.

The browser is available worldwide on macOS for Free, Plus, Pro, and Go users, with Windows, iOS, and Android support coming soon. OpenAI plans to add multi-profile support, better developer tools, and improved app discoverability, advancing an agent-driven web experience with seamless AI integration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Meta strengthens protection for older adults against online scams

Meta, the US tech giant, has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

Additionally, Meta said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Teachers become intelligence coaches in AI-driven learning

AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.

Experts at an international conference hosted in Greece to celebrate Athens College's centennial discussed how AI personalises learning and demands a redefined teaching role.

Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.

Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.

Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.

The event, held under the auspices of Greek President Konstantinos Tasoulas, also urged caution when experimenting with AI on minors due to potential long-term risks.

OpenAI strengthens controls after Bryan Cranston deepfake incident

Bryan Cranston is grateful that OpenAI tightened safeguards on its video platform Sora 2. The Breaking Bad actor raised concerns after users generated videos using his voice and image without permission.

Reports surfaced earlier this month showing Sora 2 users creating deepfakes of Cranston and other public figures. Several Hollywood agencies criticised OpenAI for requiring individuals to opt out of replication instead of opting in.

Major talent agencies, including UTA and CAA, co-signed a joint statement with OpenAI and industry unions. They pledged to collaborate on ethical standards for AI-generated media and ensure artists can decide how they are represented.

The incident underscores growing tension between entertainment professionals and AI developers. As generative video tools evolve, performers and studios are demanding clear boundaries around consent and digital replication.

AI chats with ‘Jesus’ spark curiosity and criticism

Text With Jesus, an AI chatbot from Catloaf Software, lets users message figures like ‘Jesus’ and ‘Moses’ for scripture-quoting replies. CEO Stéphane Peter says curiosity is driving rapid growth despite accusations of blasphemy and worries about tech intruding on faith.

Built on OpenAI’s ChatGPT, the app now includes AI pastors and counsellors for questions on scripture, ethics, and everyday dilemmas. Peter, who describes himself as not particularly religious, says the aim is access and engagement, not replacing ministry or community.

Examples range from ‘Do not be anxious…’ (Philippians 4:6) to the Golden Rule (Matthew 7:12), with answers framed in familiar verse. Fans call it a safe, approachable way to explore belief; critics argue only scripture itself should speak.

Faith leaders and commentators have cautioned against mistaking AI outputs for wisdom. The Vatican has stressed that AI is a tool, not truth, and that young people need guidance, not substitution, in spiritual formation.

Reception is sharply split online. Supporters praise convenience and curiosity-spark; detractors cite theological drift, emoji-laden replies, and a ‘Satan’ mode they find chilling. The app holds a 4.7 rating on the Apple App Store from more than 2,700 reviews.

AI still struggles to mimic natural human conversation

A recent study reveals that large language models such as ChatGPT-4, Claude, Vicuna, and Wayfarer still struggle to replicate natural human conversation. Researchers found AI over-imitates, misuses filler words, and struggles with natural openings and closings, revealing its artificial nature.

The research, led by Eric Mayor with contributions from Lucas Bietti and Adrian Bangerter, compared transcripts of human phone conversations with AI-generated ones. AI can speak correctly, but subtle social cues like timing, phrasing, and discourse markers remain hard to mimic.

Misplaced words such as ‘so’ or ‘well’ and awkward conversation transitions make AI dialogue recognisably non-human. Openings and endings also pose a challenge. Humans naturally engage in small talk or closing phrases such as ‘see you soon’ or ‘alright, then,’ which AI systems often fail to reproduce convincingly.

These gaps in social nuance, researchers argue, prevent large language models from consistently fooling people in conversation tests.

Despite rapid progress, experts caution that AI may never fully capture all elements of human interaction, such as empathy and social timing. Advances may narrow the gap, but key differences will likely remain, keeping AI speech subtly distinguishable from real human dialogue.

Anthropic unveils Claude for Life Sciences to transform research efficiency

Anthropic has unveiled Claude for Life Sciences, its first major launch in the biotechnology sector.

The new platform integrates Anthropic’s AI models with leading scientific tools such as Benchling, PubMed, 10x Genomics and Synapse.org, offering researchers an intelligent assistant throughout the discovery process.

The system supports tasks from literature reviews and hypothesis development to data analysis and drafting regulatory submissions. According to Anthropic, what once took days of validation and manual compilation can now be completed in minutes, giving scientists more time to focus on innovation.

The initiative follows the company's appointment of Eric Kauderer-Abrams as head of biology and life sciences. He described the move as a 'threshold moment', signalling Anthropic's ambition to make Claude a key player in global life science research, much like its role in coding.

Built on the newly released Claude Sonnet 4.5 model, which excels at interpreting lab protocols, the platform connects with partners including AWS, Google Cloud, KPMG and Deloitte.

While Anthropic recognises that AI cannot accelerate physical trials, it aims to transform time-consuming processes and promote responsible digital transformation across the life sciences.

Samsung unveils AI-powered redesign of its corporate Newsroom

South Korean firm Samsung Electronics has redesigned its official Newsroom, transforming it into a multimedia platform built around visuals, video and AI-driven features.

The revamped site reflects the growing dominance of visual communication, aiming to make corporate storytelling more intuitive, engaging and accessible.

The updated homepage features an expanded horizontal carousel showcasing videos, graphics and feature stories with hover-based summaries for quick insight. Users can browse by theme, play videos directly and enjoy a seamless experience across all Samsung devices.

The redesign also introduces an integrated media hub with improved press tools, content filters and high-resolution downloads. Journalists can now save full articles, videos and images in one click, simplifying access to media materials.

AI integration adds smart summaries and upgraded search capabilities, including tag- and image-based discovery. These tools enhance relevance and retrieval speed, while flexible sorting and keyword highlighting refine user experience.

As Samsung marks a decade since launching its Newsroom, the transformation signals a step toward a more dynamic, interactive communication model designed for both consumers and media professionals in the AI era.

UK actors’ union demands rights as AI uses performers’ likenesses without consent

The British performers’ union Equity has warned of coordinated mass action against technology companies and entertainment producers that use its members’ images, voices or likenesses in artificial-intelligence-generated content without proper consent.

Equity’s general secretary, Paul W Fleming, announced plans to mobilise tens of thousands of actors through subject access requests under data-protection law, compelling companies to disclose whether they have used performers’ data in AI content.

The move follows a growing number of complaints from actors about alleged misuse of their likenesses or voices in AI material. One prominent case involves Scottish actor Briony Monroe, who claims her facial features and mannerisms were used to create the synthetic performer 'Tilly Norwood'. The AI studio behind the character denies the allegations.

Equity says the strategy is intended to 'make it so hard for tech companies and producers to not enter into collective rights deals'. It argues that existing legislation is being circumvented as foundational AI models are trained using data from actors, but with little transparency or compensation.

The trade body Pact, representing studios and producers, acknowledges the importance of AI but counters that without accessing new tools firms may fall behind commercially. Pact complains about the lack of transparency from companies on what data is used to train AI systems.

In essence, the standoff reflects deeper tensions in the creative industries: how to balance innovation, performer rights and transparency in an era when digital likenesses and synthetic ‘actors’ are emerging rapidly.
