Deepfake targeting Irish presidential candidate sparks election integrity warning

Irish presidential candidate Catherine Connolly condemned a deepfake AI video that falsely announced her withdrawal from the race. The clip, designed to resemble an RTÉ News broadcast, spread online before being reported and removed from major social media platforms.

Connolly said the video was a disgraceful effort to mislead voters and damage democracy. Her campaign team filed a complaint with the Irish Electoral Commission and requested that all copies be clearly labelled as fake.

Experts at Dublin City University identified slight distortions in speech and lighting as signs of AI manipulation. They warned that the rapid spread of synthetic videos underscores weak content moderation by online platforms.

Connolly urged the public not to share the clip and to respond through civic participation. Authorities are monitoring digital interference as Ireland prepares for its presidential vote on Friday.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon launches Blue Jay and Project Eluna to support employees

Amazon has unveiled two new innovations, Blue Jay and Project Eluna, designed to improve efficiency and safety in its operations. Blue Jay coordinates multiple robotic arms to handle items in one workspace, reducing repetitive tasks and supporting employees.

Project Eluna is an agentic AI model that helps operators make data-driven decisions, anticipating bottlenecks and optimising workflows.

Blue Jay combines robotics experience, AI, and digital twin simulations, moving from concept to production in just over a year. It is being tested in South Carolina, where it manages 75% of items, and could support Amazon’s Same-Day delivery network.

Project Eluna will pilot in Tennessee, offering operators clear recommendations and reducing the cognitive load of monitoring multiple dashboards.

These systems aim to enhance the employee experience by improving ergonomics, reducing repetitive tasks, and opening new career pathways. Amazon is expanding robotics, mechatronics, and AI training so employees can work confidently with these technologies.

Blue Jay and Project Eluna join other recent innovations, including Vulcan, a robot with a sense of touch, and DeepFleet, an AI model coordinating fleets of mobile robots.

Tye Brady, Amazon Robotics chief technologist, emphasised that the focus remains on people. AI and robotics integration aims to enhance workplace safety, efficiency, and fulfilment, reflecting Amazon’s focus on workforce development and technological progress.


AI leaders call for a global pause in superintelligence development

More than 850 public figures, including leading computer scientists Geoffrey Hinton and Yoshua Bengio, have signed a joint statement urging a global slowdown in the development of artificial superintelligence.

The open letter warns that unchecked progress could lead to human economic displacement, loss of freedom, and even extinction.

The appeal follows growing anxiety that the rush toward machines surpassing human cognition could spiral beyond human control. Alan Turing predicted as early as the 1950s that machines might eventually dominate by default, a view that continues to resonate among AI researchers today.

Despite such fears, global powers still view the AI race as essential for national security and technological advancement.

Tech firms such as Meta have also adopted the superintelligence label to promote their most ambitious models, while leaders such as OpenAI’s Sam Altman and Microsoft’s Mustafa Suleyman have previously acknowledged the existential risks of developing systems beyond human understanding.

The statement calls for an international prohibition on superintelligence research until there is a broad scientific consensus on safety and public approval.

Its signatories include technologists, academics, religious figures, and cultural personalities, reflecting a rare cross-sector demand for restraint in an era defined by rapid automation.


‘Wicked’ AI data scraping: Pullman calls for regulation to protect creative rights

Author Philip Pullman has publicly urged the UK government to intervene in what he describes as the ‘wicked’ practice of AI firms scraping authors’ works for training models. Pullman insists that writing is more than data: it is creative labour, and authors deserve protection.

Pullman’s intervention comes amid increasing concern in the literary community about how generative AI models are built using large volumes of existing texts, often without permission or clear compensation. He argues that uninhibited scraping undermines the rights of creators and could hollow out the foundations of culture.

He has called on UK policymakers to establish clearer rules and safeguards over how AI systems access, store, and reuse writers’ content. Pullman warns that without intervention, authors may lose control over their work, and the public could be deprived of authentic, quality literature.

His statement adds to growing pressure from writers, unions and rights bodies calling for better transparency, consent mechanisms and a balance between innovation and creator rights.


Netflix goes ‘all in’ on generative AI as entertainment industry remains divided

Netflix has declared itself ‘all in’ on generative artificial intelligence (GenAI), signalling a significant commitment to embedding AI across its business, from production and VFX to search, advertising and user experience, according to a recent investor letter and earnings call.

Co-CEO Ted Sarandos emphasised that while AI will be widely used, it is not a replacement for the creative talent behind Netflix’s original shows. ‘It takes a great artist to make something great,’ he remarked. ‘AI can give creatives better tools … but it doesn’t automatically make you a great storyteller if you’re not.’

Netflix has already applied GenAI in production. In The Eternaut, an Argentine series, a building-collapse scene was generated using AI tools, reportedly ten times faster than with conventional VFX workflows. The company says it plans to extend GenAI use to search experiences (natural-language queries), advertising formats, localisation of titles, and creative pre-visualisation workflows.

However, the entertainment industry remains divided over generative AI’s role. While Netflix embraces the tools, many creators and unions continue to raise concerns about job displacement, copyright and the erosion of human-centred storytelling. Netflix is walking a fine line: deploying AI at scale while assuring audiences and creators that human artistry remains central.


Andreessen Horowitz-backed startup Codi launches AI tool to streamline office operations

Codi, an Andreessen Horowitz–backed startup founded by Christelle Rohaut and Dave Schuman, has launched an AI-powered platform that is said to fully automate office management.

The San Francisco-based company was founded in 2018 to help firms find flexible workspaces. It first operated as a marketplace, matching companies to buildings with flexible office arrangements, but has since evolved into an AI-powered software platform. The new AI agent handles logistics such as vendor coordination, cleaning and pantry restocking for any leased office, meeting a need that, according to Rohaut, remains very manual and costly.

Rohaut, Codi’s chief executive, said advances in AI made the shift possible. ‘Whatever office you lease, you can use this to automate your office logistics,’ she told TechCrunch.

The product entered beta in May and officially launched this week. Codi, which has raised $23 million to date, including a $16 million Series A led by Andreessen Horowitz in 2022, reported reaching $100,000 in annual recurring revenue within five weeks of the beta launch.

The company says the platform can save firms hundreds of hours in administrative work and reduce costs compared with hiring an in-house or part-time office manager. Early adopters include TaskRabbit and Northbeam.


DeepSeek dominates AI crypto trading challenge

Chinese AI model DeepSeek V3.1 has outperformed its global competitors in a real-market cryptocurrency trading challenge, earning over 10 per cent profit in just a few days.

The experiment, named Alpha Arena, was launched by US research firm Nof1 to test the investing skills of leading LLMs.

Each participating AI was given US$10,000 to trade in six cryptocurrency perpetual contracts, including bitcoin and solana, on the decentralised exchange Hyperliquid. By Tuesday afternoon, DeepSeek V3.1 led the field, while OpenAI’s GPT-5 trailed behind with a loss of nearly 40 per cent.

The competition highlights the growing potential of AI models to make autonomous financial decisions in real markets.

It also underscores the rivalry between Chinese and American AI developers as they push to demonstrate their models’ adaptability beyond traditional text-based tasks.


MIT unveils SEAL, a self-improving AI model

Researchers at the Massachusetts Institute of Technology (MIT) have unveiled SEAL, a new AI model capable of improving its own performance without human intervention. The framework allows the model to generate its own training data and fine-tuning instructions, enabling it to learn new tasks autonomously.

The model employs reinforcement learning, a method in which it tests different strategies, evaluates their effectiveness, and adjusts its internal processes accordingly. This allows SEAL to refine its capabilities and increase accuracy over time.
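The reinforcement-learning loop described above can be illustrated with a toy sketch. This is not MIT’s actual SEAL code; it is a minimal, hypothetical illustration in which the ‘model’ is a single numeric parameter, self-generated ‘edits’ are candidate parameter changes, and only changes that improve a reward score are kept.

```python
import random

def reward(param, target=10.0):
    """Score a parameter value: higher is better (closer to an unseen optimum)."""
    return -abs(param - target)

def self_improve(param=0.0, rounds=200, seed=0):
    """Toy SEAL-style loop: propose a self-generated edit, evaluate it,
    and keep it only if it improves performance."""
    rng = random.Random(seed)
    best = reward(param)
    for _ in range(rounds):
        # The model proposes its own candidate update ("self-edit").
        candidate = param + rng.uniform(-1.0, 1.0)
        score = reward(candidate)
        if score > best:  # reinforcement: retain only improving edits
            param, best = candidate, score
    return param

final = self_improve()  # drifts toward the optimum without external supervision
```

The point of the sketch is the control flow, not the model: the system generates its own training signal, measures the effect, and updates itself, with no human-curated data in the loop.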

In trials, SEAL outperformed GPT-4.1 by learning from the data it generated independently. The results demonstrate the potential of self-improving AI systems to reduce reliance on manually curated datasets and human-led fine-tuning.


Scouts can now earn AI and cybersecurity badges

In the United States, Scouting America, formerly known as the Boy Scouts, has introduced two new merit badges in AI and cybersecurity. The badges give scouts the opportunity to explore modern technology and understand its applications, while the organisation continues to adapt its programs to a digital era. Scouting America has around a million members and offers hundreds of merit badges across a wide range of skills.

The AI badge challenges scouts to examine AI’s effects on daily life, study deepfakes, and complete projects that demonstrate AI concepts. The cybersecurity badge teaches practical tools to stay safe online, emphasises ethical behaviour, and introduces scouts to a career field with thousands of unfilled positions.

Earlier this year, Scouting America launched Scoutly, an AI-powered chatbot designed to answer questions about the organisation and its merit badges. The initiative is part of Scouting America’s broader effort to modernise its programs and prepare young people for opportunities in an increasingly digital world.


YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.
