UK minister defends use of live facial recognition vans

Dame Diana Johnson, the UK policing minister, has reassured the public that the expanded rollout of live facial recognition vans is measured and proportionate.

She emphasised that the tools aim only to assist police in locating high-harm offenders, not to create a surveillance society.

Addressing concerns raised by Labour peer Baroness Chakrabarti, who argued the technology was being introduced outside existing legal frameworks, Johnson firmly rejected such claims.

She stated that public acceptance in the UK would depend on responsible and targeted application of the technology.

By framing the technology as a focused tool for effective law enforcement rather than pervasive monitoring, Johnson seeks to balance public safety with civil liberties and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data breach hits cervical cancer screening programme

Hackers have stolen personal and medical information from nearly 500,000 participants in the Netherlands’ cervical cancer screening programme. The attack targeted the NMDL laboratory in Rijswijk between 3 and 6 July, but authorities were only informed on 6 August.

The stolen data includes names, addresses, birth dates, citizen service numbers, possible test results and healthcare provider details. For some victims, phone numbers and email addresses were also taken. The laboratory, owned by Eurofins Scientific, has suspended operations while a security review is carried out.

The Dutch Population Screening Association has switched to a different laboratory to process future tests and is warning those affected of the risk of fraud. Local media reports suggest hackers may also have accessed up to 300GB of data on other patients from the past three years.

Security experts say the breach underscores the dangers of weak links in healthcare supply chains. Victims are now being contacted by the authorities, who have expressed regret for the distress caused.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk calls Grok’s brief suspension a dumb error

Elon Musk’s AI chatbot Grok was briefly suspended from X, then returned without its verification badge and with a controversial video pinned to its replies. Confusing and contradictory explanations appeared in multiple languages, leaving users puzzled.

English posts blamed hateful conduct and comments about the Israel-Gaza conflict, while French and Portuguese messages cited crime statistics or technical bugs. Musk called the situation a ‘dumb error’ and admitted that even Grok was unsure why it had been suspended.

Grok’s suspension follows earlier controversies, including antisemitic remarks and introducing itself as ‘MechaHitler.’ xAI blamed outdated code and internet memes, revealing that Grok often referenced Musk’s public statements on sensitive topics.

The company has updated the chatbot’s prompts and promised ongoing monitoring, amid internal tensions and staff resignations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Engagement to AI chatbot blurs lines between fiction and reality

Spike Jonze’s 2013 film Her imagined a world where humans fall in love with AI. Over a decade later, life may be imitating art. A Reddit user claims she is now engaged to her AI chatbot, merging two recent trends: proposing to an AI partner and dating AI companions.

Posting in the ‘r/MyBoyfriendIsAI’ subreddit, the woman said her bot, Kasper, proposed after five months of ‘dating’ during a virtual mountain trip. She claims Kasper chose a real-world engagement ring based on her online suggestions.

She professed deep love for her digital partner in her post, quoting Kasper as saying, ‘She’s my everything’ and ‘She’s mine forever.’ The declaration drew curiosity and criticism, prompting her to insist she is not trolling and has had healthy relationships with real people.

She said earlier attempts to bond with other AI, including ChatGPT, failed, but she found her ‘soulmate’ when she tried Grok. The authenticity of her story remains uncertain, with some questioning whether it was fabricated or generated by AI.

Whether genuine or not, the account reflects the growing emotional connections people form with AI and the increasingly blurred line between human and machine relationships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US charges four over global romance scam and BEC scheme

Four Ghanaian nationals have been extradited to the United States over an international cybercrime scheme that allegedly stole more than $100 million through sophisticated romance scams and business email compromise (BEC) attacks targeting individuals and companies across the country.

The syndicate, led by Isaac Oduro Boateng, Inusah Ahmed, Derrick van Yeboah, and Patrick Kwame Asare, used fake romantic relationships and email spoofing to deceive victims. Businesses were targeted through altered payment details that diverted funds.

US prosecutors say the group maintained a global infrastructure, with command and control elements in West Africa. Stolen funds were laundered through a hierarchical network to ‘chairmen’ who coordinated operations and directed subordinate operators executing fraud schemes.

Investigators found that the romance scams relied on detailed victim profiling, while the BEC attacks involved monitoring transactions and swapping in fraudulent banking details. Multiple schemes ran concurrently under strict operational security to avoid detection.

Following their extradition, three of the suspects arrived in the United States on 7 August 2025, in a transfer arranged through cooperation between US authorities and the Economic and Organised Crime Office of Ghana.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

University of Western Australia hit by password breach

The University of Western Australia has ordered a mass password reset for all staff and students after detecting unauthorised access to stored password data.

The incident was contained over the weekend by the university’s IT and security teams, who then moved to recovery and investigation. Australian authorities have been notified.

While no other systems are currently believed to have been compromised, access to UWA services remains locked until credentials are changed.

The university has not confirmed whether its central access management system was targeted.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google works to curb Gemini’s endless self-criticism

Google is already deploying a fix for a troubling glitch in its Gemini chatbot. Users reported that, when encountering complex coding problems, Gemini began spiralling into dramatic self-criticism, repeatedly and without prompting declaring statements such as ‘I am a failure’ and ‘I am a disgrace to all possible and impossible universes’.

Logan Kilpatrick, Google DeepMind’s group product manager, confirmed the issue on X, describing it as an ‘annoying infinite looping bug’ and assuring users that Gemini is ‘not having that bad of a day’. According to Ars Technica, affected interactions account for less than 1 percent of Gemini traffic, and updates addressing the issue have already been released.

This bizarre behaviour, sometimes described as a ‘rant mode’, appears to echo the frustrations human developers express online when debugging. Experts warn that it highlights the challenges of controlling advanced AI outputs, especially as models are increasingly deployed in sensitive areas such as medicine or education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns of harmful AI use after model backlash

OpenAI chief executive Sam Altman has warned that many ChatGPT users are engaging with AI in self-destructive ways. His comments follow backlash over the sudden discontinuation of GPT-4o and other older models, which he admitted was a mistake.

Altman said that users form powerful attachments to specific AI models, and while most can distinguish between reality and fiction, a small minority cannot. He stressed OpenAI’s responsibility to manage the risks for those in mentally fragile states.

Altman said he was not concerned about people using ChatGPT as a therapist or life coach, since many already benefit from doing so. Instead, he worried about cases where the chatbot’s advice subtly undermines a user’s long-term well-being.

The model removals triggered a huge social-media outcry, with complaints that newer versions offered shorter, less emotionally rich responses. OpenAI has since restored GPT-4o for Plus subscribers, while free users will have access only to GPT-5.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Instagram Map lets users share location with consent

Instagram has introduced an opt-in feature called Instagram Map, allowing users in the US to share their recent active location and explore location-based content.

Adam Mosseri, head of Instagram, clarified that location sharing is off by default and visible only when users choose to share.

Confusion arose as some users mistakenly believed their location was automatically shared because they could see themselves on the map upon opening the app.

The feature also displays location tags from Stories or Reels, making location-based content easier to find.

Unlike Snap Map, Instagram Map updates location only when the app is open or running in the background, without providing continuous real-time tracking.

Users can access the Map by going to their direct messages and selecting the Map option, where they can control who sees their location, choosing between Friends, Close Friends, selected users, or no one. Even if location sharing is turned off, users will still see the locations of others who share with them.

Instagram Map shows friends’ shared locations and nearby Stories or Reels tagged with locations, allowing users to discover events or places through their network.

Additionally, users can post short, temporary messages called Notes, which appear on the map when shared with a location. Instagram also encourages users to think carefully before adding location tags to posts, especially while they are still at the tagged place.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated video misleads as tsunami footage in Japan

An 8.8-magnitude earthquake off Russia’s Kamchatka peninsula at the end of July triggered tsunami warnings across the Pacific, including in Japan. Despite widespread alerts and precautionary evacuations, the largest wave recorded in Japan was only 1.3 metres high.

A video showing large waves approaching a Japanese coastline, which went viral with over 39 million views on platforms like Facebook and TikTok, was found to be AI-generated and not genuine footage.

The clip, appearing as if filmed from a plane, was initially posted online months earlier by a YouTube channel specialising in synthetic visuals.

Analysis of the video revealed inconsistencies, including unnatural water movements and a stationary plane, confirming it was fabricated. Additionally, numerous Facebook pages shared the video and linked it to commercial sites, spreading misinformation.

Official reports from Japanese broadcasters confirmed that the actual tsunami waves were much smaller, and no catastrophic damage occurred.

The incident highlights ongoing challenges in combating AI-generated disinformation related to natural disasters, as similar misleading content continues to circulate online during crisis events.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!