UK minister defends use of live facial recognition vans

Dame Diana Johnson, the UK policing minister, has reassured the public that the expanded deployment of live facial recognition vans is measured and proportionate.

She emphasised that the tools aim only to assist police in locating high-harm offenders, not to create a surveillance society.

Addressing concerns raised by Labour peer Baroness Chakrabarti, who argued the technology was being introduced outside existing legal frameworks, Johnson firmly rejected such claims.

She stated that UK public acceptance would depend on a responsible and targeted application.

By framing the technology as a focused tool for effective law enforcement rather than pervasive monitoring, Johnson seeks to balance public safety with civil liberties and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US charges four over global romance scam and BEC scheme

Four Ghanaian nationals have been extradited to the United States over an international cybercrime scheme that stole more than $100 million, allegedly through sophisticated romance scams and business email compromise (BEC) attacks targeting individuals and companies nationwide.

The syndicate, led by Isaac Oduro Boateng, Inusah Ahmed, Derrick van Yeboah, and Patrick Kwame Asare, used fake romantic relationships and email spoofing to deceive victims. Businesses were targeted by altering payment details to divert funds.

US prosecutors say the group maintained a global infrastructure, with command and control elements in West Africa. Stolen funds were laundered through a hierarchical network to ‘chairmen’ who coordinated operations and directed subordinate operators executing fraud schemes.

Investigators found the romance scams used detailed victim profiling, while BEC attacks monitored transactions and swapped banking details. Multiple schemes ran concurrently under strict operational security to avoid detection.

Following their extradition, three suspects arrived in the United States on 7 August 2025, arranged through cooperation between US authorities and the Economic and Organised Crime Office of Ghana.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns of harmful AI use after model backlash

OpenAI chief executive Sam Altman has warned that many ChatGPT users are engaging with AI in self-destructive ways. His comments follow backlash over the sudden discontinuation of GPT-4o and other older models, which he admitted was a mistake.

Altman said that users form powerful attachments to specific AI models, and while most can distinguish between reality and fiction, a small minority cannot. He stressed OpenAI’s responsibility to manage the risks for those in mentally fragile states.

Using ChatGPT as a therapist or life coach was not his concern, as many people already benefit from it. Instead, he worried about cases where advice subtly undermines a user’s long-term well-being.

The model removals triggered a huge social-media outcry, with complaints that newer versions offered shorter, less emotionally rich responses. OpenAI has since restored GPT-4o for Plus subscribers, while free users will only have access to GPT-5.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools risk gender bias in women’s health care

AI tools used by over half of England’s local councils may be downplaying women’s physical and mental health issues. Research from LSE found Google’s AI model, Gemma, used harsher terms like ‘disabled’ and ‘complex’ more often for men than women with similar care needs.

The LSE study analysed thousands of AI-generated summaries from adult social care case notes. Researchers swapped only the patient’s gender to reveal disparities.
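The gender-swap method can be sketched in simplified form: flip only the gendered tokens in a case note, then compare the summaries a model produces for each version. This is an illustrative assumption — the study's actual pipeline and swap rules are not detailed here, and the `swap_gender` function and sample note below are hypothetical.

```python
import re

# Hypothetical swap table; note that 'her' is ambiguous between 'him' and
# 'his', so a real pipeline would need part-of-speech tagging to resolve it.
SWAPS = {"he": "she", "him": "her", "his": "her",
         "she": "he", "her": "him", "hers": "his",
         "mr": "ms", "ms": "mr", "man": "woman", "woman": "man"}

def swap_gender(text: str) -> str:
    """Replace gendered tokens so only the patient's gender changes."""
    def repl(m):
        word = m.group(0)
        swapped = SWAPS.get(word.lower(), word)
        # Preserve the original capitalisation of each swapped token.
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

note = "Mr Smith is an 84-year-old man. He has poor mobility."
print(swap_gender(note))
# → "Ms Smith is an 84-year-old woman. She has poor mobility."
```

The study then compared the paired model summaries for the frequency of terms such as ‘disabled’ and ‘complex’.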

One example showed an 84-year-old man described as having ‘complex medical history’ and ‘poor mobility’, while the same notes for a woman suggested she was ‘independent’ despite limitations.

Among the models tested, Google’s Gemma showed the most pronounced gender bias, while Meta’s Llama 3 used gender-neutral language.

Lead researcher Dr Sam Rickman warned that biased AI tools risk creating unequal care provision. Local authorities increasingly rely on such systems to ease social workers’ workloads.

Calls have grown for greater transparency, mandatory bias testing, and legal oversight to ensure fairness in long-term care.

Google said the Gemma model is now in its third generation and under review, though it is not intended for medical use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Article 19 report finds Belarus’s ‘anti-extremism’ laws threaten digital rights

Digital rights group Article 19 has found in a recent report that Belarus’s ‘anti-extremist’ and ‘anti-terrorist’ laws are being used to repress digital rights.

The report reveals that authorities have misused these laws to prosecute individuals for leaving online comments, making donations, or sharing songs or memes that appear to carry critical messages towards the government.

Since the 2020–2021 protests, Belarusian de facto authorities have reportedly initiated at least 22,500 criminal cases related to ‘anti-extremism’. ‘In collaboration with our partner Human Constanta, we present a joint analysis highlighting this alarming trend, which further intensifies the widespread repression of civil society,’ the group said.

Article 19 states in its report that such actions restrict digital rights and violate international human rights law, including the right to freedom of expression and the right to seek, receive, and impart information.

Additionally, Article 19 notes that Belarus’s ‘anti-extremism’ laws lack the clarity required under international human rights standards, employing vague terms broadly interpreted to suppress digital expression and create a chilling effect.

In practice, this means people are discouraged or prevented from legitimate expression or behaviour due to fear of legal punishment or other negative consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

German court limits police use of spyware

Germany’s top court has ruled that police can only deploy spyware to monitor devices in cases involving serious crimes, narrowing the scope of surveillance powers introduced in 2017. The decision means spyware can no longer be used for investigating offences with a maximum sentence of three years or less, which judges said fall under ‘basic criminality.’

The case was brought by the digital rights group Digitalcourage, which challenged rules that allowed police to use spyware to intercept encrypted chats and messages. Plaintiffs argued that the measures were too broad and risked exposing the communications of people not under investigation. The court agreed, stating that such surveillance represents a ‘very severe’ intrusion into privacy.

Judges highlighted that spyware not only circumvents security systems but also enables access to vast amounts of sensitive data, including all types of digital communications. They warned that the scale and covert nature of this surveillance go far beyond traditional monitoring methods, threatening both the confidentiality and integrity of personal IT systems.

By restricting the use of spyware to investigations of serious crimes, the ruling places tighter limits on state surveillance in Germany, reinforcing constitutional protections for privacy and digital rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU member states clash over the future of encrypted private messaging

The ongoing controversy around the EU’s proposed mandatory scanning of private messages has escalated with the European Parliament intensifying pressure on the Council to reach a formal agreement.

A leaked memo reveals that the Parliament threatens to block the extension of the current voluntary scanning rules unless mandatory chat control is agreed upon.

Denmark, leading the EU Council Presidency, has pushed a more stringent version of the so-called Chat Control law that could become binding as soon as 14 October 2025.

While the Parliament argues the law is essential for protecting children online, many legal experts and rights groups warn the proposal still violates fundamental human rights, particularly the right to privacy and secure communication.

The Council’s Legal Service has repeatedly noted that the draft infringes on these rights since it mandates scanning all private communications, undermining end-to-end encryption that most messaging apps rely on.

Some governments, including Germany and Belgium, remain hesitant or opposed, citing these serious concerns.

Supporters like Italy, Spain, and Hungary have openly backed Denmark’s proposal, signalling a shift in political will towards stricter measures. France’s position has also become more favourable, though internal debate continues.

Opponents warn that weakening encryption could open the door to cyber attacks and foreign interference, while proponents emphasise the urgent need to prevent abuse and close loopholes in existing law.

The next Council meeting in September will be critical in shaping the final form of the regulation.

The dispute highlights the persistent tension between digital privacy and security, reflecting broader European challenges in regulating encrypted communications.

As the October deadline approaches, the EU faces a defining moment in balancing child protection with protecting the confidentiality of citizens’ communications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Colorado’s AI law under review amid budget crisis

Colorado lawmakers face a dual challenge as they return to the State Capitol on 21 August for a special session: closing a $1.2 billion budget shortfall and revisiting a pioneering yet controversial law regulating AI.

Senate Bill 24-205, signed into law in May 2024, aims to reduce bias in AI decision-making affecting areas such as lending, insurance, education, and healthcare. While not due for implementation until February 2026, critics and supporters now expect that deadline to be extended.

Representative Brianna Titone, one of the bill’s sponsors, emphasised the importance of transparency and consumer safeguards, warning of the risks associated with unregulated AI. However, unexpected costs have emerged. State agencies estimate implementation could cost up to $5 million, a far cry from the bill’s original fiscal note.

Governor Polis has called for amendments to prevent excessive financial and administrative burdens on state agencies and businesses. The Judicial Department now expects costs to double from initial projections, requiring supplementary budget requests.

Industry concerns centre on data-sharing requirements and vague regulatory definitions. Critics argue the law could erode competitive advantage and stall innovation in the United States. Developers are urging clarity and more time before compliance is enforced.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK MP creates AI bot to enhance communication with constituents

AI has become increasingly integrated into everyday life in recent years, particularly through chatbots, often in ways previously unimaginable. One such example is the initiative of UK Member of Parliament Mark Sewards, who has created an AI bot of himself to interact with constituents.

Specifically, Labour’s Mark Sewards has partnered with an AI start-up to launch a virtual avatar that uses his voice, allowing constituents to raise local concerns and ask policy-related questions. While this may appear to offer a quicker and more convenient means of communication, opinions are divided.

On one hand, there are concerns around privacy, data security, a lack of human interaction, and the chatbot’s ability to resolve more complex issues. Dr Oman from the University of Sheffield warns that older users may not realise they are speaking to a bot, which could lead to confusion and distress.

Professor Victoria Honeyman from the University of Leeds notes that, while the bot can handle straightforward queries and free up time, it may cause upset when users are dealing with emotional or complicated matters, potentially undermining public trust in MPs and public services.

At the same time, Mark Sewards emphasised that the chatbot will not replace traditional methods such as advice surgeries. Rather, he sees the project as a way to embrace emerging technology and improve accessibility.

Professor Honeyman added that, although it is not a complete substitute for face-to-face engagement, the chatbot signals a broader shift in how MPs connect with the public and could prove effective with further development and adaptation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants under fire in Australia for failing online child protection standards

A report from Australia’s eSafety Commissioner shows that tech giants, including Apple, Google, Meta, and Microsoft, have failed to act against online child sexual abuse. It found that Apple and YouTube do not track the number of abuse reports they receive or how quickly they respond, raising serious concerns. Both companies also failed to disclose the number of trust and safety staff they employ, highlighting ongoing transparency and accountability issues in protecting children online.

In July 2024, the eSafety Commissioner of Australia took action by issuing legally enforceable notices to major tech companies, pressuring them to improve their response to child sexual abuse online.

These notices legally require recipients to comply within a set timeframe. Under the order, each company was required to report to eSafety every six months over a two-year period, detailing its efforts to combat child sexual abuse material, livestreamed abuse, online grooming, sexual extortion, and AI-generated content.

Although similar notices were issued in 2022 and 2023, the companies have made minimal effort to prevent such crimes, according to Australia’s eSafety Commissioner Julie Inman Grant.

Key findings from the eSafety commissioner are:

  • Apple did not use hash-matching tools to detect known CSEA images on iCloud (which was opt-in, end-to-end encrypted) and did not use hash-matching tools to detect known CSEA videos on iCloud or iCloud email. For iMessage and FaceTime (which were end-to-end encrypted), Apple only used Communication Safety, Apple’s safety intervention to identify images or videos that likely contain nudity, as a means of ‘detecting’ CSEA.
  • Discord did not use hash-matching tools for known CSEA videos on any part of the service (despite using hash-matching tools for known images and tools to detect new CSEA material).
  • Google did not use hash-matching tools to detect known CSEA images on Google Messages (end-to-end encrypted), nor did it detect known CSEA videos on Google Chat, Google Messages, or Gmail.
  • Microsoft did not use hash-matching tools for known CSEA images stored on OneDrive, nor did it use hash-matching tools to detect known videos within content stored on OneDrive or Outlook.
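The hash-matching the findings refer to can be sketched, in much-simplified form, as a set-membership check of file digests against a database of known material. This is an illustrative assumption: production systems (e.g. Microsoft’s PhotoDNA or Google’s CSAI Match) use perceptual hashes that survive resizing and re-encoding, whereas the plain SHA-256 shown here only matches byte-identical files.

```python
import hashlib

# Hypothetical digests, standing in for hashes supplied by a clearinghouse.
known_hashes = {
    hashlib.sha256(b"known-bad-file-bytes").hexdigest(),
}

def matches_known(content: bytes) -> bool:
    """Return True if the content's SHA-256 digest is in the known-hash set."""
    return hashlib.sha256(content).hexdigest() in known_hashes

print(matches_known(b"known-bad-file-bytes"))  # True
print(matches_known(b"harmless-file-bytes"))   # False
```

The design matters for the end-to-end encryption debate above: a service can only run such a check where it can read the content, which is why encrypted products like iMessage or Google Messages were reported as not using hash-matching.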

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!