EU member states clash over the future of encrypted private messaging

The ongoing controversy around the EU’s proposed mandatory scanning of private messages has escalated, with the European Parliament intensifying pressure on the Council to reach a formal agreement.

A leaked memo reveals that the Parliament is threatening to block the extension of the current voluntary scanning rules unless mandatory chat control is agreed upon.

Denmark, leading the EU Council Presidency, has pushed a more stringent version of the so-called Chat Control law that could become binding as soon as 14 October 2025.

While the Parliament argues the law is essential for protecting children online, many legal experts and rights groups warn the proposal still violates fundamental human rights, particularly the right to privacy and secure communication.

The Council’s Legal Service has repeatedly noted that the draft infringes on these rights, since it mandates the scanning of all private communications and undermines the end-to-end encryption that most messaging apps rely on.

Some governments, including Germany and Belgium, remain hesitant or opposed, citing these serious concerns.

Italy, Spain, and Hungary have openly backed Denmark’s proposal, signalling a shift in political will towards stricter measures. France’s position has also become more favourable, though internal debate continues.

Opponents warn that weakening encryption could open the door to cyber attacks and foreign interference, while proponents emphasise the urgent need to prevent abuse and close loopholes in existing law.

The next Council meeting in September will be critical in shaping the final form of the regulation.

The dispute highlights the persistent tension between digital privacy and security, reflecting broader European challenges in regulating encrypted communications.

As the October deadline approaches, the EU faces a defining moment in balancing child protection with protecting the confidentiality of citizens’ communications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ShinyHunters breach Google’s Salesforce database

Google has confirmed that it, too, was breached by the ShinyHunters group, a discovery made during its own investigation into the gang. The attackers accessed a Salesforce database used to store contact information for small business customers.

The breach exposed business names and contact details during a short window before access was revoked. Google stated no highly sensitive or personal data was compromised.

ShinyHunters used phishing and vishing (voice phishing) tactics to trick users into authorising malicious Salesforce apps disguised as legitimate tools. The technique mirrors previous high-profile breaches at firms such as Santander and Ticketmaster.

Google warned the group may escalate operations by launching a data leak site. Organisations are urged to tighten their cybersecurity measures and access controls, train staff and apply multi-factor authentication across all accounts.
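Multi-factor authentication, one of the mitigations urged above, typically layers a time-based one-time password (TOTP) on top of credentials. As a rough, vendor-neutral illustration of how such codes are derived under RFC 6238, consider the sketch below; the base32 secret is a placeholder, not a real credential.

```python
# A minimal sketch of TOTP (RFC 6238), the scheme behind most authenticator
# apps. The base32 secret is a placeholder, not a real credential.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step           # current 30-second window
    msg = struct.pack(">Q", counter)             # counter as big-endian uint64
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current six-digit code
```

A server verifies a submitted code by computing the same value for the current window (and usually the adjacent windows, to tolerate clock drift).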

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scientists use AI to teach drones to program themselves

A computer scientist has shown that robots can now write the brains of other robots, thanks to generative AI.

Professor Peter Burke from the University of California, Irvine, has demonstrated a drone capable of creating and hosting its own control system using AI-written code, significantly reducing the time usually needed to build such infrastructure.

The project used several AI models and coding tools to prompt the creation of a real-time, web-based command centre hosted on the drone itself. The final system, which runs on a Raspberry Pi Zero 2 W, allows the drone to operate independently while remaining accessible over the internet.

Unlike traditional systems, where ground control is handled externally, the drone manages its own mission planning and navigation through a built-in AI-generated website.
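The article does not reproduce the project’s code, but the architecture described, a web-based command centre hosted on the drone itself, can be sketched loosely. The hypothetical Python server below exposes telemetry and accepts waypoints over HTTP; every endpoint name and data field here is an illustrative assumption, not the project’s actual interface.

```python
# A minimal, hypothetical sketch of a drone-hosted web command centre:
# a small Flask app serving telemetry and queueing waypoints over HTTP.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder in-memory state standing in for a live flight-controller link.
telemetry = {"lat": 0.0, "lon": 0.0, "alt_m": 0.0, "mode": "LOITER"}
waypoints = []

@app.route("/telemetry")
def get_telemetry():
    # A real system would read live data from the autopilot here.
    return jsonify(telemetry)

@app.route("/waypoint", methods=["POST"])
def add_waypoint():
    # Accepts a JSON waypoint, e.g. {"lat": 33.64, "lon": -117.84, "alt_m": 30}.
    waypoints.append(request.get_json(force=True))
    return jsonify({"queued": len(waypoints)})

if __name__ == "__main__":
    # Bind to all interfaces so the drone is reachable over the network,
    # matching the idea of a command centre accessible over the internet.
    app.run(host="0.0.0.0", port=8080)
```

A server this small would comfortably fit the Raspberry Pi Zero 2 W mentioned above.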

Burke’s team used tools such as Claude, Gemini, ChatGPT, Cursor, and Windsurf to build the system across several sprints. Despite the context-window limitations of each model, the final version was completed in just over 100 hours, around twenty times faster than a previous project of similar complexity.

The final codebase consisted of 10,000 lines and included everything from flight commands to map-based interaction and GPS tracking.

Although the technology shows promising potential in fields like aerial imagery and spatial AI, experts have raised safety concerns.

While a manual override system was included in the experiment, the ability for robots to self-generate control logic introduces new ethical and operational challenges, especially as such systems evolve to operate in unpredictable environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Visa boosts cyber defence support for clients

Visa has launched a new Cybersecurity Advisory Practice to support businesses in identifying and countering growing cyber risks. The initiative aims to provide practical insights tailored to clients of all sizes.

The practice will be powered by Visa Consulting & Analytics, which brings together a global team of consultants, product specialists and data scientists. Services include training, threat analysis and cybersecurity maturity assessments.

Jeremiah Dewey, a veteran with over 20 years of experience in the field, has been named global head of cyber products. He will lead product development and build strategic partnerships.

Visa says the goal is to offer scalable solutions to both small businesses and large enterprises, enabling them to stay resilient in an evolving digital threat landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Colorado’s AI law under review amid budget crisis

Colorado lawmakers face a dual challenge as they return to the State Capitol on 21 August for a special session: closing a $1.2 billion budget shortfall and revisiting a pioneering yet controversial law regulating AI.

Senate Bill 24-205, signed into law in May 2024, aims to reduce bias in AI decision-making affecting areas such as lending, insurance, education, and healthcare. While not due for implementation until February 2026, critics and supporters now expect that deadline to be extended.

Representative Brianna Titone, one of the bill’s sponsors, emphasised the importance of transparency and consumer safeguards, warning of the risks of unregulated AI. However, unexpected costs have emerged: state agencies estimate implementation could cost up to $5 million, far beyond the bill’s original fiscal note.

Governor Jared Polis has called for amendments to prevent excessive financial and administrative burdens on state agencies and businesses. The Judicial Department now expects costs to double from initial projections, requiring supplementary budget requests.

Industry concerns centre on data-sharing requirements and vague regulatory definitions. Critics argue the law could erode competitive advantage and stall innovation in the United States. Developers are urging clarity and more time before compliance is enforced.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants under fire in Australia for failing online child protection standards

A report from Australia’s eSafety Commissioner has found that tech giants, including Apple, Google, Meta, and Microsoft, have failed to act against online child sexual abuse. Notably, Apple and YouTube do not track the number of abuse reports they receive or how quickly they respond, raising serious concerns. Both companies also failed to disclose the number of trust and safety staff they employ, highlighting ongoing transparency and accountability problems in protecting children online.

In July 2024, the eSafety Commissioner of Australia took action by issuing legally enforceable notices to major tech companies, pressuring them to improve their response to child sexual abuse online.

These notices legally require recipients to comply within a set timeframe. Under the order, each company must report to eSafety every six months over a two-year period, detailing its efforts to combat child sexual abuse material, livestreamed abuse, online grooming, sexual extortion, and AI-generated content.

Although earlier notices were issued in 2022 and 2023, the companies have made minimal effort to prevent such crimes, according to Australia’s eSafety Commissioner, Julie Inman Grant.

Key findings from the eSafety Commissioner are listed below; a brief sketch of how hash-matching works follows the list:

  • Apple did not use hash-matching tools to detect known child sexual exploitation and abuse (CSEA) images on iCloud (which was opt-in, end-to-end encrypted), nor to detect known CSEA videos on iCloud or iCloud email. For iMessage and FaceTime (which were end-to-end encrypted), Apple relied solely on Communication Safety, its intervention for identifying images or videos that likely contain nudity, as a means of ‘detecting’ CSEA.
  • Discord did not use hash-matching tools for known CSEA videos on any part of the service (despite using hash-matching tools for known images and tools to detect new CSEA material).
  • Google did not use hash-matching tools to detect known CSEA images on Google Messages (end-to-end encrypted), nor did it detect known CSEA videos on Google Chat, Google Messages, or Gmail.
  • Microsoft did not use hash-matching tools for known CSEA images stored on OneDrive, nor did it use hash-matching tools to detect known videos within content stored on OneDrive or Outlook.
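Hash-matching, the safeguard these findings refer to, compares a fingerprint of each uploaded file against a database of fingerprints of known abusive material. The sketch below is a simplified illustration of the idea rather than any vendor’s implementation: it uses a plain cryptographic hash, which only catches byte-identical copies, whereas production systems use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding. The example hash set is a placeholder.

```python
# A simplified sketch of hash-matching: flag files whose fingerprint
# appears in a database of known material. Placeholder hash values only.
import hashlib

# Hypothetical set of SHA-256 digests supplied by a clearinghouse.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

def is_known_material(path: str) -> bool:
    # Match the file's fingerprint against the known-hash database.
    return sha256_of(path) in KNOWN_HASHES
```

End-to-end encryption complicates this approach, since the provider never sees the plaintext file, which is why the report repeatedly pairs the missing tools with encrypted services.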

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China flags crypto iris scans as national security risk

China’s top security agency has raised concerns over crypto-related projects that collect biometric data, warning they may threaten national security. A recent bulletin from the Ministry of State Security (MSS) warned that crypto firms trading tokens for iris scans could misuse personal data.

While the agency didn’t explicitly mention Worldcoin, the description aligns with its practice of exchanging tokens for biometric scans in over 160 countries.

Officials described iris recognition as a sensitive form of identification that, once leaked, cannot be changed. The bulletin warned that fake facial data may be used by foreign agencies for espionage and infiltration.

In response to privacy concerns, Ethereum co-founder Vitalik Buterin recently proposed a pluralistic identity system. The concept combines multiple sources of verification rather than relying on a single, centralised ID.
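The core idea can be illustrated with a toy threshold check: accept an identity claim only when enough independent issuers vouch for it, so no single registry becomes a choke point. The sketch below is a loose illustration of the concept, not Buterin’s actual design; the Attestation type and the threshold value are assumptions for the example.

```python
# A toy illustration of pluralistic identity: accept a claim only when
# several independent issuers vouch for it. Not Buterin's actual design.
from dataclasses import dataclass

@dataclass
class Attestation:
    issuer: str   # e.g. a government ID scheme, an employer, a web-of-trust peer
    valid: bool   # stands in for real cryptographic signature verification

def accept_claim(attestations: list[Attestation], threshold: int = 2) -> bool:
    # Count distinct issuers with valid attestations, so one issuer
    # cannot inflate confidence by vouching twice.
    distinct = {a.issuer for a in attestations if a.valid}
    return len(distinct) >= threshold

claims = [Attestation("gov-id", True), Attestation("employer", True)]
print(accept_claim(claims))  # True: two independent sources agree
```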

He argued that current models risk eliminating anonymity and may favour wealthy participants in verification systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump Media trials new AI search engine with help from Perplexity

Trump Media and Technology Group has begun testing a new AI-powered search engine called Truth Search AI on its Truth Social platform.

Developed in partnership with AI company Perplexity, the feature is intended to enhance access to information for users of the platform.

Devin Nunes, CEO and Chairman of Trump Media, said the tool will strengthen Truth Social’s position in the so-called ‘Patriot Economy’.

Perplexity’s Chief Business Officer, Dmitry Shevelenko, added that the collaboration brings powerful AI to users who are seeking answers to significant questions.

The search engine is already live on the platform and has responded to politically sensitive queries with measured language.

When asked whether Donald Trump was a liar, the tool noted that the label often depends on context, but acknowledged that fact-checkers have documented many misleading claims.

A similar question about Nancy Pelosi prompted the response that such a claim was partisan rather than factual.

Trump Media plans to expand the feature to its iOS and Android apps shortly. The launch is part of a wider strategy to broaden the company’s digital offerings, which also include ventures in cryptocurrency and finance, such as a proposed Bitcoin ETF in partnership with Crypto.com and Yorkville America Digital.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US court system suffers sweeping cyber intrusion

A sweeping cyberattack has compromised the federal court filing system across multiple US states, exposing sensitive case data and informant identities. The breach affects core systems used by legal professionals and the public.

Sources say the Administrative Office of the US Courts first realised the scale of the hack in early July, and authorities are still assessing the damage. Nation-state-linked actors or organised crime groups are suspected.

Critical systems were impacted, including CM/ECF, the courts’ electronic case filing platform, and PACER, the public docket access service, raising fears that sealed indictments, search warrants and cooperation records have been exposed. A dozen dockets were reportedly tampered with in at least one district.

Calls to modernise the ageing court infrastructure have intensified, with officials warning of rising cyber threats and the urgent need for system replacements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US agencies to adopt ChatGPT to modernise government operations

The US government has finalised a deal with OpenAI to integrate ChatGPT Enterprise across all federal agencies. Under the agreement, each agency can access ChatGPT Enterprise for a nominal $1, a move intended to support AI adoption and modernise operations.

According to the General Services Administration, the move aligns with the White House’s AI Action Plan, which aims to make the US a global leader in AI development. The plan promotes AI integration, innovation, and regulation across public institutions.

However, privacy advocates and cybersecurity experts have raised concerns over the risks of centralised AI in government. Critics cite the potential for mass surveillance, narrative control, and sensitive data exposure.

Sam Altman, CEO of OpenAI, has cautioned users that AI conversations are not protected under privacy laws and could be used in legal proceedings. Storing conversations with large language models on centralised servers also raises concerns over civil liberties and government overreach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!