Cyber-attack on Bouygues Telecom exposes millions of customer records

A cyber-attack on Bouygues Telecom has compromised the personal data of 6.4 million customers. The French firm disclosed that a third party accessed personal and contractual information linked to certain subscriptions.

Attackers gained access on 4 August and were blocked swiftly after detection; the company has since stepped up monitoring of its systems. Exposed data includes contact details, contractual and civil status information, business records for professional clients, and IBANs (international bank account numbers) for affected users.

The breach did not expose credit card numbers or passwords. Bouygues notified affected customers by email or text and advised vigilance against scam calls and messages.

The French data protection authority, the CNIL, has been informed, and a formal complaint has been filed. The company warned that perpetrators face up to five years in prison and a fine of €150,000 under French law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

German court limits police use of spyware

Germany’s top court has ruled that police can only deploy spyware to monitor devices in cases involving serious crimes, narrowing the scope of surveillance powers introduced in 2017. The decision means spyware can no longer be used for investigating offences with a maximum sentence of three years or less, which judges said fall under ‘basic criminality.’

The case was brought by the digital rights group Digitalcourage, which challenged rules that allowed police to use spyware to intercept encrypted chats and messages. Plaintiffs argued that the measures were too broad and risked exposing the communications of people not under investigation. The court agreed, stating that such surveillance represents a ‘very severe’ intrusion into privacy.

Judges highlighted that spyware not only circumvents security systems but also enables access to vast amounts of sensitive data, including all types of digital communications. They warned that the scale and covert nature of this surveillance go far beyond traditional monitoring methods, threatening both the confidentiality and integrity of personal IT systems.

By restricting the use of spyware to investigations of serious crimes, the ruling places tighter limits on state surveillance in Germany, reinforcing constitutional protections for privacy and digital rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare cybersecurity failures put patient safety at risk, Modat warns

Research by Modat has identified more than 1.2 million internet-connected healthcare devices and systems exposing patient data. The United States, South Africa, and Australia topped the list, with vulnerable systems including MRI scanners, CT machines, and hospital management platforms.

Using its Modat Magnify platform, the company identified misconfigurations, weak passwords, and unpatched software as common risks. Some devices had no authentication at all, while others used factory-default passwords such as ‘admin’ or ‘123456’. Sensitive MRI images, dental X-rays, and blood test records were found to be accessible.

Modat worked with Health-ISAC and the Dutch healthcare CERT, Z-CERT, on responsible disclosure, alerting organisations so they could secure exposed systems. CEO Soufian El Yadmani said devices should never be open to the internet without safeguards, warning that remote access must be properly secured.

The research stressed that healthcare cybersecurity is a patient safety issue. Outdated or unprotected devices could enable fraud, extortion, or network breaches. Regular security checks, asset inventories, and monitoring were recommended to reduce risks.

Founded in 2024, Modat uses its Device DNA dataset to catalogue internet-connected devices globally. It aims to help healthcare and other sectors close the gap between rising cyber threats and effective resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU member states clash over the future of encrypted private messaging

The ongoing controversy around the EU’s proposed mandatory scanning of private messages has escalated, with the European Parliament intensifying pressure on the Council to reach a formal agreement.

A leaked memo reveals that the Parliament is threatening to block an extension of the current voluntary scanning rules unless mandatory chat control is agreed.

Denmark, which holds the rotating EU Council Presidency, is pushing a more stringent version of the so-called Chat Control law, which could become binding as soon as 14 October 2025.

While the Parliament argues the law is essential for protecting children online, many legal experts and rights groups warn the proposal still violates fundamental human rights, particularly the right to privacy and secure communication.

The Council’s Legal Service has repeatedly noted that the draft infringes on these rights, since it mandates the scanning of all private communications and would undermine the end-to-end encryption that most messaging apps rely on.

Some governments, including Germany and Belgium, remain hesitant or opposed, citing these serious concerns.

Supporters such as Italy, Spain, and Hungary have openly backed Denmark’s proposal, signalling a shift in political will towards stricter measures. France’s position has also grown more favourable, though internal debate continues.

Opponents warn that weakening encryption could open the door to cyber attacks and foreign interference, while proponents emphasise the urgent need to prevent abuse and close loopholes in existing law.

The next Council meeting in September will be critical in shaping the final form of the regulation.

The dispute highlights the persistent tension between digital privacy and security, reflecting broader European challenges in regulating encrypted communications.

As the October deadline approaches, the EU faces a defining moment in balancing child protection with protecting the confidentiality of citizens’ communications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ShinyHunters breach Google’s Salesforce database

Google has confirmed that it was itself affected by a data breach uncovered during its investigation into the ShinyHunters group. The attackers accessed a Salesforce database used to store information on small business customers.

The breach exposed business names and contact details during a short window before access was revoked. Google stated that no highly sensitive data was compromised.

ShinyHunters used phishing and vishing (voice phishing) tactics to trick users into authorising malicious Salesforce apps disguised as legitimate tools. The technique mirrors previous high-profile breaches at firms such as Santander and Ticketmaster.

Google warned that the group may escalate operations by launching a data leak site. Organisations are urged to tighten their cybersecurity measures and access controls, train staff, and apply multi-factor authentication across all accounts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scientists use AI to teach drones to program themselves

A computer scientist has shown that robots can now write the brains of other robots, thanks to generative AI.

Professor Peter Burke from the University of California, Irvine, has demonstrated a drone capable of creating and hosting its own control system using AI-written code, significantly reducing the time usually needed to build such infrastructure.

The project used several AI models and coding tools to prompt the creation of a real-time, web-based command centre hosted on the drone itself. The final system, which runs on a Raspberry Pi Zero 2 W, allows the drone to operate independently while remaining accessible over the internet.
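
Burke’s code is not published in the article, but the architecture it describes, a small web server running on the drone’s own Raspberry Pi that relays commands to the flight controller, can be sketched in a few lines. The snippet below is purely illustrative and assumes details the article does not give: a Flask server, a MAVLink-compatible autopilot reached over the Pi’s serial port via pymavlink, and hypothetical endpoint names and port.

```python
# Illustrative sketch only; not the project's actual code. Assumes Flask and
# pymavlink are installed and a MAVLink autopilot is wired to the Pi's serial
# port (the device path, endpoints, and port number are all assumptions).
from flask import Flask, jsonify, request
from pymavlink import mavutil

app = Flask(__name__)

# Connect to the flight controller and wait for its heartbeat.
master = mavutil.mavlink_connection("/dev/serial0", baud=57600)
master.wait_heartbeat()

@app.route("/status")
def status():
    # Report the latest GPS fix so a remote browser can track the drone.
    msg = master.recv_match(type="GLOBAL_POSITION_INT", blocking=True, timeout=5)
    if msg is None:
        return jsonify(error="no position report received"), 504
    return jsonify(lat=msg.lat / 1e7, lon=msg.lon / 1e7, alt_m=msg.alt / 1000)

@app.route("/goto", methods=["POST"])
def goto():
    # Accept a JSON waypoint and forward it to the autopilot.
    wp = request.get_json(force=True)
    master.mav.set_position_target_global_int_send(
        0, master.target_system, master.target_component,
        mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT_INT,
        0b0000111111111000,  # type mask: use position fields only
        int(wp["lat"] * 1e7), int(wp["lon"] * 1e7), float(wp["alt"]),
        0, 0, 0, 0, 0, 0, 0, 0)
    return jsonify(ok=True)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # reachable on the local network
```

Any onboard command centre exposed beyond the local network would also need authentication and encryption, which this sketch omits for brevity.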

Unlike traditional systems, where ground control is handled externally, the drone manages its own mission planning and navigation through a built-in AI-generated website.

Burke’s team used tools such as Claude, Gemini, ChatGPT, Cursor, and Windsurf to build the system across several sprints. Despite each model’s context-window limitations, the final version was completed in just over 100 hours, around twenty times faster than a previous project of similar complexity.

The final codebase consisted of 10,000 lines of code and covered everything from flight commands to map-based interaction and GPS tracking.

Although the technology shows promise in fields such as aerial imagery and spatial AI, experts have raised safety concerns.

While a manual override system was included in the experiment, the ability of robots to self-generate control logic introduces new ethical and operational challenges, especially as such systems evolve to operate in unpredictable environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Visa boosts cyber defence support for clients

Visa has launched a new Cybersecurity Advisory Practice to support businesses in identifying and countering growing cyber risks. The initiative aims to provide practical insights tailored to clients of all sizes.

The practice will be powered by Visa Consulting & Analytics, which brings together a global team of consultants, product specialists and data scientists. Services include training, threat analysis and cybersecurity maturity assessments.

Jeremiah Dewey, a veteran with over 20 years of experience in the field, has been named global head of cyber products. He will lead product development and build strategic partnerships.

Visa says the goal is to offer scalable solutions to both small businesses and large enterprises, enabling them to stay resilient in an evolving digital threat landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Colorado’s AI law under review amid budget crisis

Colorado lawmakers face a dual challenge as they return to the State Capitol on 21 August for a special session: closing a $1.2 billion budget shortfall and revisiting a pioneering yet controversial law regulating AI.

Senate Bill 24-205, signed into law in May 2024, aims to reduce bias in AI decision-making in areas such as lending, insurance, education, and healthcare. Although it is not due to take effect until February 2026, both critics and supporters now expect that deadline to be extended.

Representative Brianna Titone, one of the bill’s sponsors, emphasised the importance of transparency and consumer safeguards, warning of the risks associated with unregulated AI. However, unexpected costs have emerged. State agencies estimate implementation could cost up to $5 million, a far cry from the bill’s original fiscal note.

Governor Jared Polis has called for amendments to prevent excessive financial and administrative burdens on state agencies and businesses. The Judicial Department now expects costs to double from initial projections, requiring supplementary budget requests.

Industry concerns centre on data-sharing requirements and vague regulatory definitions. Critics argue the law could erode competitive advantage and stall innovation in the United States. Developers are urging clarity and more time before compliance is enforced.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants under fire in Australia for failing online child protection standards

A report from Australia’s eSafety Commissioner has found that tech giants, including Apple, Google, Meta, and Microsoft, have failed to act against online child sexual abuse. Notably, it found that Apple and YouTube do not track the number of abuse reports they receive or how quickly they respond, raising serious concerns. Both companies also failed to disclose the number of trust and safety staff they employ, highlighting ongoing transparency and accountability issues in protecting children online.

In July 2024, the eSafety Commissioner of Australia took action by issuing legally enforceable notices to major tech companies, pressuring them to improve their response to child sexual abuse online.

These notices legally require recipients to comply within a set timeframe. Under the orders, each company was required to report to eSafety every six months over a two-year period, detailing its efforts to combat child sexual abuse material, livestreamed abuse, online grooming, sexual extortion, and AI-generated abuse material.

Although earlier notices had already been issued in 2022 and 2023, the companies have made minimal effort to prevent such crimes, according to Australia’s eSafety Commissioner, Julie Inman Grant.

Key findings from the eSafety Commissioner’s report include the following (a brief sketch of the hash-matching technique they refer to appears after the list):

  • Apple did not use hash-matching tools to detect known child sexual exploitation and abuse (CSEA) images on iCloud (which was opt-in, end-to-end encrypted), nor known CSEA videos on iCloud or iCloud email. For iMessage and FaceTime (both end-to-end encrypted), Apple relied solely on Communication Safety, its safety intervention for flagging images or videos likely to contain nudity, as a means of ‘detecting’ CSEA.
  • Discord did not use hash-matching tools for known CSEA videos on any part of the service (despite using hash-matching tools for known images and tools to detect new CSEA material).
  • Google did not use hash-matching tools to detect known CSEA images on Google Messages (end-to-end encrypted), nor to detect known CSEA videos on Google Chat, Google Messages, or Gmail.
  • Microsoft did not use hash-matching tools for known CSEA images stored on OneDrive, nor did it use hash-matching tools to detect known videos within content stored on OneDrive or Outlook.
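
Hash-matching, the technique these findings repeatedly cite, compares a fingerprint of each uploaded file against a database of fingerprints of previously verified abuse material. The toy sketch below shows only the matching step and is not any company’s actual system: production tools such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas this version uses a plain cryptographic hash and a placeholder hash list.

```python
# Toy illustration of hash-matching; real deployments use perceptual hashing
# and vetted hash lists from bodies such as NCMEC, not this placeholder set.
import hashlib
from pathlib import Path

KNOWN_HASHES: set[str] = set()  # placeholder for a vetted known-hash database

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_match(path: Path) -> bool:
    """True if the file's fingerprint appears in the known-hash database."""
    return file_digest(path) in KNOWN_HASHES
```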

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China flags crypto iris scans as national security risk

China’s top security agency has raised concerns over crypto-related projects collecting biometric data, warning they may threaten national security. A recent bulletin from the Ministry of State Security (MSS) warned that crypto firms trading tokens for iris scans could misuse personal data.

While the agency didn’t explicitly mention Worldcoin, the description aligns with its practice of exchanging tokens for biometric scans in over 160 countries.

Officials described iris recognition as a sensitive form of identification that, once leaked, cannot be changed. The bulletin warned that fake facial data may be used by foreign agencies for espionage and infiltration.

In response to privacy concerns, Ethereum co-founder Vitalik Buterin recently proposed a pluralistic identity system. The concept combines multiple sources of verification rather than relying on a single, centralised ID.

He argued that current models risk eliminating anonymity and may favour wealthy participants in verification systems.
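
Buterin’s proposal is a design concept rather than published code, but its core idea, accepting several independent attestations instead of trusting a single centralised identifier, can be shown with a toy model. Everything below, the Attestation type, the issuer names, and the two-issuer threshold, is our own hypothetical construction.

```python
# Toy model of a 'pluralistic' identity check: a subject is verified only if
# enough independent issuers vouch for them. Purely hypothetical construction.
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    issuer: str    # e.g. a government ID scheme, a social graph, a biometric
    subject: str   # the person being verified
    valid: bool    # outcome of the issuer's own check

def is_verified(attestations: list[Attestation], subject: str,
                threshold: int = 2) -> bool:
    """Accept a subject if at least `threshold` distinct issuers vouch for them."""
    issuers = {a.issuer for a in attestations
               if a.subject == subject and a.valid}
    return len(issuers) >= threshold

# Two independent attestations clear the default threshold of two.
proofs = [Attestation("passport-scheme", "alice", True),
          Attestation("web-of-trust", "alice", True)]
assert is_verified(proofs, "alice")
```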

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!