Judge halts OPM data sharing with DOGE amid privacy concerns

A federal judge in New York ordered the US Office of Personnel Management (OPM) to stop sharing sensitive personal data with agents of the Department of Government Efficiency (DOGE).

The preliminary injunction, issued on 6 June by Judge Denise Cote, cited a strong likelihood that OPM and DOGE had violated both the Privacy Act of 1974 and the Administrative Procedure Act.

The lawsuit, led by the Electronic Frontier Foundation and several advocacy groups, alleges that OPM unlawfully disclosed information from one of the largest federal employee databases to DOGE, a controversial initiative reportedly linked to billionaire Elon Musk.

The database includes names, social security numbers, health and financial data, union affiliations, and background check records for millions of federal employees, applicants, and retirees.

Union representatives and privacy advocates called the ruling a significant win for data protection and government accountability. AFGE President Everett Kelley criticised the involvement of ‘Musk’s DOGE cronies’, arguing that unelected individuals should not have access to such sensitive material.

The legal action also seeks to delete any data handed over to DOGE. The case comes amid ongoing concerns about federal data security following OPM’s 2015 breach, which compromised information on more than 22 million people.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT and generative AI have polluted the internet — and may have broken themselves

The explosion of generative AI tools like ChatGPT has flooded the internet with low-quality, AI-generated content, making it harder for future models to learn from authentic human knowledge.

As AI continues to train on increasingly polluted data, a loop forms in which AI imitates already machine-made content, leading to a steady drop in originality and usefulness. The worrying trend is referred to as ‘model collapse’.
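The feedback loop can be illustrated with a toy, single-variable simulation (a hypothetical sketch, not an LLM): each 'generation' fits a Gaussian to the previous generation's samples and then trains only on its own output, and the spread of the data steadily collapses.

```python
import random
import statistics

random.seed(0)

def next_generation(data, n=100):
    # Fit a Gaussian to the current data, then replace the
    # "training set" with samples drawn from that fitted model.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

# Generation 0: "human-made" data.
data = [random.gauss(0.0, 1.0) for _ in range(100)]
initial_spread = statistics.stdev(data)

# Each generation trains only on the previous generation's output.
for _ in range(500):
    data = next_generation(data)

final_spread = statistics.stdev(data)
print(f"spread: {initial_spread:.3f} -> {final_spread:.3f}")
```

In expectation the variance shrinks slightly every generation, so after many rounds the samples cluster near a single value: the toy analogue of originality draining out of a model trained on its own output.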

To illustrate the risk, researchers compare clean pre-AI data to ‘low-background steel’ — a rare kind of steel made before nuclear testing in 1945, which remains vital for specific medical and scientific uses.

Just as modern steel became contaminated by radiation, modern data is being tainted by artificial content. Cambridge researcher Maurice Chiodo notes that pre-2022 data is now seen as ‘safe, fine, clean’, while everything after is considered ‘dirty’.

A key concern is that techniques like retrieval-augmented generation, which allow AI to pull real-time data from the internet, risk spreading even more flawed content. Some research already shows that retrieval-augmented generation produces more ‘unsafe’ outputs when it draws on polluted sources.
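The retrieval step itself can be sketched in a few lines (a hypothetical toy retriever, not any specific product): documents are ranked by overlap with the query and the best matches are prepended to the model's prompt, so whatever the retriever pulls in, clean or polluted, shapes the answer.

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive token overlap with the query."""
    query_tokens = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_tokens & set(doc.lower().split())),
        reverse=True,
    )[:k]

documents = [
    "Model collapse degrades generative models trained on their own output.",
    "Low-background steel predates the first nuclear tests.",
    "Bananas are rich in potassium.",
]

context = retrieve("what causes model collapse in generative models", documents)

# Retrieved passages are prepended to the prompt verbatim, so any
# AI-polluted text that ranks highly flows straight into generation.
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: what causes model collapse?"
```

Real systems use embedding similarity rather than token overlap, but the structural point is the same: the generator trusts whatever the retriever surfaces.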

If developers rely on such polluted data, scaling models by adding more information becomes far less effective, potentially hitting a wall in progress.

Chiodo argues that future AI development could be severely limited without a clean data reserve. He and his colleagues urge the introduction of clear labelling and tighter controls on AI content.

However, industry resistance to regulation might make meaningful reform difficult, raising doubts about whether the pollution can be reversed.


Indonesia’s cyber push faces capacity challenges in the provinces

Indonesia is decentralising its approach to cybersecurity, launching eight regional Cyber Crime Directorates within provincial police forces in September 2024.

These directorates, located in areas including Jakarta, East Java, Bali, and Papua, aim to boost local responses to increasingly complex cyber threats—from data breaches and financial fraud to hacktivism and disinformation.

The move marks a shift from Jakarta-led cybersecurity efforts toward a more distributed model, aligning with Indonesia’s broader decentralisation goals. It reflects the state’s recognition that digital threats are not only national in scope, but deeply rooted in local contexts.

However, experts warn that regionalising cyber governance comes with significant challenges. Provincial police commands often lack specialised personnel, digital forensics capabilities, and adaptive institutional structures.

Many still rely on rotations from central agencies or basic training programs—insufficient for dealing with fast-moving and technically advanced cyberattacks.

Moreover, the culture of rigid hierarchy and limited cross-agency collaboration may further hinder rapid response and innovation at the local level. Without reforms to increase flexibility, autonomy, and inter-agency cooperation, these new directorates risk becoming symbolic rather than operationally impactful.

The inclusion of provinces like Central Sulawesi and Papua also reveals a political dimension. These regions are historically security-sensitive, and the presence of cyber directorates could serve both policing and state surveillance functions, raising concerns over the balance between security and civil liberties.

To be effective, the initiative requires more than administrative expansion. It demands sustained investment in talent development, modern infrastructure, and trusted partnerships with local stakeholders—including the private sector and academia.

If these issues are not addressed, Indonesia’s push to regionalise cybersecurity may reinforce old hierarchies rather than build meaningful local capacity. Stronger, smarter institutions—not just new offices—will determine whether Indonesia can secure its digital future.


Graphite spyware used against European reporters, experts warn

A new surveillance scandal has emerged in Europe as forensic evidence confirms that Israeli spyware firm Paragon used its Graphite tool to target journalists through zero-click attacks on iOS devices. The attacks, requiring no user interaction, exposed sensitive communications and location data.

Citizen Lab, together with reports from Schneier on Security, identified the spyware on multiple journalists’ devices on 29 April 2025. The findings mark the first confirmed use of Paragon’s spyware against members of the press, raising alarms over digital privacy and press freedom.

Backed by US investors, Paragon has operated outside of Israel under claims of aiding national security. But its spyware is now at the centre of a widening controversy, particularly in Italy, where the government recently ended its contract with the company after two journalists were targeted.

Experts warn that such attacks undermine the confidentiality crucial to journalism and could erode democratic safeguards. Even Apple’s secure devices proved vulnerable, according to Bleeping Computer, highlighting the advanced nature of Graphite.

The incident has sparked calls for tighter international regulation of spyware firms. Without oversight, critics argue, tools meant for fighting crime risk being used to silence dissent and target civil society.

The Paragon case underscores the urgent need for transparency, accountability, and stronger protections in an age of powerful, invisible surveillance tools.


New cyberattack method poses major threat to smart grids, study finds

A new study published in ‘Engineering’ highlights a growing cybersecurity threat to smart grids as they become more complex due to increased integration of distributed energy sources.

The research, conducted by Zengji Liu, Mengge Liu, Qi Wang, and Yi Tang, focuses on a sophisticated form of cyberattack known as a false data injection attack (FDIA) that targets data-driven algorithms used in smart grid operations.

As modern power systems adopt technologies like battery storage and solar panels, they rely more heavily on algorithms to manage energy distribution and grid stability. However, these algorithms can be exploited.

The study introduces a novel black-box FDIA method that injects false data directly at the measurement modules of distributed power supplies, using generative adversarial networks (GANs) to produce stealthy attack vectors.

What makes this method particularly dangerous is that it doesn’t require detailed knowledge of the grid’s internal workings, making it more practical and harder to detect in real-world scenarios.
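The study's GAN-generated attack vectors are beyond a short sketch, but the basic mechanics of false data injection can be illustrated with a deliberately simplified white-box toy (hypothetical weights and measurements, not the paper's black-box method): a modest, targeted shift to the measurement vector flips a linear 'stability' predictor's output.

```python
# Toy linear stability predictor: "stable" if w.x + b > 0.
# Weights, bias, and measurements below are invented for illustration.
w = [0.8, -0.5, 0.3]
b = 0.1

def predicts_stable(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

clean = [1.0, 0.2, 0.5]  # clean measurements -> classified stable

# FGSM-style injection: nudge each measurement by a fixed step against
# the sign of its weight, the direction that most reduces the score.
eps = 0.7
attacked = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, clean)]

print(predicts_stable(clean), predicts_stable(attacked))  # True False
```

The paper's contribution is doing this without knowing `w` at all: the GAN learns to produce perturbations that fool the grid's models from observed behaviour alone, which is what makes the attack practical against real systems.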

The researchers also proposed an approach to estimate controller and filter parameters in distributed energy systems, making it easier to launch these attacks.

To test the method, the team simulated attacks on the New England 39-bus system, specifically targeting a deep learning model used for transient stability prediction. Results showed a dramatic drop in accuracy—from 98.75% to 56%—after the attack.

The attack also proved effective across multiple neural network models and on larger grid systems, such as IEEE’s 118-bus and 145-bus networks.

These findings underscore the urgent need for better cybersecurity defences in the evolving smart grid landscape. As systems grow more complex and reliant on AI-driven management, developing robust protection against FDIA threats will be critical.


Quantum computing threatens Bitcoin: Experts debate timeline

Recent breakthroughs in quantum computing have revived fears about the long-term security of Bitcoin (BTC).

With IBM aiming to release the first fault-tolerant quantum computer, the IBM Quantum Starling, by 2029, experts are increasingly concerned that such advancements could undermine Bitcoin’s cryptographic backbone.

Bitcoin currently relies on elliptic curve cryptography (ECC) and the SHA-256 hashing algorithm to secure wallets and transactions. ECC is potentially vulnerable to Shor’s algorithm, which a sufficiently powerful quantum computer could use to recover private keys, while Grover’s algorithm could weaken, though not outright break, SHA-256.
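The asymmetry between the two quantum algorithms is easy to quantify with standard textbook figures (a back-of-the-envelope sketch, not from the article): Shor's runs in polynomial time against ECC, while Grover's only offers a quadratic speedup on brute-force search.

```python
import math

# SHA-256 preimage search: classical brute force vs Grover's quadratic speedup.
classical_ops = 2 ** 256
grover_ops = math.isqrt(classical_ops)  # exact integer square root: 2 ** 128

print(f"classical: 2^{classical_ops.bit_length() - 1} operations")
print(f"Grover:    2^{grover_ops.bit_length() - 1} operations")

# 2^128 operations remains far out of reach, which is why Grover merely
# "weakens" SHA-256, while Shor's polynomial-time attack on ECC breaks it.
```

This is why migration plans focus first on replacing ECC signatures with post-quantum schemes rather than on the hash function.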

Google quantum researcher Craig Gidney warned in May 2025 that quantum resources required to break RSA encryption had been significantly overestimated. Although Bitcoin uses ECC, not RSA, Gidney’s research hinted at a threat window between 2030 and 2035 for crypto systems.

Opinions on the timeline vary. Adam Back, Blockstream CEO and early Bitcoin advocate, believes a quantum threat is still at least two decades away. However, he admitted that future progress could force users to migrate coins to quantum-safe wallets—potentially even Satoshi Nakamoto’s dormant holdings.

Others are more alarmed. David Carvalho, CEO of Naoris Protocol, claimed in a June 2025 op-ed that Bitcoin could be cracked within five years, pointing to emerging technologies like Microsoft’s Majorana chip. He estimated that nearly 30% of BTC is stored in quantum-vulnerable addresses.

‘Just one breach could destroy trust in the entire ecosystem,’ Carvalho warned, noting that BlackRock has already acknowledged the quantum risk in its Bitcoin ETF filings.

Echoing this urgency, billionaire investor Chamath Palihapitiya said in late 2024 that SHA-256 could be broken within two to five years if companies scale quantum chips like Google’s 105-qubit Willow. He urged the crypto industry to start updating encryption protocols before it’s too late.

While truly fault-tolerant quantum machines capable of breaking Bitcoin are not yet available, the accelerating pace of research suggests that preparing for a quantum future is no longer optional—it’s a necessity.


AI health tools need clinicians to prevent serious risks, Oxford study warns

Researchers at the University of Oxford have warned that AI in healthcare, particularly chatbots, should not operate without human oversight.

Researchers found that relying solely on AI for medical self-assessment could worsen patient outcomes instead of improving access to care. The study highlights how these tools, while fast and data-driven, fall short in delivering the judgement and empathy that only trained professionals can offer.

The findings raise alarm about the growing dependence on AI to fill gaps caused by doctor shortages and rising costs. Chatbots are often seen as scalable solutions, but without rigorous human-in-the-loop validation, they risk providing misleading or inconsistent information, particularly to vulnerable groups.

Rather than helping, they might increase health disparities by delaying diagnosis or giving patients false reassurance.

Experts are calling for safer, hybrid approaches that embed clinicians into the design and ongoing use of AI tools. The Oxford researchers stress that continuous testing, ethical safeguards and clear protocols must be in place.

Instead of replacing clinical judgement, AI should support it. The future of digital healthcare hinges not just on innovation but on responsibility and partnership between technology and human care.


Cyberattack on Nova Scotia Power exposes sensitive data of 280,000 customers

Canada’s top cyber-defence official has spoken out following the ransomware attack that compromised the personal data of 280,000 Nova Scotia Power customers.

The breach, which occurred on 19 March but went undetected until 25 April, affected over half of the utility’s customer base. Stolen data included names, addresses, birthdates, driver’s licences, social insurance numbers, and banking details.

Rajiv Gupta, head of the Canadian Centre for Cyber Security, confirmed that Nova Scotia Power had contacted the agency following the incident.

While he refrained from discussing operational details or attributing blame, he highlighted the rising frequency of ransomware attacks against critical infrastructure across Canada.

He explained how criminal groups use double extortion tactics — stealing data and locking systems — to pressure organisations into paying ransoms, often without guaranteeing system restoration or data confidentiality.

Although the utility declined to pay the ransom, the fallout has led to a wave of scrutiny. Gupta warned that growing interconnectivity and the integration of legacy systems with internet-facing platforms have increased vulnerability.

He urged utilities and other infrastructure operators to build defences based on worst-case scenarios and to adopt recommended cyber hygiene practices and the Centre’s ransomware playbook.

In response to the breach, the Nova Scotia Energy Board has approved a $1.8 million investment in cybersecurity upgrades.

The Canadian cyber agency, although lacking regulatory authority, continues to provide support and share lessons from such incidents with other organisations to raise national resilience.


German state leaves Microsoft Teams for digital sovereignty

In a bold move highlighting growing concerns over digital sovereignty, the German state of Schleswig-Holstein is cutting ties with Microsoft. Announced by Digitalisation Minister Dirk Schroedter, the state is uninstalling the tech giant’s ubiquitous software across its entire administration.

‘We’re done with Teams!’ declared Minister Schroedter, signalling a complete shift away from Microsoft products like Word, Excel, Outlook, and eventually the Windows operating system itself. Instead, Schleswig-Holstein is turning to open-source alternatives like LibreOffice and Linux.

The reason? A strong desire to ‘take back control’ of its data and reduce reliance on US tech giants. Minister Schroedter emphasised that recent geopolitical tensions, particularly following Donald Trump’s return to the White House and rising US-EU friction, have ‘strengthened interest’ in their path.

‘The war in Ukraine revealed our energy dependencies,’ he noted, ‘and now we see there are also digital dependencies.’ The transition, affecting all 60,000 public servants, including police, judges, and eventually teachers, begins in less than three months.

Data will also move away from Microsoft-controlled clouds to German infrastructure. Beyond sovereignty, the state expects significant cost savings – potentially tens of millions of euros – compared to licensing fees and mandatory updates, which experts say can leave organisations feeling taken ‘by the throat’. The move also echoes long-standing antitrust concerns, such as the EU’s investigation into Microsoft’s bundling of Teams.

Microsoft was earlier accused of blocking the email of ICC Chief Prosecutor Karim Khan in compliance with US sanctions—an action it denied, noting the ICC had reportedly switched to ProtonMail. The incident raised fresh questions about digital sovereignty and the risks of foreign cloud dependency.

Why does it matter?

While challenges exist, like potential staff resistance highlighted by past struggles in Munich, Schleswig-Holstein is forging ahead. It joins other entities like France’s gendarmerie and is watched by cities like Copenhagen and Aarhus. Bolstered by the new EU ‘Interoperable Europe Act’, Schleswig-Holstein aims to be a pioneer, proving that governments can successfully reclaim control of their digital destiny.


Google pushes users to move away from passwords

Google urges users to move beyond passwords, citing widespread reuse and vulnerability to phishing attacks. The company is now promoting alternatives like passkeys and social sign-ins as more secure and user-friendly options.

Data from Google shows that half of users reuse passwords, while the rest either memorise or write them down. Gen Z is leading the shift and is significantly more likely to adopt passkeys and social logins than older generations.

Passkeys, stored on user devices, eliminate traditional password input and reduce phishing risks by relying on biometrics or device PINs for authentication. However, limited app support and difficulty syncing across devices remain barriers to broader adoption.

Google highlights that while social sign-ins offer convenience, they come with privacy trade-offs by giving large companies access to more user activity data. Users still relying on passwords are advised to adopt app-based two-factor authentication over SMS or email, which are far less secure.
