FTC says Amazon misused legal privilege to dodge scrutiny

Federal regulators have accused Amazon of deliberately concealing incriminating evidence in an ongoing antitrust case by abusing privilege claims. The Federal Trade Commission (FTC) said Amazon wrongly withheld nearly 70,000 documents and withdrew 92% of its privilege claims after a judge ordered a re-review.

The FTC claims Amazon marked non-legal documents as privileged to shield them from scrutiny. Internal emails suggest staff were told to copy legal teams into communications unnecessarily so the messages could be labelled privileged.

One email reportedly called former CEO Jeff Bezos the ‘chief dark arts officer,’ referring to questionable Prime subscription tactics.

The documents revealed issues such as widespread involuntary Prime sign-ups and efforts to manipulate search results in favour of Amazon’s products. Regulators said the pattern of mislabelling points to a deliberate effort to hide evidence rather than honest error.

The FTC is now seeking a 90-day extension for discovery and wants Amazon to cover the additional legal costs. It claims the delay and concealment gave Amazon an unfair strategic advantage instead of allowing a level playing field.

New EU regulation to track crypto transfers and ban privacy coins

The European Union is set to introduce new measures under its Anti-Money Laundering Regulation (AMLR) to track cryptocurrency transfers. The rules will require crypto-asset service providers to collect data on both the senders and recipients of funds, expanding transparency across the sector.

From 1 July 2027, cryptocurrency exchanges and custodial services will be prohibited from dealing with anonymous wallets and privacy coins. The regulation also mandates ‘intrusive checks’ for self-hosted wallets, requiring verification for transactions over €1,000.
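
For readers wondering what such a check might look like in practice, the sketch below is a minimal, hypothetical illustration of the €1,000 threshold rule. The Transfer record, field names and flagging logic are assumptions made for illustration, not details drawn from the regulation’s technical standards.

```python
# Hypothetical illustration of the AMLR-style threshold check described above.
# The Transfer record and field names are assumptions, not the regulation's wording.
from dataclasses import dataclass

VERIFICATION_THRESHOLD_EUR = 1_000  # transfers above this value trigger extra checks


@dataclass
class Transfer:
    amount_eur: float
    counterparty_self_hosted: bool  # True if the other wallet is not held at a regulated provider


def requires_ownership_verification(transfer: Transfer) -> bool:
    """Flag transfers to or from self-hosted wallets above the threshold."""
    return transfer.counterparty_self_hosted and transfer.amount_eur > VERIFICATION_THRESHOLD_EUR


# Example: a 1,500 EUR transfer to a self-hosted wallet is flagged; 400 EUR is not.
print(requires_ownership_verification(Transfer(1_500.0, True)))  # True
print(requires_ownership_verification(Transfer(400.0, True)))    # False
```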

However, this move has sparked concerns within the cryptocurrency industry, with critics arguing that it could limit privacy and push the sector into less transparent markets.

Monero developer Riccardo Spagni and other industry figures fear the regulations could drive privacy-focused firms to relocate to jurisdictions that support privacy rights.

They warn that the EU’s approach could hinder innovation and push parts of the crypto economy into the black market.

Microsoft bans DeepSeek app for staff use

Microsoft has confirmed it does not allow employees to use the DeepSeek app, citing data security and propaganda concerns.

Speaking at a Senate hearing, company president Brad Smith explained the decision stems from fears that data shared with DeepSeek could end up on Chinese servers and be exposed to state surveillance laws.

Although DeepSeek’s model is open source and the app is widely available, Microsoft has chosen not to list the app in its own store.

Smith warned that DeepSeek’s answers may be influenced by Chinese government censorship and propaganda, and its privacy policy confirms data is stored in China, making it subject to local intelligence regulations.

Interestingly, Microsoft still offers DeepSeek’s R1 model via its Azure cloud service. The company argued this is a different matter, as customers can host the model on their own servers instead of relying on DeepSeek’s infrastructure.
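
As a rough illustration of what self-hosting means in practice, the sketch below loads one of DeepSeek’s openly published R1 checkpoints with the Hugging Face transformers library and runs it entirely on local hardware. The distilled model name is an assumption chosen to keep the example small, and the snippet does not describe Microsoft’s Azure setup.

```python
# A minimal sketch of self-hosting an open-weights DeepSeek model locally,
# assuming the distilled R1 checkpoints published on Hugging Face.
# Requires: pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo name, chosen for size

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Prompts and generated text never leave the machine running this script.
prompt = "Summarise the trade-offs of hosting a language model on-premises."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```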

Even so, Smith admitted Microsoft had to alter the model to remove ‘harmful side effects,’ although no technical details were provided.

While Microsoft blocks DeepSeek’s app for internal use, it hasn’t imposed a blanket ban on all chatbot competitors. Apps like Perplexity are available in the Windows store, unlike those from Google.

The stance against DeepSeek marks a rare public move by Microsoft as the tech industry navigates rising tensions over AI tools with foreign links.

LockBit ransomware platform breached again

LockBit, one of the most notorious ransomware groups of recent years, has suffered a significant breach of its dark web platform. Its admin and affiliate panels were defaced and replaced with a message linking to a leaked MySQL database, seemingly exposing sensitive operational details.

The message mocked the gang with the line ‘Don’t do crime CRIME IS BAD xoxo from Prague,’ raising suspicions of a rival hacker or vigilante group behind the attack.

The leaked database, first flagged by a threat actor known as Rey, contains 20 tables revealing details about LockBit’s affiliate network, tactics, and operations. Among them are nearly 60,000 Bitcoin addresses, payload information tied to specific targets, and thousands of extortion chat messages.

A ‘users’ table lists 75 affiliate and admin identities, many with passwords stored in plain text—some comically weak, like ‘Weekendlover69.’

While a LockBit spokesperson confirmed the breach via Tox chat, they insisted no private keys were exposed and that losses were minimal. However, the attack echoes a recent breach of the Everest ransomware site, suggesting the same actor may be responsible.

Combined with past law enforcement actions—such as Operation Cronos, which dismantled parts of LockBit’s infrastructure in 2024—the new leak could harm the group’s credibility with affiliates.

LockBit has long operated under a ransomware-as-a-service model, providing malware to affiliates in exchange for a cut of ransom profits. It has targeted both Linux and Windows systems, used double extortion tactics, and accounted for a large share of global ransomware attacks in 2022.

Despite ongoing pressure from authorities, the group has continued its operations—though this latest breach could prove harder to recover from.

Gemini Nano boosts scam detection on Chrome

Google has released a new report outlining how it is using AI to better protect users from online scams across its platforms.

The company says AI is now actively fighting scams in Chrome, Search and Android, with new tools able to detect and neutralise threats more effectively than before.

At the heart of these efforts is Gemini Nano, Google’s on-device AI model, which has been integrated into Chrome to help identify phishing and fraudulent websites.

The report claims the upgraded systems can now detect 20 times more harmful websites, many of which aim to deceive users by creating a false sense of urgency or offering fake promotions. These scams often involve phishing, cryptocurrency fraud, clone websites and misleading subscriptions.

Search has also seen major improvements. Google’s AI-powered classifiers are now better at spotting scam-related content before users encounter it. For example, the company says it has reduced scams involving fake airline customer service agents by over 80 per cent, thanks to its enhanced detection tools.

Meanwhile, Android users are beginning to see stronger safeguards as well. Chrome on Android now warns users about suspicious website notifications, offering the choice to unsubscribe or review them safely.

Google has confirmed plans to extend these protections even further in the coming months, aiming to cover a broader range of online threats.

OpenAI launches data residency in India for ChatGPT enterprise

OpenAI has announced that enterprise and educational customers in India using ChatGPT can now store their data locally instead of relying on servers abroad.

The move, aimed at complying with India’s upcoming data localisation rules under the Digital Personal Data Protection Act, allows conversations, uploads, and prompts to remain within the country. Similar options are now available in Japan, Singapore, and South Korea.

Data stored under this new residency option will be encrypted and kept secure, according to the company. OpenAI clarified it will not use this data for training its models unless customers choose to share it.

The change may also influence a copyright infringement case against OpenAI in India, where OpenAI has previously questioned the courts’ jurisdiction on the grounds that its servers are located abroad.

Alongside this update, OpenAI has unveiled a broader international initiative, called OpenAI for Countries, as part of the US-led $500 billion Stargate project.

The plan involves building AI infrastructure in partner countries instead of centralising development, allowing nations to create localised versions of ChatGPT tailored to their languages and services.

OpenAI says the goal is to help democracies develop AI on their own terms instead of adopting centralised, authoritarian systems.

The company and the US government will co-invest in local data centres and AI models to strengthen economic growth and digital sovereignty across the globe.

LockBit ransomware Bitcoin addresses exposed

Nearly 60,000 Bitcoin addresses linked to LockBit’s ransomware operations have been exposed following a major breach of the group’s dark web affiliate panel.

The leak, which included a MySQL database dump, was shared publicly online and could help blockchain analysts trace LockBit’s financial activity.

Despite the scale of the breach, no private keys were leaked. A LockBit representative reportedly confirmed the incident in a message, stating that no sensitive access data was compromised.

However, the exposed database included 20 tables, such as one labelled ‘builds’ that contained details about ransomware created by affiliates and their targeted companies.

Another table, ‘chats,’ revealed over 4,400 messages from negotiations between victims and LockBit operators, offering a rare glimpse into the inner workings of ransomware extortion tactics.

Analysts believe the hack may be connected to a recent breach of the Everest ransomware site, as both defacements featured identical messages.

The incident has again underscored the central role of cryptocurrency in the ransomware economy. Each victim is typically given a unique address for payments, making tracking difficult.

Instead of remaining hidden, these addresses now give law enforcement and blockchain experts a chance to trace payments and potentially link them to previously unidentified actors.
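
As a sketch of what that tracing could look like, the snippet below checks addresses against a public block explorer. The Esplora-style endpoint and the placeholder address are assumptions for illustration and are not taken from the leaked database.

```python
# Illustrative only: look up on-chain activity for a list of Bitcoin addresses
# using a public Esplora-style block explorer API (endpoint shape assumed).
import requests

API_BASE = "https://blockstream.info/api"
leaked_addresses = ["bc1q-placeholder-address"]  # placeholder, not from the leaked database

for address in leaked_addresses:
    response = requests.get(f"{API_BASE}/address/{address}", timeout=10)
    if response.status_code != 200:
        print(f"{address}: lookup failed (HTTP {response.status_code})")
        continue
    chain_stats = response.json().get("chain_stats", {})
    received_btc = chain_stats.get("funded_txo_sum", 0) / 1e8  # satoshis to BTC
    tx_count = chain_stats.get("tx_count", 0)
    print(f"{address}: {tx_count} transactions, {received_btc:.8f} BTC received")
```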

FutureHouse unveils Finch AI tool for biology research

FutureHouse, a nonprofit backed by Eric Schmidt, has introduced Finch, an AI tool designed to assist biological research. Finch analyses biology data and research papers, generating figures and insights much like a first-year graduate student might.

FutureHouse aims to automate aspects of scientific discovery, though no significant breakthroughs have yet been reported.

Despite optimism from tech leaders, many scientists doubt AI’s current value in guiding complex research.

Finch, while promising, can still make errors, prompting FutureHouse to recruit bioinformaticians and computational biologists to help refine the tool. The platform remains in closed beta as development continues.

The biotech AI market is expanding, yet previous ventures have suffered clinical trial setbacks. Finch represents a cautious step forward, balancing potential with careful human oversight. Interested experts are invited to participate in its ongoing evaluation.

Meta wins $168 million verdict against NSO Group in landmark spyware case

Meta has secured a major legal victory against Israeli surveillance company NSO Group, with a California jury awarding $168 million in damages.

The ruling concludes a six-year legal battle over the unlawful deployment of NSO’s Pegasus spyware, which targeted journalists, human rights activists, and other individuals through a vulnerability in WhatsApp.

The verdict includes $444,719 in compensatory damages and $167.3 million in punitive damages.

Meta hailed the decision as a milestone for privacy, calling it ‘the first victory against the development and use of illegal spyware that threatens the safety and privacy of everyone’. NSO, meanwhile, said it would review the outcome and consider further legal steps, including an appeal.

The case, launched by WhatsApp in 2019, exposed the far-reaching use of Pegasus. Between 2018 and 2020, NSO generated $61.7 million in revenue from a single exploited vulnerability, with profits potentially reaching $40 million.

Court documents revealed that Pegasus was deployed against 1,223 individuals across 51 countries, with the highest number of victims in Mexico, India, Bahrain, Morocco, and Pakistan. Spain, where officials were targeted in 2022, ranked highest among the Western democracies on the list.

While NSO has long maintained that its spyware is sold exclusively to governments for counterterrorism purposes, the data highlighted its extensive use in authoritarian and semi-authoritarian regimes.

A former NSO employee testified that the company attempted to sell Pegasus to United States police forces, though those efforts were unsuccessful.

Beyond the financial penalty, the ruling exposed NSO’s internal operations. The company runs a 140-person research team with a $50 million budget dedicated to discovering smartphone vulnerabilities. Clients have included Saudi Arabia, Mexico, and Uzbekistan.

However, the firm’s conduct drew harsh criticism from Judge Phyllis Hamilton, who accused NSO of withholding evidence and ignoring court orders. Israeli officials reportedly intervened last year to prevent sensitive documents from reaching the US courts.

Privacy advocates welcomed the decision. Natalia Krapiva, a senior lawyer at Access Now, said it sends a strong message to the spyware industry. ‘This will hopefully show spyware companies that there will be consequences if you are careless, if you are brazen, and if you act as NSO did in these cases,’ she said.

Microsoft adds AI assistant to Windows 11 settings

Microsoft is bringing more AI to Windows 11 with a new AI assistant built into the Settings app. This smart agent can adjust system settings like mouse precision, help users navigate the interface, and even troubleshoot problems—all by request.

With the user’s permission, it can also make changes automatically instead of relying on manual adjustments.

The AI assistant will first roll out to testers in the Windows Insider programme on Snapdragon-powered Copilot+ PCs, followed by support for x86-based systems.

Although Microsoft has not confirmed a release date for the general public, this feature marks a major step in making Windows settings more intuitive and responsive.

Several other AI-powered updates are on the way, including smarter tools in File Explorer and the Snipping Tool, plus dynamic lighting in the Photos app.

Copilot will also gain a new ‘Vision’ feature, letting it see shared windows for better in-app assistance instead of being limited to text prompts alone.
