Microsoft bans DeepSeek app for staff use

Microsoft has confirmed it does not allow employees to use the DeepSeek app, citing data security and propaganda concerns.

Speaking at a Senate hearing, company president Brad Smith explained the decision stems from fears that data shared with DeepSeek could end up on Chinese servers and be exposed to state surveillance laws.

Although DeepSeek is open source and widely available, Microsoft has chosen not to list the app in its own store.

Smith warned that DeepSeek’s answers may be influenced by Chinese government censorship and propaganda. He also noted that the app’s privacy policy confirms user data is stored in China, making it subject to local intelligence regulations.

Interestingly, Microsoft still offers DeepSeek’s R1 model via its Azure cloud service. The company argued this is a different matter, as customers can host the model on their servers instead of relying on DeepSeek’s infrastructure.

Even so, Smith admitted Microsoft had to alter the model to remove ‘harmful side effects,’ although no technical details were provided.

While Microsoft blocks DeepSeek’s app for internal use, it hasn’t imposed a blanket ban on all chatbot competitors. Apps like Perplexity are available in the Windows store, unlike those from Google.

The stance against DeepSeek marks a rare public move by Microsoft as the tech industry navigates rising tensions over AI tools with foreign links.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LockBit ransomware platform breached again

LockBit, one of the most notorious ransomware groups of recent years, has suffered a significant breach of its dark web platform. Its admin and affiliate panels were defaced and replaced with a message linking to a leaked MySQL database, seemingly exposing sensitive operational details.

The message mocked the gang with the line ‘Don’t do crime CRIME IS BAD xoxo from Prague,’ raising suspicions of a rival hacker or vigilante group behind the attack.

The leaked database, first flagged by a threat actor known as Rey, contains 20 tables revealing details about LockBit’s affiliate network, tactics, and operations. Among them are nearly 60,000 Bitcoin addresses, payload information tied to specific targets, and thousands of extortion chat messages.

A ‘users’ table lists 75 affiliate and admin identities, many with passwords stored in plain text—some comically weak, like ‘Weekendlover69.’
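Plaintext storage is precisely the failure that salted password hashing prevents: even if a credentials table leaks, attackers recover only salts and derived hashes, not the passwords themselves. A minimal illustrative sketch in Python (the function names and iteration count here are our own, not taken from the leak):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only the salt and digest are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("Weekendlover69")
print(verify_password("Weekendlover69", salt, digest))  # True
print(verify_password("wrong-guess", salt, digest))     # False
```

With a scheme like this, a leaked ‘users’ table would expose only salted digests that must be brute-forced individually, rather than readable passwords.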

While a LockBit spokesperson confirmed the breach via Tox chat, they insisted no private keys were exposed and that losses were minimal. However, the attack echoes a recent breach of the Everest ransomware site, suggesting the same actor may be responsible.

Combined with past law enforcement actions—such as Operation Cronos, which dismantled parts of LockBit’s infrastructure in 2024—the new leak could harm the group’s credibility with affiliates.

LockBit has long operated under a ransomware-as-a-service model, providing malware to affiliates in exchange for a cut of ransom profits. It has targeted both Linux and Windows systems, used double extortion tactics, and accounted for a large share of global ransomware attacks in 2022.

Despite ongoing pressure from authorities, the group has continued its operations—though this latest breach could prove harder to recover from.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta plans new blockchain-based payment system

Meta is assessing the use of stablecoins to facilitate cross-border payments. The company is particularly focused on low-cost transfers for digital content producers on platforms such as Instagram.

The move reflects a renewed interest in integrating blockchain technology following the company’s unsuccessful Diem initiative.

The company is reportedly in early talks with several cryptocurrency infrastructure providers, though it has yet to commit to a specific stablecoin issuer.

The project is said to focus on enabling low-value international payments for creators and freelancers operating across multiple markets.

Leading the effort is Ginger Baker, Meta’s vice president of product. She previously held senior roles at fintech firm Plaid and currently serves on the board of the Stellar Development Foundation.

The initiative aligns with broader financial sector trends, as companies like Visa, Fidelity, and Bank of America explore the use of stablecoins in regulated digital payment systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini Nano boosts scam detection on Chrome

Google has released a new report outlining how it is using AI to better protect users from online scams across its platforms.

The company says AI is now actively fighting scams in Chrome, Search and Android, with new tools able to detect and neutralise threats more effectively than before.

At the heart of these efforts is Gemini Nano, Google’s on-device AI model, which has been integrated into Chrome to help identify phishing and fraudulent websites.

The report claims the upgraded systems can now detect 20 times more harmful websites, many of which aim to deceive users by creating a false sense of urgency or offering fake promotions. These scams often involve phishing, cryptocurrency fraud, clone websites and misleading subscriptions.

Search has also seen major improvements. Google’s AI-powered classifiers are now better at spotting scam-related content before users encounter it. For example, the company says it has reduced scams involving fake airline customer service agents by over 80 per cent, thanks to its enhanced detection tools.

Meanwhile, Android users are beginning to see stronger safeguards as well. Chrome on Android now warns users about suspicious website notifications, offering the choice to unsubscribe or review them safely.

Google has confirmed plans to extend these protections even further in the coming months, aiming to cover a broader range of online threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China launches advanced Tianji 4.0 quantum control system

A Chinese startup, Origin Quantum, has unveiled Tianji 4.0, a cutting-edge superconducting quantum measurement and control system capable of supporting quantum computers with over 500 qubits.

Built in Hefei, Tianji 4.0 enhances scalability, integration, stability and automation, offering major advances over the previous version, which powered China’s third-generation superconducting quantum computer, Origin Wukong.

The system, described as the ‘nerve centre’ of quantum computers, improves the precision and speed of controlling quantum chips.

Kong Weicheng, who leads the development team, highlighted that Tianji 4.0 will streamline quantum computer R&D and accelerate delivery timelines significantly.

Since launching in early 2024, Origin Wukong has served users in 139 countries, completing more than 380,000 tasks across industries such as finance and biomedicine. The release of Tianji 4.0 signals China’s growing leadership in quantum computing technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches data residency in India for ChatGPT enterprise

OpenAI has announced that enterprise and educational customers in India using ChatGPT can now store their data locally instead of relying on servers abroad.

The move, aimed at complying with India’s upcoming data localisation rules under the Digital Personal Data Protection Act, allows conversations, uploads, and prompts to remain within the country. Similar options are now available in Japan, Singapore, and South Korea.

Data stored under this new residency option will be encrypted and kept secure, according to the company. OpenAI clarified it will not use this data for training its models unless customers choose to share it.

The change may also influence a copyright infringement case against OpenAI in India, where the jurisdiction was previously questioned due to foreign server locations.

Alongside this update, OpenAI has unveiled a broader international initiative, called OpenAI for Countries, as part of the US-led $500 billion Stargate project.

The plan involves building AI infrastructure in partner countries instead of centralising development, allowing nations to create localised versions of ChatGPT tailored to their languages and services.

OpenAI says the goal is to help democracies develop AI on their own terms instead of adopting centralised, authoritarian systems.

The company and the US government will co-invest in local data centres and AI models to strengthen economic growth and digital sovereignty across the globe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CrowdStrike cuts jobs amid AI shift

Cybersecurity firm CrowdStrike is laying off 500 employees—5% of its workforce—as it shifts towards an AI-led operating model to boost efficiency and hit a $10 billion annual revenue goal.

In a letter to staff, CEO George Kurtz described AI as a ‘force multiplier’ meant to reduce hiring needs instead of expanding headcount.

The restructure, expected to cost up to $53 million through mid-2026, will still see hiring in customer-facing and engineering roles.

Yet despite its optimism, the company’s regulatory filings flag notable risks in depending on AI, such as faulty outputs, legal uncertainty, and the challenge of managing fast-moving systems. Analysts have also linked the shift to wider market pressures, not merely strategic innovation.

Principal analyst Sofia Ali warned that the AI-first approach may backfire if transparency, governance, and human oversight are not prioritised. Over-reliance on automation—especially in threat detection or customer support—could erode user trust instead of reinforcing it, particularly during critical incidents.

CrowdStrike’s move mirrors a broader tech trend: over 52,000 tech jobs were cut in early 2025 as firms embraced AI to replace automatable roles. For cybersecurity leaders, the challenge now lies in balancing AI’s promise with the human expertise essential to trust and resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI spending remains strong despite Trump’s tariffs, says Goldman Sachs

President Donald Trump’s new tariffs may force companies to adjust staffing and marketing budgets, but spending on AI will likely remain protected. That is according to Eric Sheridan, co-business unit leader for technology, media, and telecommunications at Goldman Sachs.

Speaking on the Goldman Sachs Exchange podcast, Sheridan said the latest tariffs are expected to create more volatility in operational costs, particularly affecting head count, marketing, and long-term projects.

However, he predicted that investment in AI would not suffer the same impact. ‘Given the sheer number of players investing both offensively and defensively at AI, I think this spend will get protected for a little longer,’ he explained.

Sheridan cited Meta as a prime example. In its recent first-quarter earnings, Meta raised its annual capital expenditure guidance to between $64 and $72 billion, up from a previous range of $60 to $65 billion.

CEO Mark Zuckerberg reaffirmed that AI remains the company’s top priority, even as Meta cut other expenses such as salaries and marketing.

‘We continue to find ways to find efficiencies inside the organization, but we are not at a point where we want to sacrifice long-duration investments,’ Sheridan noted, summarising Meta’s stance.

The broader business environment is shifting as companies respond to Trump’s ‘Liberation Day’ tariffs, announced on April 2. These include a 10% baseline levy and additional ‘reciprocal tariffs.’

While most reciprocal tariffs are paused for 90 days as negotiations continue, China faces a hefty 145% tariff. United States and Chinese officials are set to meet for trade talks this weekend in Switzerland, potentially shaping the next phase of global trade dynamics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk denies OpenAI’s sabotage claims in court battle

Elon Musk has denied accusations from OpenAI that he is waging a campaign to undermine the startup, asserting that his legal actions are justified.

In a recent court filing, Musk’s lawyer dismissed claims that he used lawsuits, social media and press attacks to sabotage OpenAI, stating the real issue lies in the company’s alleged abandonment of its original nonprofit mission.

Musk’s attorney argued that OpenAI’s restructuring plan fails to address concerns about the company prioritising profit over its charitable goals, labelling the nonprofit structure an ‘inconvenience’ to CEO Sam Altman’s ambitions.

The US legal battle, set for trial in March 2026, stems from Musk’s accusations that OpenAI strayed from its founding principles after taking significant investment from Microsoft.

Meanwhile, OpenAI has countersued, claiming Musk is actively working to harm the company and its relationships with investors and customers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indian stock exchanges curb foreign access amid cybersecurity concerns

India’s two largest stock exchanges, the National Stock Exchange (NSE) and BSE Ltd, have temporarily restricted overseas access to their websites amid rising concerns over cyber threats. The move does not affect foreign investors’ ability to trade on Indian markets.

Sources familiar with the matter confirmed the decision followed a joint meeting between the two exchanges, although officials did not point to any specific recent attack.

Despite the restrictions, market operations remain fully functional, with officials emphasising that the measures are purely preventive.

The precautionary step comes during heightened regional tensions between India and Pakistan, though no link to the geopolitical situation has been confirmed. The NSE has yet to comment publicly on the situation.

A BSE spokesperson noted that the exchanges are monitoring cyber risks both domestically and internationally and that website access is now granted selectively to protect users and infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!