OpenAI launches data residency in India for ChatGPT Enterprise

OpenAI has announced that enterprise and educational customers in India using ChatGPT can now store their data locally instead of relying on servers abroad.

The move, aimed at complying with India’s upcoming data localisation rules under the Digital Personal Data Protection Act, allows conversations, uploads, and prompts to remain within the country. Similar options are now available in Japan, Singapore, and South Korea.

Data stored under this new residency option will be encrypted and kept secure, according to the company. OpenAI clarified it will not use this data for training its models unless customers choose to share it.

The change may also influence an ongoing copyright infringement case against OpenAI in India, in which the company had previously questioned the court’s jurisdiction on the grounds that its servers are located abroad.

Alongside this update, OpenAI has unveiled a broader international initiative, called OpenAI for Countries, as part of the US-led $500 billion Stargate project.

The plan involves building AI infrastructure in partner countries instead of centralising development, allowing nations to create localised versions of ChatGPT tailored to their languages and services.

OpenAI says the goal is to help democracies develop AI on their own terms instead of adopting centralised, authoritarian systems.

The company and the US government will co-invest in local data centres and AI models to strengthen economic growth and digital sovereignty across the globe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta wins $168 million verdict against NSO Group in landmark spyware case

Meta has secured a major legal victory against Israeli surveillance company NSO Group, with a California jury awarding $168 million in damages.

The ruling concludes a six-year legal battle over the unlawful deployment of NSO’s Pegasus spyware, which targeted journalists, human rights activists, and other individuals through a vulnerability in WhatsApp.

The verdict includes $444,719 in compensatory damages and $167.3 million in punitive damages.

Meta hailed the decision as a milestone for privacy, calling it ‘the first victory against the development and use of illegal spyware that threatens the safety and privacy of everyone’. NSO, meanwhile, said it would review the outcome and consider further legal steps, including an appeal.

The case, launched by WhatsApp in 2019, exposed the far-reaching use of Pegasus. Between 2018 and 2020, NSO generated $61.7 million in revenue from a single exploited vulnerability, with profits potentially reaching $40 million.

Court documents revealed that Pegasus was deployed against 1,223 individuals across 51 countries, with the highest number of victims in Mexico, India, Bahrain, Morocco, and Pakistan. Spain, where officials were targeted in 2022, had more victims than any other Western democracy on the list.

While NSO has long maintained that its spyware is sold exclusively to governments for counterterrorism purposes, the data highlighted its extensive use in authoritarian and semi-authoritarian regimes.

A former NSO employee testified that the company attempted to sell Pegasus to United States police forces, though those efforts were unsuccessful.

Beyond the financial penalty, the ruling exposed NSO’s internal operations. The company runs a 140-person research team with a $50 million budget dedicated to discovering smartphone vulnerabilities. Clients have included Saudi Arabia, Mexico, and Uzbekistan.

However, the firm’s conduct drew harsh criticism from Judge Phyllis Hamilton, who accused NSO of withholding evidence and ignoring court orders. Israeli officials reportedly intervened last year to prevent sensitive documents from reaching the US courts.

Privacy advocates welcomed the decision. Natalia Krapiva, a senior lawyer at Access Now, said it sends a strong message to the spyware industry. ‘This will hopefully show spyware companies that there will be consequences if you are careless, if you are brazen, and if you act as NSO did in these cases,’ she said.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google faces DOJ’s request to sell key ad platforms

The US Department of Justice (DOJ) has moved to break up Google’s advertising technology business after a federal judge ruled that the company holds illegal monopolies across two markets.

The DOJ is seeking the sale of Google’s AdX digital advertising marketplace and its DFP platform, which helps publishers manage their ad inventory.

It follows a ruling in April by US District Judge Leonie Brinkema, who found that Google’s dominance in the online advertising market violated antitrust laws.

AdX and DFP both came to Google through key acquisitions, most notably the $3.1 billion purchase of DoubleClick in 2008. The DOJ argues that Google used monopolistic tactics, such as acquisitions and customer lock-ins, to control the ad tech market and stifle competition.

In response, Google has disputed the DOJ’s move, claiming the proposed sale of its advertising tools exceeds the court’s findings and could harm publishers and advertisers.

The DOJ’s latest filing also comes amid a separate legal action over Google’s Chrome browser, and the company is facing additional scrutiny in the UK for its dominance in the online search market.

The UK’s Competition and Markets Authority (CMA) has found that Google engaged in anti-competitive practices in open-display advertising technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MoJ explores AI for criminal court transcripts

The UK government is actively examining the use of AI to produce official transcripts of criminal court proceedings, but ministers have stressed that any technology must meet the high standards currently achieved by human professionals.

The Ministry of Justice (MoJ) is considering introducing AI-driven transcription services in the Crown Court to help reduce costs, according to Sarah Sackman, the minister responsible for court reform, AI, and digitisation.

Sackman, responding to a parliamentary question from MP David Davis, emphasised that accuracy remains the top priority. She explained that transcripts must be of an extremely high standard to protect the interests of parties, witnesses, and victims.

At present, transcription is delivered manually by third-party suppliers who are contractually required to achieve 99.5% accuracy.

AI-based solutions would need to meet a similar threshold before being adopted. Sackman added that while the MoJ is actively exploring the technology, reducing costs cannot come at the expense of reliability.
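For a sense of what that threshold means in practice, here is a minimal illustrative sketch in Python, assuming the 99.5% figure is measured as word-level accuracy (one minus the word error rate); the MoJ’s contracts may define and measure accuracy differently.

# Illustrative sketch only: assumes the contractual 99.5% accuracy figure is
# word-level accuracy (1 - word error rate); the actual contracts may use a
# different definition or measurement method.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance (substitutions, insertions, deletions)
    divided by the number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / max(len(ref), 1)

def meets_contract_threshold(reference: str, hypothesis: str,
                             required_accuracy: float = 0.995) -> bool:
    """True if the hypothesis transcript meets the 99.5% accuracy requirement."""
    return (1.0 - word_error_rate(reference, hypothesis)) >= required_accuracy

By this measure, a hypothetical 50,000-word hearing would tolerate no more than about 250 misrecognised, missing, or inserted words.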

In 2023, the Ministry established a four-year, £20 million framework agreement for court reporting and transcription services.

Eight suppliers, including Appen, Epiq, and Opus 2, are providing services across three categories: remote transcription from recordings, on-site transcription refined into final documents, and real-time transcription for instant use.

Although AI could eventually transform how transcripts are created, any new systems will need to prove they can match the performance and accuracy of human transcribers before replacing existing methods.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands developer tools with Windsurf purchase

OpenAI, the creator of ChatGPT, is reportedly set to acquire Windsurf, an AI-powered coding assistant formerly known as Codeium, for $3 billion, according to Bloomberg. If confirmed, it would be OpenAI’s largest acquisition to date.

The deal has not yet closed, but it follows recent funding talks Windsurf held with major backers such as General Catalyst and Kleiner Perkins, which valued the startup at the same $3 billion figure.

Windsurf was last valued at $1.25 billion in 2024 after a $150 million funding round. Instead of raising more capital independently, the company now appears poised to join OpenAI, which is looking to bolster its suite of developer tools within ChatGPT.

The acquisition reflects OpenAI’s efforts to remain competitive in the fast-evolving AI coding landscape, following earlier purchases like Rockset and Multi last year.

OpenAI also revealed it would scale back a planned restructuring, abandoning its proposal to become a for-profit entity.

The decision comes amid growing scrutiny and legal challenges, including a high-profile lawsuit from Elon Musk, who accused the firm of drifting from its founding mission to develop AI that serves humanity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump signals new extension for TikTok divestment deadline

President Donald Trump indicated he would extend the deadline set for the Chinese-owned company ByteDance to sell TikTok’s US operations if negotiations remain unfinished by 19 June.

The popular short-video app, used by around 170 million Americans, played a significant role in Trump’s appeal to younger voters during his 2024 election campaign. Trump described TikTok positively, hinting at protective measures rather than outright prohibition.

Originally mandated by Congress, the TikTok ban was supposed to be enforced starting on 19 January. Trump, however, has twice extended this deadline amid ongoing negotiations.

A potential agreement to spin off TikTok’s US operations into a new, US-majority-owned firm was suspended after China objected, a reaction spurred by Trump’s substantial tariffs on Chinese goods.

Democratic senators have challenged Trump’s authority to postpone the deadline further, arguing that the proposed spin-off arrangement does not satisfy legal conditions outlined in the original legislation.

Insiders indicate negotiations continue behind the scenes, though a resolution remains dependent on settling broader trade conflicts between the US and China.

Trump remains firm about maintaining high tariffs on China, now at 145%, which he insists are significantly hurting the Chinese economy.

Yet he has left the door open to eventually lowering these tariffs as part of a more comprehensive trade agreement, acknowledging China’s strong desire to resume business with the US.

Despite multiple extensions, the fate of TikTok’s US operations remains uncertain, as political and economic factors continue shaping negotiations. Trump’s willingness to extend deadlines reflects broader geopolitical dynamics between Washington and Beijing, linking digital platform regulation closely with international trade policy.

Google admits using opted-out content for AI training

Google has admitted in court that it can use website content to train AI features in its search products, even when publishers have opted out of such training.

Although Google offers a way for sites to block their data from being used by its AI lab, DeepMind, the company confirmed that its broader search division can still use that data for AI-powered tools like AI Overviews.

The practice has raised concern among publishers, who fear losing traffic as Google’s AI summarises answers directly at the top of search results, diverting users from clicking through to original sources.

Eli Collins, a vice-president at Google DeepMind, acknowledged during a Washington antitrust trial that Google’s search team could train AI using data from websites that had explicitly opted out.

The only way for publishers to fully prevent their content from being used in this way is by opting out of being indexed by Google Search altogether—something that would effectively make them invisible on the web.

Google’s approach relies on the robots.txt file, a standard that tells search bots whether they are allowed to crawl a site.
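As an illustration, that opt-out typically takes the form of a few lines in a site’s robots.txt; the sketch below assumes a publisher using the Google-Extended token, which Google documents as its control for AI training, alongside the ordinary Googlebot crawler.

# Block the site's content from being used to train Google's AI models
User-agent: Google-Extended
Disallow: /

# Still allow normal crawling and indexing for Google Search
User-agent: Googlebot
Allow: /

According to the testimony, this is precisely the gap publishers are worried about: the training opt-out applies to DeepMind’s models, while anything Googlebot is still permitted to crawl can feed search-side AI features such as AI Overviews.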

The trial is part of a broader effort by the US Department of Justice to address Google’s dominance in the search market, which a judge previously ruled had been unlawfully maintained.

The DOJ is now asking the court to impose major changes, including forcing Google to sell its Chrome browser and to stop paying other companies to make it the default search engine on their devices and browsers. These changes would also apply to Google’s AI products, which the DOJ argues benefit from its monopoly.

Testimony also revealed internal discussions at Google about how using extensive search data, such as user session logs and search rankings, could significantly enhance its AI models.

Although no model was confirmed to have been built using that data, court documents showed that top executives like DeepMind CEO Demis Hassabis had expressed interest in doing so.

Google’s lawyers have argued that competitors in AI remain strong, with many relying on direct data partnerships instead of web scraping.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump administration eyes overhaul of Biden-era AI chip export rules

The Trump administration is reviewing a Biden-era rule that restricts global access to US-made advanced AI chips, with discussions underway to eliminate the current tiered system that governs chip exports, according to sources familiar with the matter.

The existing rule, known as the Framework for Artificial Intelligence Diffusion, was introduced by the US Department of Commerce in January and is set to take effect on 15 May.

It divides the world into three groups: trusted allies (like the EU and Taiwan) with unlimited access, Tier 2 countries with chip quotas, and restricted countries such as China, Russia, Iran and North Korea.

Officials are considering replacing this structure with a global licensing regime based on government-to-government agreements—aligning with Donald Trump’s broader trade strategy of negotiating bilateral deals and using US-made chips as leverage.

Other possible changes include tightening export thresholds: under current rules, orders under the equivalent of 1,700 Nvidia H100 chips only require notification, not a licence. The new proposal could reduce that threshold to around 500 chips.

Supporters of the change argue it would increase US bargaining power and simplify enforcement. Critics, however, warn that scrapping the tier system may complicate compliance and drive countries toward Chinese chip alternatives.

Tech firms such as Oracle and Nvidia, along with several US lawmakers, have criticised the current framework, saying it risks harming American competitiveness and pushing international buyers toward cheaper, unregulated Chinese substitutes.

The Commerce Department declined to comment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK refuses to include Online Safety Act in US trade talks

The UK government has ruled out watering down the Online Safety Act as part of any trade negotiations with the US, despite pressure from American tech giants.

Speaking to MPs on the Science, Innovation and Technology Committee, Baroness Jones of Whitchurch, the parliamentary under-secretary for online safety, stated unequivocally that the legislation was ‘not up for negotiation’.

‘There have been clear instructions from the Prime Minister,’ she said. ‘The Online Safety Act is not part of the trade deal discussions. It’s a piece of legislation — it can’t just be negotiated away.’

Reports had suggested that President Donald Trump’s administration might seek to make loosening the UK’s online safety rules a condition of a post-Brexit trade agreement, following lobbying from large US-based technology firms.

However, Baroness Jones said the legislation was well into its implementation phase and that ministers were ‘happy to reassure everybody’ that the government is sticking to it.

The Online Safety Act will require tech platforms that host user-generated content, such as social media firms, to take active steps to protect users — especially children — from harmful and illegal content.

Non-compliant companies may face fines of up to £18 million or 10% of global turnover, whichever is greater. In extreme cases, platforms could be blocked from operating in the UK.

Mark Bunting, a representative of Ofcom, which is overseeing enforcement of the new rules, said the regulator would have taken action had the legislation been in force during last summer’s riots that followed the Southport attack, which were exacerbated by online misinformation.

His comments contrasted with tech firms including Meta, TikTok and X, which claimed in earlier hearings that little would have changed under the new rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Big Tech accused of undue influence over EU AI Code

The European Commission is facing growing criticism after a joint investigation revealed that Big Tech companies had disproportionate influence over the drafting of the EU’s Code of Practice on General Purpose AI.

The report, published by Corporate Europe Observatory and LobbyControl, claims firms such as Google, Microsoft, Meta, Amazon, and OpenAI were granted privileged access to shaping the voluntary code, which aims to help companies comply with the upcoming AI Act.

While 13 Commission-appointed experts led the process and over 1,000 participants were involved in feedback workshops, civil society groups and smaller stakeholders were largely side-lined.

Their input was often limited to reacting through emojis on an online platform instead of engaging in meaningful dialogue, the report found.

The US government also waded into the debate, sending a letter to the Commission opposing the Code. The Trump administration argued the EU’s digital regulations would stifle innovation.

Critics meanwhile say the EU’s current approach opens the door to Big Tech lobbying, potentially weakening the Code’s effectiveness just as it nears finalisation.

Although the Code was due in early May, it is now expected by June or July, just before new rules on general-purpose AI tools come into force in August.

The Commission has yet to confirm the revised timeline.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!