UK announces £2.5 billion investment in AI and quantum technologies

Plans to accelerate technological leadership have been outlined by HM Treasury and the Department for Science, Innovation and Technology, with a £2.5 billion investment targeting AI and quantum computing.

Chancellor Rachel Reeves reinforced that ambition, positioning AI as a central driver of economic growth alongside closer European ties and regional development. The strategy aims to secure the fastest adoption of AI across the G7 while supporting domestic innovation ecosystems.

Significant UK funding will be directed towards a Sovereign AI initiative, quantum infrastructure and research capacity. Plans include the procurement of large-scale quantum systems and targeted investment in startups, helping companies scale while strengthening national capabilities in advanced technologies.

Quantum computing is framed as transformative, with the potential to reshape industries from healthcare to energy. The combined investment reflects a broader effort to align innovation policy with long-term economic growth and global competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic dispute pushes Pentagon toward new AI providers

The Pentagon is accelerating efforts to replace Anthropic after the company was designated a supply-chain risk, marking a sharp shift in US defence AI strategy. The move follows a breakdown in talks over safeguards governing military use of AI, particularly around surveillance and autonomous weapons.

Cameron Stanley, the Pentagon’s chief digital and AI officer, said engineering work is underway to deploy alternative large language models in government-controlled environments. He indicated that while transitioning from Anthropic’s tools could take more than a month, new systems are expected to be operational soon.

The decision threatens a $200 million contract and could exclude Anthropic from future defence partnerships. The US administration has set a six-month timeline for federal agencies to shift away from the company, signalling a broader push to diversify AI suppliers and reduce dependency risks.

Rival providers are already stepping in. OpenAI and xAI have been approved for classified work, while Google is introducing Gemini AI tools across the Pentagon workforce, initially on unclassified networks before expanding into sensitive environments.

Anthropic has challenged the designation in court, arguing it violates constitutional protections and could harm its business. Despite the legal dispute, defence officials have made clear they are moving forward with an ‘AI-first’ strategy to accelerate the adoption of advanced models across military operations.


GDPR changes debated as EU seeks balance on data protection rules

Debate over potential updates to the GDPR is intensifying, as Marina Kaljurand advocates a focused ‘fitness check’ rather than sweeping legislative changes in an omnibus package.

Concerns raised in the European Parliament highlight the risks of altering foundational elements of the regulation, particularly its definition of personal data. Preserving these core principles is seen as essential to maintaining the integrity of the EU’s data protection framework.

Ongoing discussions reflect broader policy tensions within the EU, where efforts to reduce regulatory complexity must be balanced against the need to uphold strong privacy safeguards. Proposals for simplification are therefore facing scrutiny from lawmakers prioritising stability and legal clarity.

Future developments are likely to shape how the EU adapts its data protection rules to evolving digital markets, while ensuring that existing protections remain effective in a rapidly changing technological environment.


AI in filmmaking raises job fears as creative roles face pressure

Growing concern over AI in filmmaking emerged at a major conference, where veteran director Steven Spielberg rejected its use as a replacement for human creativity. He emphasised that storytelling should remain in human hands rather than being driven by automation.

Rapid advances in AI video tools have unsettled the industry, raising fears among editors and visual effects workers. Joshua Davies, chief innovation officer at a video platform, pointed to concerns over jobs, copyright and future production methods.

Current tools remain limited, particularly when handling complex camera movements or maintaining consistency across scenes. AI is instead being used to support production by filling gaps where footage cannot be filmed due to time or budget limits.

Studios are already exploring how AI can be integrated into production pipelines following recent disruptions. A fast and low-cost Super Bowl advert highlighted its potential, although human creative input remained essential.

Lower production costs are expected, but full automation is still unlikely in the near term. AI could help independent creators compete, while strong storytelling continues to define success.


UN calls for global action against online scam networks

Online scam networks operating across Southeast Asia are defrauding victims worldwide, using AI, impersonation techniques, and complex cyber tools to steal billions of dollars.

At the Global Fraud Summit in Vienna, the UN Office on Drugs and Crime (UNODC) and INTERPOL brought together governments, law enforcement, and private-sector actors to strengthen international cooperation against these crimes.

Victims include individuals from diverse backgrounds, often highly educated and financially experienced. One Australian couple, Kim and Allan Sawyer, lost more than $2.5 million after engaging with what appeared to be a legitimate investment opportunity. ‘The scammer was extraordinarily believable,’ Kim Sawyer said. ‘He had a British accent, used all the right financial market terms and knew how to induce us by appearing credible every time.’

UNODC officials warn that these operations extend beyond fraud, forming part of a broader criminal ecosystem run by organised networks and involving human trafficking, corruption, and money laundering.

‘We need to be looking into prosecuting high-level criminals, following the money through financial investigations and identifying the giant networks that operate behind these operations,’ said Delphine Schantz, UNODC’s regional representative for Southeast Asia and the Pacific.

Authorities say the scale and complexity of these crimes require a coordinated global response to dismantle scam networks effectively. ‘The complexity of these crimes requires an equally complex, whole-of-government approach and enhanced coordination among governments, financial intelligence units and digital banks,’ Schantz added.

Investigations in countries such as the Philippines and Cambodia have revealed how scam networks operate on the ground. In Manila, investigators at a former scam compound uncovered facilities used to control trafficked workers, along with evidence of corruption linked to local officials. ‘How do you prove a cybercrime in 36 hours? It is not possible,’ said the Philippines’ Presidential Anti-Organised Crime Commission (PAOCC) operations director, recalling the challenges investigators faced during early raids.

In Cambodia, international prosecutors and investigators have focused on improving cooperation mechanisms, including extradition, asset recovery, and the handling of digital evidence. These efforts are seen as critical in addressing the cross-border nature of scam networks.

Despite increased enforcement efforts, these networks continue to adapt and relocate, maintaining a global reach. At recent international meetings, including a summit in Bangkok involving nearly 60 countries and major technology firms, officials agreed on the need for shared intelligence, joint investigations and coordinated prosecutions.

Victims continue to call for stronger responses. ‘The scammer works twice: they take your money, and they take your soul. They really do. They take your self-worth. And then, you feel like you’re being scammed again, by the authorities’ lack of response,’ Sawyer said.


Writer files lawsuit against Grammarly over AI feature using experts’ identities

A journalist has filed a class action lawsuit against Grammarly after the company introduced an AI feature that generated editorial feedback by imitating well-known writers and public figures without their permission.

The legal complaint was submitted by investigative journalist Julia Angwin, who argued that the tool unlawfully used the identities and reputations of authors and commentators.

The feature, known as ‘Expert Review’, produced automated critiques presented as if they came from figures such as Stephen King, Carl Sagan and technology journalist Kara Swisher.

The feature was available to subscribers paying an annual fee and was designed to simulate professional editorial guidance.

Critics quickly questioned both the quality of the generated feedback and the decision to associate the tool with real individuals who had not authorised the use of their names or expertise.

Technology writer Casey Newton tested the system by submitting one of his own articles and received automated feedback attributed to an AI version of Swisher. The response appeared generic, casting doubt on the value of linking such commentary to prominent personalities.

Following criticism from writers and researchers, the feature was disabled. Shishir Mehrotra, chief executive of Grammarly’s parent company Superhuman, issued a public apology while defending the broader concept behind the tool.

The lawsuit reflects growing tensions around AI systems that replicate creative styles or professional expertise.

As generative AI technologies expand across writing and publishing industries, questions surrounding consent, intellectual labour and identity rights are becoming increasingly prominent.


EU competition regulators expand scrutiny across the entire AI ecosystem

Competition authorities in the EU are broadening their oversight of the AI sector, examining every layer of the technology’s value chain.

Speaking at a conference in Berlin, Teresa Ribera explained that regulators are analysing the full ‘AI stack’ instead of focusing solely on consumer applications.

According to the competition chief, scrutiny extends beyond visible AI tools to the systems that support them. Investigations are assessing the underlying models, the data used to train them, and the cloud infrastructure and energy resources that power AI systems.

Regulatory attention has already reached the application layer.

The European Commission opened an investigation in 2025 involving Meta after concerns emerged that the company could restrict competing AI assistants on its messaging platform WhatsApp.

Following regulatory pressure, Meta proposed allowing rival AI chatbots on the platform in exchange for a fee. European regulators are now assessing the proposal to determine whether additional intervention is necessary to preserve fair competition in rapidly evolving digital markets.

Authorities have also examined concentration risks across other parts of the AI ecosystem, including the infrastructure layer dominated by companies such as Nvidia.

Regulators argue that effective competition oversight must address the entire technology stack as AI markets expand quickly.


Anthropic lawsuit gains Big Tech support in AI dispute

Several major US technology companies have backed Anthropic in its lawsuit challenging the US Department of Defence’s decision to label the AI company a national security ‘supply chain risk’.

Google, Amazon, Apple, and Microsoft have filed legal briefs supporting Anthropic’s attempt to overturn the designation issued by Defence Secretary Pete Hegseth. Anthropic argues the decision was retaliation after the company declined to allow its AI systems to be used for mass surveillance or autonomous weapons.

In court filings, the companies warned that the government’s action could have wider consequences for the technology sector. Microsoft said the decision could have ‘broad negative ramifications for the entire technology sector’.

Microsoft, which works closely with the US government and the Department of Defence, said it agreed with Anthropic’s position that AI systems should not be used to conduct domestic mass surveillance or enable autonomous machines to initiate warfare.

A joint amicus brief supporting Anthropic was also submitted by the Chamber of Progress, a technology policy organisation funded by companies including Google, Apple, Amazon and Nvidia. The group said it was concerned about the government penalising a company for its public statements.

The brief described the designation as ‘a potentially ruinous sanction’ for businesses and warned it could create a climate in which companies fear government retaliation for expressing views.

Anthropic’s lawsuit claims the government violated its free speech rights by retaliating against the company for comments made by its leadership. The dispute escalated after Anthropic declined to remove contractual restrictions preventing its AI models from being used for mass surveillance or autonomous weapons.

The company had previously introduced safeguards in government contracts to limit certain uses of its technology. Negotiations over revised contract language continued for several weeks before the disagreement became public.

Former military officials and technology policy advocates have also filed supporting briefs, warning that the decision could discourage companies from participating in national security projects if they fear retaliation for voicing concerns. The case is currently being heard in federal court in San Francisco.


Netflix AI filmmaking push grows with InterPositive acquisition

A deal valued at up to $600 million will see Netflix acquire InterPositive, the AI filmmaking company founded by actor and director Ben Affleck, according to people familiar with the matter.

The transaction, paid in cash, is expected to become one of the largest acquisitions made by the streaming company. The final upfront amount is reportedly lower, with additional payments tied to performance targets. Netflix has not publicly disclosed the financial terms of the deal.

The acquisition is intended to accelerate the use of AI in film production. InterPositive has developed software tools that enable filmmakers to modify existing footage, including removing unwanted elements or adjusting scene backgrounds. Director David Fincher has already used the technology in work on an upcoming film starring Brad Pitt.

The deal reflects a broader trend among entertainment companies exploring AI technologies to streamline production and improve efficiency. Companies including Netflix and Amazon are experimenting with AI tools in film and television production, while Disney has established a partnership with OpenAI.

The growing use of AI in Hollywood has raised concerns among industry workers. Some fear the technology could reduce jobs or allow studios to use creative work to train AI systems without compensation.

Affleck has said the InterPositive technology is designed to support filmmakers rather than replace them. The system requires directors first to shoot original footage before the software can train on the material. The tools can then assist with editing tasks, but do not generate films independently.

Netflix has traditionally avoided large-scale acquisitions, focusing instead on developing its technology internally. Even so, the purchase of InterPositive signals a step toward strengthening the company’s AI capabilities in film production.

‘The filmmaking process, really, since its inception, has been one long technological progression,’ Affleck said in a video released by Netflix. ‘We’ve always been seeking to make it feel more realistic, more honest, and InterPositive, I hope, is another iteration or step in keeping with that long and storied history.’

Affleck founded InterPositive with backing from investment firm RedBird Capital Partners and began seeking investment in 2025 before the company attracted interest from Netflix.


EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure that creative industries retain transparency and fair treatment as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.
