Anthropic lawsuit gains Big Tech support in AI dispute

Several major US technology companies have backed Anthropic in its lawsuit challenging the US Department of Defence’s decision to label the AI company a national security ‘supply chain risk’.

Google, Amazon, Apple, and Microsoft have filed legal briefs supporting Anthropic’s attempt to overturn the designation issued by Defence Secretary Pete Hegseth. Anthropic argues the decision was retaliation after the company declined to allow its AI systems to be used for mass surveillance or autonomous weapons.

In court filings, the companies warned that the government’s action could have wider consequences for the technology sector. Microsoft said the decision could have ‘broad negative ramifications for the entire technology sector’.

Microsoft, which works closely with the US government and the Department of Defence, said it agreed with Anthropic’s position that AI systems should not be used to conduct domestic mass surveillance or enable autonomous machines to initiate warfare.

A joint amicus brief supporting Anthropic was also submitted by the Chamber of Progress, a technology policy organisation funded by companies including Google, Apple, Amazon and Nvidia. The group said it was concerned about the government penalising a company for its public statements.

The brief described the designation as ‘a potentially ruinous sanction’ for businesses and warned it could create a climate in which companies fear government retaliation for expressing views.

Anthropic’s lawsuit claims the government violated its free speech rights by retaliating against the company for comments made by its leadership. The dispute escalated after Anthropic declined to remove contractual restrictions preventing its AI models from being used for mass surveillance or autonomous weapons.

The company had previously introduced safeguards in government contracts to limit certain uses of its technology. Negotiations over revised contract language continued for several weeks before the disagreement became public.

Former military officials and technology policy advocates have also filed supporting briefs, warning that the decision could discourage companies from participating in national security projects if they fear retaliation for voicing concerns. The case is currently being heard in federal court in San Francisco.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.

AI-driven adaptive malware highlights new cyber threat landscape

Google’s cybersecurity division, Mandiant, has warned about the growing threat of AI-driven adaptive malware, highlighting how AI is reshaping the cyber threat landscape.

According to a recent report, adaptive malware can modify its behaviour and code in response to the environment it encounters, thereby evading traditional security tools. By analysing the security systems protecting a target, the malware can rewrite parts of its code to bypass detection.

Unlike traditional malware, which typically follows fixed instructions, adaptive malware can adjust its behaviour during an attack. This capability makes it more difficult for conventional cybersecurity tools to detect and block malicious activity.

Mandiant noted that such malware is increasingly associated with advanced persistent threat (APT) groups that conduct long-term, targeted cyber operations. These groups often pursue espionage objectives or financial gain while maintaining prolonged access to compromised systems.

AI is also being used to automate elements of cyberattacks. Machine learning algorithms allow malicious software to anticipate defensive measures and adjust its behaviour in real time. In some cases, attackers are integrating AI into broader automated attack chains. AI-driven malware can gather information, adapt its strategy, and continue operating with minimal human intervention.

Security researchers say autonomous AI agents may be capable of managing multiple stages of an attack, including reconnaissance, exploitation, and persistence, while remaining undetected.

To address these evolving threats, Mandiant recommends that organisations strengthen their cybersecurity strategies by deploying advanced detection and response tools, including AI-based systems that can identify anomalous behaviour. As AI capabilities continue to develop, cybersecurity experts say understanding adaptive malware and automated attack techniques will be essential for organisations seeking to protect their systems and data.

Netflix AI filmmaking push grows with InterPositive acquisition

A deal valued at up to $600 million will see Netflix acquire InterPositive, the AI filmmaking company founded by actor and director Ben Affleck, according to people familiar with the matter.

The transaction, paid in cash, is expected to become one of the largest acquisitions made by the streaming company. The final upfront amount is reportedly lower, with additional payments tied to performance targets. Netflix has not publicly disclosed the financial terms of the deal.

The acquisition is intended to accelerate the use of AI in film production. InterPositive has developed software tools that enable filmmakers to modify existing footage, including removing unwanted elements or adjusting scene backgrounds. Director David Fincher has already used the technology in work on an upcoming film starring Brad Pitt.

The deal reflects a broader trend among entertainment companies exploring AI technologies to streamline production and improve efficiency. Companies including Netflix and Amazon are experimenting with AI tools in film and television production, while Disney has established a partnership with OpenAI.

The growing use of AI in Hollywood has raised concerns among industry workers. Some fear the technology could reduce jobs or allow studios to use creative work to train AI systems without compensation.

Affleck has said the InterPositive technology is designed to support filmmakers rather than replace them. The system requires directors to shoot original footage before the software can train on the material. The tools can then assist with editing tasks but do not generate films independently.

Netflix has traditionally avoided large-scale acquisitions, focusing instead on developing its technology internally. Even so, the purchase of InterPositive signals a step toward strengthening the company’s AI capabilities in film production.

‘The filmmaking process, really, since its inception, has been one long technological progression,’ Affleck said in a video released by Netflix. ‘We’ve always been seeking to make it feel more realistic, more honest, and InterPositive, I hope, is another iteration or step in keeping with that long and storied history.’

Affleck founded InterPositive with backing from investment firm RedBird Capital Partners and began seeking investment in 2025 before the company attracted interest from Netflix.

Spain expands digital oversight of online hate

Spain has launched a digital system designed to track hate speech and disinformation across social media platforms. Prime Minister Pedro Sánchez presented the tool in Madrid as part of a wider effort to improve oversight of online platforms.

The platform, known as HODIO, will analyse public posts and measure the spread and reach of hateful content. Authorities say the project will publish regular reports examining how platforms respond to harmful material.

The monitoring initiative is managed by Spain’s Observatory on Racism and Xenophobia. Officials say the data will help citizens understand the scale of online hate and assess how social networks address abusive content.

The initiative forms part of a broader digital policy agenda that also includes measures to protect minors online. Policymakers have discussed proposals such as restrictions on social media use by children under 16.

AI and quantum computing reshape the global cybersecurity landscape

Cybersecurity risks are increasing as digital connectivity expands across governments, businesses and households.

According to Thales Group, the growing number of connected devices and digital services has significantly expanded the potential entry points for cyberattacks.

AI is reshaping the cybersecurity landscape by enabling attackers to identify vulnerabilities at unprecedented speed.

Security specialists increasingly describe the environment as a contest in which defensive systems must deploy AI to counter adversaries using similar technologies to exploit weaknesses in digital infrastructure.

Security concerns also extend beyond large institutions. Connected devices in homes, including smart cameras and speakers, often lack robust security protections, increasing exposure for individuals and networks.

Policymakers in Europe are responding through measures such as the Cyber Resilience Act, which will introduce mandatory security requirements for connected products sold in the EU.

Long-term risks are also emerging from advances in quantum computing.

Experts warn that powerful future machines could eventually break widely used encryption systems that currently protect communications, financial data and government networks, prompting organisations to adopt quantum-resistant security methods.

EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure that creative industries retain transparency and fair treatment as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.

UNESCO and African network advance AI in justice

AI is increasingly shaping Africa’s courts, from translation tools to legal search engines. As AI becomes more integrated, judicial actors face new questions around transparency, accountability, and human rights.

Thirty-one members of the African Network of Judicial Trainers (ANJT) gathered in Maputo for a regional workshop on AI, Justice, and Human Rights.

Participants included judicial directors, Supreme Court justices and senior magistrates who shared strategies for responsibly integrating AI into courts. UNESCO highlighted the importance of keeping justice human-centred amid technological change.

Discussions examined the benefits of AI-assisted translation and data analysis, alongside risks such as bias, discrimination, and opacity.

UNESCO introduced practical resources, including the Guidelines for the Use of AI in Courts and Tribunals and AI Essentials for Judges, to help judicial professionals implement ethical practices.

Workshop participants committed to adapting these materials into national training curricula, aiming to multiply knowledge across African judicial systems. ANJT and UNESCO emphasised that AI adoption should enhance efficiency without compromising fairness or the rule of law.

Dutch firms rank among EU leaders in sustainable ICT

Businesses in the Netherlands rank among the leading adopters of sustainable ICT practices in the EU, according to data from Statistics Netherlands and Eurostat. Around one quarter of companies use digital tools to reduce material consumption and improve resource efficiency.

The Netherlands ranked fourth in the EU for the use of technology to reduce waste and improve sustainability. Sectors including energy, water and waste management showed the strongest adoption of these ICT solutions.

Sustainable disposal of electronic equipment is also widespread among businesses in the Netherlands. About 9 in 10 companies recycle or return obsolete ICT equipment through approved e-waste collection systems.

Across the EU, more than three-quarters of businesses now dispose of outdated technology in environmentally responsible ways. Analysts say the progress highlights growing corporate efforts to integrate e-waste sustainability into digital operations.
