Anthropic lawsuit gains Big Tech support in AI dispute

Several major US technology companies have backed Anthropic in its lawsuit challenging the US Department of Defence’s decision to label the AI company a national security ‘supply chain risk’.

Google, Amazon, Apple, and Microsoft have filed legal briefs supporting Anthropic’s attempt to overturn the designation issued by Defence Secretary Pete Hegseth. Anthropic argues the decision was retaliation after the company declined to allow its AI systems to be used for mass surveillance or autonomous weapons.

In court filings, the companies warned that the government’s action could have wider consequences for the technology sector. Microsoft said the decision could have ‘broad negative ramifications for the entire technology sector’.

Microsoft, which works closely with the US government and the Department of Defence, said it agreed with Anthropic’s position that AI systems should not be used to conduct domestic mass surveillance or enable autonomous machines to initiate warfare.

A joint amicus brief supporting Anthropic was also submitted by the Chamber of Progress, a technology policy organisation funded by companies including Google, Apple, Amazon and Nvidia. The group said it was concerned about the government penalising a company for its public statements.

The brief described the designation as ‘a potentially ruinous sanction’ for businesses and warned it could create a climate in which companies fear government retaliation for expressing views.

Anthropic’s lawsuit claims the government violated its free speech rights by retaliating against the company for comments made by its leadership. The dispute escalated after Anthropic declined to remove contractual restrictions preventing its AI models from being used for mass surveillance or autonomous weapons.

The company had previously introduced safeguards in government contracts to limit certain uses of its technology. Negotiations over revised contract language continued for several weeks before the disagreement became public.

Former military officials and technology policy advocates have also filed supporting briefs, warning that the decision could discourage companies from participating in national security projects if they fear retaliation for voicing concerns. The case is currently being heard in federal court in San Francisco.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI and quantum computing reshape the global cybersecurity landscape

Cybersecurity risks are increasing as digital connectivity expands across governments, businesses and households.

According to Thales Group, a growing number of connected devices and digital services has significantly expanded the potential entry points for cyberattacks.

AI is reshaping the cybersecurity landscape by enabling attackers to identify vulnerabilities at unprecedented speed.

Security specialists increasingly describe the environment as a contest in which defensive systems must deploy AI to counter adversaries using similar technologies to exploit weaknesses in digital infrastructure.

Security concerns also extend beyond large institutions. Connected devices in homes, including smart cameras and speakers, often lack robust security protections, increasing exposure for individuals and networks.

Policymakers in Europe are responding through measures such as the Cyber Resilience Act, which will introduce mandatory security requirements for connected products sold in the EU.

Long-term risks are also emerging from advances in quantum computing.

Experts warn that powerful future machines could eventually break widely used encryption systems that currently protect communications, financial data and government networks, prompting organisations to adopt quantum-resistant security methods.


EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure that creative industries retain transparency and fair treatment as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.


Writers publish protest book to challenge AI use of copyrighted works

Thousands of writers have joined a symbolic protest against AI companies by publishing a book that contains no traditional content.

The work, titled “Don’t Steal This Book,” lists only the names of roughly 10,000 contributors who oppose the use of their writing to train AI systems without their permission.

The initiative was organised by composer and campaigner Ed Newton-Rex, and the book was distributed during the London Book Fair. Contributors include prominent authors such as Kazuo Ishiguro, Philippa Gregory and Richard Osman, along with thousands of other writers and creative professionals.

Campaigners argue that generative AI systems are trained on vast collections of copyrighted material gathered from the internet without authorisation or compensation.

According to organisers, such practices allow AI tools to compete with the creators whose works were used to develop them.

The protest arrives as the UK Government prepares an economic assessment of potential copyright reforms related to AI. Proposals under discussion include allowing AI developers to use copyrighted material unless rights holders explicitly opt out.

Many writers and artists oppose that approach and demand stronger copyright protections. In parallel, the publishing sector is preparing a licensing initiative through Publishers’ Licensing Services to provide AI developers with legal access to books while ensuring authors receive compensation.

The dispute reflects a growing global debate over how copyright law should apply to generative AI systems that rely on massive datasets to develop chatbots and other digital tools.


Sustainable AI discussed by UNESCO and Saudi leaders under Vision 2030

Leaders from government, academia, and industry gathered to emphasise that sustainable AI must shape efficient, inclusive, and environmentally responsible systems. The discussion focused on embedding sustainability, ethics, and human-centred principles throughout the AI lifecycle by adopting a sustainable-by-design approach.

The workshop built on Saudi Arabia’s expanding role in AI and digital transformation through the Saudi Data & AI Authority (SDAIA) and the National Strategy for Data and AI (NSDAI). The efforts are supported by significant investments in cloud infrastructure and data centres under the Kingdom’s Vision 2030 programme. Participants highlighted that sustainable AI must become a core principle in the development of emerging digital infrastructure and AI-powered services.

Abdulrahman Habib, Director of the International Centre for Artificial Intelligence Research and Ethics (ICAIRE), highlighted Saudi Arabia’s growing leadership in AI ethics and governance. With national AI Ethics Principles and a maturing regulatory landscape, the Kingdom is positioning itself as a global contributor to responsible AI dialogue, translating principles into operational governance systems rather than just policy statements.

Leona Verdadero of UNESCO highlighted two core concepts: Greening with AI, which uses AI to accelerate sustainability, and Greening of AI, which ensures systems are energy-efficient, ethical, and human-centred. She stressed that effective AI governance requires collaboration and industry leadership at every stage of development.

Per Ola Kristensson from the University of Cambridge urged action beyond rhetoric, stressing that true AI sustainability means developing technology to augment, not replace, human potential. Industry presentations reinforced that sustainable AI drives real-world progress: RECYCLEE optimises resource recovery, Remedium reduces environmental impacts in healthcare and infrastructure, and IDOM strengthens sustainability reporting through AI-enhanced design.

UNESCO supports Saudi Arabia’s drive for inclusive, ethical, and sustainable AI ecosystems, framing sustainable AI as critical in the global transition to green digital transformation.

Faisal Al Azib, Executive Director of the UN Global Compact Network Saudi Arabia, stated: ‘As the Kingdom advances its digital transformation under Vision 2030, we have a responsibility to ensure that innovation advances hand in hand with sustainability and human dignity.’

Al Azib concluded: ‘Sustainable AI is central to building resilient, future-ready businesses. Through partnerships with UNESCO and our local ecosystem, we aim to equip companies with the governance tools to embed responsible, energy-efficient, and human-centred AI into their core strategies.’


EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

The proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.


Japan expands strategic investment in AI, quantum computing, and drones

Japan has identified dozens of advanced technologies as priority investment targets as part of an economic strategy led by Prime Minister Sanae Takaichi.

The plan aims to channel public and private capital into industries expected to drive long-term economic growth.

Government officials selected 61 technologies and products for support across 17 strategic sectors. The list includes emerging fields such as AI, quantum computing, regenerative medicine and marine drones.

Many of these technologies are still in early development, but are considered important for economic security and global competitiveness.

The strategy forms a central pillar of Takaichi’s broader economic agenda to strengthen Japan’s industrial base and encourage investment in high-growth sectors. Authorities plan to release spending estimates and implementation timelines by summer as part of a detailed investment roadmap.

Japan has also set ambitious market goals in several sectors. Officials aim to secure more than 30% of the global AI robotics market by 2040 while increasing annual sales of domestically produced semiconductors to ¥40 trillion.

Several Japanese technology companies could benefit from the policy direction. Firms such as Fanuc, Yaskawa Electric and Mitsubishi Electric are integrating AI into industrial robots, while Sony Group produces sensors used in robotic systems.

Chipmakers, including Rohm, Kioxia and Renesas Electronics, may also benefit from increased investment in semiconductor manufacturing and related supply chains.

Despite strong investor interest, analysts note uncertainty about how the programme will be financed, particularly as Japan faces rising spending pressures from social security, defence and public debt.


Dutch court increases pressure on Meta over non-profiling social media feeds

A court in the Netherlands has increased potential penalties against Meta after ruling that changes to social media timelines must be implemented urgently.

The decision raises the potential fine for non-compliance from €5 million to €10 million if required adjustments are not applied to Facebook and Instagram feeds.

Judges at the Amsterdam Court of Appeal said users must be able to select a timeline that does not rely on profiling-based recommendations.

The ruling follows a legal challenge from the digital rights organisation Bits of Freedom, which argued that users who switched away from algorithmic feeds were automatically returned to them after navigating the platform or reopening the application.

The court concluded that the automatic resetting mechanism represents a deceptive design practice known as a ‘dark pattern’.

Such practices are prohibited under the EU’s Digital Services Act, which requires large online platforms to provide greater transparency and user control over recommendation systems.

Judges acknowledged that Meta had already introduced several technical changes, although not all required measures were fully implemented. The company must ensure that the non-profiling timeline option remains active once selected, rather than reverting to algorithmic recommendations.

The dispute also highlights regulatory tensions within the European framework. Before turning to the courts, Bits of Freedom submitted a complaint to Coimisiún na Meán, the national authority responsible for overseeing Meta’s compliance with EU rules.

According to the organisation, the lack of progress from regulators encouraged legal action in Dutch courts.

Meta indicated that the company intends to challenge the decision and pursue further legal proceedings. The case could become an important test of how the Digital Services Act is enforced against major online platforms across Europe.


Digital sovereignty in Asia moves beyond US versus non-US cloud debate

AI, cloud computing, and cross-border data flows have made questions about control and jurisdiction increasingly important for governments and businesses. In Asia, the debate around digital sovereignty often focuses on ‘US versus non-US cloud’ providers or data localisation.

Such simplifications miss the practical challenges organisations face when choosing hosting locations or training AI models while navigating diverse regulatory regimes.

At the same time, Asia’s digital economy is building its own regulatory foundations. In Vietnam and Indonesia, new rules such as Vietnam’s Decree 53 and Indonesia’s data protection framework show how governments are shaping data governance while still relying on global cloud and AI platforms. Most organisations across the region continue to operate using a mix of local, regional, and international providers.

Organisations must address key questions about data jurisdiction and workload mobility when risks change. They must also control who can access sensitive systems during incidents. Digital sovereignty is clearer when seen through three pillars: data sovereignty, technical sovereignty, and operational sovereignty.

Data sovereignty is about jurisdiction, not just data storage. As AI regulation expands, businesses need to know which authorities can access their data and how it may be used. Technical sovereignty is the ability to move or redesign systems as regulations or geopolitics shift. Multi-cloud and hybrid strategies help organisations remain adaptable.

Operational sovereignty focuses on governance and control. It addresses who can access systems, from where, and under what safeguards, thus linking sovereignty directly to cybersecurity and incident response.

For Asia-Pacific organisations, digital sovereignty should not be a simple procurement checklist. Instead, it should guide cloud and AI strategies from the start, ensuring legal clarity, technical flexibility, and operational trust as the digital landscape evolves.


EU draft regulation aims to create new legal framework for startups

A draft initiative from the European Commission seeks to introduce a new legal structure designed to simplify how companies operate across the EU.

The proposal, often referred to as the ‘EU Inc’ initiative, explores the creation of a so-called ‘28th regime’ that would exist alongside the national corporate frameworks used by member states.

The concept aims to provide startups and technology firms with a single legal structure that applies across the EU.

Instead of navigating different national rules in each country, companies could operate under a unified regulatory model intended to reduce administrative barriers and encourage cross-border innovation.

According to the draft, the initiative may rely on an EU regulation rather than separate national legislation. Such an approach could enable faster implementation, as EU regulations apply directly across all member states without requiring domestic transposition.

However, the legal basis of the proposal could raise institutional concerns. Using a regulation as the primary mechanism may constitute an unconventional shortcut in EU lawmaking, potentially sparking debate among policymakers over the approach’s scope and legitimacy.

The initiative reflects broader efforts within the Union to simplify regulatory frameworks and strengthen the competitiveness of European startups. If adopted, the ‘EU Inc’ model could reshape how young companies expand across the single market.
