Anthropic’s Pentagon dispute and military AI governance in 2026

On 28 February 2026, Anthropic’s Claude rose to No. 1 in Apple’s US App Store free rankings, overtaking OpenAI’s ChatGPT. The surge came shortly after OpenAI announced a partnership with the US Department of Defense (DoD), making its technology available to the US Army. The development prompted discussion among users and observers about whether concerns over military partnerships were driving the shift to alternative AI tools.

Mere hours before the US$200 million OpenAI-DoD deal was finalised, Anthropic was informed that its own prospective deal with the Pentagon had fallen through. According to reporting, discussions broke down after Anthropic declined to grant the US government unrestricted control over its models, particularly for potential uses related to large-scale domestic surveillance.

Following the breakdown of negotiations, US officials reportedly designated Anthropic as a ‘supply chain risk to national security’. The decision effectively limited the company’s participation in certain defence-related projects and highlighted growing tensions between AI developers’ safety policies and government expectations regarding national security technologies.

The debate over military partnerships sparked internal and industry-wide discussion. Caitlin Kalinowski, who led AR glasses hardware at Meta before becoming OpenAI’s hardware lead, resigned soon after the US DoD deal was announced, citing ethical concerns about the company’s involvement in military AI applications.

AI has driven recent technological innovation, with companies like Anduril and Palantir collaborating with the US DoD to deploy AI on and off the battlefield. The debate over AI’s role in military operations, surveillance, and security has intensified, especially as Middle East conflicts highlight its potential uses and risks.

Against this backdrop, the dispute between Anthropic and the Pentagon reflects a wider debate on how AI should be used in security and defence. Governments are increasingly relying on private tech companies to develop the systems that shape modern military capabilities, while those same companies are trying to set limits on how their technologies can be used.

As AI becomes more deeply integrated into security strategies around the world, the challenge may no longer be whether the technology will be used, but how it should be governed. The question is: who should ultimately decide where the limits of military AI lie?

Anthropic’s approach to military AI

Anthropic’s approach is closely tied to its concept of ‘constitutional AI’, a training method that guides model behaviour by embedding a set of written principles into the training process. The principles are intended to reduce harmful outputs and steer the system away from unsafe or unethical uses. While these safeguards are designed to improve reliability and trust, they can also limit how the technology is deployed in more sensitive contexts such as military operations.

Anthropic’s Constitution says its AI assistant should be ‘genuinely helpful’ to people and society, while avoiding unsafe, unethical, or deceptive actions. The document reflects the company’s broader effort to build safeguards into model deployment. In practice, Anthropic has set limits on certain applications of its technology, including uses related to large-scale surveillance or military operations.

Anthropic presents these safeguards as proof of its commitment to responsible AI. Reports indicate that concerns over unrestricted model access led to the breakdown in talks with the US DoD.

At the same time, Anthropic clarifies that its concerns are specific to certain uses of its technology. The company does not generally oppose cooperation with national security institutions. In a statement following the Pentagon’s designation of the company as a ‘supply chain risk to national security’, CEO Dario Amodei said, ‘Anthropic has much more in common with the US DoD than we have differences.’ He added that the company remains committed to ‘advancing US national security and defending the American people.’

The episode, therefore, highlights a nuanced position. Anthropic appears open to defence partnerships but seeks to maintain clearer limits on the deployment of its AI systems. The disagreement with the Pentagon ultimately reflects not a fundamental difference in goals, but rather different views on how far military institutions should be able to control and use advanced AI technologies.

Anthropic’s position illustrates a broader challenge facing governments and tech companies as AI becomes increasingly integrated into national security systems. While military and security institutions are eager to deploy advanced AI tools to support intelligence analysis, logistics, and operational planning, the companies developing these technologies are also seeking to establish safeguards for their use. Anthropic’s willingness to step back from a major defence partnership and challenge the Pentagon’s response underscores how some AI developers are trying to set limits on military uses of their systems.

Defence partnerships that shape the AI industry

While Anthropic has taken a cautious approach to military deployment of AI, other technology companies have pursued closer partnerships with defence institutions. One notable example is Palantir, the US data analytics firm co-founded by Peter Thiel that has longstanding relationships with numerous government agencies. Documents leaked in 2013 suggested that the company had contracts with at least 12 US government bodies. More recently, Palantir has expanded its defence offering through its Artificial Intelligence Platform (AIP), designed to support intelligence analysis and operational decision-making for military and security institutions.

Another prominent player is Anduril Industries, a US defence technology company focused on developing AI-enabled defence systems. The firm produces autonomous and semi-autonomous technologies, including unmanned aerial systems and surveillance platforms, which it supplies to the US DoD.

Shield AI, meanwhile, is developing autonomous flight software designed to operate in environments where GPS and communications may be unavailable. Its Hivemind AI platform powers drones that can navigate buildings and complex environments without human control. The company has worked with the US military to test these systems in training exercises and operational scenarios, including aircraft autonomy projects aimed at supporting fighter pilots.

These partnerships illustrate how the US government has increasingly embraced AI as a key pillar of national defence and future military operations. In many cases, these technologies are already being used in operational contexts. Palantir’s Gotham and AIP, for instance, have supported US military and intelligence operations by processing satellite imagery, drone footage, and intercepted communications to help analysts identify patterns and potential threats.

Other companies are contributing to defence capabilities through autonomous systems development and hardware integration. Anduril supplies the US DoD with AI-enabled surveillance, drone, and counter-air systems designed to detect and respond to potential threats. At the same time, OpenAI’s technology is increasingly being integrated into national security and defence projects through growing collaboration with US defence institutions.

Such developments show that AI is no longer a supporting tool but a fundamental part of military infrastructure, influencing how defence organisations process information and make decisions. As governments deepen their reliance on private-sector AI, the emerging interplay among innovation, operational effectiveness, and oversight will define the central debate on military AI adoption.

The potential benefits of military AI

The debate over Anthropic’s restrictions on military AI use highlights the reasons governments invest in such technologies: defence institutions are drawn to AI because it processes vast amounts of information much faster than human analysts. Military operations generate massive data streams from satellites, drones, sensors, and communication networks, and AI systems can analyse them in near real time.

In 2017, the US DoD launched Project Maven to apply machine learning to drone and satellite imagery, enabling analysts to identify objects, movements, and potential threats on the battlefield faster than with traditional manual methods.

AI is increasingly used in military logistics and operational planning. It helps commanders anticipate equipment failures, enables predictive maintenance, optimises supply chains, and improves field asset readiness.

Recent conflicts have shown that AI-driven tools can enhance military intelligence and planning. In Ukraine, for example, forces reportedly used software to analyse satellite imagery, drone footage, and battlefield data. Key benefits include more efficient target identification, real-time tracking of troop movements, and clearer battlefield awareness through the integration of multiple data sources.

AI-assisted analysis has been used in intelligence and targeting during the Gaza conflict. Israeli defence systems use AI tools to rapidly process large datasets for surveillance and intelligence operations. The tools help analysts identify potential militant infrastructure, track movements, and prioritise key intelligence, thus speeding up information processing for teams during periods of high operational activity.

More broadly, AI is transforming the way militaries coordinate across land, air, sea, and cyber domains. AI integrates data from diverse sources, equipping commanders to interpret complex operational situations and enabling faster, informed decision-making. The advances reinforce why many governments see AI as essential for future defence planning.

Ethical concerns and Anthropic’s limits on military AI

Despite the operational advantages of military AI, its growing role in national defence systems has raised ethical concerns. Critics warn that overreliance on AI for intelligence analysis, targeting, or operational planning could introduce risks if the systems produce inaccurate outputs or are deployed without sufficient human oversight. Even highly capable models can generate misleading or incomplete information, which in high-stakes military contexts could have serious consequences.

Concerns about the reliability of AI systems are also linked to the quality of the data they learn from. Many models still struggle to distinguish authentic information from synthetic or manipulated content online. As generative AI becomes more widespread, the risk that systems may absorb inaccurate or fabricated data increases, potentially affecting how these tools interpret intelligence or analyse complex operational environments.

Questions about autonomy have also become a major issue in discussions around military AI. As AI systems become increasingly capable of analysing battlefield data and identifying potential targets, debates have emerged over how much decision-making authority they should be given. Many experts argue that decisions involving the use of lethal force should remain under meaningful human control to prevent unintended consequences or misidentification of targets.

Another area of concern relates to the potential expansion of surveillance capabilities. AI systems can analyse satellite imagery, communications data, and online activity at a scale beyond the capacity of human analysts alone. While such tools may help intelligence agencies detect threats more efficiently, critics warn that they could also enable large-scale monitoring if deployed without clear legal and institutional safeguards.

It is within this ethical landscape that Anthropic has attempted to position itself as a more cautious actor in the AI industry. Through initiatives such as Claude’s Constitution and its broader emphasis on AI safety, the company argues that powerful AI systems should include safeguards that limit harmful or unethical uses. Anthropic’s reported refusal to grant the Pentagon unrestricted control over its models during negotiations reflects this approach.

The disagreement between Anthropic and the US DoD therefore highlights a broader tension in the development of military AI. Governments increasingly view AI as a strategic technology capable of strengthening defence and intelligence capabilities, while some developers seek to impose limits on how their systems are deployed. As AI becomes more deeply embedded in national security strategies, the question may no longer be whether these technologies will be used, but who should define the boundaries of their use.

Military AI and the limits of corporate control

Anthropic’s dispute with the Pentagon shows that the debate over military AI is no longer only about technological capability. Questions of speed, efficiency, and battlefield advantage now collide with concerns over surveillance, autonomy, human oversight, and corporate responsibility. Governments increasingly see AI as a strategic asset, while companies such as Anthropic are trying to draw boundaries around how far their systems can go once they enter defence environments.

Contrasting approaches across the industry make the tension even clearer. Palantir, Anduril, Shield AI, and OpenAI have moved closer to defence partnerships, reflecting a broader push to integrate advanced AI into military infrastructure. Anthropic, by comparison, has tried to keep one foot in national security cooperation while resisting uses it views as unsafe or unethical. A divide of that kind suggests that the future of military AI may be shaped as much by company policies as by government strategy.

The growing reliance on private firms to build national security technologies has made governance harder to define. Military institutions want flexibility, scale, and operational control, while AI developers increasingly face pressure to decide whether they are simply suppliers or active gatekeepers of how their models are deployed. Anthropic’s position does not rule out defence cooperation outright, but it does expose how fragile the relationship becomes when state priorities and corporate safeguards no longer align.

Military AI will continue to expand, whether through intelligence analysis, logistics, surveillance, or autonomous systems. Governance, however, remains the unresolved issue at the centre of that expansion. As AI becomes more deeply embedded in defence policy and military planning, should governments alone decide how far these systems can go, or should companies like Anthropic retain the power to set limits on their use?

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Qualcomm and NEURA Robotics partner to accelerate physical AI and cognitive robotics

NEURA Robotics and Qualcomm have formed a long-term strategic collaboration to advance physical AI and next-generation robotics platforms.

The partnership aims to bring intelligent robots into real-world environments more rapidly by combining advanced AI processors with full-stack robotic systems.

The cooperation focuses on developing ‘Brain + Nervous System’ reference architectures that integrate high-level cognition, such as perception, reasoning and planning, with ultra-low-latency control systems.

Qualcomm’s robotics processors, including the Dragonwing IQ10 Series, will provide AI compute and connectivity, while NEURA contributes robotic hardware platforms and embodied AI software.

Both companies intend to support deployment across multiple robotic forms, including robotic arms, mobile robots, service machines and humanoid platforms.

NEURA’s cloud environment, Neuraverse, will serve as a shared platform for simulation, training and lifecycle management of robotic intelligence, allowing innovations developed by one robot to spread across entire fleets.

The collaboration also aims to establish a global developer ecosystem for robotics applications. Standardised runtime environments and deployment interfaces are expected to simplify how AI workloads move from development into production while maintaining reliability and safety.

Executives from both companies emphasised that robotics represents one of the most demanding AI environments, as decisions must be made instantly and locally.

By combining edge AI processing with cognitive robotic systems, the partnership aims to accelerate commercial deployment of humanoid and general-purpose robots capable of operating safely alongside humans across industries.


Smart Classrooms initiative transforms learning in 10 Thai pilot schools

Ten pilot schools in Buriram and Si Sa Ket provinces have launched Smart Classrooms under the UNESCO–Huawei TEOSA initiative, supporting Thailand’s drive to expand digital education.

Led by UNESCO Bangkok in partnership with Thailand’s Ministry of Education and Huawei Technologies Co., Ltd, the Smart Classrooms initiative aims to strengthen digital learning environments, equip teachers with digital and AI competencies, and support policy development for AI in education. The programme also supports Thailand’s ‘Transforming Education in the Digital Era’ policy and the National AI Strategy and Action Plan (2022–2027).

Each province has one designated ‘mother school’ that serves as a regional digital hub, supporting four surrounding ‘child schools’ by sharing resources, training, and expertise. In total, the ten pilot schools have received high-speed internet, interactive digital displays, and collaborative learning platforms that support real-time content sharing and blended learning. Forty-five teachers from the pilot schools also took part in hands-on demonstrations of Smart Classroom systems on 4–5 March.

‘This new technology will help translate theory into practice, allowing students to experiment, test strategies, and see results immediately,’ said Pathanapong Momprakhon, Principal of Paisan Pittayakom School. UNESCO Bangkok’s Deputy Director and Chief of Education, Marina Patrier, highlighted the importance of combining infrastructure with teacher capacity-building.

‘At UNESCO, we are committed to promoting the ethical and inclusive use of AI in ways that empower teachers and expand opportunities for every learner,’ Ms Patrier said at the launch. ‘While Smart Classrooms provide important tools, it is teachers’ creativity, professional judgement and leadership that ultimately bring these innovations to life.’

Chitralada Chanyaem of the Thai National Commission for UNESCO highlighted the importance of collaboration in advancing digital education.

‘The UNESCO–Huawei Funds-in-Trust Project on Technology-Enabled Open Schools for All stands as a powerful example of collaboration dedicated to transforming education into a system that is open, inclusive, flexible, and resilient in the face of a rapidly changing world,’ she said. ‘As the future of education cannot be confined within classroom walls, it must bridge sectors and communities, working collaboratively to create equitable and sustainable opportunities for all.’

Teachers observed Huawei technical staff and master teachers demonstrate how digital tools and AI-supported applications can be used in everyday lessons. Ms Piyaporn Kidsirianan, Public Relations Manager at Huawei Technologies (Thailand) Co., Ltd, said the initiative aims to reduce digital inequality.

‘The Open Schools for All initiative represents a commitment to using technology as a bridge to deliver quality education to remote and underserved communities,’ she said. The TEOSA Smart Classrooms initiative combines policy support, digital infrastructure upgrades, and teacher training to help translate Thailand’s digital education ambitions into practical impact at the school level.


Samsung’s AI smart glasses are coming to take on Meta Ray-Ban

Samsung has confirmed key details about its upcoming AI smart glasses, including a camera positioned at ‘eye level’ and smartphone connectivity, ahead of a planned 2026 launch.

The device is being developed in partnership with Qualcomm and Google, building on the same ecosystem that produced the Galaxy XR headset, and will be powered by Google’s Gemini AI.

Samsung executive Jay Kim indicated that the glasses will be able to understand ‘where you’re looking at’, allowing the AI to analyse objects or scenes in the user’s field of view and provide contextual information in real time.

Processing is expected to take place on a connected smartphone rather than within the glasses themselves, and Samsung has not confirmed whether a built-in display will be included, suggesting multiple versions may be in development.

The announcement puts Samsung on a direct collision course with Meta, whose Ray-Ban Meta Gen 2 glasses are already on the market, offering 3K video recording and up to eight hours of battery life. Meta has also launched the Oakley Meta HSTN glasses, aimed at sports and outdoor users.


Online scams rise as Parkin urges Dubai residents to stay vigilant

Dubai’s parking provider, Parkin, has warned residents to stay alert as online scams targeting digital service users continue to rise, urging people to take immediate steps to protect their digital identities.

In an advisory, the company stressed that official entities will never ask users to log in or disclose sensitive information through unsolicited messages, emails, or phone calls. The warning comes amid growing concerns about phishing attempts and other online scams targeting users of digital platforms.

Parkin said residents should exercise caution if they receive unexpected requests for personal details, passwords, or verification codes. Users are strongly advised not to respond to suspicious links, attachments, or messages from unknown sources, which are commonly used in online scams.

The operator also urged the public to verify the authenticity of communications before taking any action. Residents who are unsure about the legitimacy of a message should check official websites or contact customer service channels directly. The advice applies to messages claiming to come from Parkin or other service providers.

Authorities and service providers across the UAE have repeatedly warned that cybercriminals often impersonate trusted organisations in online scams designed to steal sensitive information. Such attacks can lead to identity theft, financial losses, or unauthorised access to personal accounts.

Parkin encouraged residents who receive suspicious communications to report them through official channels so that appropriate action can be taken. The company added that staying vigilant and safeguarding personal data remain essential to preventing online scams.


Codex Security expands OpenAI’s push into cybersecurity tools

OpenAI has launched Codex Security, an AI-powered application security agent that detects hard-to-find software vulnerabilities and proposes fixes through advanced reasoning. Drawing on detailed context about a system’s architecture, the tool identifies security risks that are often missed by conventional automation.

The system uses advanced models to analyse repositories, construct project-specific threat models, and prioritise vulnerabilities based on their potential real-world impact. By combining automated validation with system-level context, Codex Security aims to reduce the number of false positives that security teams must review while highlighting high-confidence findings.

Initially developed under the name Aardvark, the tool has been tested in private deployments over the past year. During early use, OpenAI said it uncovered several critical vulnerabilities, including a cross-tenant authentication flaw and a server-side request forgery issue, allowing internal teams to quickly patch affected systems.

The company says improvements during the beta phase significantly reduced noise in vulnerability reports. In some repositories, unnecessary alerts fell by 84 percent, while over-reported severity dropped by more than 90 percent, and false positives declined by more than half.

Codex Security is now rolling out in research preview for ChatGPT Pro, Enterprise, Business, and Edu customers. OpenAI also plans to expand access to open-source maintainers through a dedicated programme that offers security scanning and support to help identify and remediate vulnerabilities across widely used projects.


Data breach hits fintech lender Figure exposing nearly 1 million accounts

Fintech lender Figure Technology Solutions has disclosed a data breach after hackers exposed personal information from nearly one million accounts. Details from 967,200 accounts, including names, email addresses, phone numbers, home addresses, and dates of birth, were compromised.

Figure Technology Solutions, founded in 2018, operates a blockchain-based lending platform built on the Provenance blockchain. The company says it has facilitated more than $22 billion in home equity transactions through partnerships with banks, credit unions, and fintech firms. Despite blockchain security claims, attackers reportedly gained access by manipulating a staff member rather than breaking the underlying technology.

‘We recently identified that an employee was socially engineered, and that allowed an actor to download a limited number of files through their account,’ a company spokesperson said. ‘We acted quickly to block the activity and retained a forensic firm to investigate what files were affected. We understand the importance of these matters and are communicating with partners and those impacted as appropriate.’

Security researchers say the data breach follows a pattern used by groups such as ShinyHunters, who impersonate IT support staff and pressure employees into revealing login credentials through convincing phishing portals.

Once attackers obtain access to a corporate single sign-on system, which lets users log in to multiple internal applications with a single set of credentials, they can move across internal platforms, often including services linked to major providers such as Microsoft and Google.

Experts warn that the data breach highlights a wider cybersecurity problem: even advanced technologies such as blockchain cannot prevent attacks that target human behaviour. Criminals can use exposed personal information to launch convincing phishing campaigns or financial scams, reinforcing the need for stronger employee training and security awareness.


TikTok rejects end-to-end encryption citing safety concerns

TikTok will not adopt end-to-end encryption for direct messages. The company explained that using this technology could hinder safety teams’ and law enforcement’s efforts to detect harmful content in private messages, which the company believes could make users less safe online.

Encrypted messaging ensures that only the sender and recipient can read a conversation and is widely used across the social media industry. Rivals including Facebook, Instagram, Messenger, and X have adopted the technology, saying protecting private communication is central to user privacy.

The issue has become more sensitive because the platform has long faced scrutiny over possible links between its parent company, ByteDance, and the government of the People’s Republic of China, something the company has repeatedly denied. Reflecting these concerns, US lawmakers earlier this year ordered the separation of TikTok’s US operations from its global business.

The company told the BBC that encrypted messaging would make it impossible for police and platform safety teams to read direct messages when needed. TikTok emphasised that this decision was made to enhance user protection, with a particular focus on the safety of younger users, and that it sees monitoring capabilities as crucial for addressing harmful behaviour.

Industry analyst Matt Navarra said the platform’s decision to ‘swim against the tide’ is ‘notable’ but presents ‘challenging optics’. He noted, ‘Grooming and harassment risks are present in DMs [direct messages], so TikTok can state it is prioritising proactive safety over privacy absolutism,’ though he added that the decision ‘places TikTok out of alignment with global privacy expectations’.


Online privacy faces new pressures in the age of social media

Online privacy is eroding as digital services collect ever-growing personal data and surveillance becomes part of daily technology use. The debate has intensified as social media platforms, advertisers, and connected devices expand their ability to track behaviour, preferences, and habits.

Analysts say younger generations have adapted to this reality rather than resisting it. ‘In 2026, online privacy is a luxury, not a right,’ says Thomas Bunting, an analyst at the UK innovation think tank Nesta. He argues many people have grown up accepting data collection as a trade-off for access to online services, noting: ‘We’ve been taught how to deal with it.’

Advocates warn that the erosion of online privacy could have wider social consequences. Cybersecurity expert Prof Alan Woodward from the University of Surrey says the issue goes beyond personal privacy. ‘People should care about online privacy because it shapes who has power over their lives,’ he says, arguing that privacy is ‘about having something to protect: freedom of thought, experimentation, dissent and personal development without permanent surveillance.’

Despite a growing number of privacy tools and regulations, data exposure remains widespread. According to Statista, more than 1.35 billion people were affected by data breaches, hacks, or exposure in 2024 alone. At the same time, more than 160 countries now have privacy legislation, while users regularly encounter cookie consent prompts that govern how their data is collected online.

Experts say frustration with privacy controls reflects a broader ‘privacy paradox’, in which people express concern about data protection but rarely change their behaviour. Cisco’s Consumer Privacy Survey found that while 89% of respondents said they care about privacy, only 38% actively take steps to protect their data.

As philosopher Carissa Véliz notes, the challenge is not simply awareness but a sense of agency: ‘Mostly, people don’t feel like they have control.’ She argues that protecting privacy requires stronger regulation, responsible technology design, and cultural change, adding: ‘It’s about having [access to] the right tech, but also using it.’


Chrome Gemini vulnerability allowed camera and file access

A high-severity vulnerability in Chrome’s integrated Gemini AI assistant exposed users to the potential activation of the camera and microphone, local file access, and phishing attacks. The issue, tracked as CVE-2026-0628, was disclosed by Palo Alto Networks’ Unit 42 and patched by Google in January 2026.

Gemini Live operates as a privileged AI panel embedded within the browser, capable of web page summarisation and task automation. To enable multimodal functionality, the panel is granted elevated permissions, including access to screenshots, local files, and device hardware.

Researchers identified inconsistent handling of the declarativeNetRequest API when gemini.google.com was loaded inside the AI side panel rather than a standard browser tab. While extensions could inject JavaScript in both cases, the panel context inherited browser-level privileges.

A malicious extension exploiting this distinction could hijack the trusted panel and execute arbitrary code with elevated access. Potential impacts included silent activation of a camera or microphone, screenshot capture, local file exfiltration, and high-credibility phishing attacks.

Google released a fix on 5 January 2026 following responsible disclosure. Users running the latest version of Chrome are protected, and organisations are advised to ensure updates are applied across all endpoints.
