AI safety push sees Anthropic and OpenAI recruit explosives specialists

Anthropic and OpenAI are recruiting chemical and explosives experts to strengthen safeguards for their AI systems, reflecting growing concern about the potential misuse of advanced models.

Anthropic is seeking a policy specialist to design and monitor guardrails governing how its systems respond to prompts involving chemical weapons and explosives. The role includes assessing high-risk scenarios and responding to potential escalation signals in real time.

OpenAI is expanding its Preparedness team, hiring researchers and a threat modeller to identify and forecast risks linked to frontier AI systems. The positions focus on evaluating catastrophic risks and aligning technical, policy, and governance responses.

The recruitment drive comes amid heightened scrutiny of AI safety and national security implications. Anthropic is currently challenging a US government designation that labels it a supply-chain risk, while tensions have emerged over restrictions on the military use of AI systems.

At the same time, OpenAI has secured agreements to deploy its technology in classified environments under defined constraints. The parallel developments highlight how AI firms are balancing commercial expansion with increasing pressure to implement robust safety controls.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic dispute pushes Pentagon toward new AI providers

The Pentagon is accelerating efforts to replace Anthropic after the company was designated a supply-chain risk, marking a sharp shift in US defence AI strategy. The move follows a breakdown in talks over safeguards governing military use of AI, particularly around surveillance and autonomous weapons.

Cameron Stanley, the Pentagon’s chief digital and AI officer, said engineering work is underway to deploy alternative large language models in government-controlled environments. He indicated that while transitioning from Anthropic’s tools could take more than a month, new systems are expected to be operational soon.

The decision threatens a $200 million contract and could exclude Anthropic from future defence partnerships. The US administration has set a six-month timeline for federal agencies to shift away from the company, signalling a broader push to diversify AI suppliers and reduce dependency risks.

Rival providers are already stepping in. OpenAI and xAI have been approved for classified work, while Google is introducing Gemini AI tools across the Pentagon workforce, initially on unclassified networks before expanding into sensitive environments.

Anthropic has challenged the designation in court, arguing it violates constitutional protections and could harm its business. Despite the legal dispute, defence officials have made clear they are moving forward with an ‘AI-first’ strategy to accelerate the adoption of advanced models across military operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Stryker cyberattack wipes devices via Microsoft environment without malware

A major cyber incident has impacted Stryker Corporation, where attackers targeted its internal Microsoft environment and remotely wiped tens of thousands of employee devices without deploying traditional malware.

Access to systems was reportedly achieved through a compromised administrator account, allowing attackers to issue remote wipe commands via Microsoft Intune.
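To illustrate why a single compromised admin identity is enough here: in Microsoft Intune, a device wipe is a documented management action exposed through the Microsoft Graph API. The minimal Python sketch below (the token and device ID are hypothetical placeholders) shows roughly what such a call looks like, one authorised request per device, with no malicious code involved.

```python
# Minimal sketch of the legitimate Microsoft Graph call behind an
# Intune remote wipe -- the administrative primitive reportedly abused.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-for-an-admin-account>"  # hypothetical placeholder
DEVICE_ID = "<intune-managed-device-id>"       # hypothetical placeholder

resp = requests.post(
    f"{GRAPH}/deviceManagement/managedDevices/{DEVICE_ID}/wipe",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"keepEnrollmentData": False, "keepUserData": False},
)
resp.raise_for_status()  # Graph returns 204 No Content on success
```

Because the wipe is an authorised management action rather than malware, endpoint security tools have little to detect; the practical defence is protecting and monitoring the admin identity itself.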

As a result, large parts of the company’s internal infrastructure were disrupted, with some services remaining offline and business operations affected.

Responsibility has been claimed by Handala, a group often associated with broader geopolitical cyber activity. The incident reflects a growing trend of cyber operations blending disruption, data theft and strategic messaging.

Despite the scale of the attack, the company confirmed that its medical devices and patient-facing technologies were not impacted.

The case highlights increasing risks linked to identity compromise and cloud-based management tools, where attackers can cause significant damage without relying on conventional malware techniques.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI agents test limits of EU rules

AI agents are rapidly gaining traction, raising questions about whether existing EU rules can keep pace. Unlike chatbots, these systems can act autonomously and interact with digital tools on behalf of users.

Experts warn that AI agents require deeper access to personal data and online services to function effectively. Regulators in Europe are monitoring potential risks as the technology becomes more integrated into daily life.

Lawmakers are examining whether current legislation, such as the AI Act and GDPR, adequately covers agent-based systems. Legal experts highlight challenges around contracts, liability and accountability when AI acts independently.

Despite concerns, many governments remain reluctant to introduce new rules, citing regulatory fatigue. Policymakers may rely on existing frameworks unless major incidents force a reassessment of AI oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU adopts cyber-related sanctions on companies based in China and Iran

The European Union imposed sanctions on two China-based companies and one Iranian company in connection with cyber operations targeting EU member states. The Council’s official press release does not specify the underlying operations. The designated entities are Integrity Technology Group and Anxun Information Technology, both based in China, and Emennet Pasargad, based in Iran.

According to an EU statement, Integrity Technology is assessed to have facilitated the compromise of over 65,000 devices across six member states. Anxun is assessed to have provided offensive cyber capabilities targeting critical infrastructure, and two of the company’s co-founders have been individually designated for their roles in these operations.

Emennet is assessed to have compromised digital advertising infrastructure to disseminate disinformation during the 2024 Paris Olympics.

The sanctions entail an asset freeze and a travel ban for the listed individuals. EU citizens and entities are additionally prohibited from making funds available to the designated companies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers target WhatsApp and Signal in global encrypted messaging attacks

Foreign state-backed hackers are targeting accounts on WhatsApp and Signal used by government officials, diplomats, military personnel, and other high-value individuals, according to a security alert issued by the Portuguese Security Intelligence Service (SIS).

Portuguese authorities described the activity as part of a global cyber-espionage campaign aimed at gaining access to sensitive communications and extracting privileged information from Portugal and allied countries. The advisory did not identify the origin of the suspected attackers.

The warning follows similar alerts from other European intelligence agencies. Earlier this week, Dutch authorities reported that hackers linked to Russia were conducting a global campaign targeting the messaging accounts of officials, military personnel, and journalists.

Security agencies say the attackers are not exploiting vulnerabilities in the messaging platforms themselves. Both WhatsApp and Signal rely on end-to-end encryption designed to protect the content of messages from interception.
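As a rough illustration of that end-to-end property (a toy sketch using the PyNaCl library, not Signal’s actual double-ratchet protocol), the snippet below shows why a relaying server cannot read message content: it never holds either party’s private key. This is also why the attacks described next target users and devices rather than the encryption itself.

```python
# Toy end-to-end encryption with PyNaCl (libsodium bindings).
# Illustrative only: real messengers use the double-ratchet protocol,
# but the core property is the same -- only endpoints hold the keys.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 09:00")

# A server relaying `ciphertext` sees only random-looking bytes.
# Only Bob (holding his private key) can recover the plaintext.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at 09:00"
```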

Instead, the campaign focuses on social engineering tactics that trick users into granting access to their accounts. According to the SIS report, attackers use phishing messages, malicious links, fake technical support requests, QR-code lures, and impersonation of trusted contacts.

The agency also warned that AI tools are increasingly being used to make such attacks more convincing. AI can help impersonate support staff, mimic familiar voices or identities, and conduct more realistic conversations through messages, phone calls, or video.

Once attackers gain access to an account, they may be able to read private messages, group chats, and shared files via WhatsApp and Signal. They can also impersonate the compromised user to launch additional phishing attacks targeting the victim’s contacts.

The alert echoes a previous warning issued by the Cybersecurity and Infrastructure Security Agency (CISA), which reported that encrypted messaging apps are increasingly being used as entry points for spyware and phishing campaigns targeting high-value individuals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic lawsuit gains Big Tech support in AI dispute

Several major US technology companies have backed Anthropic in its lawsuit challenging the US Department of Defence’s decision to label the AI company a national security ‘supply chain risk’.

Google, Amazon, Apple, and Microsoft have filed legal briefs supporting Anthropic’s attempt to overturn the designation issued by Defence Secretary Pete Hegseth. Anthropic argues the decision was retaliation after the company declined to allow its AI systems to be used for mass surveillance or autonomous weapons.

In court filings, the companies warned that the government’s action could have wider consequences for the technology sector. Microsoft said the decision could have ‘broad negative ramifications for the entire technology sector’.

Microsoft, which works closely with the US government and the Department of Defence, said it agreed with Anthropic’s position that AI systems should not be used to conduct domestic mass surveillance or enable autonomous machines to initiate warfare.

A joint amicus brief supporting Anthropic was also submitted by the Chamber of Progress, a technology policy organisation funded by companies including Google, Apple, Amazon and Nvidia. The group said it was concerned about the government penalising a company for its public statements.

The brief described the designation as ‘a potentially ruinous sanction’ for businesses and warned it could create a climate in which companies fear government retaliation for expressing views.

Anthropic’s lawsuit claims the government violated its free speech rights by retaliating against the company for comments made by its leadership. The dispute escalated after Anthropic declined to remove contractual restrictions preventing its AI models from being used for mass surveillance or autonomous weapons.

The company had previously introduced safeguards in government contracts to limit certain uses of its technology. Negotiations over revised contract language continued for several weeks before the disagreement became public.

Former military officials and technology policy advocates have also filed supporting briefs, warning that the decision could discourage companies from participating in national security projects if they fear retaliation for voicing concerns. The case is currently being heard in federal court in San Francisco.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic’s Pentagon dispute and military AI governance in 2026

On 28 February 2026, Anthropic’s Claude rose to No. 1 in Apple’s US App Store free rankings, overtaking OpenAI’s ChatGPT. The surge came shortly after OpenAI announced a partnership with the US Department of Defense (DoD), making its technology available to the US Army. The development prompted discussion among users and observers about whether concerns over military partnerships were influencing the shift to alternative AI tools.

Hours before the $200 million OpenAI-DoD deal was finalised, Anthropic was informed that its own potential deal with the Pentagon had fallen through. According to reporting, discussions broke down after Anthropic declined to grant the US government unrestricted control over its models, particularly for potential uses related to large-scale domestic surveillance.

Following the breakdown of negotiations, US officials reportedly designated Anthropic as a ‘supply chain risk to national security’. The decision effectively limited the company’s participation in certain defence-related projects and highlighted growing tensions between AI developers’ safety policies and government expectations regarding national security technologies.

The debate over military partnerships sparked internal and industry-wide discussion. Caitlin Kalinowski, who previously led AR glasses hardware at Meta before heading hardware at OpenAI, resigned soon after the US DoD deal, citing ethical concerns about the company’s involvement in military AI applications.

AI has driven recent technological innovation, with companies like Anduril and Palantir collaborating with the US DoD to deploy AI on and off the battlefield. The debate over AI’s role in military operations, surveillance, and security has intensified, especially as Middle East conflicts highlight its potential uses and risks.

Against this backdrop, the dispute between Anthropic and the Pentagon reflects a wider debate on how AI should be used in security and defence. Governments are increasingly relying on private tech companies to develop the systems that shape modern military capabilities, while those same companies are trying to set limits on how their technologies can be used.

As AI becomes more deeply integrated into security strategies around the world, the challenge may no longer be whether the technology will be used, but how it should be governed. The question is: who should ultimately decide where the limits of military AI lie?

Anthropic’s approach to military AI

Anthropic’s approach is closely tied to its concept of ‘constitutional AI’, a training method that embeds a set of principles directly into the model to guide how it responds. The principles are meant to reduce harmful outputs and steer the system away from unsafe or unethical uses. While such safeguards can improve reliability and trust, they can also limit how the technology is deployed in more sensitive contexts such as military operations.

Anthropic’s Constitution says its AI assistant should be ‘genuinely helpful’ to people and society, while avoiding unsafe, unethical, or deceptive actions. The document reflects the company’s broader effort to build safeguards into model deployment. In practice, Anthropic has set limits on certain applications of its technology, including uses related to large-scale surveillance or military operations.

Anthropic presents these safeguards as proof of its commitment to responsible AI. Reports indicate that concerns over unrestricted model access led to the breakdown in talks with the US DoD.

At the same time, Anthropic clarifies that its concerns are specific to certain uses of its technology. The company does not generally oppose cooperation with national security institutions. In a statement following the Pentagon’s designation of the company as a ‘supply chain risk to national security’, CEO Dario Amodei said, ‘Anthropic has much more in common with the US DoD than we have differences.’ He added that the company remains committed to ‘advancing US national security and defending the American people.’

The episode, therefore, highlights a nuanced position. Anthropic appears open to defence partnerships but seeks to maintain clearer limits on the deployment of its AI systems. The disagreement with the Pentagon ultimately reflects not a fundamental difference in goals, but rather different views on how far military institutions should be able to control and use advanced AI technologies.

Anthropic’s position illustrates a broader challenge facing governments and tech companies as AI becomes increasingly integrated into national security systems. While military and security institutions are eager to deploy advanced AI tools to support intelligence analysis, logistics, and operational planning, the companies developing these technologies are also seeking to establish safeguards for their use. Anthropic’s willingness to step back from a major defence partnership and challenge the Pentagon’s response underscores how some AI developers are trying to set limits on military uses of their systems.

Defence partnerships that shape the AI industry

While Anthropic has taken a cautious approach to military deployment of AI, other technology companies have pursued closer partnerships with defence institutions. One notable example is Palantir, the US data analytics firm co-founded by Peter Thiel that has longstanding relationships with numerous government agencies. Documents leaked in 2013 suggested that the company had contracts with at least 12 US government bodies. More recently, Palantir has expanded its defence offering through its Artificial Intelligence Platform (AIP), designed to support intelligence analysis and operational decision-making for military and security institutions.

Another prominent player is Anduril Industries, a US defence technology company focused on developing AI-enabled defence systems. The firm produces autonomous and semi-autonomous technologies, including unmanned aerial systems and surveillance platforms, which it supplies to the US DoD.

Shield AI, meanwhile, is developing autonomous flight software designed to operate in environments where GPS and communications may be unavailable. Its Hivemind AI platform powers drones that can navigate buildings and complex environments without human control. The company has worked with the US military to test these systems in training exercises and operational scenarios, including aircraft autonomy projects aimed at supporting fighter pilots.

These partnerships illustrate how the US government has increasingly embraced AI as a key pillar of national defence and future military operations. In many cases, the technologies are already being used in operational contexts. Palantir’s Gotham and AIP, for instance, have supported US military and intelligence operations by processing satellite imagery, drone footage, and intercepted communications to help analysts identify patterns and potential threats.

Other companies are contributing to defence capabilities through autonomous systems development and hardware integration. Anduril supplies the US DoD with AI-enabled surveillance, drone, and counter-air systems designed to detect and respond to potential threats. At the same time, OpenAI’s technology is increasingly being integrated into national security and defence projects through growing collaboration with US defence institutions.

Such developments show that AI is no longer a supporting tool but a fundamental part of military infrastructure, influencing how defence organisations process information and make decisions. As governments deepen their reliance on private-sector AI, the emerging interplay among innovation, operational effectiveness, and oversight will define the central debate on military AI adoption.

The potential benefits of military AI

The debate over Anthropic’s restrictions on military AI use highlights the reasons governments invest in such technologies: defence institutions are drawn to AI because it processes vast amounts of information much faster than human analysts. Military operations generate massive data streams from satellites, drones, sensors, and communication networks, and AI systems can analyse them in near real time.

In 2017, the US DoD launched Project Maven to apply machine learning to drone and satellite imagery, enabling analysts to identify objects, movements, and potential threats on the battlefield faster than with traditional manual methods.

AI is increasingly used in military logistics and operational planning. It helps commanders anticipate equipment failures, enables predictive maintenance, optimises supply chains, and improves field asset readiness.

Recent conflicts have shown that AI-driven tools can enhance military intelligence and planning. In Ukraine, for example, forces reportedly used software to analyse satellite imagery, drone footage, and battlefield data. Key benefits include more efficient target identification, real-time tracking of troop movements, and clearer battlefield awareness through the integration of multiple data sources.

AI-assisted analysis has been used in intelligence and targeting during the Gaza conflict. Israeli defence systems use AI tools to rapidly process large datasets for surveillance and intelligence operations. The tools help analysts identify potential militant infrastructure, track movements, and prioritise key intelligence, thus speeding up information processing for teams during periods of high operational activity.

More broadly, AI is transforming the way militaries coordinate across land, air, sea, and cyber domains. AI integrates data from diverse sources, equipping commanders to interpret complex operational situations and enabling faster, informed decision-making. The advances reinforce why many governments see AI as essential for future defence planning.

Ethical concerns and Anthropic’s limits on military AI

Despite the operational advantages of military AI, its growing role in national defence systems has raised ethical concerns. Critics warn that overreliance on AI for intelligence analysis, targeting, or operational planning could introduce risks if the systems produce inaccurate outputs or are deployed without sufficient human oversight. Even highly capable models can generate misleading or incomplete information, which in high-stakes military contexts could have serious consequences.

Concerns about the reliability of AI systems are also linked to the quality of the data they learn from. Many models still struggle to distinguish authentic information from synthetic or manipulated content online. As generative AI becomes more widespread, the risk that systems may absorb inaccurate or fabricated data increases, potentially affecting how these tools interpret intelligence or analyse complex operational environments.

Questions about autonomy have also become a major issue in discussions around military AI. As AI systems become increasingly capable of analysing battlefield data and identifying potential targets, debates have emerged over how much decision-making authority they should be given. Many experts argue that decisions involving the use of lethal force should remain under meaningful human control to prevent unintended consequences or misidentification of targets.

Another area of concern relates to the potential expansion of surveillance capabilities. AI systems can analyse satellite imagery, communications data, and online activity at a scale beyond the capacity of human analysts alone. While such tools may help intelligence agencies detect threats more efficiently, critics warn that they could also enable large-scale monitoring if deployed without clear legal and institutional safeguards.

It is within this ethical landscape that Anthropic has attempted to position itself as a more cautious actor in the AI industry. Through initiatives such as Claude’s Constitution and its broader emphasis on AI safety, the company argues that powerful AI systems should include safeguards that limit harmful or unethical uses. Anthropic’s reported refusal to grant the Pentagon unrestricted control over its models during negotiations reflects this approach.

The disagreement between Anthropic and the US DoD therefore highlights a broader tension in the development of military AI. Governments increasingly view AI as a strategic technology capable of strengthening defence and intelligence capabilities, while some developers seek to impose limits on how their systems are deployed. As AI becomes more deeply embedded in national security strategies, the question may no longer be whether these technologies will be used, but who should define the boundaries of their use.

Military AI and the limits of corporate control

Anthropic’s dispute with the Pentagon shows that the debate over military AI is no longer only about technological capability. Questions of speed, efficiency, and battlefield advantage now collide with concerns over surveillance, autonomy, human oversight, and corporate responsibility. Governments increasingly see AI as a strategic asset, while companies such as Anthropic are trying to draw boundaries around how far their systems can go once they enter defence environments.

Contrasting approaches across the industry make the tension even clearer. Palantir, Anduril, Shield AI, and OpenAI have moved closer to defence partnerships, reflecting a broader push to integrate advanced AI into military infrastructure. Anthropic, by comparison, has tried to keep one foot in national security cooperation while resisting uses it views as unsafe or unethical. A divide of that kind suggests that the future of military AI may be shaped as much by company policies as by government strategy.

The growing reliance on private firms to build national security technologies has made governance harder to define. Military institutions want flexibility, scale, and operational control, while AI developers increasingly face pressure to decide whether they are simply suppliers or active gatekeepers of how their models are deployed. Anthropic’s position does not rule out defence cooperation outright, but it does expose how fragile the relationship becomes when state priorities and corporate safeguards no longer align.

Military AI will continue to expand, whether through intelligence analysis, logistics, surveillance, or autonomous systems. Governance, however, remains the unresolved issue at the centre of that expansion. As AI becomes more deeply embedded in defence policy and military planning, should governments alone decide how far these systems can go, or should companies like Anthropic retain the power to set limits on their use?

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

GitHub malware campaign uses SEO tricks to steal browser data

Cybersecurity researchers have uncovered a malware campaign spreading through over 100 GitHub repositories disguised as free software tools. Hackers used SEO-heavy descriptions to make their fake repositories appear high in search results, close to legitimate software.

Users searching for popular programs were directed to counterfeit download pages. These pages offered ZIP files containing BoryptGrab, a malware designed to steal data from infected Windows systems. The files were disguised as cracked software, gaming cheats, or utility tools.

The malware collects sensitive information, including browser passwords, cookies, and cryptocurrency wallet details. It can access nine major browsers, including Chrome, Edge, Firefox, Opera, Brave, and Vivaldi, and bypass some security protections.

Certain variants also install additional tools that allow remote access and persistent control over infected machines. This enables hackers to run commands, maintain ongoing access, and steal further information without the user’s knowledge.

Trend Micro, the cybersecurity firm that reported the campaign, noted some code and logs suggest a possible Russian origin, though attribution is not confirmed. Experts warn that GitHub and search engine manipulation make this attack method especially dangerous.

Users are advised to download software only from trusted sources and to verify the authenticity of the repository. Organisations should follow security best practices such as software allowlisting, maintaining an accurate software inventory, and removing unauthorised applications to prevent similar attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The US releases national cyber strategy, prioritising offence and AI

President Donald Trump released his administration’s national cybersecurity strategy, outlining priorities across six policy areas: offensive and defensive cyber operations, federal network security, critical infrastructure protection, regulatory reform, emerging technology leadership, and workforce development. Trump also signed an executive order the same day, directing federal agencies to increase the prosecution of cybercrime and fraud.

The strategy document spans five pages of substantive text, with administration officials describing it as intentionally high-level. The White House stated that more detailed implementation guidance would follow.

The strategy’s six pillars include the following provisions:

Shaping adversary behaviour requires deploying US offensive and defensive cyber capabilities and incentivising private-sector disruption of adversary networks. It also states the administration will “counter the spread of the surveillance state and authoritarian technologies.”

Promoting regulation advocates for reducing compliance requirements characterised as ‘costly checklists’ and addresses liability frameworks — a priority also present in the prior administration’s approach.

Modernising federal networks involves adopting post-quantum cryptography, AI, zero-trust architecture, and reducing procurement barriers for technology vendors.

Securing critical infrastructure emphasises supply chain resilience and preference for domestically produced technology, alongside a role for state, local, tribal, and territorial governments.

Sustaining technological superiority focuses primarily on AI, quantum cryptography, data centre security, and privacy protection.

Building cyber talent commits to removing barriers among industry, academia, government, and the military to develop a skilled cybersecurity workforce. This pillar follows a period in which the administration reduced the number of federal cyber positions.

The accompanying executive order directs the attorney general to prioritise cybercrime prosecution, tasks agencies with reviewing tools to counter international criminal organisations, and assigns the Department of Homeland Security expanded training responsibilities. The strategy itself references cybercrime once.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!