Anthropic launches Claude Platform on AWS with managed AI agent tools

Anthropic has made Claude Platform on AWS generally available, giving AWS customers access to Claude Platform features through AWS authentication, billing and infrastructure integrations.

The platform includes Claude Managed Agents, code execution, web search, web fetch, prompt caching, batch processing, citations, support for the Files API, and support for Skills and MCP connectors. Anthropic said new Claude models and beta tools will become available on AWS at the same time they launch on the native Claude API.

Authentication runs through AWS Identity and Access Management, while audit logging is handled through AWS CloudTrail and billing through a single AWS invoice. Anthropic said the service is designed for organisations seeking native Claude Platform functionality while staying within existing AWS credentials, permissions and operational workflows.

The company also clarified the distinction between Claude Platform on AWS and Claude on Amazon Bedrock. Under the new platform, Anthropic operates the service and data is processed outside the AWS boundary.

By contrast, Claude on Amazon Bedrock keeps AWS as the data processor and operates within the AWS boundary, making it more suitable for customers with strict regional data residency requirements or those needing data processed exclusively within AWS infrastructure.

Why does it matter?

The launch shows how competition between major AI providers is shifting towards enterprise deployment, cloud integration and agent-based automation. For organisations, the choice is no longer only about model performance, but also about where data is processed, how access is controlled, how audit logs are handled and whether AI agents can be deployed within existing cloud governance systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI and the future of cybersecurity

Agentic AI is rapidly moving from experimentation to deployment on a scale larger than ever before. As a result, these systems are being granted far greater autonomy to perform tasks with limited human input, a shift that enterprises have been quick to embrace.

Companies such as Microsoft, Google, Anthropic, and OpenAI are increasingly developing agentic AI systems capable of automating vulnerability detection, incident response, code analysis, and other security tasks traditionally handled by human teams.

The appeal of using agentic AI as a first line of defence is palpable, as cybersecurity teams face mounting pressure from the growing volume of attacks. According to the Microsoft Digital Defense Report 2025, the company now detects more than 600 million cyberattacks daily, ranging from ransomware and phishing campaigns to identity attacks. The International Monetary Fund has also warned that cyber incidents have more than doubled since the COVID-19 pandemic, potentially triggering institutional failures and causing enormous financial losses.

Compounding the problem, ransomware groups such as Conti and LockBit, along with state-linked actors such as Salt Typhoon, have shown increased activity from 2024 through early 2026, targeting critical infrastructure and global communications networks.

In such circumstances, fully embracing agentic AI may seem like an ideal answer to the cybersecurity challenges looming on the horizon. Systems capable of autonomously detecting threats, analysing vulnerabilities, and accelerating response times could significantly strengthen cyber resilience.

Yet the same autonomy that makes these systems attractive to defenders could also be exploited by malicious actors. If agentic AI becomes a defining feature of cyber defence, policymakers and companies may soon face a more difficult question: how can they maximise its benefits without creating an entirely new layer of cyber risk?

Why cybersecurity is turning to agentic AI

The growing interest in agentic AI is not simply driven by the rise in cyber threats. It is also a response to the operational limitations of modern security teams, which are often overwhelmed by repetitive tasks that consume time and resources.

Security analysts routinely handle phishing alerts, identity verification requests, vulnerability assessments, patch management, and incident prioritisation — processes that can become difficult to manage at scale. Many of these tasks require speed rather than strategic decision-making, creating a natural opening for AI systems to operate with greater autonomy.

Microsoft has aggressively moved into this space. In March 2025, the company introduced Security Copilot agents designed to autonomously handle phishing triage, data security investigations, and identity management. Rather than replacing human analysts, Microsoft positioned the tools to reduce repetitive workloads and enable security teams to focus on more complex threats.

Google has approached the issue through vulnerability research. Through Project Naptime, the company demonstrated how AI systems could replicate parts of the workflow traditionally handled by human security researchers by identifying vulnerabilities, testing hypotheses, and reproducing findings.

Anthropic introduced another layer of complexity through Claude Mythos, a model built for high-risk cybersecurity tasks. While the company presented the model as a controlled release for defensive purposes, the announcement also highlighted how advanced cyber capabilities are becoming increasingly embedded in frontier AI systems.

Meanwhile, OpenAI has expanded partnerships with cybersecurity organisations and broadened access to specialised tools for defenders, signalling that major AI firms increasingly view cybersecurity as one of the most commercially viable applications for autonomous systems.

Together, these developments show that agentic AI is gradually becoming embedded in the cybersecurity infrastructure. For many companies, the question is no longer whether autonomous systems can support cyber defence, but how much responsibility they should be given.

When agentic AI tools become offensive weapons

The same capabilities that make agentic AI valuable to defenders also make it attractive to malicious actors. Systems designed to identify vulnerabilities, analyse code, automate workflows, and accelerate decision-making can be repurposed for offensive cyber operations.

Anthropic offered one of the clearest examples of that risk when it disclosed that malicious actors had used Claude in cyber campaigns. The company said attackers were not simply using the model for basic assistance, but were integrating it into broader operational workflows. The incident showed how agentic AI can move cyber misuse beyond advice and into execution.

The risk extends beyond large-scale cyber operations. Agentic AI systems could make phishing campaigns more scalable, automate reconnaissance, accelerate vulnerability discovery, and reduce the technical expertise needed to launch certain attacks. Tasks that once required specialist teams could become easier to coordinate through autonomous systems.

Security researchers have repeatedly warned that generative AI is already making social engineering more convincing through realistic phishing emails, cloned voices, and synthetic identities. More autonomous systems could further push those risks by combining content generation with independent action.

The concern is not that agentic AI will replace human hackers, but that cybercrime could become faster, cheaper, and more scalable, mirroring the same efficiencies that organisations hope to achieve through AI-powered defence.

The agentic AI governance gap

The governance challenge surrounding agentic AI is no longer theoretical. As autonomous systems gain access to internal networks, cloud infrastructure, code repositories, and sensitive datasets, companies and regulators are being forced to confront risks that existing cybersecurity frameworks were not designed to manage.

Policymakers are starting to respond. In February 2026, the US National Institute of Standards and Technology (NIST) launched its AI Agent Standards Initiative, focused on identity verification and authentication frameworks for AI agents operating across digital environments. The aim is simple but important: organisations need to know which agents can be trusted, what they are allowed to do, and how their actions can be traced.

Governments are also becoming more cautious about deployment risks. In May 2026, the Cybersecurity and Infrastructure Security Agency (CISA) joined cybersecurity agencies from Australia, Canada, New Zealand, and the United Kingdom in issuing guidance on the secure adoption of agentic AI services. The warning was clear: autonomous systems become more dangerous when they are connected to sensitive infrastructure, external tools, and internal permissions.

The private sector is adjusting as well. Companies are increasingly discussing safeguards such as restricted permissions, audit logs, human approval checkpoints, and sandboxed environments to limit the degree of autonomy granted to AI agents.

The questions facing businesses are becoming practical. Should an AI agent be allowed to patch vulnerabilities without approval? Can it disable accounts, quarantine systems, or modify infrastructure independently? Who is held accountable when an autonomous system makes the wrong decision?
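The approval checkpoints and restricted permissions discussed above can be illustrated with a minimal sketch. The action names and risk tiers below are hypothetical examples for illustration, not drawn from any vendor's product:

```python
# A minimal sketch of a human-approval checkpoint for agent actions.
# Low-risk tasks run autonomously; high-risk ones wait for sign-off.

LOW_RISK = {"scan_logs", "flag_phishing_email", "triage_alert"}
HIGH_RISK = {"patch_vulnerability", "disable_account", "quarantine_system"}

def execute_agent_action(action: str, approved_by_human: bool = False) -> str:
    """Gate an agent's action on its risk tier."""
    if action in LOW_RISK:
        return f"executed: {action}"
    if action in HIGH_RISK:
        if approved_by_human:
            return f"executed with approval: {action}"
        return f"pending human approval: {action}"
    raise ValueError(f"unknown action: {action}")
```

In practice the risk tiers would be set by policy, and every call would be written to an audit log, but the core design choice is the same: autonomy is a per-action permission, not a blanket grant.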

Agentic AI may become one of cybersecurity’s most effective defensive tools. Its success, however, will depend on whether governance frameworks evolve quickly enough to keep pace with the technology itself.

How companies are building guardrails around agentic AI

As concerns around autonomous cyber systems grow, companies are increasingly experimenting with safeguards designed to prevent agentic AI from becoming an uncontrolled risk. Rather than granting unrestricted access, many organisations are limiting what AI agents can see, what systems they can interact with, and what actions they can execute without human approval.

Anthropic has restricted access to Claude Mythos over concerns about offensive misuse, while OpenAI has recently expanded its Trusted Access for Cyber programme to provide vetted defenders with broader access to advanced cyber tools. Both approaches reflect a growing consensus that powerful cyber capabilities may require tiered access rather than unrestricted deployment.

The broader industry is moving in a similar direction. CrowdStrike has increasingly integrated AI-driven automation into threat intelligence and incident response workflows while maintaining human oversight for critical decisions. Palo Alto Networks has also expanded its AI-powered security automation tools designed to reduce response times without fully removing human analysts from the decision-making process.

Cloud providers are also becoming more cautious about autonomous access. Amazon Web Services, Google Cloud, and Microsoft Azure have increasingly emphasised zero-trust security models, role-based permissions, and segmented access controls as enterprises deploy more automated tools across sensitive infrastructure.

Meanwhile, sectors such as finance, healthcare, and critical infrastructure remain particularly cautious about fully autonomous deployment due to the potential consequences of false positives, accidental shutdowns, or disruptions to essential services.

As a result, security teams are increasingly discussing safeguards such as audit logs, sandboxed environments, role-based permissions, staged deployments, and human approval checkpoints to balance speed with accountability. For now, many companies seem ready to embrace agentic AI, but not without keeping one hand on the emergency brake.

The future of cybersecurity may be agentic

Agentic AI is unlikely to remain a niche experiment for long. The scale of modern cyber threats, combined with the mounting pressure on security teams, means organisations will continue to look for faster and more scalable defensive tools.

That shift could significantly improve cybersecurity resilience. Autonomous systems may help organisations detect threats earlier, reduce response times, address workforce shortages, and manage the growing volume of attacks that human teams increasingly struggle to handle alone.

At the same time, the technology’s long-term success will depend as much on restraint as on innovation. Without clear governance frameworks, operational safeguards, and human oversight, the same tools designed to strengthen cyber defence could introduce entirely new vulnerabilities.

The future of cybersecurity may increasingly belong to agentic AI. Whether that future becomes safer or more volatile may depend on how responsibly governments, companies, and security teams manage the transition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!


US EDA launches AI workforce training programme

The US Economic Development Administration has announced approximately $25 million in funding for a new AI Upskill Accelerator Pilot Program to support AI workforce training.

The programme will fund industry-driven partnerships that design and implement AI training models for workers and businesses in sectors considered important to regional economies. EDA says the initiative is intended to support workforce development approaches that can scale, adapt and become self-sustaining as AI technologies continue to evolve.

The funding opportunity links the programme to the Trump administration’s 2025 Artificial Intelligence Action Plan, which includes goals to accelerate AI development, support adoption across industries and strengthen US leadership in the technology. EDA says the programme is part of efforts to empower American workers to use AI tools and support industries tied to regional growth.

Deputy Assistant Secretary and Chief Operating Officer Ben Page said AI is becoming ‘a core driver of productivity and growth across industries’ and that workers need AI skills so regions can attract investment, adopt advanced technologies and sustain long-term economic growth.

The pilot will support workforce development in an emerging technology area while helping businesses and workers build the skills needed to use AI in the workplace. Applications for the programme are open until 10 July 2026.

Why does it matter?

The programme shows how AI policy is increasingly being linked to regional economic development and workforce readiness, not only research or infrastructure. By funding industry-driven training models, the EDA is trying to prepare workers and local economies for AI adoption while helping businesses close skills gaps that could affect productivity, investment and competitiveness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Supply chains move toward adaptive AI-driven networks

Supply chains are increasingly being redesigned to respond more dynamically to disruption, risk and changing market conditions, as companies explore AI-native systems that can support planning, decision-making and real-time adaptation.

Writing for the World Economic Forum, Avathon CEO Pervinder Johar argues that traditional supply-chain software is struggling to cope with a more volatile operating environment because many systems still rely on rigid rules, static configurations and manual workflows.

The article says the emerging model places greater emphasis on knowledge rather than raw data, combining context and reasoning across suppliers, logistics routes, energy markets and policy environments. AI-native systems are presented as a way to support continuous learning, improve disruption forecasting and help organisations assess alternative responses before problems escalate.

Physical AI is also described as part of the shift, embedding intelligence more directly into operational infrastructure. According to the article, this could allow logistics systems, equipment and connected assets to sense, compute and coordinate responses more quickly across supply-chain networks.

As automation expands, human roles are expected to move towards strategic oversight. Supply-chain professionals may spend less time managing dashboards and exceptions, and more time setting priorities, weighing trade-offs and guiding AI agents through intent expressed in natural language.

The broader argument is that supply-chain management is moving from reactive workflows towards more adaptive coordination, where systems can anticipate disruption, assess options and support decisions across organisations and partners.

Why does it matter?

Supply chains are facing persistent disruption from geopolitical tensions, climate risks, logistics bottlenecks and changing market conditions. If AI-enabled systems can improve forecasting, coordination and response, they could help companies build more resilient operations. However, the shift also raises governance questions around accountability, human oversight, data quality and reliance on automated decision-making across critical trade and logistics networks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s IRCC sets AI strategy for immigration services

Immigration, Refugees and Citizenship Canada has released its first AI Strategy, outlining how the department plans to use AI across immigration, citizenship, refugee, passport and settlement services while maintaining human oversight, privacy protection and accountability.

The strategy aligns with Canada’s AI Strategy for the Federal Public Service 2025-2027 and frames AI as a tool to improve service delivery, reduce administrative burdens, strengthen programme integrity and respond to fraud and cybersecurity threats. IRCC says its approach is based on responsible adoption, governance, workforce readiness, transparency and public engagement.

The department says it has used advanced analytics and machine learning since 2018 to support application triage, workload distribution and risk detection. It says machine learning can help identify straightforward, low-risk files for expedited officer review, while outcomes remain subject to officer verification.

IRCC states that it does not use autonomous AI agents or intelligent automation systems that can refuse client applications. It says systems that learn and adapt independently are generally unsuitable for administrative decision-making because their logic can be difficult to explain or reproduce.

The strategy identifies several areas of interest, including client service, fraud detection, document anomaly detection, settlement support, data analysis, accessibility and internal knowledge management. IRCC is also experimenting with AI tools for tasks such as document fraud detection, anomaly detection and support for administrative processes.

Privacy is presented as a central guardrail. IRCC says AI systems must use only the minimum personal information necessary for specific, justified purposes, and must include privacy assessments, mitigation measures, testing, auditing and Canadian-controlled environments for sensitive information. The department also says it will avoid black-box AI models for application decisions and keep AI systems explainable, supervised, secure and regularly tested.

The strategy sets five implementation priorities: establishing an AI Centre of Expertise, strengthening governance, building an AI-ready workforce, accelerating experimentation and developing an engagement strategy with employees, clients, vulnerable groups and partner organisations. IRCC describes the strategy as a living document that will evolve with domestic and international AI policy developments.

Why does it matter?

Immigration decisions can have life-changing consequences, making AI use in this field especially sensitive. IRCC’s strategy shows how governments are trying to use AI to improve efficiency and detect risks while drawing limits around autonomous decision-making, black-box models and the handling of personal information. The real test will be whether safeguards around human oversight, explainability, privacy and bias are strong enough as AI becomes more embedded in public administration.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Singapore cooperation with Japan targets AI in patent examination

The Intellectual Property Office of Singapore and the Japan Patent Office have announced a new cooperation initiative on the use of AI in patent substantive examination, as patent offices adapt to rapid technological change.

The initiative was announced after a bilateral meeting in Singapore between IPOS Chief Executive Tan Kong Hwee and JPO Commissioner Yasuyuki Kasai. It builds on a Memorandum of Cooperation signed in Tokyo last November.

Under the initiative, IPOS and JPO will launch a bilateral patent examiner exchange programme and hold regular technical exchanges on the use of AI in patent examination. The two offices said the cooperation is intended to strengthen capabilities, share best practices and develop robust processes for high-quality and trusted patent examination.

Tan said AI is reshaping innovation and work processes, making it necessary for IP offices to evolve while maintaining examination quality and trust. Kasai said the cooperation would bring together the experience and expertise of both offices and support innovation in both countries.

The cooperation will also cover patent search and examination quality management, benchmarking of examination practices, IT infrastructure development, operational management and IP policy exchanges. Both offices will also coordinate initiatives to support enterprises, including SMEs, and strengthen trade and IP flows between Singapore and Japan.

IPOS and JPO said the partnership reflects their shared commitment to addressing emerging challenges in the intellectual property landscape and keeping innovation ecosystems trusted, efficient and future-ready.

Why does it matter?

Patent offices are increasingly facing pressure to handle more complex applications while maintaining examination quality, consistency and trust. Cooperation between Singapore and Japan on AI-assisted examination shows how intellectual property authorities are beginning to adapt their own administrative systems to AI, not only to regulate AI-related inventions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s NCSC urges caution on using AI to detect software vulnerabilities

The UK National Cyber Security Centre has warned organisations not to rush into using AI models to find software vulnerabilities without first considering security, legal, operational, and resourcing risks.

In guidance signed by Ruth C, Head of Vulnerability Management Group at the NCSC, the agency says organisations may feel pressure to use new AI models for vulnerability discovery, but should first ask what they are trying to achieve and whether AI is the best way to improve security.

The NCSC stresses that finding vulnerabilities does not automatically improve an organisation’s security and could make it worse if teams lack a process to manage, prioritise, and fix the issues that AI tools identify. It says basic cyber hygiene, including patching known vulnerabilities and controlling unauthorised access, is still more important for most organisations than focusing on zero-days.

The guidance also urges organisations to prioritise exploitable vulnerabilities rather than simply counting how many issues have been found. It notes that more than 40,000 vulnerabilities were assigned CVEs in 2025, while CISA’s Known Exploited Vulnerabilities catalogue tracked about 400 newly exploited vulnerabilities and around 40 that were zero-days when first exploited.
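The figures cited above make the NCSC's point about prioritisation concrete, as a back-of-the-envelope calculation shows:

```python
# Scale check using the figures reported in the NCSC guidance.
total_cves_2025 = 40_000   # CVEs assigned in 2025 (reported as "more than 40,000")
known_exploited = 400      # newly tracked in CISA's KEV catalogue
zero_days = 40             # zero-days when first exploited

exploited_share = known_exploited / total_cves_2025 * 100
zero_day_share = zero_days / total_cves_2025 * 100
print(f"{exploited_share:.0f}% known exploited, {zero_day_share:.1f}% zero-days")
# → 1% known exploited, 0.1% zero-days
```

In other words, only around one in a hundred catalogued vulnerabilities saw known exploitation, which is why counting raw findings says little about actual risk.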

The NCSC highlights several risks associated with using AI for vulnerability discovery, including information leakage, infrastructure security, sandboxing, production-environment access, permissions granted to large language models, data retention policies, and legal compliance. It also advises organisations using hosted models to consider the physical location and legal jurisdictions that apply to them.

The guidance recommends starting with the external attack surface and verifying results through both AI and human review. It says keeping pace with frontier AI cyber developments will almost certainly be critical to cyber resilience over the next decade, but adds that organisations should invest in people as well as tools, stating that AI models accelerate the skills of cybersecurity staff rather than replacing them.

The NCSC also says organisations should understand how everything they develop or use is patched, with good asset management and dependency management described as crucial foundations for cyber resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity sector revenue reaches £14.7 billion in UK

The UK cybersecurity sector generated £14.7 billion in annual revenue and £9.1 billion in gross value added, according to the government’s Cyber Security Sectoral Analysis 2026.

The report, commissioned by the Department for Science, Innovation and Technology and produced by Ipsos and Perspective Economics, identifies 2,603 firms active in the UK cybersecurity market. That marks a 20% increase from the previous report, which identified 2,165 firms.

Employment in the sector reached about 69,600 full-time equivalent roles, an increase of around 2,300 jobs, or 3%, over the past year. The report says this is the lowest recorded employment growth rate since the series began in 2018, suggesting a softening in workforce growth.

Revenue rose by around 11% from last year’s estimate of £13.2 billion, while gross value added increased by 17%. The report also estimates GVA per employee at £131,200, up from £116,200, suggesting higher productivity within the cybersecurity ecosystem.
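The productivity figure is broadly consistent with the headline numbers, as a quick check shows (the small gap from the reported £131,200 reflects rounding in the published totals):

```python
# Arithmetic check of the report's GVA-per-employee estimate.
gva = 9.1e9          # gross value added, £9.1 billion
employees = 69_600   # full-time equivalent roles

gva_per_employee = gva / employees
print(f"£{gva_per_employee:,.0f} per employee")  # → £130,747 per employee
```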

The analysis also points to growth in AI security and software security. It estimates that 111 firms registered and active in the UK now explicitly offer cybersecurity for AI systems as a product or service, up 68% from the previous baseline. Of those, 32 are specialist providers focused mainly or exclusively on AI security, while 79 offer AI security as part of a broader portfolio.

Software security is also expanding across the market. The report estimates that 1,141 firms provide software security services, an increase of 181 firms, or 19%, from the previous baseline. Nearly half of all UK cybersecurity providers appear to be involved in software security provision, with application security, cloud and container security, secure development, supply chain security, and DevSecOps highlighted as key areas.

Investment remains more subdued. Dedicated cybersecurity firms raised £184 million across 47 deals in 2025, down 11% from £206 million across 59 deals in 2024. The report says investors highlighted AI security and post-quantum cryptography as key themes, while also noting procurement barriers and limited UK growth-stage capital as ongoing concerns.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU weighs social media age rules to protect children

The European Commission has signalled that it may propose EU-level rules on delaying children’s access to social media, as concerns grow over addictive platform design, harmful content and AI-enabled risks for minors.

In a keynote address at the European Summit on Artificial Intelligence and Children in Copenhagen, European Commission President Ursula von der Leyen said the EU must consider whether young people should be given more time before using social media. She said the question was not whether young people should have access to social media, but ‘whether social media should have access to young people’.

Von der Leyen said almost all EU member states had called for an assessment of whether a minimum age is needed, while Denmark and nine other member states want to introduce one. She added that the Commission’s expert panel on child safety online is advising on the issue, and that a legal proposal could follow this summer, depending on its findings.

Von der Leyen linked the debate to wider concerns about platform business models. She argued that children’s attention was being treated as a commodity through addictive design, advertising, algorithmic recommendation systems and content that can harm mental health. She also pointed to risks linked to AI-generated sexualised images and child sexual abuse material.

The Commission President cited enforcement under the Digital Services Act, including actions involving TikTok, Meta and X, as well as investigations into platforms over whether children are being drawn into harmful content. She said the EU had created strong tools through the Digital Services Act and the Digital Markets Act, and that platforms breaking the rules would be held accountable.

Von der Leyen said that any age restriction model would depend on reliable age verification. She said the EU had developed an open-source age verification app that would soon be available, including a rollout in Denmark by summer, and that the Union was working with member states to integrate it into digital wallets.

The speech also framed child online safety as a matter of platform responsibility, not just parental control. Von der Leyen said social media companies should be responsible for product safety in the same way other industries are, adding that ‘safety by design’ protections should be strengthened and expanded. She also pointed to the forthcoming Digital Fairness Act, which is expected to address addictive and harmful design practices.

Why does it matter?

The speech suggests that the EU child online safety policy may be moving from platform accountability after harm occurs towards more structural controls over access, design and age verification. A possible social media delay would mark a major shift in how the EU approaches children’s participation online, raising questions about privacy-preserving age checks, children’s rights, parental responsibility, platform duties and the balance between protection and digital inclusion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New IRIS report links AI narratives to civic action

A report by International Resource for Impact and Storytelling examines how organisations worldwide are adapting to AI and algorithm-driven platforms. It focuses on how technology and storytelling are being used to support democracy and counter harmful narratives.

The study draws on insights from 10 organisations, identifying key approaches such as co-opting technology, countering surveillance and disinformation, and innovating in storytelling. These strategies aim to reshape narratives and challenge authoritarian pressures.

Examples include campaigns addressing digital surveillance, projects using journalism to amplify marginalised voices, and creative approaches to civic engagement. The report also highlights the role of artists and storytellers in influencing how AI is understood.

The findings highlight the growing importance of narrative and culture in the digital landscape, as organisations experiment with new forms of communication and resistance. The research reflects global efforts to align AI with democratic values.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!