AI infrastructure raises critical questions for global technology development

AI is increasingly viewed as a key global infrastructure. The CEO of Nvidia argues that AI should not be seen merely as software but as a foundational technology shaping economies and industries. As a result, companies and governments worldwide are expected to build and rely on AI systems increasingly.

At the same time, AI infrastructure expansion is still in its early stages. Nvidia’s CEO notes that although hundreds of billions of dollars have already been invested in data centres and computing systems, the broader AI buildout will likely require trillions of dollars in additional investment.

Moreover, governance and access decisions will play a critical role. According to Nvidia’s CEO, choices about how quickly AI is developed, who can access it, and how it is regulated will ultimately shape the technology’s long-term impact on society.

In addition, AI differs fundamentally from traditional software. While conventional software follows prewritten instructions, AI systems generate responses dynamically based on context. Consequently, AI can produce new outputs rather than simply retrieving stored commands.

Furthermore, AI development depends on multiple interconnected technological layers. The CEO of Nvidia describes a five-layer stack composed of energy, chips, infrastructure, models, and applications. Each layer supports the next, meaning AI services rely on everything from electricity supply to advanced computing hardware.

Finally, AI may also reshape the labour market. Nvidia’s CEO suggests that as AI increases productivity, companies could expand operations and create new jobs, particularly in infrastructure development and technical fields.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU launches AI platform to detect food fraud and contamination

Food safety monitoring across the EU is receiving a technological upgrade with the launch of TraceMap, a new AI platform designed to detect food fraud, contamination and disease outbreaks more quickly.

The European Commission introduced the tool as part of efforts to strengthen consumer protection and improve oversight of the agri-food supply chain.

TraceMap helps authorities analyse large volumes of data related to food production, distribution and trade. By identifying connections between operators, shipments and supply chains, the system allows investigators to spot suspicious activity and potential safety risks earlier.

National authorities in the EU member states can already access the platform, enabling them to conduct more targeted inspections and investigations without requiring additional resources.

The platform draws on data from existing EU systems such as the Rapid Alert System for Food and Feed (RASFF) and the Trade Control and Expert System (TRACES). Using AI to structure and interpret information, TraceMap can reveal patterns in production and trade flows that may indicate contamination, fraud, or other irregularities in the food supply chain.

Early testing of the platform has already demonstrated its practical value. A pilot version of TraceMap helped authorities identify and recall infant milk formula produced with contaminated ARA oil originating from China.

European officials say the system will strengthen the EU’s ability to respond rapidly to food safety risks while improving monitoring of both domestic production and imported products.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The US releases national cyber strategy, prioritising offense and AI

President Donald Trump released his administration’s national cybersecurity strategy, outlining priorities across six policy areas: offensive and defensive cyber operations, federal network security, critical infrastructure protection, regulatory reform, emerging technology leadership, and workforce development. Trump also signed an executive order the same day, directing federal agencies to increase the prosecution of cybercrime and fraud.

The strategy document spans five pages of substantive text, with administration officials describing it as intentionally high-level. The White House stated that more detailed implementation guidance would follow.

The strategy’s six pillars include the following provisions:

Shaping adversary behaviour requires deploying US offensive and defensive cyber capabilities and incentivising private-sector disruption of adversary networks. It also states the administration will “counter the spread of the surveillance state and authoritarian technologies.”

Promoting regulation advocates for reducing compliance requirements characterised as ‘costly checklists’ and addresses liability frameworks — a priority also present in the prior administration’s approach.

Modernising federal networks involves adopting post-quantum cryptography, AI, zero-trust architecture, and reducing procurement barriers for technology vendors.

Securing critical infrastructure emphasises supply chain resilience and preference for domestically produced technology, alongside a role for state, local, tribal, and territorial governments.

Sustaining technological superiority focuses primarily on AI, quantum cryptography, data centre security, and privacy protection.

Building cyber talent commits to removing barriers among industry, academia, government, and the military to develop a skilled cybersecurity workforce. This pillar follows a period in which the administration reduced the number of federal cyber positions.

The accompanying executive order directs the attorney general to prioritise cybercrime prosecution, tasks agencies with reviewing tools to counter international criminal organisations, and assigns the Department of Homeland Security expanded training responsibilities. The strategy itself references cybercrime once.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chinese tech hubs promote OpenClaw AI agent

Technology hubs in China are promoting the OpenClaw AI agent as part of new local industry initiatives. Officials in China say the open source tool can automate tasks such as email management and travel booking.

Cities including Shenzhen, Wuxi and Hefei are drafting policies to build an ecosystem around OpenClaw. Authorities in China are offering subsidies, computing resources and office support to encourage AI-driven one-person companies.

OpenClaw has grown rapidly since its release and has become one of the fastest-expanding projects on GitHub. Technology groups say the tool could allow individuals to operate businesses with far fewer employees.

Regulators have also warned about security and data protection risks linked to AI agents. Draft rules in China propose limits on access to sensitive data and stronger oversight of cross-border information flows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Promptfoo joins OpenAI to secure AI deployments

OpenAI is acquiring Promptfoo, a platform designed to help enterprises identify and remediate vulnerabilities in AI systems during development. Once finalised, Promptfoo’s technology will be integrated into OpenAI Frontier, OpenAI’s platform for building and managing AI coworkers.

Promptfoo, led by Ian Webster and Michael D’Angelo, provides tools trusted by over a quarter of Fortune 500 companies. Its open-source CLI and library support evaluation and red-teaming of large language model applications.

The acquisition allows OpenAI to enhance both open-source initiatives and enterprise capabilities within Frontier.

Integration will introduce native security and evaluation features into Frontier. Enterprises will gain automated tools to detect risks such as prompt injections, jailbreaks, data leaks, tool misuse, and out-of-policy agent behaviour.

Security testing will be built into development workflows to catch issues early and support safe AI deployment.

Oversight and accountability features will also be strengthened. Integrated reporting and traceability will allow organisations to document testing, monitor changes over time, and meet governance, risk, and compliance requirements.

The acquisition is expected to expand OpenAI’s ability to deliver secure and reliable AI for enterprise applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

US government faces lawsuits over Anthropic AI move

Anthropic has launched two lawsuits against the US Department of Defence, disputing its recent designation of the AI firm as a ‘supply chain risk.’ The company claims the move is unlawful and infringes on its First Amendment rights.

The company argues that the government is punishing it for refusing to allow the military to use its AI for domestic surveillance or for fully autonomous weapons.

The lawsuits, filed in California and Washington, DC courts, follow the Pentagon’s unprecedented use of the supply chain risk tool against a US company. The designation requires other government contractors to sever ties with Anthropic, posing a serious threat to its business operations.

The company maintains it remains committed to supporting national security applications of its AI.

The Department of Defence has used anthropic’s AI model Claude in operations targeting Iran. The company says it has worked with the DoD on system adaptations and seeks to continue negotiations while protecting its business and partners.

The firm claims government actions cause harm, though CEO Dario Amodei said the designation’s impact is limited. Anthropic insists judicial review is a necessary step to defend its business and ensure the responsible deployment of its technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada warns about AI-generated scams targeting citizens online

Authorities in Canada have issued a warning about the growing use of AI in impersonation scams targeting citizens. Fraudsters increasingly deploy advanced tools capable of mimicking politicians, government officials and other public figures with convincing realism.

Deepfake videos, synthetic audio and AI-generated messages allow scammers to create convincing communications that appear to come from trusted authorities.

Such tactics are often used to persuade victims to send money, reveal personal information, install malicious software or engage with fraudulent investment offers.

Officials also warn about fake government websites created with AI-assisted tools that imitate official pages by copying national symbols and similar domain names. Suspicious websites often use unusual web addresses, extra characters, or unfamiliar domain endings to mislead visitors.

Authorities advise Canadians to verify unexpected messages through official channels rather than clicking links or responding immediately.

Suspected impersonation attempts should be reported to the Competition Bureau or the Canadian Anti-Fraud Centre.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Astronauts test AI-assisted health checks in orbit

AI is playing an increasingly important role in space medicine as astronauts aboard the International Space Station test new technologies designed to support autonomous health monitoring. The experiment combines augmented reality with an AI system that analyses ultrasound scans in orbit.

NASA astronaut Jack Hathaway and European Space Agency astronaut Sophie Adenot carried out guided ultrasound examinations using the EchoFinder-2 biomedical device.

Augmented-reality instructions helped the astronauts position the scanner correctly while AI analysed the images and confirmed the identification of internal organs.

The developers of the system aim to reduce reliance on medical specialists on Earth. Future crews travelling farther into space may face communication delays, making real-time guidance from ground teams more difficult.

Reliable AI-supported diagnostics could therefore become a key tool for long-duration missions, enabling astronauts to perform complex medical checks independently during journeys to the Moon, Mars, and beyond.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Blockchain network Tron joins Agentic AI Foundation to advance AI infrastructure

Tron has joined the Linux Foundation’s Agentic AI Foundation (AAIF) as a governing member to support the development of AI agent infrastructure. The network aims to enable collaboration and interoperability among systems that efficiently manage high-volume, low-value transactions.

Founder Justin Sun highlighted Tron’s speed, scalability, and low fees as key advantages for AI-agent use cases. He noted that as AI agents move to mainstream machine-to-machine commerce, transaction volumes could rise, increasing demand for robust blockchain networks.

The AAIF encourages open-source agentic AI development and establishes standards for governance, safety, and interoperability. Tron joins major members like Circle and JPMorgan while building tools and infrastructure to support AI, including the Bank of AI with AINFT.

Tron currently leads in blockchain revenue, with data showing strong performance over 24 hours, seven days, and 30 days. Sun confirmed that AI activity is contributing to this growth, reflecting the rapid adoption and scaling of agentic AI on the network.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Qualcomm and NEURA Robotics partner to accelerate physical AI and cognitive robotics

NEURA Robotics and Qualcomm have formed a long-term strategic collaboration to advance physical AI and next-generation robotics platforms.

A partnership that aims to bring intelligent robots into real-world environments more rapidly by combining advanced AI processors with full-stack robotic systems.

The cooperation focuses on developing ‘Brain + Nervous System’ reference architectures that integrate high-level cognition, such as perception, reasoning and planning, with ultra-low-latency control systems.

Qualcomm’s robotics processors, including the Dragonwing IQ10 Series, will provide AI compute and connectivity, while NEURA contributes robotic hardware platforms and embodied AI software.

Both companies intend to support deployment across multiple robotic forms, including robotic arms, mobile robots, service machines and humanoid platforms.

NEURA’s cloud environment, Neuraverse, will serve as a shared platform for simulation, training and lifecycle management of robotic intelligence, allowing innovations developed by one robot to spread across entire fleets.

The collaboration also aims to establish a global developer ecosystem for robotics applications. Standardised runtime environments and deployment interfaces are expected to simplify how AI workloads move from development into production while maintaining reliability and safety.

Executives from both companies emphasised that robotics represents one of the most demanding AI environments, as decisions must be made instantly and locally.

By combining edge AI processing with cognitive robotic systems, the partnership aims to accelerate commercial deployment of humanoid and general-purpose robots capable of operating safely alongside humans across industries.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!