Hyundai invests in AI, robotics and hydrogen infrastructure

Hyundai will invest 9 trillion won ($6.3B) to build an AI data centre, robot hub, and hydrogen plant in Saemangeum. The project is part of Hyundai’s 125.2 trillion won domestic investment plan through 2030. Shares surged 10.7% following the announcement.

The AI data centre, costing 5.8 trillion won and due in 2029, will host up to 50,000 GPUs to process data from Hyundai’s automotive, steel, logistics, and defence units. The facility enables ‘physical AI’, embedding intelligence in vehicles and robots rather than in software alone.

Hyundai will invest 400 billion won in a robot manufacturing complex with a capacity of 30,000 units annually. The fully automated facility integrates assembly, parts production, and logistics.

Robotics is central to Hyundai’s shift from automaker to AI platform operator, building on innovations such as the Atlas humanoid robot.

The plan includes a 200-megawatt hydrogen plant powered by solar energy, gigawatt-scale solar generation, and a pilot AI Hydrogen City zone. Hyundai estimates 16 trillion won in economic impact and 71,000 jobs.

President Lee Jae Myung highlighted the project as key to South Korea’s AI, robotics, and clean energy ambitions, promising regulatory support.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

McKinsey claims agentic AI will reshape global banking

Agentic AI is set to transform banking operations in the US and Asia, according to a McKinsey podcast featuring senior partners from New York, Mumbai and London. The technology goes beyond traditional automation by handling less structured tasks and supporting end-to-end decision making.

Research cited in the discussion suggests many banks are experimenting with AI, yet few report material financial gains. Leaders in the US and Asia are urged to avoid narrow pilot projects and instead redesign workflows, teams and governance around AI at scale.

McKinsey partners said successful banks in the US and Asia are aligning chief executives, technology leaders and risk officers behind a shared strategy. Operations, risk management and frontline services are seen as areas where AI could deliver significant productivity and quality gains.

Banks in India and other Asian markets are also benefiting from regulatory engagement, including guidance from the Reserve Bank of India. Speakers argued that workforce training, cross-functional collaboration and clear accountability will determine whether AI delivers lasting impact in the US.

Action-capable AI highlights new security challenges

AI agents are evolving from demos into autonomous tools, with OpenClaw emerging as a leading example. Unlike chatbots, these agents execute tasks directly, interacting with software and systems without constant human input.

The rise of action-capable AI introduces new security challenges. Agents can be manipulated through untrusted input or prompt injection, and persistent memory can carry mistakes or unintended behaviour forward across sessions.

The combination of access to sensitive data, external actions, and unverified content, sometimes called the ‘lethal trifecta’, amplifies risks, making careful configuration and oversight essential.

Self-hosted agents offer more control, while cloud-based versions simplify setup but shift security responsibility. Experts recommend running agents in isolated environments, limiting permissions, and requiring approval for sensitive actions.

These precautions reduce the chance of accidental or malicious harm while allowing users to experiment safely.
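The recommended safeguards amount to a simple approval gate: the agent proposes actions, and anything touching a sensitive capability is blocked unless a human approver signs off. A minimal sketch in Python; the names here (Action, SENSITIVE, run_agent_action) are illustrative, not part of any real agent framework.

```python
# Minimal sketch of an approval gate for agent actions. All names are
# hypothetical; real agent frameworks expose richer permission models.
from dataclasses import dataclass

# Capabilities that must never run without explicit human approval.
SENSITIVE = {"send_email", "delete_file", "spend_funds"}

@dataclass
class Action:
    name: str
    args: dict

def run_agent_action(action: Action, approve) -> str:
    """Execute an action only if it is non-sensitive or explicitly approved."""
    if action.name in SENSITIVE and not approve(action):
        return f"blocked: {action.name} requires human approval"
    return f"executed: {action.name}"

# Usage: sensitive actions are denied unless a human approver says yes.
print(run_agent_action(Action("read_calendar", {}), approve=lambda a: False))
print(run_agent_action(Action("send_email", {"to": "x"}), approve=lambda a: False))
```

Pairing this with an isolated runtime and minimal credentials keeps a manipulated agent's blast radius small, which is the point of the precautions above.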

OpenClaw illustrates the potential of AI agents to automate workflows, handle repetitive tasks, and act proactively rather than passively advising. These tools show the future of consumer AI, but broader adoption requires stronger safety measures and awareness of risks.

OpenAI expands London research hub

OpenAI is turning its London office into its largest research hub outside the US, marking a strategic shift towards deeper engagement with the UK’s rapidly developing AI landscape. The move places the company in direct competition with Google DeepMind for scientific talent.

The expansion strengthens OpenAI’s long-term presence in Europe by building a substantial research base rather than relying on satellite operations. The firm aims to attract researchers seeking strong academic links, regulatory clarity and access to the UK’s growing AI ecosystem.

The enlarged London team is expected to support frontier model development and experimental work that aligns with OpenAI’s international ambitions. Senior leadership framed the decision as a vote of confidence in the UK’s capacity to become one of the most influential centres for advanced AI research.

The announcement intensifies debate over global competition for expertise, as major labs seek locations that balance research freedom with responsible oversight.

OpenAI’s investment signals a belief that the UK can offer such conditions while positioning itself as a key player in shaping the next generation of AI capabilities.

Data sovereignty becomes an infrastructure strategy in the AI era

For most of the past decade, data governance was treated as a legal issue: IT built networks and bought tools, while regulatory compliance was someone else’s problem. That division no longer holds. Cloud adoption and AI have turned data sovereignty into a core infrastructure and strategy question.

Regulatory frameworks such as GDPR, NIS2, and DORA are expanding and being enforced more strictly. Governments are also scrutinising foreign cloud providers and cross-border access. Local data storage no longer ensures absolute data sovereignty if critical control layers remain outside national jurisdiction.

Traditional SASE (secure access service edge) and SSE (security service edge) models were not built for this environment. Many still separate outbound cloud traffic from inbound controls. That split creates blind spots in distributed architectures and complicates consistent policy enforcement.

AI workloads intensify the pressure. Retailers, banks, and manufacturers are deploying models locally, not just in hyperscale clouds. Securing east-west traffic across systems and APIs without undermining data sovereignty is becoming a central architectural challenge.

Managed sovereign infrastructure is one response. It reduces reliance on external cloud paths while preserving operational scale. Ultimately, organisations must align security, AI deployment, and governance with long-term resilience goals.

Nano Banana 2 brings Flash speed to Gemini image generation

Google has introduced Nano Banana 2, branded Gemini 3.1 Flash Image, combining Flash speed with advanced reasoning. The update narrows the gap between rapid generation and visual quality, enabling faster edits. Improved instruction-following enhances the handling of complex prompts.

Nano Banana 2 integrates real-time web grounding to improve subject accuracy and contextual awareness. The model supports more precise text rendering and in-image translation for marketing and localisation tasks. It can also assist with diagrams, infographics, and data visualisations.

Upgrades include stronger subject consistency across multiple characters and objects within a single workflow. Users can create assets in aspect ratios and resolutions from 512px to 4K. Google highlighted improvements in lighting, textures, and photorealism while maintaining Flash-level speed.

The model is rolling out across the Gemini app, Search, Lens, AI Studio, Vertex AI, Flow, and Google Ads. In Gemini, Nano Banana 2 replaces Nano Banana Pro by default, though Pro remains available for specialised tasks. Availability is expanding to additional countries and languages.

Google also reinforced its provenance strategy by combining SynthID watermarking with C2PA Content Credentials. The company said verification tools in Gemini have been used millions of times to identify AI-generated media. C2PA verification will be added to the app in a future update.

European businesses gain AI-powered contract tools with local data hosting

Workday has rolled out its Contract Lifecycle Management (CLM) platform with EU-hosted data in Frankfurt, allowing European organisations to use AI contract tools while keeping all data within the EU.

German, French, and Spanish language support is live, with more languages planned. The update is part of Workday’s EU Sovereign Cloud strategy, targeting the CLM market, which is set to grow to $1.9 billion by 2033.

The platform uses AI agents to automate contracts. The Contract Intelligence Agent extracts terms, obligations, and renewal dates to create a searchable repository, while the Contract Negotiation Agent flags deviations, drafts redlines, and speeds approvals.

Multilingual support ensures smooth workflows across Europe’s largest commercial languages, improving compliance and efficiency.

GDPR compliance remains critical, with fines of up to €20 million or 4% of global turnover, whichever is higher. EU-hosted CLM removes offshore data-transfer risks, a crucial consideration for the finance, healthcare, and defence sectors. Workday combines AI efficiency with full legal compliance.
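The GDPR ceiling cited above is the greater of two figures, which is simple to express. A minimal illustration (the function name is ours, and actual fines depend on many factors beyond this cap):

```python
# GDPR's headline cap: the greater of EUR 20 million or 4% of global
# annual turnover. Simplified illustration of the figures cited above;
# real penalties are set case by case, well below this ceiling.
def gdpr_fine_cap(annual_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

print(gdpr_fine_cap(100_000_000))    # smaller firm: flat EUR 20M cap applies
print(gdpr_fine_cap(5_000_000_000))  # large firm: 4% of turnover = EUR 200M
```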

Decision-makers should focus on three priorities: EU data residency, leveraging AI agents to accelerate contracts, and integrating CLM with HR and finance systems to maximise value. Workday aims to capture market share in Europe against competitors such as Icertis and DocuSign.

Google API keys exposed after Gemini privilege expansion

Security researchers warn that exposed Google API keys in public client-side code could be used to authenticate with the Gemini AI assistant and access private data. The issue arose after developers enabled the Generative Language API in existing projects without updating key permissions.

Truffle Security scanned the November 2025 Common Crawl dataset and identified more than 2,800 live Google API keys publicly exposed in website source code. Some belonged to financial institutions, security firms, recruitment companies, and Google infrastructure.
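A scan of this kind can be approximated with a pattern match on Google’s well-known API key format, which begins with `AIza` followed by 35 characters. A minimal sketch, using a fabricated key for illustration; a production scanner such as Truffle Security’s would additionally verify which candidates are live.

```python
# Minimal sketch of scanning client-side code for Google API keys.
# The regex matches the well-known "AIza..." key format; the sample key
# below is made up, and a real scan would also check keys are live.
import re

GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(source: str) -> list[str]:
    """Return candidate Google API keys embedded in page source."""
    return GOOGLE_API_KEY_RE.findall(source)

# A page embedding a (fabricated) key in a script tag:
sample_html = '<script>fetch("https://example.com/api?key=AIza' + 'X' * 35 + '")</script>'
print(find_google_api_keys(sample_html))  # one candidate key found
```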

Before Gemini’s launch, Google Cloud API keys were widely treated as non-sensitive identifiers for services such as Maps, YouTube embeds, analytics, and Firebase. Once Gemini was introduced, those same keys also acted as authentication credentials for the AI assistant, silently expanding their privileges.

Researchers demonstrated the risk by using one exposed key to query the Gemini API models endpoint and list available models. They warned that attackers could exploit such access to extract private data or generate substantial API charges on victim accounts.

Google was notified in November 2025 and later classified the issue as a single-service privilege escalation. The company said it has introduced controls to block leaked keys, limit new AI Studio keys to Gemini-only scope, and notify developers of detected exposure.

Financial crime risks are reshaped by the rise of autonomous AI agents

Autonomous AI agents are transforming finance by executing transactions independently and speeding up workflows in digital assets and programmable finance. Software can manage wallets and move funds across blockchains in seconds, narrowing detection windows.

AI agents don’t create new crimes but increase speed and complexity, making accountability essential. Responsibility rests with developers, operators, and beneficiaries, with investigators tracing control, configuration, and economic benefit to determine liability.

Weak oversight or misconfigured rules can lead to significant compliance and enforcement consequences.

Investigations face new challenges as autonomous agents operate across multiple blockchains, decentralised exchanges, and global jurisdictions.

Real-time analytics and automated tracing are essential to link transactions to accountable actors before funds move. Governance architecture and monitoring systems increasingly serve as evidence in regulatory or criminal actions.

Institutions and law enforcement are using AI monitoring, anomaly detection, and automated containment systems. Autonomous AI impacts sanctions and national security, emphasising the need for human oversight alongside automation.

AI becomes central to biotech discovery and drug development

The biotechnology industry is moving from early AI experimentation to fully integrated discovery systems that embed AI into everyday research operations.

According to the 2026 Biotech AI Report from Benchling, leading organisations are reshaping data environments and R&D structures, making AI a core part of the drug development process.

Predictive models, such as protein structure prediction and docking simulations, are accelerating early-stage discovery, helping scientists identify targets faster and improve accuracy.

Challenges persist in generative design, biomarker analysis, and ADME prediction, where adoption lags due to fragmented or poor-quality data.

Organisations overcoming these hurdles invest in high-quality, well-annotated measurements and strong integration between wet and dry lab work. This creates a continuous learning cycle that drives faster insights and reduces experimental dead ends.

Talent strategies are evolving to place AI expertise directly in R&D teams. Many firms upskill existing scientific staff to act as ‘scientific translators,’ bridging biology, regulatory needs, and machine learning.

Embedding AI leadership within research teams or using hybrid models reduces handoffs and ensures AI tools remain practical in real-world experiments.

Biotech firms combine in-house development with commercial components, following a ‘build what differentiates, buy what scales’ strategy. Confidence in AI is rising, driving investment in infrastructure, modelling, and integrated AI workflows for research.
