Singapore and South Korea expand AI partnership

South Korean President Lee Jae Myung used the opening day of his state visit to Singapore to set out plans for deeper cooperation in emerging technologies and renewable energy.

He framed the partnership as a chance to build a future-oriented agenda shaped by a shared reliance on human capital rather than natural resources.

The visit precedes a summit with Singapore Prime Minister Lawrence Wong, their second meeting in four months, following the upgrade of bilateral ties to a strategic partnership. Both governments want to broaden collaboration across AI, energy, the green transition and defence while maintaining strong trade and investment links.

Lee told Korean residents in Singapore that the strengthened partnership could guide relations for the next fifty years by opening new routes for collaboration across strategic sectors. He added that expanding cooperation would support wider regional stability and long-term technological development.

The programme also includes a meeting with Singapore President Tharman Shanmugaratnam and attendance at AI Connect, a forum connecting business leaders and entrepreneurs from both countries seeking opportunities in AI research and commercial innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Microsoft strengthen their long-term AI collaboration

Microsoft and OpenAI have reaffirmed their long-standing collaboration after new funding and partnerships raised speculation about their relationship.

Both firms stressed that recent announcements leave their original agreements intact, preserving a framework built on technical integration, trust and shared ambitions for AI development.

Microsoft’s exclusive licence to OpenAI’s intellectual property remains untouched, as does its position as the sole cloud provider for stateless APIs powering OpenAI models.

These APIs can be accessed through either company. Yet all such calls, including those arising from third-party partnerships such as OpenAI’s work with Amazon, continue to run on Azure rather than on alternative clouds. OpenAI’s own products, including Frontier, also stay hosted on Azure.

Revenue-sharing arrangements are unchanged, alongside the contractual definition and evaluation process for artificial general intelligence.

Both companies emphasised that the partnership was designed to allow independent initiatives while preserving deep cooperation across research, engineering and product innovation.

OpenAI retains the freedom to secure additional compute capacity elsewhere, supported by large-scale initiatives such as the Stargate project.

Even with broader collaborations emerging across the industry, both firms present their alliance as central to advancing responsible AI and expanding access to powerful tools worldwide.

New all-island AI research alliance formed by Queen’s and UCD

Queen’s University Belfast and University College Dublin (UCD) have formalised a cross-border partnership focused on artificial intelligence research and talent development.

The collaboration will bring together researchers, faculty and students from both institutions to address shared challenges and opportunities in AI, including applications in healthcare, cybersecurity, data analytics and ethical AI governance.

The initiative aims to deepen academic cooperation, foster joint research projects, and expand interdisciplinary learning programmes that equip students with AI-relevant skills.

Leaders from both universities emphasised the importance of an all-island approach to strengthening AI expertise, enhancing competitiveness, and contributing to economic growth in Northern Ireland and the Republic of Ireland.

The partnership is expected to facilitate knowledge exchange, researcher mobility, and shared access to specialised facilities and funding opportunities.

Stakeholders also highlighted the broader societal context: as AI becomes integral to multiple sectors, coordinated academic and research ecosystems can help ensure that innovation aligns with ethical standards and public value.

By pooling resources and expertise across jurisdictions, the initiative positions both universities to play a more influential role in shaping AI policy, industry adoption and workforce development.

Live facial recognition rolled out in Cardiff policing operation

South Wales Police has deployed live facial recognition technology in Cardiff to help prevent and detect crime. The operation is designed to identify suspects, wanted individuals and high-risk missing persons.

The deployment forms part of the force’s broader strategy to integrate advanced technologies into policing across South Wales. Officers will operate in clearly marked vehicles and designated recognition zones during the initiative.

Facial Recognition Technology compares faces captured from live camera feeds or digital images against a database of stored images. The system analyses key facial features and converts them into a mathematical representation using NEC’s NeoFace M40 algorithm before generating potential matches for officer review.

South Wales Police uses three types of facial recognition tools. Live Facial Recognition scans faces in real time against a pre-set watchlist, while Retrospective Facial Recognition analyses still images after incidents. Operator-Initiated Facial Recognition allows officers to take a photo on a mobile device and compare it against a watchlist to confirm identity.
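
NEC's NeoFace algorithm is proprietary, but the general pipeline the article describes, turning a face into a numeric vector and scoring it against a watchlist for officer review, can be sketched generically. Everything below (the toy embeddings, names and threshold) is illustrative, not the force's actual system:

```python
import math

def cosine_similarity(a, b):
    # Angle-based similarity between two embedding vectors; 1.0 = identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def match_against_watchlist(probe, watchlist, threshold=0.85):
    """Return (name, score) pairs whose similarity to the probe embedding
    clears the threshold, best match first, for a human to review."""
    hits = [(name, cosine_similarity(probe, emb)) for name, emb in watchlist.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: h[1], reverse=True)

# Toy 4-dimensional embeddings; production systems use hundreds of dimensions.
watchlist = {
    "subject_a": [0.9, 0.1, 0.3, 0.2],
    "subject_b": [0.1, 0.8, 0.2, 0.7],
}
probe = [0.88, 0.12, 0.28, 0.22]  # embedding extracted from a live camera frame
print(match_against_watchlist(probe, watchlist))
```

The key design point matches the article's description: the system never outputs an identification, only ranked candidate matches above a threshold, with the final decision left to an officer.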

Members of the public are encouraged to approach officers to learn more about how the technology works. Where possible, demonstrations will be provided to explain its operation and purpose.

AI in the workplace raises critical governance and shadow use challenges

AI adoption in the workplace is accelerating faster than corporate governance frameworks are evolving. Experts warn that many organisations are unprepared for the risks associated with widespread AI use, creating gaps in oversight and accountability.

A study by the University of Melbourne and KPMG found that nearly half of surveyed professionals admitted to misusing AI at work. Many employees also reported witnessing colleagues misuse AI tools, often without formal authorisation.

Common forms of misuse include uploading sensitive company data to public AI platforms, using AI during internal assessments, and presenting AI-generated work as original output. A significant number of employees also reported reducing their own effort because they rely on AI assistance.

Experts caution that this trend creates an illusion of productivity and competence. Managers may receive polished reports generated by AI, while employees may not fully understand or verify the content, exposing organisations to poor decision-making, security vulnerabilities, and compliance risks.

Data protection concerns are particularly significant. Feeding confidential or proprietary information into public AI systems can lead to data leakage and legal exposure, especially when misuse results in financial harm or regulatory breaches.

To address these risks, experts recommend clear internal rules, approved AI tools, monitoring of sensitive data flows, and mandatory human oversight in critical processes. Training programmes should focus on practical guidance and reinforce that employees remain responsible for the accuracy and legality of AI-assisted work.

Analysts note that similar patterns emerged during the early stages of internet adoption. As AI use expands, governance frameworks, enforcement mechanisms, and organisational cultures will need to evolve to manage long-term risks.

AI data centre planned for East Manchester

Latos Data Centres is preparing plans for a 28,000 sq ft data centre in Monsall, East Manchester, aimed at serving rising demand for AI computing. The scheme would occupy a three-acre brownfield site at Bower Street and Ten Acres Lane in Manchester.

The East Manchester project is designed as a neural edge data centre, bringing AI processing closer to end users than traditional cloud facilities. Latos said the Manchester development would form part of a broader plan to deliver 30 UK sites by 2030.

A live consultation in Manchester will run until 16 March, with Create Architecture leading the design. Advisers on the Manchester scheme include Euan Kellie Property Solutions on planning and SK Transport Planning on transport matters.

Latos said the Manchester facility would regenerate a vacant industrial plot and operate to high environmental and safety standards. The developer is also delivering a separate data centre in Tees Valley as it expands its AI-focused portfolio across the UK.

Action-capable AI highlights new security challenges

AI agents are evolving from demos into autonomous tools, with OpenClaw emerging as a leading example. Unlike chatbots, these agents execute tasks directly, interacting with software and systems without constant human input.

The rise of action-capable AI introduces new security challenges. Agents can be manipulated through untrusted input or prompt injection. Persistent memory can also prolong mistakes or unintended behaviour.

The combination of access to sensitive data, external actions, and unverified content, sometimes called the ‘lethal trifecta’, amplifies risks, making careful configuration and oversight essential.

Self-hosted agents offer more control, while cloud-based versions simplify setup but shift security responsibility. Experts recommend running agents in isolated environments, limiting permissions, and requiring approval for sensitive actions.

These precautions reduce the chance of accidental or malicious harm while allowing users to experiment safely.
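
One such precaution, requiring human approval before an agent performs a sensitive action, can be sketched as a simple deny-by-default gate. The tool names and approval flow below are hypothetical illustrations, not OpenClaw's actual interface:

```python
# Illustrative sketch of an approval gate for agent tool calls.
SAFE_ACTIONS = {"read_file", "search_docs"}        # run without asking
SENSITIVE_ACTIONS = {"send_email", "delete_file"}  # require human sign-off

class ApprovalDenied(Exception):
    pass

def execute_action(action, args, approve):
    """Run an agent-requested action. `approve` is a callback that asks a
    human reviewer; sensitive actions run only if it returns True."""
    if action in SAFE_ACTIONS:
        return f"ran {action}({args})"
    if action in SENSITIVE_ACTIONS:
        if approve(action, args):
            return f"ran {action}({args}) after approval"
        raise ApprovalDenied(f"{action} blocked by reviewer")
    # Deny by default: anything not explicitly allowlisted is refused,
    # which limits the blast radius of prompt injection.
    raise ApprovalDenied(f"{action} is not on any allowlist")

# Example: auto-deny all sensitive actions during an unattended run.
print(execute_action("read_file", {"path": "notes.txt"}, approve=lambda a, k: False))
```

Deny-by-default matters here because, as the article notes, agents can be steered by untrusted input; a gate like this ensures an injected instruction cannot invoke a tool the operator never approved.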

OpenClaw illustrates the potential of AI agents to automate workflows, handle repetitive tasks, and act proactively rather than passively advising. Such tools point to the future of consumer AI, but broader adoption requires stronger safety measures and awareness of the risks.

OpenAI expands London research hub

OpenAI is turning its London office into its largest research hub outside the US, marking a strategic shift towards deeper engagement with the UK’s rapidly developing AI landscape. The move places the company in direct competition with Google DeepMind for scientific talent.

The expansion strengthens OpenAI’s long-term presence in Europe by building a substantial research base rather than relying on satellite operations. The firm aims to attract researchers seeking strong academic links, regulatory clarity and access to the UK’s growing AI ecosystem.

The enlarged London team is expected to support frontier model development and experimental work that aligns with OpenAI’s international ambitions. Senior leadership framed the decision as a vote of confidence in the UK’s capacity to become one of the most influential centres for advanced AI research.

The announcement intensifies debate over global competition for expertise, as major labs seek locations that balance research freedom with responsible oversight.

OpenAI’s investment signals a belief that the UK can offer such conditions while positioning itself as a key player in shaping the next generation of AI capabilities.

Data sovereignty becomes an infrastructure strategy in the AI era

For most of the past decade, data governance was treated as a legal issue: IT built networks and bought tools, while regulators were someone else’s problem. That division no longer holds. Cloud adoption and AI have turned data sovereignty into a core infrastructure and strategy question.

Regulatory frameworks such as GDPR, NIS2, and DORA are expanding and being enforced more strictly. Governments are also scrutinising foreign cloud providers and cross-border access. Local data storage no longer ensures absolute data sovereignty if critical control layers remain outside national jurisdiction.

Traditional SASE and SSE models were not built for this environment. Many still separate outbound cloud traffic from inbound controls. That split creates blind spots in distributed architectures and complicates consistent policy enforcement.

AI workloads intensify the pressure. Retailers, banks, and manufacturers are deploying models locally, not just in hyperscale clouds. Securing east-west traffic across systems and APIs without undermining data sovereignty is becoming a central architectural challenge.

Managed sovereign infrastructure is one response. It reduces reliance on external cloud paths while preserving operational scale. Ultimately, organisations must align security, AI deployment, and governance with long-term resilience goals.

European businesses gain AI-powered contract tools with local data hosting

Workday has rolled out its Contract Lifecycle Management (CLM) platform with EU-hosted data in Frankfurt, allowing European organisations to use AI contract tools while keeping all data within the EU.

German, French, and Spanish language support is live, with more languages planned. The update is part of Workday’s EU Sovereign Cloud strategy, targeting the CLM market, which is set to grow to $1.9 billion by 2033.

The platform uses AI agents to automate contracts. The Contract Intelligence Agent extracts terms, obligations, and renewal dates to create a searchable repository, while the Contract Negotiation Agent flags deviations, drafts redlines, and speeds approvals.

Multilingual support ensures smooth workflows across Europe’s largest commercial languages, improving compliance and efficiency.

GDPR compliance remains critical, with fines of up to €20 million or 4% of global turnover. EU-hosted CLM removes offshore data risks, a crucial consideration for the finance, healthcare, and defence sectors. Workday pitches the combination as AI efficiency with full legal compliance.

Decision-makers should focus on three priorities: EU data residency, leveraging AI agents to accelerate contracts, and integrating CLM with HR and finance systems to maximise value. Workday aims to capture market share in Europe against competitors such as Icertis and DocuSign.
