Finance ministry in South Korea pledges reform for public crypto management

South Korea’s finance minister, Koo Yun-cheol, has pledged urgent reforms to how government agencies manage digital assets following high-profile failures in state custody.

Recent incidents revealed that police and tax authorities mishandled seized cryptocurrency, highlighting weaknesses in oversight and security practices. Authorities will review current management methods and implement measures to prevent future losses.

Operational risks around securing crypto in public institutions have become increasingly apparent. A notable case involved Seoul police in Gangnam losing access to 22 BTC, worth around $1.4 million, after failing to retain private keys and allowing a third-party firm to manage the assets.

Prosecutors are now investigating potential bribery linked to the case.

The government says it holds only digital assets acquired through lawful enforcement, such as seizures for unpaid taxes or criminal cases. The reforms aim to strengthen security, improve operational controls, and restore confidence in the public sector’s handling of crypto amid growing scrutiny.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Reddit surges as AI search drives a new era of online discovery

AI-generated search summaries are reshaping online discovery and pushing Reddit to the forefront of global information flows.

The rise of Google’s AI Overview feature places curated AI summaries above traditional search results, encouraging users to rely on machine-generated syntheses instead of browsing lists of websites.

Reddit’s visibility surged after the platform agreed to data access partnerships with Google and OpenAI, enabling large language models to train on its vast archive of human conversations.

The platform’s user-generated discussions are increasingly prioritised because they provide commentary viewed as more neutral and less commercially influenced.

Research from Profound identifies Reddit as the most cited source across major AI platforms, and Reddit’s rapid expansion reflects this shift.

It has overtaken TikTok in the UK, according to Ofcom, and now reports 116 million daily active users and more than one billion monthly users.

Communities built around niche interests, combined with voting systems and karma-driven credibility, create a structure that appeals to AI systems searching for grounded, human-authored content.

The platform’s design, centred on subreddits run by volunteer moderators, reinforces trust signals that large models can evaluate when generating AI Overview results.

As AI-powered search becomes the dominant interface for navigating the internet, Reddit’s role as a primary corpus for training and citation continues to expand, reshaping how people discover and verify information.


Samsung advances toward AI autonomous factories by 2030

The South Korean electronics corporation, Samsung, is preparing a major shift to autonomous manufacturing, converting global production sites into AI-driven factories by 2030.

As such, the company is moving toward a model in which AI systems understand on-site conditions and make operational decisions independently, rather than relying on fixed automation.

The transition will rely on digital twin simulations across the whole manufacturing cycle, from materials warehousing to shipping.

Samsung will deploy AI agents for quality control, production and logistics, aiming for stronger data-driven verification and improved efficiency. Wider adoption of AI in environmental health and safety is expected to raise workplace safety standards.

The firm plans to integrate agentic AI, first introduced with the Galaxy S26, into industrial operations, enabling systems to set and execute their own tasks. Humanoid manufacturing robots will be rolled out in phases as Samsung builds fully optimised smart factories.

Samsung will present its manufacturing vision at Mobile World Congress 2026, followed by the Samsung Mobile Business Summit, where executives will detail governance strategies for managing the rise of agentic AI across industries.


Singapore and South Korea expand AI partnership

South Korean President Lee Jae Myung used the opening day of his state visit to Singapore to set out plans for deeper cooperation in emerging technologies and renewable energy.

He framed the partnership as a chance to build a future-oriented agenda shaped by a shared reliance on human capital rather than natural resources.

The visit precedes a summit with Singaporean Prime Minister Lawrence Wong, their second meeting in four months following the upgrade of bilateral ties to a strategic partnership. Both governments want to broaden collaboration across AI, energy, the green transition and defence while maintaining strong trade and investment links.

Lee told Korean residents in Singapore that the strengthened partnership could guide relations for the next fifty years by opening new routes for collaboration across strategic sectors. He added that expanding cooperation would support wider regional stability and long-term technological development.

The programme also includes a meeting with Tharman Shanmugaratnam and attendance at AI Connect. This forum connects business leaders and entrepreneurs from both countries seeking opportunities in AI research and commercial innovation.


OpenAI and Microsoft strengthen their long-term AI collaboration

Microsoft and OpenAI have reaffirmed their long-standing collaboration after new funding and partnerships raised speculation about their relationship.

Both firms stressed that recent announcements leave their original agreements intact, preserving a framework built on technical integration, trust and shared ambitions for AI development.

Microsoft’s exclusive licence to OpenAI’s intellectual property remains untouched, as does its position as the sole cloud provider for stateless APIs powering OpenAI models.

These APIs can be accessed through either company. Yet all such calls, including those arising from third-party partnerships such as OpenAI’s work with Amazon, continue to run on Azure rather than on alternative clouds. OpenAI’s own products, including Frontier, also stay hosted on Azure.

Revenue-sharing arrangements are unchanged, alongside the contractual definition and evaluation process for artificial general intelligence.

Both companies emphasised that the partnership was designed to allow independent initiatives while preserving deep cooperation across research, engineering and product innovation.

OpenAI retains the freedom to secure additional compute capacity elsewhere, supported by large-scale initiatives such as the Stargate project.

Even with broader collaborations emerging across the industry, both firms present their alliance as central to advancing responsible AI and expanding access to powerful tools worldwide.


New all-island AI research alliance formed by Queen’s and UCD

Queen’s University Belfast and University College Dublin (UCD) have formalised a cross-border partnership focused on artificial intelligence research and talent development.

The collaboration will bring together researchers, faculty and students from both institutions to address shared challenges and opportunities in AI, including applications in healthcare, cybersecurity, data analytics and ethical AI governance.

The initiative aims to deepen academic cooperation, foster joint research projects, and expand interdisciplinary learning programmes that equip students with AI-relevant skills.

Leaders from both universities emphasised the importance of an all-island approach to strengthening AI expertise, enhancing competitiveness, and contributing to economic growth in Northern Ireland and the Republic of Ireland.

The partnership is expected to facilitate knowledge exchange, researcher mobility, and shared access to specialised facilities and funding opportunities.

Stakeholders also highlighted the broader societal context: as AI becomes integral to multiple sectors, coordinated academic and research ecosystems can help ensure that innovation aligns with ethical standards and public value.

By pooling resources and expertise across jurisdictions, the initiative positions both universities to play a more influential role in shaping AI policy, industry adoption and workforce development.


Live facial recognition rolled out in Cardiff policing operation

South Wales Police has deployed live facial recognition technology in Cardiff to help prevent and detect crime. The operation is designed to identify suspects, wanted individuals and high-risk missing persons.

The deployment forms part of the force’s broader strategy to integrate advanced technologies into policing across South Wales. Officers will operate in clearly marked vehicles and designated recognition zones during the initiative.

Facial Recognition Technology compares faces captured from live camera feeds or digital images against a database of stored images. The system analyses key facial features and converts them into a mathematical representation using NEC’s NeoFace M40 algorithm before generating potential matches for officer review.

South Wales Police uses three types of facial recognition tools. Live Facial Recognition scans faces in real time against a pre-set watchlist, while Retrospective Facial Recognition analyses still images after incidents. Operator-Initiated Facial Recognition allows officers to take a photo on a mobile device and compare it against a watchlist to confirm identity.
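The matching step described above can be sketched in miniature: a face image is reduced to a numerical embedding, which is then compared against embeddings on a watchlist, with candidate matches passed to an officer for review. The sketch below is purely illustrative; it uses cosine similarity over toy vectors, not NEC’s NeoFace M40 algorithm, and all names, dimensions and thresholds are hypothetical.

```python
import math

def cosine_similarity(a, b):
    # Compare two face embeddings (lists of floats) by the angle between them.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.9):
    # Return (name, score) pairs above the threshold, best first.
    # In real deployments a human reviews every candidate match.
    candidates = []
    for name, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score >= threshold:
            candidates.append((name, score))
    return sorted(candidates, key=lambda c: c[1], reverse=True)

# Toy 3-dimensional embeddings (real systems use hundreds of dimensions).
watchlist = {"subject_a": [0.9, 0.1, 0.4], "subject_b": [0.1, 0.8, 0.2]}
probe = [0.88, 0.12, 0.41]
print(match_against_watchlist(probe, watchlist))
```

The threshold is the key operational knob: set too low, it floods officers with false positives; set too high, it misses genuine matches, which is why the system only proposes candidates rather than confirming identity.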

Members of the public are encouraged to approach officers to learn more about how the technology works. Where possible, demonstrations will be provided to explain its operation and purpose.


AI in the workplace raises critical governance and shadow use challenges

AI adoption in the workplace is accelerating faster than corporate governance frameworks are evolving. Experts warn that many organisations are unprepared for the risks associated with widespread AI use, creating gaps in oversight and accountability.

A study by the University of Melbourne and KPMG found that nearly half of surveyed professionals admitted to misusing AI at work. Many employees also reported witnessing colleagues misuse AI tools, often without formal authorisation.

Common forms of misuse include uploading sensitive company data to public AI platforms, using AI during internal assessments, and presenting AI-generated work as original output. A significant number of employees also reported reducing their effort because they rely on AI assistance.

Experts caution that this trend creates an illusion of productivity and competence. Managers may receive polished reports generated by AI, while employees may not fully understand or verify the content, exposing organisations to poor decision-making, security vulnerabilities, and compliance risks.

Data protection concerns are particularly significant. Feeding confidential or proprietary information into public AI systems can lead to data leakage and legal exposure, especially when misuse results in financial harm or regulatory breaches.

To address these risks, experts recommend clear internal rules, approved AI tools, monitoring of sensitive data flows, and mandatory human oversight in critical processes. Training programmes should focus on practical guidance and reinforce that employees remain responsible for the accuracy and legality of AI-assisted work.
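One of these recommendations, monitoring sensitive data flows, can be sketched as a pre-submission check that inspects a prompt before it leaves the organisation. The patterns, function names and policy below are illustrative assumptions for the sketch, not any particular data-loss-prevention product.

```python
import re

# Illustrative patterns for data that should not leave the organisation;
# a real deployment would use a vetted DLP policy, not this toy list.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def check_prompt(prompt):
    # Return the names of every policy rule the prompt violates.
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_ai(prompt):
    violations = check_prompt(prompt)
    if violations:
        # Block and record instead of sending; a human reviews flagged prompts.
        return {"sent": False, "violations": violations}
    return {"sent": True, "violations": []}

print(submit_to_ai("Summarise this CONFIDENTIAL merger memo for jane@corp.example"))
```

A gateway like this pairs naturally with the other recommendations: approved tools sit behind it, flagged prompts feed training material, and the human-oversight rule applies to whatever the model returns.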

Analysts note that similar patterns emerged during the early stages of internet adoption. As AI use expands, governance frameworks, enforcement mechanisms, and organisational cultures will need to evolve to manage long-term risks.


AI data centre planned for East Manchester

Latos Data Centres is preparing plans for a 28,000 sq ft data centre in Monsall, East Manchester, aimed at serving rising demand for AI computing. The scheme would occupy a three-acre brownfield site at Bower Street and Ten Acres Lane in Manchester.

The East Manchester project is designed as a neural edge data centre, bringing AI processing closer to end users than traditional cloud facilities. Latos said the Manchester development would form part of a broader plan to deliver 30 UK sites by 2030.

A live consultation in Manchester will run until 16 March, with Create Architecture leading the design. Advisers on the Manchester scheme include Euan Kellie Property Solutions on planning and SK Transport Planning on transport matters.

Latos said the Manchester facility would regenerate a vacant industrial plot and operate to high environmental and safety standards. The developer is also delivering a separate data centre in Tees Valley as it expands its AI-focused portfolio across the UK.


Action-capable AI highlights new security challenges

AI agents are evolving from demos into autonomous tools, with OpenClaw emerging as a leading example. Unlike chatbots, these agents execute tasks directly, interacting with software and systems without constant human input.

The rise of action-capable AI introduces new security challenges. Agents can be manipulated through untrusted input or prompt injection. Persistent memory can also prolong mistakes or unintended behaviour.

The combination of access to sensitive data, external actions, and unverified content, sometimes called the ‘lethal trifecta’, amplifies risks, making careful configuration and oversight essential.

Self-hosted agents offer more control, while cloud-based versions simplify setup but shift security responsibility. Experts recommend running agents in isolated environments, limiting permissions, and requiring approval for sensitive actions.
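The recommended approval step for sensitive actions can be sketched as a simple permission gate: the agent proposes actions, safe ones run directly, sensitive ones pause for human sign-off, and anything unknown is denied by default. All action names and the callback interface here are hypothetical, a minimal sketch rather than any real agent framework’s API.

```python
# Actions the agent may take freely versus those needing human approval.
SAFE_ACTIONS = {"read_file", "search_docs"}
SENSITIVE_ACTIONS = {"send_email", "delete_file", "run_shell"}

def run_agent_action(action, approve_callback):
    if action in SAFE_ACTIONS:
        return f"executed {action}"
    if action in SENSITIVE_ACTIONS:
        # Pause and ask a human before anything with side effects.
        if approve_callback(action):
            return f"executed {action} (approved)"
        return f"blocked {action}"
    # Unknown actions are denied outright: a deny-by-default posture.
    return f"blocked {action} (unknown)"

# Example: a sandbox run that auto-denies every sensitive action.
print(run_agent_action("read_file", lambda a: False))
print(run_agent_action("send_email", lambda a: False))
```

The deny-by-default branch matters most against prompt injection: even if untrusted input steers the agent toward an action nobody anticipated, the gate refuses it rather than guessing.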

These precautions reduce the chance of accidental or malicious harm while allowing users to experiment safely.

OpenClaw illustrates the potential of AI agents to automate workflows, handle repetitive tasks, and act proactively rather than passively advising. These tools show the future of consumer AI, but broader adoption requires stronger safety measures and awareness of risks.
