Agentic AI study begins through University of Glasgow and Lloyds partnership

The University of Glasgow and Lloyds Banking Group have launched a four-year research partnership to study how agentic AI tools could support software and data engineering work.

According to the announcement, engineers at Lloyds Banking Group in Bristol, Manchester, and Hyderabad will work with large-language-model-based coding tools on different tasks each quarter. The aim is to measure effects on delivery speed and quality.

The collaboration will also create a PhD position, a Master of Research position, and a postdoctoral research associate post at the University of Glasgow.

Dr Tim Storer said: ‘Agentic-driven software engineering is a fast-developing sector with the potential to enable human engineers to work more efficiently by automating some tasks and allowing them to focus their skills on higher-level work.’

‘However, there has been relatively little research in industry on how integrating agentic AI into software engineering practices can be done effectively in large-scale organisations.’

‘We’re delighted to be partnering with Lloyds Banking Group on this groundbreaking project. Together, we will enable the Group’s plans to increase their software development capacity, produce high-quality research for the benefit of all, and influence national policy and industry standards.’

Dr Shane Montague said: ‘Lloyds Banking Group’s mission to Help Britain Prosper means leading innovation that genuinely improves how engineering gets done, with a focus on delivering enhanced digital services for our customers.’

‘We’re excited to partner with the University of Glasgow to gather rigorous, real-world evidence from day-to-day engineering work, so we can understand what really works and how agentic AI can be applied effectively and responsibly at scale.’

The partners say they plan to publish regular research papers and best-practice documents as the project develops.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

California challenges federal approach with new AI rules

The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.

An executive order signed by Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and protecting fundamental rights.

The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.

It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.

Federal guidance has discouraged state-level intervention, framing such efforts as obstacles to technological leadership.

The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.

The California initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on how to balance innovation with accountability in digital governance.


Healthcare data breach raises concerns over cloud security

A cybersecurity incident involving CareCloud has exposed vulnerabilities in the protection of sensitive medical information, following unauthorised access to patient records stored within its systems.

The breach was detected on 16 March, after attackers had been able to access electronic health records for several hours, raising concerns about potential data exposure.

The company has stated that the intrusion was contained on the same day, with systems restored and an external investigation launched.

However, uncertainty remains about whether any data were extracted and the scale of the potential impact, particularly given the company’s role in supporting tens of thousands of healthcare providers and millions of patients.

The incident reflects broader structural risks within digital healthcare infrastructures, where centralised storage of highly sensitive data increases the potential impact of cyberattacks.

Cloud environments, including services provided by Amazon Web Services, are increasingly integral to such systems, amplifying both efficiency and exposure.

The breach follows a pattern of escalating cyber threats targeting healthcare data, driven by its high value in criminal markets.

As investigations continue, the case underscores the need for stronger data protection measures, enhanced monitoring systems and more robust regulatory oversight to safeguard patient information.


Australia reviews compliance with under-16 social media age ban

Australia’s eSafety Commissioner has released an update on rules requiring platforms to prevent users under 16 from holding accounts. Early results show significant action by companies, but also ongoing challenges in fully enforcing the restrictions.

By mid-December 2025, around 4.7 million accounts were removed or restricted, with more than 300,000 additional accounts blocked by March 2026. Despite these reductions, many children continue to retain accounts, create new ones, or pass age assurance checks.

Regulators identified several compliance concerns, including platforms that allow repeated attempts at age verification and encourage some users to update their ages. Reporting systems for underage accounts were often difficult to access, particularly for parents.

Investigations into five major platforms are ongoing to determine whether they have taken reasonable steps to meet their legal obligations. Authorities are assessing systems and processes rather than individual accounts, with enforcement decisions expected by mid-2026.

A new legislative rule introduced in March 2026 targets platform features linked to potential harm, such as recommender systems and continuous content feeds. Regulators will continue working with industry while gathering evidence and maintaining transparency during the enforcement process.


EU boosts fact-checking with €5 million disinformation resilience plan

The European Commission has committed €5 million to strengthen independent fact-checking networks, reinforcing efforts to counter disinformation across Europe. The initiative seeks to expand verification capacity in all EU languages while improving coordination among key stakeholders.

The programme introduces a comprehensive support system for fact-checkers, covering legal assistance, cybersecurity protection and psychological support.

It also establishes a centralised European repository of verified information, designed to enhance transparency and improve access to reliable content across the EU.

Led by the European Fact-Checking Standards Network, the project builds on existing frameworks such as the European Digital Media Observatory. The initiative forms part of the EU’s broader strategy to strengthen information integrity and safeguard democratic processes.

By reinforcing independent verification ecosystems, the programme reflects a policy-driven effort to address disinformation threats while supporting a more resilient and trustworthy digital environment across Europe.


World Data Organisation launches in Beijing to advance global data governance

The World Data Organisation was formally established in Beijing on 30 March 2026 as the first professional international body focused on global data development and governance. The organisation aims to operate as a non-governmental, non-profit platform for dialogue, rule-making, and international collaboration.

The WDO has three stated goals: bridging the data divide, unlocking data’s value, and powering the digital economy. These priorities are intended to reduce disparities in digital capacity between developed and developing countries.

Global data use has become central to addressing challenges such as poverty reduction, public health, climate change, and AI development. Disparities persist, with digitally deliverable services accounting for over 60% of service exports in advanced economies but only 15% in least developed countries.

China’s digital infrastructure has advanced rapidly, with 4.8 million 5G base stations built by the end of 2025, and computing power ranked second globally. Officials said platforms like the WDO and UN will help shape international data governance, promote cooperation, and support secure cross-border data flows.

The WDO seeks to safeguard countries’ rights to develop data while respecting privacy, security, and enterprise interests. By 2030, it is expected to become a globally influential platform and a trusted hub in international data governance.


New China rules broaden 2026 agricultural census and tighten data controls

China has revised its regulation on the national agricultural census ahead of the country’s fourth such survey, with the updated rules due to take effect on 1 May 2026. According to the reported summary, Premier Li Qiang signed a State Council decree publishing the revised regulation.

The changes expand the scope of the agricultural census to include rural industrial development and village construction, alongside more traditional measures of agricultural activity. New data-collection methods, including remote sensing, have also been added to the framework.

Stronger data-quality controls form another part of the revision. The updated regulation introduces a post-census spot-check system and sets out confidentiality obligations for census personnel involved in the process.

Penalties for data falsification have also been tightened. The revised rules say people found to have fabricated or manipulated statistics may face heavier sanctions, including higher fines and possible criminal prosecution.

The fourth national agricultural census aims to provide an updated picture of agricultural development, rural construction, farmers’ living standards, and the outcomes of rural reform in China. Areas listed for coverage include agricultural production conditions, grain output, new quality productive forces in agriculture, rural development, and the living conditions of rural residents.


FTC accuses OkCupid of sharing user data contrary to privacy promises

The US Federal Trade Commission has taken action against OkCupid and Match Group Americas, alleging that the dating app shared users’ personal information, including photos and location data, with an unrelated third party. The sharing allegedly took place despite privacy promises that it would not occur without notice or an opportunity to opt out.

According to the FTC’s complaint, OkCupid gave the third party access to personal data from millions of users even though the recipient was not a service provider, business partner, or affiliate within the company’s corporate family. The agency says consumers were not informed and were not given a chance to opt out.

The complaint says the third party sought large OkCupid datasets because OkCupid’s founders were financial investors in that company, despite there being no business relationship with the app. The FTC alleges that OkCupid provided access to nearly 3 million user photos, along with location and other information, without formal or contractual limits on how the data could be used.

Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection, said: ‘The FTC enforces the privacy promises that companies make. We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through—even if that means we have to enforce our Civil Investigative Demands in court.’

The FTC also alleges that, since September 2014, Match and OkCupid have taken extensive steps to conceal and deny that the apps shared users’ personal information with the data recipient, including conduct the agency says obstructed its investigation. One example cited in the complaint is that, after a news report revealed the third party had obtained large OkCupid datasets, the company told the media and users that it was not involved with that third party.

Under the proposed settlement, OkCupid and Match would be permanently prohibited from misrepresenting how they collect, maintain, use, disclose, delete, or protect personal information, including photos, demographic data, and geolocation data. Restrictions would also cover how they describe the purposes of data collection and disclosure, as well as how they present privacy controls and consumer choices under state privacy laws.

The Commission vote authorising staff to file the complaint and stipulating the final order was 2-0. The FTC filed both in the US District Court for the Northern District of Texas, Dallas Division. The agency notes that a complaint reflects its view that it has ‘reason to believe’ the law has been or is about to be violated, while stipulated final orders carry the force of law only if approved and signed by the district court judge.


UNESCO initiative drives new digital platform governance frameworks in South Asia

South Asia is strengthening digital platform governance through a rights-based approach shaped by regional cooperation and international guidance.

A workshop led by UNESCO brought together policymakers, civil society and academics to align platform regulation with principles of freedom of expression and access to information.

The discussions focused on addressing governance gaps linked to misinformation, platform accountability and transparency. Participants examined national experiences and identified shared regulatory challenges, emphasising the need for coordinated regional responses instead of fragmented national measures.

The initiative also validated regional toolkits designed for policymakers and civil society, translating global principles into practical guidance. These tools aim to support the implementation of governance frameworks that reflect local contexts while upholding international human rights standards.

The process builds on UNESCO’s Internet for Trust guidelines, reinforcing a human-centred model of digital governance. Continued collaboration across South Asia is expected to strengthen regulatory capacity and ensure that digital platforms operate with greater accountability and public trust.


AI capacity partnership links UNDP and Intel in Lesotho and Liberia

The United Nations Development Programme and Intel are working together to expand AI training and digital skills in Lesotho and Liberia under a Memorandum of Understanding signed in March 2025. According to UNDP, the partnership is intended to combine global technical expertise with local leadership as both countries pursue broader digital transformation goals.

Lesotho and Liberia are approaching the issue from different starting points. UNDP says Lesotho is aiming for universal digital access by 2030, while Liberia is investing in AI in higher education and governance systems to prepare for the future digital economy. Through its partnership with Intel, the UN’s global development network says it is helping close gaps in AI literacy and capacity-building so communities can better understand how AI may affect everyday life.

In Lesotho, UNDP says it has already helped establish 40 Digital Skills Learning Labs and train 40 Digital Ambassadors, including teachers, religious leaders, and local influencers. Intel’s ‘AI for Citizens (AI Community Experiences)’ programme was introduced to provide locally relevant training materials for low-connectivity environments. UNDP says the onboarding included virtual sessions using games and storytelling, while analogue activities and puzzles were used to explain concepts such as computer vision.

Liberia’s work has focused more on higher education and the public sector. UNDP says it supported the University of Liberia in designing its first Master of AI programme through six online sessions with global experts and in-person workshops involving 20 faculty members. The collaboration also extended to government, with targeted training for nearly 100 officials on how AI could improve public service delivery and inform policy decisions.

Anshul Sonak, Global Head of Intel Digital Readiness Programs, said: ‘We are deeply honoured to be a part of the AI training collaboration in Liberia with UNDP. Bringing AI skills and digital literacy to a country rich in history and potential was an amazing experience. We look forward to more collaborations in the future and finding more opportunities for Intel to be a player in the region.’

UNDP says future phases may include expanding training to more communities and countries, adapting content to local languages and contexts, and adding online components as connectivity improves. Dhani Spiller, Head of UNDP’s Digital Capacity Lab, said: ‘This partnership shows what’s possible when we combine UNDP’s development mandate with the innovation and technical depth of private-sector leaders.’
