Ukraine highlights strategic shifts in AI

The National Security and Defense Council of Ukraine has published an overview of global AI developments for March 2026, highlighting a shift towards infrastructure and strategic realignment. The report is part of its ‘AI Frontiers’ analytical series.

According to the Council, growing investment and expansion of data centres to fuel AI demands are increasing pressure on energy resources. This is creating new competition not only for computing power but also for energy stability.

The analysis also points to intensifying competition between the US, China and the European Union, extending beyond AI models to supply chains, semiconductors and infrastructure. At the same time, AI is becoming more integrated into defence, cyberspace and information operations.

The Council highlights rising risks linked to disinformation, synthetic content and legal challenges, alongside growing demand for clearer regulation and content labelling as AI adoption expands in Ukraine.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ILO sets first global framework for AI use in manufacturing sector

The International Labour Organization (ILO) has adopted its first-ever tripartite conclusions on AI in manufacturing, marking a significant policy step in addressing the sector’s digital transformation.

Agreed following a five-day technical meeting in Geneva, the framework brings together governments, employers and workers to shape how AI is integrated into one of the world’s largest employment sectors.

These ILO conclusions respond to the growing impact of AI on manufacturing, which employs nearly 500 million people globally.

Rather than focusing solely on productivity gains, the framework emphasises the need to align technological adoption with labour standards, ensuring that innovation supports decent work, strengthens enterprises and contributes to inclusive economic growth.

Key provisions address skills development, lifelong learning and occupational safety, alongside the protection of fundamental rights at work.

The framework also highlights the importance of social dialogue, recognising that collaboration between stakeholders is essential to managing AI-driven change and mitigating potential disruptions to employment and working conditions.

The agreement reflects a broader effort to balance efficiency with worker protection, rejecting the notion that productivity and labour rights are competing priorities.

Instead, it positions AI as a tool that, if properly governed, can enhance both economic performance and job quality within the manufacturing sector.

The conclusions will be submitted to the ILO Governing Body in November 2026 for formal approval, with the intention of guiding national policies and international approaches to AI deployment in industry.

Employee monitoring grows at Meta as AI overhaul accelerates

Meta has introduced a new internal tool to track employee activity, including keystrokes and mouse movements, as part of efforts to train its AI systems. The company says the data will help improve AI models designed to perform everyday digital tasks.

According to company statements, the tracking is limited to Meta-owned devices and applications, with safeguards in place to protect sensitive information. The initiative reflects a broader strategy to gather real-world usage data to enhance the performance and accuracy of AI tools.

The move has raised concerns among employees, some of whom view the monitoring as intrusive, particularly amid ongoing job cuts and reduced hiring. Reports indicate that Meta has significantly scaled back recruitment while increasing investment in AI development.

The company has committed substantial resources to AI, with plans to expand spending and accelerate model development. Internal tracking is positioned as part of a broader shift toward automation, as firms seek to reshape workflows and productivity through AI.

The development highlights growing tensions between AI innovation and workplace privacy. Increased reliance on employee data to train AI systems may reshape labour practices, raising questions about surveillance, consent, and the balance between technological advancement and workers’ rights.

Law Society conference highlights GDPR’s role in regulating AI tools

GDPR obligations remain ‘fundamental’ when addressing data protection issues linked to AI tools, according to legal experts speaking at a conference organised by the Law Society’s Intellectual Property and Data Protection Commission, a committee within the Law Society of Ireland, on 20 April. The event reviewed recent legislative developments, case law and the use of AI tools in the workplace.

Olivia Mullooly, partner at Arthur Cox, said regulation in the area remains a ‘moving feast’ amid ongoing negotiations on the EU Digital Omnibus. She added that GDPR has been effective in regulating new and novel activities by AI companies, and continues to overlap with other regulatory frameworks.

In a panel discussion, Bird & Bird partner Deirdre Kilroy said firms should not ignore fundamental GDPR principles when using AI. She also noted that organisations should not delay compliance actions despite shifting regulatory conditions.

Speakers also discussed uncertainty around evolving EU rules and increasing complexity in compliance. The Data Protection Commission reported a rise in AI-related engagements, which accounted for one in four cases last year, up from one in 35 in 2021.

YouTube expands AI deepfake detection tools for celebrities

YouTube has announced the expansion of its likeness detection technology to the entertainment industry, extending access beyond content creators to talent agencies, management companies and the individuals they represent.

The move is part of a broader effort by the platform to address the growing misuse of AI to generate misleading or unauthorised videos of public figures. By extending the tool to entertainment industry stakeholders, YouTube is signalling that AI-driven impersonation is no longer treated as a niche creator issue but as a broader identity and rights problem.

The system works in a way broadly comparable to Content ID, allowing eligible users to identify videos that use AI to replicate a person’s face or likeness. Once such content is detected, individuals can request its removal through YouTube’s existing privacy complaint process.

The rollout has been developed with input from major industry players, including Creative Artists Agency, United Talent Agency, William Morris Endeavor, and Untitled Management. Those partnerships are intended to help YouTube refine how the system works in practice and ensure it reflects the needs of artists and rights holders dealing with synthetic media.

Importantly, access to the tool is not limited to people who actively run YouTube channels. Celebrities and public figures can use it even without a direct creator presence on the platform, extending its reach across a much broader part of the entertainment ecosystem.

The significance of the update lies in how platforms are beginning to treat AI impersonation as a governance issue rather than merely a content-moderation problem.

As synthetic media tools become easier to use and more convincing, technology companies are under growing pressure to provide faster and more credible mechanisms for detecting misuse, protecting identity rights, and limiting deceptive content.

YouTube’s latest move shows that platform responses are becoming more structured and rights-based, especially in sectors where a person’s likeness is closely tied to reputation, image, and commercial value. The bigger question now is whether such tools will prove effective enough to keep pace with the scale and speed of AI-generated impersonation online.

US and Philippines plan economic security zone focused on AI and supply chains

The United States Department of State has announced plans with the Government of the Republic of the Philippines to establish a 4,000-acre Economic Security Zone. The project is designed as part of efforts to strengthen supply chains and industrial cooperation.

According to the Department of State, the zone will serve as the first AI-native industrial acceleration hub under the Pax Silica framework. It aims to support advanced manufacturing, data infrastructure and technology development.

The initiative is intended to enhance coordination across the full technology supply chain, including critical minerals, semiconductors and computing systems. It reflects broader efforts to align investment and industrial capacity among partner countries.

The Department of State says the project, to be located in the Philippines, will contribute to economic security and deepen technological cooperation between the two countries.

European Commission allocates €63.2 million to support AI innovation in health and online safety

The European Commission has announced €63.2 million in funding to support AI innovation, focusing on health, online safety and broader technological development. The initiative aims to accelerate the deployment of AI solutions across key sectors.

According to the Commission, the funding will support projects that improve healthcare systems and strengthen protections in digital environments. It is part of ongoing efforts to expand AI capabilities and adoption.

The programme also seeks to encourage collaboration between research institutions, businesses and public bodies. This approach is intended to foster innovation while addressing societal challenges linked to AI use.

The Commission states that the investment will contribute to strengthening Europe’s digital capacity and advancing AI development across the European Union.

Australian regulator highlights rising AI use across various industries

The Australian Communications and Media Authority reports that AI use is accelerating across telecommunications, media and online gambling sectors. The findings highlight growing adoption alongside increasing complexity in how the technology is applied.

According to the Authority, AI is being used in media to personalise advertising and streamline content production. However, concerns have been raised about misinformation risks and the use of copyrighted material.

In the gambling sector, AI supports predictive analytics, promotions and detection of harmful behaviour, while telecommunications companies use it to improve efficiency, detect scams and strengthen network resilience.

The Authority states that despite efficiency gains, stakeholders are calling for stronger governance, transparency and safeguards as AI adoption expands in Australia.

UK regulator selects firms for second cohort of AI testing programme in financial services

The Financial Conduct Authority (FCA) has selected eight firms to join the second cohort of its AI Live Testing programme, with trials beginning in April 2026. The announcement was made at UK FinTech Week.

The initiative allows participants to test AI applications under regulatory oversight, with a focus on risk management and live monitoring. The FCA is working with AI assurance specialist Advai to support the deployment of systems across financial markets.

Jessica Rusu, chief data, information and intelligence officer at the FCA, said the programme reflects collaboration between regulators and industry. She added that the FCA continues to work with firms to support the safe and responsible development of AI in UK financial markets.

The second cohort includes Barclays, Experian, Lloyds Banking Group, UBS, Aereve, Coadjute, GoCardless and Palindrome. The FCA noted that use cases include targeted investment support, credit scoring insights, anti-money laundering detection and agentic payments.

The FCA will also use the programme to examine emerging concepts, such as targeted support, a lighter-touch regulatory category aimed at addressing the UK's advice gap. It reported that applications to its innovation services, including the Regulatory Sandbox and Innovation Pathways, increased by 49 percent year on year. A report on AI adoption practices is expected later in 2026, with a full evaluation of the cohort due in 2027.

Frontier AI cybersecurity risks highlighted by the World Economic Forum

A shift is emerging in cybersecurity as frontier AI systems become more capable and harder to control.

Anthropic’s decision to restrict access to the Claude Mythos Preview reflects growing concern about how such models can be used in real-world cybersecurity operations, as highlighted in an article published by the World Economic Forum.

Reported capabilities include identifying unknown vulnerabilities and generating working exploits. Tasks that once required specialised teams over long periods can now be accelerated significantly.

Defensive benefits exist, particularly in faster vulnerability detection, but the same capabilities can also lower barriers for attackers.

The main challenge is no longer finding weaknesses but managing them. AI can generate large volumes of vulnerabilities in a short time, while many organisations still rely on slower response cycles.

That gap increases exposure, especially for critical systems and infrastructure.

Cybersecurity is therefore moving away from static protection toward continuous monitoring and rapid response. At the same time, the lack of clear global rules on access to advanced AI systems raises broader concerns about governance and long-term stability.

Such an evolving imbalance between capability and control is likely to define the next phase of cyber risk.

The World Economic Forum report also stresses that AI-driven cyber risk is becoming a strategic issue, requiring board-level attention, stronger public–private coordination, and faster response timelines, as vulnerability discovery and exploitation compress from weeks to hours.