YouTube expands AI deepfake detection tools for celebrities

YouTube has announced the expansion of its likeness detection technology to the entertainment industry, extending access beyond content creators to talent agencies, management companies and the individuals they represent.

The move is part of a broader effort by the platform to address the growing misuse of AI to generate misleading or unauthorised videos of public figures. By extending the tool to entertainment industry stakeholders, YouTube is signalling that AI-driven impersonation is no longer treated as a niche creator issue but as a broader identity and rights problem.

The system works in a way broadly comparable to Content ID, allowing eligible users to identify videos that use AI to replicate a person’s face or likeness. Once such content is detected, individuals can request its removal through YouTube’s existing privacy complaint process.
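
YouTube has not published how the matching works under the hood, but a Content ID-style pipeline plausibly compares face embeddings extracted from uploaded videos against a registered reference for each protected person. The sketch below is a minimal illustration of that idea, not YouTube's implementation; the threshold, the function names and the use of plain cosine similarity are all assumptions.

```python
# Illustrative sketch only: YouTube has not disclosed its implementation.
# Embeddings are assumed to come from some face-recognition model; here
# they are plain numpy vectors so the matching logic itself is runnable.
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # hypothetical cut-off for "same person"

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_likeness_matches(reference: np.ndarray,
                          frame_embeddings: list[np.ndarray]) -> list[int]:
    """Return indices of video frames whose face embedding is close
    enough to the registered reference to warrant human review."""
    return [i for i, emb in enumerate(frame_embeddings)
            if cosine_similarity(reference, emb) >= SIMILARITY_THRESHOLD]

# A flagged video would then be routed into the existing privacy
# complaint process rather than being removed automatically.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(size=128)
    frames = [ref + rng.normal(scale=0.1, size=128),  # near-duplicate face
              rng.normal(size=128)]                   # unrelated face
    print(flag_likeness_matches(ref, frames))  # expected: [0]
```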

The rollout has been developed with input from major industry players, including Creative Artists Agency, United Talent Agency, William Morris Endeavor, and Untitled Management. Those partnerships are intended to help YouTube refine how the system works in practice and ensure it reflects the needs of artists and rights holders dealing with synthetic media.

Importantly, access to the tool is not limited to people who actively run YouTube channels. Celebrities and public figures can use it even without a direct creator presence on the platform, extending its reach across a much broader part of the entertainment ecosystem.

The significance of the update lies in how platforms are beginning to treat AI impersonation as a governance issue rather than merely a content-moderation problem.

As synthetic media tools become easier to use and more convincing, technology companies are under growing pressure to provide faster and more credible mechanisms for detecting misuse, protecting identity rights, and limiting deceptive content.

YouTube’s latest move shows that platform responses are becoming more structured and rights-based, especially in sectors where a person’s likeness is closely tied to reputation, image, and commercial value. The bigger question now is whether such tools will prove effective enough to keep pace with the scale and speed of AI-generated impersonation online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

US and Philippines plan economic security zone focused on AI and supply chains

The United States Department of State has announced plans with the Government of the Republic of the Philippines to establish a 4,000-acre Economic Security Zone. The project forms part of efforts to strengthen supply chains and industrial cooperation.

According to the Department of State, the zone will serve as the first AI-native industrial acceleration hub under the Pax Silica framework. It aims to support advanced manufacturing, data infrastructure and technology development.

The initiative is intended to enhance coordination across the full technology supply chain, including critical minerals, semiconductors and computing systems. It reflects broader efforts to align investment and industrial capacity among partner countries.

The Department of State says the zone will contribute to economic security and deepen technological cooperation between the two countries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

European Commission allocates €63.2 million to support AI innovation in health and online safety

The European Commission has announced €63.2 million in funding to support AI innovation, focusing on health, online safety and broader technological development. The initiative aims to accelerate the deployment of AI solutions across key sectors.

According to the Commission, the funding will support projects that improve healthcare systems and strengthen protections in digital environments. It is part of ongoing efforts to expand AI capabilities and adoption.

The programme also seeks to encourage collaboration between research institutions, businesses and public bodies. This approach is intended to foster innovation while addressing societal challenges linked to AI use.

The Commission states that the investment will contribute to strengthening Europe’s digital capacity and advancing AI development across the European Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australian regulator highlights rising AI use across various industries

The Australian Communications and Media Authority reports that AI use is accelerating across telecommunications, media and online gambling sectors. The findings highlight growing adoption alongside increasing complexity in how the technology is applied.

According to the Authority, AI is being used in media to personalise advertising and streamline content production. However, concerns have been raised about misinformation risks and the use of copyrighted material.

In the gambling sector, AI supports predictive analytics, promotions and detection of harmful behaviour, while telecommunications companies use it to improve efficiency, detect scams and strengthen network resilience.

The Authority states that despite efficiency gains, stakeholders are calling for stronger governance, transparency and safeguards as AI adoption expands in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK regulator selects firms for second cohort of AI testing programme in financial services

The Financial Conduct Authority (FCA) has selected eight firms to join the second cohort of its AI Live Testing programme, with trials beginning in April 2026. The announcement was made at UK FinTech Week.

The initiative allows participants to test AI applications under regulatory oversight, with a focus on risk management and live monitoring. The FCA is working with AI assurance specialist Advai to support the deployment of systems across financial markets.

Jessica Rusu, chief data, information and intelligence officer at the FCA, said the programme reflects collaboration between regulators and industry. She added that the FCA continues to work with firms to support the safe and responsible development of AI in UK financial markets.

The second cohort includes Barclays, Experian, Lloyds Banking Group, UBS, Aereve, Coadjute, GoCardless and Palindrome. The FCA noted that use cases include targeted investment support, credit scoring insights, anti-money laundering detection and agentic payments.

The FCA will also use the programme to examine emerging concepts, such as targeted support, a lighter-touch regulatory category aimed at addressing the UK's advice gap. It reported that applications to its innovation services, including the Regulatory Sandbox and Innovation Pathways, increased by 49 percent year on year. A report on AI adoption practices is expected later in 2026, with a full evaluation of the cohort due in 2027.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

Frontier AI cybersecurity risks highlighted by the World Economic Forum

A shift is emerging in cybersecurity as frontier AI systems become more capable and harder to control.

Anthropic’s decision to restrict access to the Claude Mythos Preview reflects growing concern about how such models can be used in real-world cybersecurity operations, as highlighted in an article published by the World Economic Forum.

Reported capabilities include identifying unknown vulnerabilities and generating working exploits. Tasks that once required specialised teams over long periods can now be accelerated significantly.

Defensive benefits exist, particularly in faster vulnerability detection, but the same capabilities can also lower barriers for attackers.

The main challenge is no longer finding weaknesses but managing them. AI can generate large volumes of vulnerabilities in a short time, while many organisations still rely on slower response cycles.

That gap increases exposure, especially for critical systems and infrastructure.
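
The scale of that gap is easy to illustrate with a toy model. In the sketch below, all rates are invented for illustration: if AI-assisted discovery surfaces vulnerabilities faster than a fixed remediation cycle retires them, the known-but-unpatched backlog grows week on week.

```python
# Toy model of the exposure gap described above. The specific rates are
# hypothetical; the point is the structural imbalance, not the numbers.
def backlog_over_time(found_per_week: float,
                      fixed_per_week: float,
                      weeks: int) -> list[float]:
    """Cumulative count of known-but-unpatched vulnerabilities."""
    backlog, history = 0.0, []
    for _ in range(weeks):
        # Backlog grows by the net of discovery minus remediation,
        # and can never fall below zero.
        backlog = max(0.0, backlog + found_per_week - fixed_per_week)
        history.append(backlog)
    return history

# Hypothetical figures: AI tooling lifts discovery to 50 findings a week
# while the patch cycle stays at 10, so exposure compounds steadily.
print(backlog_over_time(found_per_week=50, fixed_per_week=10, weeks=8))
# -> [40.0, 80.0, 120.0, 160.0, 200.0, 240.0, 280.0, 320.0]
```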

Cybersecurity is therefore moving away from static protection toward continuous monitoring and rapid response. At the same time, the lack of clear global rules on access to advanced AI systems raises broader concerns about governance and long-term stability.

Such an evolving imbalance between capability and control is likely to define the next phase of cyber risk.

The World Economic Forum article also stresses that AI-driven cyber risk is becoming a strategic issue, requiring board-level attention, stronger public–private coordination, and faster response timelines, as vulnerability discovery and exploitation compress from weeks to hours.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK NCSC calls for stronger cyber readiness

The UK National Cyber Security Centre has warned that organisations must urgently prepare for severe cyber threats, describing them as a growing risk to operations and national resilience. The guidance calls for immediate action from leadership.

Cyber attacks are becoming more capable and disruptive, with new technologies such as AI increasing their speed and scale. These threats can lead to major operational, financial and security impacts.

The agency emphasises that resilience, rather than prevention alone, is critical. Organisations must be able to keep operating during an attack and to recover quickly afterwards, with preparation and planning carried out in advance.

The Centre states that responsibility lies with organisational leaders, urging investment, coordination and early planning to ensure essential services can continue under pressure in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

IWF and Utropolis partnership strengthens AI-driven child online safety

The Internet Watch Foundation (IWF) has announced a new partnership with Utropolis, marking a step forward in efforts to strengthen online child protection. The collaboration brings together established detection tools and emerging AI-driven safeguarding technologies.

Utropolis specialises in cloud-based filtering systems designed to identify risks in real time, particularly in school environments.

By integrating IWF datasets, including verified lists of harmful content, the platform aims to improve prevention and detection capabilities while helping educators maintain safer digital spaces.
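
The technical details of the integration have not been published, but the IWF is known for distributing verified lists of harmful content, such as its URL and hash lists. The sketch below illustrates the general idea of checking content digests against such a verified list; the function names and the use of SHA-256 are assumptions, and production systems typically rely on perceptual hashing rather than exact digests.

```python
# Minimal sketch, not the actual Utropolis integration (which is not
# public): a filter checks the digest of uploaded or requested content
# against a verified blocklist such as those the IWF distributes.
import hashlib

# In practice the list would be delivered securely by the IWF and use
# perceptual hashes; a plain set of SHA-256 digests stands in here.
BLOCKLIST: set[str] = set()

def register_known_harmful(content: bytes) -> None:
    """Add a verified item's digest to the blocklist (demo only)."""
    BLOCKLIST.add(hashlib.sha256(content).hexdigest())

def is_blocked(content: bytes) -> bool:
    """True if the content's digest appears on the verified list."""
    return hashlib.sha256(content).hexdigest() in BLOCKLIST

register_known_harmful(b"example-verified-item")
print(is_blocked(b"example-verified-item"))  # True
print(is_blocked(b"unrelated-content"))      # False
```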

The initiative reflects a broader trend towards combining AI with established regulatory and safeguarding frameworks. As harmful material continues to spread online, organisations are increasingly focusing on scalable, automated solutions that can adapt to evolving threats.

The partnership also aligns with UK online safety standards in education, reinforcing compliance requirements and strengthening institutional responses.

As digital environments continue to expand, collaborations of this kind highlight the growing role of AI in supporting child protection strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK invests £500 million in Sovereign AI fund to boost startups

The UK government has launched a £500 million Sovereign AI initiative to support domestic startups, aiming to strengthen national capabilities and reduce reliance on foreign technology providers.

The programme is designed to help companies start, scale and compete globally while remaining rooted in Britain.

The initiative combines direct investment with broader support, including fast-track visas, access to high-performance computing and assistance in navigating regulation and procurement.

Early backing targets firms working on advanced AI infrastructure, life sciences and next-generation computing, reflecting a strategic focus on sectors with long-term economic and security implications.

A central feature is access to national supercomputing resources, addressing one of the most significant barriers to AI development.

By providing large-scale compute capacity and linking it to potential future investment, the programme aims to accelerate research, testing and deployment within the UK ecosystem.

Essentially, the policy signals a shift toward a more interventionist approach, positioning the state as an active investor rather than a passive regulator.

The objective is to anchor innovation domestically, ensuring that intellectual property, talent and economic value remain within the UK as global competition in AI intensifies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New India partnership targets AI innovation and digital transformation

Broadcast Engineering Consultants India Limited (BECIL) and the Centre for Development of Advanced Computing (C-DAC) have signed a Memorandum of Understanding to collaborate on advanced technologies and digital transformation. The agreement focuses on joint projects, consultancy, and technical support across sectors.

The partnership covers AI, machine learning, Internet of Things, cybersecurity, 5G, and cloud computing. It also includes the development of turnkey solutions, technology transfer, and the commercialisation of innovative products.

Capacity development is a key component of the collaboration. Both organisations will support workforce upskilling and skill development to strengthen technical capabilities.

Officials stated that the partnership aims to leverage complementary strengths to deliver technology solutions. It is also expected to support innovation and contribute to India’s broader digital development objectives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot