YouTube is expanding its likeness-detection technology designed to identify AI-generated deepfakes, extending access to a pilot group of government officials, political candidates, and journalists.
The tool allows participants to detect unauthorised AI-generated videos that simulate their faces and request removal if the content violates YouTube policies. The system builds on technology launched last year for around four million creators in the YouTube Partner Program.
Similar to YouTube’s Content ID system, which detects copyrighted material in uploaded videos, the likeness detection feature scans for AI-generated faces created with deepfake tools. Such technologies are increasingly used to spread misinformation or manipulate public perception by making prominent figures appear to say or do things they never did.
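YouTube has not published the details of its matching pipeline, but likeness detection of this kind is typically built on face embeddings: the enrolled reference face and a face found in an uploaded frame are each mapped to a vector, and a high cosine similarity flags a potential match for review. A minimal sketch in Python, with random vectors standing in for the output of a real face-embedding model:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: 1.0 means identical direction in embedding space.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_likeness_match(reference: np.ndarray, candidate: np.ndarray,
                      threshold: float = 0.8) -> bool:
    # The threshold is illustrative; real systems tune it to balance
    # false matches against missed detections.
    return cosine_similarity(reference, candidate) >= threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                      # stand-in for the verified selfie
frame = enrolled + rng.normal(scale=0.1, size=128)   # a near-duplicate face in a video
print(is_likeness_match(enrolled, frame))            # True
```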
According to YouTube, the pilot programme aims to balance free expression with safeguards against AI impersonation, particularly in sensitive civic contexts.
‘This expansion is really about the integrity of the public conversation,’ said Leslie Miller, YouTube’s vice president of Government Affairs and Public Policy. ‘We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it.’
Removal requests will be assessed individually under YouTube’s privacy policy rules to determine whether the content constitutes parody or political critique, which remain protected forms of expression. Participants must verify their identity by uploading a selfie and a government-issued ID before accessing the tool. Once verified, they can review detected matches and submit removal requests for content they believe violates policy.
YouTube also said it supports the proposed NO FAKES Act in the United States, which aims to regulate the unauthorised use of an individual’s voice or visual likeness in AI-generated media. AI-generated videos on the platform are already labelled, though label placement varies depending on the topic’s sensitivity.
‘There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,’ said Amjad Hanif, YouTube’s vice president of Creator Products. The company said it plans to expand the technology over time to detect AI-generated voices and other intellectual property.
Leaders from government, academia, and industry gathered at a workshop to emphasise that sustainable AI must shape efficient, inclusive, and environmentally responsible systems. The discussion focused on embedding sustainability, ethics, and human-centred principles throughout the AI lifecycle by adopting a sustainable-by-design approach.
The workshop built on Saudi Arabia’s expanding role in AI and digital transformation through the Saudi Data & AI Authority (SDAIA) and the National Strategy for Data and AI (NSDAI), efforts supported by significant investments in cloud infrastructure and data centres under the Kingdom’s Vision 2030 programme. Participants highlighted that sustainable AI must become a core principle in the development of emerging digital infrastructure and AI-powered services.
Abdulrahman Habib, Director of the International Centre for Artificial Intelligence Research and Ethics (ICAIRE), highlighted Saudi Arabia’s growing leadership in AI ethics and governance. With national AI Ethics Principles and a maturing regulatory landscape, the Kingdom is positioning itself as a global contributor to responsible AI dialogue, translating principles into operational governance systems rather than just policy statements.
Leona Verdadero of UNESCO highlighted two core concepts: Greening with AI, which uses AI to accelerate sustainability, and Greening of AI, which ensures systems are energy-efficient, ethical, and human-centred. She stressed that effective AI governance requires collaboration and industry leadership at every stage of development.
Per Ola Kristensson from the University of Cambridge urged action beyond rhetoric, stressing that true AI sustainability means developing technology to augment, not replace, human potential. Industry presentations reinforced that sustainable AI drives real-world progress: RECYCLEE optimises resource recovery, Remedium reduces environmental impacts in healthcare and infrastructure, and IDOM strengthens sustainability reporting through AI-enhanced design.
UNESCO supports Saudi Arabia’s drive for inclusive, ethical, and sustainable AI ecosystems, framing sustainable AI as critical in the global transition to green digital transformation.
Faisal Al Azib, Executive Director of the UN Global Compact Network Saudi Arabia, stated: ‘As the Kingdom advances its digital transformation under Vision 2030, we have a responsibility to ensure that innovation advances hand in hand with sustainability and human dignity.’
Al Azib concluded: ‘Sustainable AI is central to building resilient, future-ready businesses. Through partnerships with UNESCO and our local ecosystem, we aim to equip companies with the governance tools to embed responsible, energy-efficient, and human-centred AI into their core strategies.’
The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.
The discussions form part of ongoing adjustments to the EU AI Act.
Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.
Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.
The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.
Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.
A court in the Netherlands has increased potential penalties against Meta after ruling that changes to social media timelines must be implemented urgently.
The decision raises the potential fine for non-compliance from €5 million to €10 million if required adjustments are not applied to Facebook and Instagram feeds.
Judges at the Amsterdam Court of Appeal said users must be able to select a timeline that does not rely on profiling-based recommendations.
The ruling follows a legal challenge from the digital rights organisation Bits of Freedom, which argued that users who switched away from algorithmic feeds were automatically returned to them after navigating the platform or reopening the application.
The court concluded that the automatic resetting mechanism represents a deceptive design practice known as a ‘dark pattern’.
Such practices are prohibited under the EU’s Digital Services Act, which requires large online platforms to provide greater transparency and user control over recommendation systems.
Judges acknowledged that Meta had already introduced several technical changes, although not all required measures were fully implemented. The company must ensure that the non-profiling timeline option remains active once selected, rather than reverting to algorithmic recommendations.
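The required behaviour is simple to state in code terms: the user’s feed choice must persist across sessions, with the profiling-based default applied only when no choice was ever made. A minimal sketch (the storage and field names are made up, not Meta’s implementation):

```python
# Stand-in for persistent storage keyed by user ID.
feed_prefs: dict[str, str] = {}

def set_feed(user_id: str, feed: str) -> None:
    feed_prefs[user_id] = feed  # "chronological" or "recommended"

def get_feed(user_id: str) -> str:
    # Compliant behaviour: fall back to the default only if the user has
    # never chosen. The 'dark pattern' the court prohibited amounts to
    # ignoring the stored value and resetting it on every app reopen.
    return feed_prefs.get(user_id, "recommended")

set_feed("alice", "chronological")
print(get_feed("alice"))  # "chronological", even after reopening the app
```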
The dispute also highlights regulatory tensions within the European framework. Before turning to the courts, Bits of Freedom submitted a complaint to Coimisiún na Meán, the national authority responsible for overseeing Meta’s compliance with the EU rules.
According to the organisation, the lack of progress from regulators encouraged legal action in Dutch courts.
Meta indicated that the company intends to challenge the decision and pursue further legal proceedings. The case could become an important test of how the Digital Services Act is enforced against major online platforms across Europe.
AI, cloud computing, and cross-border data flows have made questions about control and jurisdiction increasingly important for governments and businesses. In Asia, the debate around digital sovereignty often focuses on ‘US versus non-US cloud’ providers or data localisation.
Such simplifications miss the practical challenges organisations face when choosing hosting locations or training AI models while navigating diverse regulatory regimes.
At the same time, Asia’s digital economy is building its own regulatory foundations. In Vietnam and Indonesia, new rules such as Vietnam’s Decree 53 and Indonesia’s data protection framework show how governments are shaping data governance while still relying on global cloud and AI platforms. Most organisations across the region continue to operate using a mix of local, regional, and international providers.
Organisations must address key questions about data jurisdiction and workload mobility when risks change. They must also control who can access sensitive systems during incidents. Digital sovereignty is clearer when seen through three pillars: data sovereignty, technical sovereignty, and operational sovereignty.
Data sovereignty is about jurisdiction, not just data storage. As AI regulation expands, businesses need to know which authorities can access their data and how it may be used. Technical sovereignty is the ability to move or redesign systems as regulations or geopolitics shift. Multi-cloud and hybrid strategies help organisations remain adaptable.
Operational sovereignty focuses on governance and control. It addresses who can access systems, from where, and under what safeguards, thus linking sovereignty directly to cybersecurity and incident response.
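As an illustration of what ‘who, from where, and under what safeguards’ means in practice, an operational-sovereignty rule often reduces to an access-control check over the operator’s role, location, and authentication state. The rules and jurisdiction codes below are invented for the sketch, not drawn from any specific regulation:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str            # e.g. "admin" or "analyst"
    jurisdiction: str    # where the operator is located
    mfa_verified: bool
    sensitivity: str     # "standard" or "sensitive"

ALLOWED_JURISDICTIONS = {"VN", "ID", "SG"}  # illustrative only

def allow(req: AccessRequest) -> bool:
    # Sensitive systems: in-jurisdiction administrators with MFA only.
    if req.sensitivity == "sensitive":
        return (req.role == "admin"
                and req.jurisdiction in ALLOWED_JURISDICTIONS
                and req.mfa_verified)
    return req.mfa_verified

print(allow(AccessRequest("admin", "VN", True, "sensitive")))  # True
print(allow(AccessRequest("admin", "US", True, "sensitive")))  # False
```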
For Asia-Pacific organisations, digital sovereignty should not be a simple procurement checklist. Instead, it should guide cloud and AI strategies from the start, ensuring legal clarity, technical flexibility, and operational trust as the digital landscape evolves.
A takedown operation led by Microsoft, Europol, and several industry partners has targeted the infrastructure behind Tycoon 2FA, a phishing-as-a-service platform that enabled large-scale phishing campaigns against more than 500,000 organisations each month.
By mid-2025, Tycoon 2FA accounted for 62% of the phishing attempts blocked by Microsoft, with over 30 million malicious emails blocked in a single month. Experts link the platform to around 96,000 global victims since 2023, including 55,000 Microsoft customers.
Researchers from Resecurity found cybercriminals widely used the platform to impersonate legitimate users and gain unauthorised access to accounts such as Microsoft 365, Outlook and Gmail. The service relied on techniques such as URL rotation using open redirect vulnerabilities and the misuse of Cloudflare Workers to hide malicious infrastructure.
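The report does not publish indicators, but the open-redirect pattern it describes is simple to illustrate: a link on a trusted domain carries a second, attacker-controlled URL in its query string, so the email passes reputation checks while the victim is bounced onward. A minimal defensive check (the domains below are illustrative, not real indicators):

```python
from urllib.parse import urlparse, parse_qs

def looks_like_open_redirect(url: str) -> bool:
    # Flag URLs whose query string smuggles an absolute URL on another host.
    outer = urlparse(url)
    for values in parse_qs(outer.query).values():
        for value in values:
            inner = urlparse(value)
            if (inner.scheme in ("http", "https")
                    and inner.netloc
                    and inner.netloc != outer.netloc):
                return True
    return False

print(looks_like_open_redirect(
    "https://trusted.example.com/out?next=https://evil.example.net/login"
))  # True
```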
‘The author of Tycoon 2FA is actively updating the tool with regular kit updates,’ reads the report published by Resecurity. ‘What makes Tycoon 2FA so special is that the kit effectively combines multiple methods to deliver phishing at scale—from PDF attachments to QR codes.’
Authorities say taking the infrastructure offline disrupts a key pathway for account takeover attacks and prevents additional threats, such as data theft, ransomware, business email compromise, and financial fraud.
Clemson University has introduced ChatGPT Edu to its students, faculty, and staff, providing them with free access to the secure, institutionally managed version of the AI platform.
The rollout stems from Clemson’s partnership with OpenAI and forms part of the university’s broader AI Initiative, which aims to develop a human-centred approach to AI across education, research, and operations.
University officials said the ChatGPT Edu environment will expand access to generative AI tools while ensuring institutional data remains protected and is not used to train external AI systems.
Members of the Clemson community who want to use the platform must request access through a ChatGPT Edu account request form. Once approved, accounts are automatically created, and users can sign in through Clemson’s single sign-on system.
Even if students or staff members already have a ChatGPT account linked to their Clemson email, they will still need to request access to ChatGPT Edu. After approval, they can merge their existing account or download their chat history before creating a new one.
The university said the launch reflects its view that access to emerging technologies should be paired with clear guidance and responsible use. Users are advised to review Clemson’s updated AI guidelines before using the system.
On 28 February 2026, Anthropic’s Claude rose to No. 1 in Apple’s US App Store free rankings, overtaking OpenAI’s ChatGPT. The surge came shortly after OpenAI announced a partnership with the US Department of Defense (DoD), making its technology available to the US Army. The development prompted discussion among users and observers about whether concerns over military partnerships were influencing the shift to alternative AI tools.
Mere hours before the $200 million OpenAI-DoD deal was finalised, Anthropic was informed that its own potential deal with the Pentagon had fallen through. According to reporting, discussions broke down after Anthropic declined to grant the US government unrestricted control over its models, particularly for potential uses related to domestic mass surveillance.
Following the breakdown of negotiations, US officials reportedly designated Anthropic as a ‘supply chain risk to national security’. The decision effectively limited the company’s participation in certain defence-related projects and highlighted growing tensions between AI developers’ safety policies and government expectations regarding national security technologies.
The debate over military partnerships sparked internal and industry-wide discussion. Caitlin Kalinowski, formerly head of AR glasses hardware at Meta and a hardware lead at OpenAI, resigned soon after the US DoD deal, citing ethical concerns about the company’s involvement in military AI applications. ‘I resigned from OpenAI,’ she wrote in her resignation post. ‘I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are…’
AI has driven recent technological innovation, with companies like Anduril and Palantir collaborating with the US DoD to deploy AI on and off the battlefield. The debate over AI’s role in military operations, surveillance, and security has intensified, especially as Middle East conflicts highlight its potential uses and risks.
Against this backdrop, the dispute between Anthropic and the Pentagon reflects a wider debate on how AI should be used in security and defence. Governments are increasingly relying on private tech companies to develop the systems that shape modern military capabilities, while those same companies are trying to set limits on how their technologies can be used.
As AI becomes more deeply integrated into security strategies around the world, the challenge may no longer be whether the technology will be used, but how it should be governed. The question is: who should ultimately decide where the limits of military AI lie?
Anthropic’s approach to military AI
Anthropic’s approach is closely tied to its concept of ‘constitutional AI’, a training method that guides how the model behaves by embedding a set of principles directly into its responses. These principles are intended to reduce harmful outputs and steer the system away from unsafe or unethical uses. While the safeguards improve reliability and trust, they can also limit how the technology can be deployed in more sensitive contexts, such as military operations.
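In published descriptions of constitutional AI, a central step is self-critique and revision: the model drafts an answer, critiques that draft against a written principle, and then rewrites it in light of the critique. A minimal sketch of the loop, with an illustrative principle and any text-in, text-out function standing in for a real model:

```python
from typing import Callable

# Illustrative wording only, not Anthropic's actual constitution.
PRINCIPLE = (
    "Choose the response that is most helpful while refusing to assist "
    "with mass surveillance or other harmful uses."
)

def constitutional_revision(prompt: str, model: Callable[[str], str]) -> str:
    """One critique-and-revise pass. `model` is any completion function."""
    draft = model(prompt)
    # Ask the model to critique its own draft against the principle...
    critique = model(
        f"Principle: {PRINCIPLE}\nResponse: {draft}\n"
        "Point out any way the response conflicts with the principle."
    )
    # ...then rewrite the draft in light of that critique.
    return model(
        f"Principle: {PRINCIPLE}\nResponse: {draft}\nCritique: {critique}\n"
        "Rewrite the response so it complies with the principle."
    )

# Toy usage: an echo 'model' demonstrates the control flow without an LLM.
print(constitutional_revision("Plan a city-wide camera network.",
                              lambda p: p.splitlines()[-1]))
```

In published accounts, the revised answers are then fed back as fine-tuning data, which is how written principles end up shaping the deployed model’s behaviour.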
Anthropic’s Constitution says its AI assistant should be ‘genuinely helpful’ to people and society, while avoiding unsafe, unethical, or deceptive actions. The document reflects the company’s broader effort to build safeguards into model deployment. In practice, Anthropic has set limits on certain applications of its technology, including uses related to large-scale surveillance or military operations.
Anthropic presents these safeguards as proof of its commitment to responsible AI. Reports indicate that concerns over unrestricted model access led to the breakdown in talks with the US DoD.
At the same time, Anthropic clarifies that its concerns are specific to certain uses of its technology. The company does not generally oppose cooperation with national security institutions. In a statement following the Pentagon’s designation of the company as a ‘supply chain risk to national security’, CEO Dario Amodei said, ‘Anthropic has much more in common with the US DoD than we have differences.’ He added that the company remains committed to ‘advancing US national security and defending the American people.’
The episode, therefore, highlights a nuanced position. Anthropic appears open to defence partnerships but seeks to maintain clearer limits on the deployment of its AI systems. The disagreement with the Pentagon ultimately reflects not a fundamental difference in goals, but rather different views on how far military institutions should be able to control and use advanced AI technologies.
Anthropic’s position illustrates a broader challenge facing governments and tech companies as AI becomes increasingly integrated into national security systems. While military and security institutions are eager to deploy advanced AI tools to support intelligence analysis, logistics, and operational planning, the companies developing these technologies are also seeking to establish safeguards for their use. Anthropic’s willingness to step back from a major defence partnership and challenge the Pentagon’s response underscores how some AI developers are trying to set limits on military uses of their systems.
Defence partnerships that shape the AI industry
While Anthropic has taken a cautious approach to military deployment of AI, other technology companies have pursued closer partnerships with defence institutions. One notable example is Palantir, the US data analytics firm co-founded by Peter Thiel that has longstanding relationships with numerous government agencies. Documents leaked in 2013 suggested that the company had contracts with at least 12 US government bodies. More recently, Palantir has expanded its defence offering through its Artificial Intelligence Platform (AIP), designed to support intelligence analysis and operational decision-making for military and security institutions.
Another prominent player is Anduril Industries, a US defence technology company focused on developing AI-enabled defence systems. The firm produces autonomous and semi-autonomous technologies, including unmanned aerial systems and surveillance platforms, which it supplies to the US DoD.
Shield AI, meanwhile, is developing autonomous flight software designed to operate in environments where GPS and communications may be unavailable. Its Hivemind AI platform powers drones that can navigate buildings and complex environments without human control. The company has worked with the US military to test these systems in training exercises and operational scenarios, including aircraft autonomy projects aimed at supporting fighter pilots.
These partnerships illustrate how the US government has increasingly embraced AI as a key pillar of national defence and future military operations. In many cases, the technologies are already being used in operational contexts. Palantir’s Gotham and AIP, for instance, have supported US military and intelligence operations by processing satellite imagery, drone footage, and intercepted communications to help analysts identify patterns and potential threats.
Other companies are contributing to defence capabilities through autonomous systems development and hardware integration. Anduril supplies the US DoD with AI-enabled surveillance, drone, and counter-air systems designed to detect and respond to potential threats. At the same time, OpenAI’s technology is increasingly being integrated into national security and defence projects through growing collaboration with US defence institutions.
Such developments show that AI is no longer a supporting tool but a fundamental part of military infrastructure, influencing how defence organisations process information and make decisions. As governments deepen their reliance on private-sector AI, the emerging interplay among innovation, operational effectiveness, and oversight will define the central debate on military AI adoption.
The potential benefits of military AI
The debate over Anthropic’s restrictions on military AI use highlights the reasons governments invest in such technologies: defence institutions are drawn to AI because it processes vast amounts of information much faster than human analysts. Military operations generate massive data streams from satellites, drones, sensors, and communication networks, and AI systems can analyse them in near real time.
In 2017, the US DoD launched Project Maven to apply machine learning to drone and satellite imagery, enabling analysts to identify objects, movements, and potential threats on the battlefield faster than with traditional manual methods.
AI is increasingly used in military logistics and operational planning. It helps commanders anticipate equipment failures, enables predictive maintenance, optimises supply chains, and improves field asset readiness.
Recent conflicts have shown that AI-driven tools can enhance military intelligence and planning. In Ukraine, for example, forces reportedly used software to analyse satellite imagery, drone footage, and battlefield data. Key benefits include more efficient target identification, real-time tracking of troop movements, and clearer battlefield awareness through the integration of multiple data sources.
AI-assisted analysis has been used in intelligence and targeting during the Gaza conflict. Israeli defence systems use AI tools to rapidly process large datasets for surveillance and intelligence operations. The tools help analysts identify potential militant infrastructure, track movements, and prioritise key intelligence, thus speeding up information processing for teams during periods of high operational activity.
More broadly, AI is transforming the way militaries coordinate across land, air, sea, and cyber domains. AI integrates data from diverse sources, equipping commanders to interpret complex operational situations and enabling faster, informed decision-making. The advances reinforce why many governments see AI as essential for future defence planning.
Ethical concerns and Anthropic’s limits on military AI
Despite the operational advantages of military AI, its growing role in national defence systems has raised ethical concerns. Critics warn that overreliance on AI for intelligence analysis, targeting, or operational planning could introduce risks if the systems produce inaccurate outputs or are deployed without sufficient human oversight. Even highly capable models can generate misleading or incomplete information, which in high-stakes military contexts could have serious consequences.
Concerns about the reliability of AI systems are also linked to the quality of the data they learn from. Many models still struggle to distinguish authentic information from synthetic or manipulated content online. As generative AI becomes more widespread, the risk that systems may absorb inaccurate or fabricated data increases, potentially affecting how these tools interpret intelligence or analyse complex operational environments.
Questions about autonomy have also become a major issue in discussions around military AI. As AI systems become increasingly capable of analysing battlefield data and identifying potential targets, debates have emerged over how much decision-making authority they should be given. Many experts argue that decisions involving the use of lethal force should remain under meaningful human control to prevent unintended consequences or misidentification of targets.
Another area of concern relates to the potential expansion of surveillance capabilities. AI systems can analyse satellite imagery, communications data, and online activity at a scale beyond the capacity of human analysts alone. While such tools may help intelligence agencies detect threats more efficiently, critics warn that they could also enable large-scale monitoring if deployed without clear legal and institutional safeguards.
It is within this ethical landscape that Anthropic has attempted to position itself as a more cautious actor in the AI industry. Through initiatives such as Claude’s Constitution and its broader emphasis on AI safety, the company argues that powerful AI systems should include safeguards that limit harmful or unethical uses. Anthropic’s reported refusal to grant the Pentagon unrestricted control over its models during negotiations reflects this approach.
The disagreement between Anthropic and the US DoD therefore highlights a broader tension in the development of military AI. Governments increasingly view AI as a strategic technology capable of strengthening defence and intelligence capabilities, while some developers seek to impose limits on how their systems are deployed. As AI becomes more deeply embedded in national security strategies, the question may no longer be whether these technologies will be used, but who should define the boundaries of their use.
Military AI and the limits of corporate control
Anthropic’s dispute with the Pentagon shows that the debate over military AI is no longer only about technological capability. Questions of speed, efficiency, and battlefield advantage now collide with concerns over surveillance, autonomy, human oversight, and corporate responsibility. Governments increasingly see AI as a strategic asset, while companies such as Anthropic are trying to draw boundaries around how far their systems can go once they enter defence environments.
Contrasting approaches across the industry make the tension even clearer. Palantir, Anduril, Shield AI, and OpenAI have moved closer to defence partnerships, reflecting a broader push to integrate advanced AI into military infrastructure. Anthropic, by comparison, has tried to keep one foot in national security cooperation while resisting uses it views as unsafe or unethical. A divide of that kind suggests that the future of military AI may be shaped as much by company policies as by government strategy.
The growing reliance on private firms to build national security technologies has made governance harder to define. Military institutions want flexibility, scale, and operational control, while AI developers increasingly face pressure to decide whether they are simply suppliers or active gatekeepers of how their models are deployed. Anthropic’s position does not rule out defence cooperation outright, but it does expose how fragile the relationship becomes when state priorities and corporate safeguards no longer align.
Military AI will continue to expand, whether through intelligence analysis, logistics, surveillance, or autonomous systems. Governance, however, remains the unresolved issue at the centre of that expansion. As AI becomes more deeply embedded in defence policy and military planning, should governments alone decide how far these systems can go, or should companies like Anthropic retain the power to set limits on their use?
Researchers in the US have found that AI analysis of mammograms may help identify women at risk of heart disease. The study examined breast scans to measure calcium deposits in arteries, a sign linked to cardiovascular problems.
Scientists from Emory University in Atlanta analysed screening data from more than 120,000 women. Results showed women with higher levels of arterial calcium detected in mammograms faced significantly greater risk of heart attacks or strokes.
Researchers reported that even women under 50 years old showed increased cardiovascular risk when calcium deposits appeared on scans. Experts say the findings suggest routine breast screening could reveal hidden heart health risks.
Doctors in Atlanta say AI could allow mammograms to act as a dual screening tool for breast cancer and cardiovascular disease. Further research is planned before hospitals in the US widely adopt the method.
Technology hubs in China are promoting the OpenClaw AI agent as part of new local industry initiatives. Officials say the open-source tool can automate tasks such as email management and travel booking.
Cities including Shenzhen, Wuxi and Hefei are drafting policies to build an ecosystem around OpenClaw. Authorities are offering subsidies, computing resources and office support to encourage AI-driven one-person companies.
OpenClaw has grown rapidly since its release and has become one of the fastest-expanding projects on GitHub. Technology groups say the tool could allow individuals to operate businesses with far fewer employees.
Regulators have also warned about security and data protection risks linked to AI agents. Draft rules in China propose limits on access to sensitive data and stronger oversight of cross-border information flows.