Deepfakes and injection attacks are no longer just tools for misinformation; they are now being deployed to break the identity verification systems that underpin banking, hiring, and account access.
Bad actors are targeting the critical moments when a system determines whether someone is a real person, from customer onboarding at banks to remote hiring and account recovery workflows.
Attackers exploit verification systems in two main ways: by using increasingly convincing synthetic faces and voice clones to mimic real people, and by launching injection attacks that substitute fraudulent video into the capture pipeline before it ever reaches the detection system.
According to the Entrust 2026 Identity Fraud Report, deepfakes are now linked to one in five biometric fraud attempts, with injection attacks rising 40% year-on-year.
Experts warn that detecting deepfakes alone is no longer sufficient. Enterprises must validate the whole session, including device integrity and behavioural signals, in real time.
Gartner predicts that by 2026, 30% of enterprises will no longer consider face-based identity verification reliable in isolation, given the pace of AI-generated deepfake attacks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A data breach at British game studio Cloud Imperium has angered players worldwide after the company quietly announced the incident. Users criticised the slow disclosure and the minimal information provided about what was accessed.
The breach, which occurred on 21 January, exposed names, contact details and dates of birth from backup systems. Cloud Imperium insists no passwords, financial information or game data were compromised.
Players have expressed frustration over the company’s reassurances, arguing that even basic personal details could be used in phishing campaigns. Forums and social media quickly filled with criticism describing the announcement as quiet and inadequate.
Cloud Imperium said it acted quickly to contain the breach, refresh security settings, and monitor systems for further incidents. The studio maintains that the issue should not affect gameplay or user safety, but some users remain sceptical.
The company’s flagship game, Star Citizen, is crowdfunded and boasts millions of players. However, the company has not disclosed the total number of accounts affected, leaving the community uneasy about the transparency of the response.
Ocado has announced plans to cut 1,000 jobs from its 20,000-strong global workforce, with roles mainly affected in technology and support. The company, headquartered in Hatfield, Hertfordshire, said the move would save £150m and follows major investment in robotics and automation.
Chief executive Tim Steiner said Ocado had completed a significant phase of investment in automation, but the company declined to confirm that AI directly led to the redundancies. At its Luton warehouse, opened in 2023, human staff continue to work alongside AI-powered robots.
Analysts suggested that competition has intensified as retailers in the UK, the US and Canada adopt similar AI-driven systems. Some former clients in the US and Canada have invested in their own technology, reducing reliance on Ocado’s platform.
Retail experts argued that deeper structural challenges, including changing consumer expectations and cost pressures in Hertfordshire and beyond, are also at play. Local leaders in Welwyn Hatfield have requested urgent talks as the company reshapes its operating model.
Microsoft has temporarily locked its official Copilot Discord server after a surge of spam linked to criticism of its AI strategy. The disruption followed widespread use of the nickname ‘Microslop’, a term mocking the company’s AI push.
The backlash intensified after chief executive Satya Nadella urged the industry to embrace AI in a December 2025 blog post. Users began flooding the Copilot Discord server with variations of the term, bypassing Microsoft’s word filters.
Microsoft initially blocked the word before restricting channels and eventually taking the entire server offline. In a statement, the company said the move was intended to protect users from harmful spam.
The controversy reflects broader resistance to AI integration across Windows 11 and Microsoft software. Microsoft has not confirmed when the Copilot Discord server will return online.
A deepfake video of Bombay Stock Exchange chief executive Sundararaman Ramamurthy circulated on social media in India, falsely offering stock advice to investors. The exchange moved quickly to report and remove the content, warning the public not to trust fake investment clips.
Cybersecurity experts say such cases are rising sharply, with one US firm estimating a 3,000 percent increase in deepfake incidents over two years. Executives in the US and the UK have also been impersonated using AI-generated audio and video.
In Hong Kong, police said a UK engineering firm lost $25m after an employee joined a video call featuring deepfake versions of senior colleagues. The transfer was made to multiple accounts before the fraud was discovered.
Security companies in the US and the UK are developing detection tools that analyse facial movement and blood flow patterns to identify AI-generated footage. Analysts warn that as costs fall and tools improve, businesses in India, Hong Kong and beyond face an escalating arms race against digital fraud.
Anthropic has enhanced its Claude AI chatbot to make switching from other platforms easier. Users on the free plan can now activate Claude’s memory feature, which allows them to import data from other AI platforms using a new dedicated tool.
The update ensures that users don’t have to start over when transferring context and history from competitors like OpenAI’s ChatGPT or Google’s Gemini.
The memory import option, first introduced in October for paid subscribers, now appears under ‘settings’ → ‘capabilities’ for all users. The tool lets users copy a prompt from their previous AI and paste the output into Claude, seamlessly transferring past interactions.
The recent popularity of Claude has been driven by tools such as Claude Code and Claude Cowork, as well as the launch of the Opus 4.6 and Sonnet 4.6 models. Upgrades enhance Claude’s coding, spreadsheet, and complex task capabilities, boosting its appeal to new users.
Anthropic’s visibility has also increased amid debates with the Pentagon, as the company refuses to loosen AI safeguards for military use, drawing ‘red lines’ around mass surveillance and autonomous weapons.
A high-severity vulnerability in Chrome’s integrated Gemini AI assistant exposed users to the potential activation of the camera and microphone, local file access, and phishing attacks. The issue, tracked as CVE-2026-0628, was disclosed by Palo Alto Networks’ Unit 42 and patched by Google in January 2026.
Gemini Live operates as a privileged AI panel embedded within the browser, capable of web page summarisation and task automation. To enable multimodal functionality, the panel is granted elevated permissions, including access to screenshots, local files, and device hardware.
Researchers identified inconsistent handling of the declarativeNetRequest API when gemini.google.com was loaded inside the AI side panel rather than a standard browser tab. While extensions could inject JavaScript in both cases, the panel context inherited browser-level privileges.
A malicious extension exploiting this distinction could hijack the trusted panel and execute arbitrary code with elevated access. Potential impacts included silent activation of a camera or microphone, screenshot capture, local file exfiltration, and high-credibility phishing attacks.
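The bug class is easier to see in a toy model. The sketch below is purely illustrative (hypothetical function and context names; Chrome’s internals are not written this way): it contrasts a permission check keyed on origin alone, which an extension-hijacked panel would satisfy, with a patched check that also requires the browser-created side-panel context.

```python
# Toy illustration of a context-confusion bug (hypothetical names,
# not Chrome's actual implementation).
ELEVATED = {"screenshot", "camera", "microphone", "local_files"}

def grant_buggy(origin: str) -> set:
    # Vulnerable check: trusts the origin alone, so any context that
    # loads gemini.google.com -- including an extension-hijacked
    # panel -- inherits the elevated permissions.
    return set(ELEVATED) if origin == "gemini.google.com" else set()

def grant_fixed(origin: str, context: str) -> set:
    # Patched check: elevated permissions require both the trusted
    # origin AND the browser-created side-panel context.
    if origin == "gemini.google.com" and context == "browser_side_panel":
        return set(ELEVATED)
    return set()

# An extension-controlled context gets nothing under the fixed check.
assert grant_buggy("gemini.google.com") == ELEVATED
assert grant_fixed("gemini.google.com", "extension_panel") == set()
```

The general lesson is that privilege decisions for embedded AI surfaces must account for how a trusted origin is loaded, not merely which origin it is.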
Google released a fix on 5 January 2026 following responsible disclosure. Users running the latest version of Chrome are protected, and organisations are advised to ensure updates are applied across all endpoints.
When Hayao Miyazaki dismissed early AI-generated animation as ‘an insult to life itself’ in 2016, the technology felt distant from mainstream creative work. Less than a decade later, generative AI tools produce images and text in seconds, reviving debate over authorship, copyright, and artistic identity.
In Japan, debate reflects both anxiety and ambition. Illustrators question the use of their work in training data, while policymakers and corporations see AI as vital to easing a projected labour shortfall by 2040. Legal provisions allowing data use for analysis have intensified calls for safeguards.
Public sentiment in Japan remains broadly favourable toward AI adoption. Surveys indicate relatively high levels of trust, with many viewing AI as part of long-term structural adjustment rather than an immediate threat. Economic expectations often outweigh concerns about disruption.
Workplace implementation, however, remains limited. OECD research shows only a small share of employees actively use AI tools, citing skills shortages and cautious corporate culture. Analysts describe a paradox: AI could ease labour pressures, yet adoption is constrained by limited expertise.
Creative professionals report more immediate effects. Surveys highlight income pressures and uncertainty among illustrators and freelancers. As deployment expands, Japan faces the task of balancing economic necessity with cultural preservation and fair access to emerging technologies.
Hundreds of academics have urged governments to halt plans for mandatory age checks on social media rather than accelerate deployment without assessing the risks.
Researchers argue that current systems expose people to privacy breaches, security vulnerabilities and malicious sites that ignore verification rules instead of offering meaningful protection.
They say scientific consensus has not yet formed on the benefits or harms of age-assurance technologies, making large-scale implementation premature and potentially discriminatory.
The letter stresses that any credible system would require cryptographic safeguards for every query, protecting data in transit rather than leaving identity checks to platforms without robust technical guarantees.
Academics believe such infrastructure would be complex to build globally and would create friction that many providers may refuse to adopt.
Concern escalated after early deployments in Italy and France, where verification is already mandatory.
Signatories, including Ronald Rivest and Bart Preneel, warn that governments risk introducing a socially unacceptable system that increases exposure to data misuse instead of ensuring children’s safety online.
The social media platform X has introduced a new ‘Paid Partnership’ label that creators can attach to posts to show when content is promotional, rather than leaving audiences unsure about commercial intent.
The update improves transparency for followers while meeting rules set by the Federal Trade Commission, which expects sponsored material to be disclosed clearly.
Creators previously relied on hashtags such as #ad or #paidpartnership instead of an integrated disclosure option. The new feature allows users to apply the label through a content-disclosure toggle either during posting or afterwards.
X’s product lead, Nikita Bier, said undisclosed promotions damage trust and weaken the platform’s integrity, so the tool is meant to support creators and regulators simultaneously.
X has been trying to build a stronger creator ecosystem by offering payouts, subscriptions and other incentives. Yet many creators still favour Instagram or YouTube over X as their primary channel, because those platforms have longer-standing monetisation tools.
The addition of a built-in label aligns X with broader industry practice and aims to regain credibility among advertisers and creators.
The company has also tightened API access, preventing programmatic replies unless a user is directly mentioned or quoted.
The change seeks to limit LLM-generated spam, preventing automated responses from distorting discussions or appearing as fake engagement beneath sponsored content.
X hopes these combined measures will enhance authenticity around commercial posts.