Cybercriminals shift to stolen credentials and AI-enabled attacks

Ransomware attacks are increasingly relying on stolen passwords rather than traditional malware, according to Cloudflare’s latest annual threat report. Attackers now exploit legitimate account credentials to blend into regular traffic, making breaches harder to detect and contain.

Manufacturing and critical infrastructure organisations account for over half of targeted attacks, reflecting their high operational stakes.

Cloudflare highlighted that AI is enabling attackers to prioritise speed and scale over technical sophistication. Generative AI lets criminals automate fraud, hijacking email threads and targeting a sweet spot of roughly $49,000, large enough to maximise profit yet small enough to avoid scrutiny.

Nation-state actors also leverage legitimate platforms for command-and-control operations, with Russia, China, Iran, and North Korea each following distinct cyber strategies.

Researchers warned that modern ransomware is less a malware crisis and more an identity and access challenge. Attackers using authorised credentials can bypass defences and execute high-impact extortion, marking a significant shift in global threat vectors.

The report urges businesses to strengthen identity security, monitor access, and defend against AI-driven attacks that exploit impersonation and automation at scale.

AI ethics as societal infrastructure in the digital era

In recent days, social media has been alight with discussion of Silicon Valley, the 2014 series whose portrayal of AI now feels remarkably prophetic. Fans and professionals alike are highlighting how the show's depiction of AI, automated agents, and ethical dilemmas mirrors today's real-world challenges.

From algorithmic decision-making to AI shaping social and economic interactions, the series explores the boundaries, responsibilities, and societal impact of AI in ways that feel startlingly relevant. What once seemed like pure comedy is increasingly being seen as a warning, highlighting how the choices we make around AI and its ethical frameworks will shape whether the technology benefits society.

While the show dramatises these dilemmas for entertainment, the real world now faces the same questions. Recent trends in generative AI, autonomous agents, and large-scale automated decision-making are bringing the show's predictions to life, raising urgent ethical questions for developers, policymakers, and society alike.

Balancing technological progress with societal values is essential, as intelligent technologies must align with society, guided by AI ethics.
Source: Freepik

The rise of AI ethics: from niche concern to central requirement

The growing influence of AI on society has propelled ethics from academic debate to a central factor in technological decision-making. The impact of AI is becoming tangible across society, from employment and finance to online content.

Technical performance alone no longer defines success; the consequences of design choices have become morally and socially significant. Governments, international organisations, and corporations are responding by developing ethical frameworks. 

The EU AI Act, the OECD AI Principles, and numerous corporate codes of conduct signal that society expects AI systems to align with human values, demonstrating accountability, fairness, and trustworthiness. Ethical reflection has become a prerequisite for technological legitimacy and societal acceptance.

Functions of AI ethics: trust, guidance, and societal risk

Ethical frameworks for AI fulfil multiple roles, balancing moral guidance with practical necessity. They build public trust between developers, organisations, and users, reassuring society that AI systems operate consistently with shared values.

For developers, ethical principles offer a blueprint for decision-making, helping anticipate societal impact and minimise unintended harm. Beyond guidance, AI ethics acts as a form of societal risk governance, allowing organisations to identify potential consequences before they manifest. 

By integrating ethics into design, AI systems become socially sustainable technologies, bridging technical capability with moral responsibility. Such an approach is particularly critical in high-stakes domains such as healthcare, finance, and law, where algorithmic decisions can significantly affect human well-being.

The politics of AI ethics: regulatory theatre and corporate influence

Despite widespread adoption, AI ethics frameworks sometimes risk becoming regulatory theatre, where public statements signal commitment but fail to ensure meaningful action. Many organisations promote ethical AI principles, yet consistent enforcement and follow-through often lag behind these claims.

Even with their limitations, ethical frameworks are far from meaningless. They shape public discourse, influence policy, and determine which AI systems gain social legitimacy. The challenge lies in balancing credibility with practical impact, ensuring that ethical commitments are more than symbolic gestures. 

Social media platforms like X amplify this tension, with public scrutiny and viral debates exposing both successes and failures in applying ethical principles.

AI ethics as a lens for technology and society

The prominence of AI ethics reflects a broader societal transformation in evaluating technology. Modern societies no longer judge AI solely by efficiency, speed, or performance; they assess social consequences, fairness, and the distribution of risks and benefits. 

AI is increasingly seen as a social actor rather than a neutral tool, influencing public behaviour, shaping social norms, and redefining concepts such as trust, autonomy, and accountability. Ethical evaluation of AI is not just a philosophical exercise; it reflects evolving expectations about the role technology should play in human life.

AI ethics as early-warning governance for social impact

AI ethics functions as a critical early-warning system for society. Ethical principles anticipate harms that might otherwise go unnoticed, from systemic bias to privacy violations. By highlighting potential consequences, ethics enables organisations to act proactively, reducing the likelihood of crises and improving public trust. 

Moreover, ethics ensures that long-term impacts, including societal cohesion, equity, and fairness, are considered alongside immediate technical performance. In doing so, AI ethics bridges the gap between what AI can do and what society deems acceptable, ensuring that innovation remains aligned with moral and social norms.

The bridge between technological power and social legitimacy

AI ethics remains the essential bridge between technological power and social legitimacy. Embedding ethical reflection into AI development ensures that innovation is not only technically effective but also socially sustainable, trustworthy, and accountable. 

Yet a growing tension defines the next phase of this evolution: the accelerating pace of innovation often outstrips the slower processes of ethical deliberation and regulation, raising questions about who sets the norms and how quickly societies can adapt.

Rather than acting solely as a safeguard, ethics is increasingly becoming a strategic dimension of technological leadership, shaping public trust, market adoption, and even geopolitical influence in the global race for AI. The rise of AI ethics, therefore, signals more than a moral awakening, reflecting a structural shift in how technological progress is evaluated and legitimised.

As AI continues to integrate into everyday life, ethical frameworks will determine not only how systems function, but also whether they are accepted as part of the social fabric. Aligning innovation with societal values is no longer optional but the condition under which AI can sustain legitimacy, unlock its full potential, and remain a transformative force that benefits society as a whole.

Parliament deadlock leaves EU chat-scanning extension in doubt

The European Parliament's civil liberties committee has failed to secure majority backing for its amended report on extending the EU's temporary chat-scanning rules, leaving Parliament without a clear negotiating position.

Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.

At stake is a proposal to extend the current derogation that allows tech companies to voluntarily scan their services for Child Sexual Abuse Material (CSAM).

The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.

The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.

With trilogue talks finally underway, institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose their ability to scan services under the EU law.

The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments. The outcome will determine whether the temporary regime remains in place while negotiations on the permanent system continue.

X Chat debuts as separate app for iOS

Social platform X has released a standalone version of its private messaging service, X Chat, via Apple’s TestFlight. The initial beta reached capacity within two hours, reflecting strong early demand among iOS users eager to trial the new app.

Michael Boswell confirmed that the first 1,000 places were quickly expanded to 5,000, with further growth expected. Development has been ongoing for several months, and testers have been urged to stress-test the product and submit detailed feedback.

Early screenshots suggest a cleaner interface and possible rebranding to ‘xChat’.

Security claims remain under scrutiny, as experts question whether X Chat’s encryption matches established platforms such as Signal. Clear evidence addressing those concerns in the standalone build has yet to emerge.

The launch of the separate app marks a notable shift from Elon Musk's earlier ambition to integrate messaging, payments, and content into a single 'everything app'.

Chats will synchronise across X, its web platform chat.x.com, and the new iOS app, while an Android version is expected soon.

ClawJacked flaw let attackers hijack AI agents through the browser

A high-severity vulnerability dubbed ‘ClawJacked’ has been discovered in OpenClaw, an open-source AI agent framework that lets developers run autonomous AI assistants locally.

The flaw, uncovered by Oasis Security, allowed malicious websites to silently hijack a user’s local AI agent instance and steal sensitive data, all triggered by a single browser visit.

The attack exploited OpenClaw’s local WebSocket gateway, which assumed that traffic from localhost could be trusted. A malicious website could open a WebSocket connection to the gateway, brute-force the password at hundreds of guesses per second, with no rate limiting applied to local connections, and then silently register as a trusted device without any user prompt.
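
The report does not reproduce OpenClaw's code, but the defensive pattern it implies is general. The minimal Python sketch below, written against the `websockets` library, shows a localhost gateway hardened against exactly this class of attack: it refuses any handshake carrying a browser Origin header (web pages cannot omit or forge it), and it rate-limits failed password attempts so that hundreds of guesses per second become impossible. Every name, port, and threshold here is hypothetical, not OpenClaw's actual API.

```python
# Illustrative sketch only, not OpenClaw's actual code: a localhost
# WebSocket gateway hardened against browser-based hijacking.
import asyncio
import hmac
import time

from websockets.asyncio.server import serve  # websockets >= 13

PASSWORD = "change-me"       # hypothetical shared secret
MAX_FAILURES = 5             # failed guesses tolerated per window
WINDOW_SECONDS = 60.0
failures: list[float] = []   # timestamps of recent failed attempts


async def handler(conn):
    # Browsers always attach an Origin header to WebSocket handshakes;
    # local desktop clients normally send none. Rejecting every browser
    # origin blocks the "malicious web page dials localhost" vector.
    if conn.request.headers.get("Origin") is not None:
        await conn.close(code=4403, reason="browser origins not allowed")
        return

    # Rate-limit authentication so the password cannot be brute-forced
    # at hundreds of guesses per second.
    now = time.monotonic()
    failures[:] = [t for t in failures if now - t < WINDOW_SECONDS]
    if len(failures) >= MAX_FAILURES:
        await conn.close(code=4429, reason="too many failed attempts")
        return

    supplied = await conn.recv()
    if isinstance(supplied, bytes):
        supplied = supplied.decode()
    if not hmac.compare_digest(supplied, PASSWORD):  # constant-time compare
        failures.append(time.monotonic())
        await conn.close(code=4401, reason="authentication failed")
        return

    # Only an authenticated, non-browser caller is registered as trusted.
    await conn.send("authenticated")


async def main():
    # Bind to loopback only; 0.0.0.0 would expose the gateway to the LAN.
    async with serve(handler, "127.0.0.1", 8765):
        await asyncio.get_running_loop().create_future()  # run forever


asyncio.run(main())
```

The same origin check can also be enforced declaratively: passing `origins=[None]` to `serve()` makes the handshake itself reject any client that sends an Origin header.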

Once inside, attackers gained admin-level access to the AI agent, connected devices, logs, and configuration data. Oasis Security responsibly disclosed the flaw, and OpenClaw issued a patch within 24 hours, releasing version 2026.2.26.

Security experts are urging organisations to update immediately, audit the permissions held by their AI agents, and apply strict governance policies, treating AI agents as non-human identities that require the same oversight as human users or service accounts.

Why detecting deepfakes is no longer enough to stay secure

Deepfakes and injection attacks are no longer just tools for misinformation; they are now being deployed to break the identity verification systems that underpin banking, hiring, and account access.

Bad actors are targeting the critical moments when a system determines whether someone is a real person, from customer onboarding at banks to remote hiring and account recovery workflows.

Attackers exploit verification systems in two main ways: by using increasingly convincing synthetic faces and voice clones to mimic real people, and by launching injection attacks that substitute fraudulent video into the capture pipeline before it ever reaches the detection system.

According to the Entrust 2026 Identity Fraud Report, deepfakes are now linked to one in five biometric fraud attempts, with injection attacks rising 40% year-on-year.

Experts warn that detecting deepfakes alone is no longer sufficient. Enterprises must validate the whole session, including device integrity and behavioural signals, in real time.
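
As a toy illustration of what validating the whole session might look like, the Python sketch below combines three signals into one decision. The signal names, thresholds, and logic are hypothetical, drawn neither from the Entrust report nor from any vendor's product; the point is simply why a layered check catches an injected stream that a face-only deepfake detector would pass.

```python
# Toy illustration only: signal names and thresholds are hypothetical,
# not taken from the Entrust report or any vendor's product.
from dataclasses import dataclass


@dataclass
class SessionSignals:
    deepfake_score: float   # 0.0 (looks real) .. 1.0 (looks synthetic)
    device_attested: bool   # capture pipeline verified end to end
    behaviour_score: float  # 0.0 (bot-like) .. 1.0 (human-like)


def verify_session(s: SessionSignals) -> str:
    # An injected video feed fails device attestation even when the face
    # looks perfectly real, which is the case face-only detection misses.
    if not s.device_attested:
        return "reject: untrusted capture pipeline"
    if s.deepfake_score > 0.5:
        return "reject: likely synthetic media"
    if s.behaviour_score < 0.3:
        return "step-up: request additional verification"
    return "accept"


# A realistic-looking face delivered via an injection attack still fails:
print(verify_session(SessionSignals(0.05, False, 0.9)))
# -> reject: untrusted capture pipeline
```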

Gartner predicts that by 2026, 30% of enterprises will no longer consider face-based identity verification reliable in isolation, given the pace of AI-generated deepfake attacks.

Yale expert warns against overtrusting AI health chatbots

More than 40 million people use ChatGPT alone for health information every day, and OpenAI and Anthropic have both recently launched services specifically designed to give consumers health advice.

Yale School of Medicine clinician-educator Shaili Gupta warns that whilst chatbots can democratise access to health information, the risks of overtrust are significant.

Gupta notes that AI chatbots are deliberately designed to feel personal, trained to use pronouns like ‘you’ and ‘I’, which makes users more likely to treat them as authoritative voices rather than information tools.

She cautions against the ‘three C’s’: chatbots that are too competent, too cogent, or too concrete, as these are the most likely to lead patients into harmful health decisions.

Human clinicians, Gupta argues, remain challenging to replace not only because they conduct physical examinations, but also because they bring instinct, experience, and genuine relatability to patient care. She recommends using chatbots for efficiency and general information, whilst leaving diagnosis firmly in the hands of medical professionals.

Data breach sparks outrage at Cloud Imperium among players

A data breach at British game studio Cloud Imperium has angered players worldwide after the company quietly announced the incident. Users criticised the slow disclosure and the minimal information provided about what was accessed.

The breach, which occurred on 21 January, exposed names, contact details and dates of birth from backup systems. Cloud Imperium insists no passwords, financial information or game data were compromised.

Players have expressed frustration over the company's reassurances, arguing that even basic personal details could be used in phishing campaigns. Forums and social media quickly filled with criticism describing the announcement as buried and inadequate.

Cloud Imperium said it acted quickly to contain the breach, refresh security settings, and monitor systems for further incidents. The studio maintains that the issue should not affect gameplay or user safety, but some users remain sceptical.

The company's flagship game, Star Citizen, is crowdfunded and boasts millions of players. However, Cloud Imperium has not disclosed the total number of accounts affected, leaving the community uneasy about the transparency of the response.

Ocado job cuts raise AI questions

Ocado has announced plans to cut 1,000 jobs from its 20,000-strong global workforce, with the affected roles mainly in technology and support. The company, headquartered in Hatfield, Hertfordshire, said the move, which follows major investment in robotics and automation, would save £150m.

Chief executive Tim Steiner said Ocado had completed a significant phase of investment in automation, but the company declined to confirm that AI directly led to the redundancies. At its Luton warehouse, opened in 2023, human staff continue to work alongside AI-powered robots.

Analysts suggested that competition has intensified as retailers in the UK, the US and Canada adopt similar AI-driven systems. Some former clients in the US and Canada have invested in their own technology, reducing reliance on Ocado's platform.

Retail experts argued that deeper structural challenges, including changing consumer expectations and cost pressures in Hertfordshire and beyond, are also at play. Local leaders in Welwyn Hatfield have requested urgent talks as the company reshapes its operating model.

Deepfake scams target Indian global executives

A deepfake video of Bombay Stock Exchange chief executive Sundararaman Ramamurthy circulated on social media in India, falsely offering stock advice to investors. The exchange moved quickly to report and remove the content, warning the public not to trust fake investment clips.

Cybersecurity experts say such cases are rising sharply, with one US firm estimating a 3,000 percent increase in deepfake incidents over two years. Executives in the US and the UK have also been impersonated using AI-generated audio and video.

In Hong Kong, police said a UK engineering firm lost $25m after an employee joined a video call featuring deepfake versions of senior colleagues. The transfer was made to multiple accounts before the fraud was discovered.

Security companies in the US and the UK are developing detection tools that analyse facial movement and blood flow patterns to identify AI-generated footage. Analysts warn that as costs fall and tools improve, businesses in India, Hong Kong and beyond face an escalating arms race against digital fraud.
