Security warning issued over OpenClaw AI agent

Cybersecurity authorities have warned that vulnerabilities in the OpenClaw AI agent could expose sensitive data. Officials in China say weak default security settings may allow attackers to exploit the system.

Experts in China warned that prompt injection attacks could manipulate OpenClaw when it accesses online content. Malicious instructions hidden in websites may cause the AI agent to reveal confidential information.

Researchers have also identified risks involving link previews in messaging apps such as Telegram and Discord. Investigators in China say attackers could trick the system into sending sensitive data to malicious websites.

Security specialists in China advise organisations to strengthen protections around AI agents. Recommendations include isolating systems, limiting network access and installing only trusted software components.
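The network-access restriction above can be sketched as a simple outbound allowlist check. This is an illustrative fragment, not part of OpenClaw; the domain list and function names are hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains an AI agent may contact.
ALLOWED_DOMAINS = {"api.example-intranet.local", "docs.example.com"}

def is_outbound_allowed(url: str) -> bool:
    """Return True only if the URL's host is on the allowlist.

    Blocks attempts to send data to unknown destinations, such as a
    malicious site smuggled in via a crafted link preview.
    """
    host = urlparse(url).hostname or ""
    return host.lower() in ALLOWED_DOMAINS

print(is_outbound_allowed("https://docs.example.com/page"))   # True
print(is_outbound_allowed("https://evil.example.net/steal"))  # False
```

Checking every outbound request against such a list is one way to limit the damage if hidden instructions trick an agent into exfiltrating data.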

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta removes encrypted messaging from Instagram DMs

Meta will discontinue end-to-end encryption for Instagram direct messages starting in May 2026. The company said the feature saw limited use among Instagram users.

Users with encrypted chats will receive instructions on how to download messages or media before the feature ends. Meta confirmed the change through updates to its support pages and in-app notifications.

The decision comes amid ongoing debate about encryption and online safety on major social platforms. Critics argue that encrypted messaging can make it harder to detect harmful activity involving minors.

Meta said users seeking encrypted communication can continue using WhatsApp or Messenger. The company maintains end-to-end encryption for messaging services outside Instagram.

French court upholds €40 million GDPR fine for Criteo

France’s highest administrative court has upheld a €40 million GDPR fine against advertising technology company Criteo. Regulators in France concluded that the firm failed to obtain valid consent for tracking users across websites.

The investigation began in 2018 following complaints from privacy groups and examined Criteo’s behavioural advertising model. Authorities in France said the company did not properly respect rights to access, erasure and transparency.

The ruling in France also confirmed that pseudonymous identifiers linked to browsing data can still qualify as personal data. Judges rejected arguments that such identifiers were effectively anonymous.

Privacy advocates say the decision strengthens GDPR enforcement across Europe. Experts in France argue that the case highlights growing scrutiny of online tracking practices used in digital advertising.

Study warns AI chatbots may reinforce delusional thinking

A new scientific review has raised concerns that AI chatbots could reinforce delusional thinking, particularly among people already vulnerable to psychosis. The review, published in The Lancet Psychiatry, summarises emerging evidence suggesting that chatbot interactions may validate or amplify delusional thinking in certain users.

The study examined reports and research discussing what some have described as ‘AI-associated delusions’. Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analysed media reports and existing evidence exploring how chatbot responses might interact with psychotic symptoms.

Psychotic delusions generally fall into three categories: grandiose, romantic, and paranoid. Researchers say chatbots may unintentionally reinforce such beliefs because they often respond in ways that are supportive or affirming. In some reported cases, users received responses suggesting spiritual significance or implying that a higher entity was communicating through the chatbot.

Researchers emphasise that there is currently no clear evidence that AI systems can independently cause psychosis in individuals without prior vulnerability. However, interactions with chatbots could strengthen existing beliefs or accelerate the progression of delusional thinking in people already at risk.

Experts say the interactive nature of chatbots may intensify the effect. Unlike static sources of information such as videos or articles, chatbots can engage users directly and repeatedly, potentially reinforcing problematic beliefs more quickly.

EU reviews X compliance proposal under Digital Services Act

X has submitted a compliance proposal to the European Commission outlining how it intends to modify its blue check verification system following regulatory concerns under the Digital Services Act.

EU regulators concluded that the platform’s system allowed users to obtain verification simply by paying for a subscription, without meaningful identity checks, potentially misleading users about the authenticity of accounts.

The Commission imposed a €120 million fine in December and gave the company 60 working days to propose corrective measures. Officials confirmed that X met the deadline for submitting a plan, which regulators will now assess.

The platform, owned by Elon Musk, must also pay the penalty while the Commission evaluates the proposed changes. The company has challenged the enforcement decision before the EU’s General Court.

Major tech firms pledge to fight online fraud

Eight major technology companies, including Google, Amazon, and OpenAI, have signed the ‘Online Services Accord Against Scams’ to share threat intelligence and strengthen defences against online fraud.

The voluntary pact aims to help companies work together and coordinate with governments, law enforcement, and NGOs.

The accord commits to sharing intelligence on criminal networks, using AI to detect fraud, and strengthening verification for financial transactions. Participating companies will also provide clearer reporting channels for users and encourage governments to prioritise scam prevention.

Executives emphasised that tackling scams requires collective effort. Meta’s Nathaniel Gleicher said the accord enables companies to share insights beyond individual cases, while Microsoft’s Steven Masada highlighted the need for faster collaboration to disrupt scams and track perpetrators globally.

The move comes as online scams grow in scale and sophistication, aided by AI-generated content and cross-platform operations. Consumers lost over $16 billion to online scams in 2024, prompting firms to boost safety features and push for stronger regulations and law enforcement.

Europe aims to tighten AI rules and personal data standards

The European Council has proposed AI Act amendments, banning nudification tools and tightening rules for processing sensitive personal data. The move represents a key step in streamlining the continent’s digital legislation and improving safeguards for citizens.

Council officials highlighted the prohibition of AI systems that generate non-consensual sexual content or child sexual abuse material. The measure matches a European Parliament ban, showing strong support for tighter AI controls amid misuse concerns.

The proposal follows incidents such as the Grok chatbot producing millions of non-consensual intimate images, which sparked a global backlash and prompted an EU probe into the social media platform X and its AI features.

Other amendments reinstate strict rules for processing sensitive data to detect bias and require providers to register high-risk AI systems, even if claiming exemptions. Negotiations between the Council and Parliament will finalise the AI Act’s updated measures.

OpenAI says ChatGPT advertisements remain limited to the US

Despite speculation that the feature was expanding internationally, OpenAI has clarified that advertisements in ChatGPT are currently available only to users in the US.

Questions about a broader rollout emerged after references to advertisements appeared in the platform’s updated privacy policy. Some users interpreted the language as evidence that advertising would soon be introduced globally.

OpenAI said the policy update does not signal an immediate expansion. According to the company, advertising features are still being tested within the US as part of a gradual deployment strategy.

ChatGPT advertisements were introduced in February 2026 and appear below responses generated by the chatbot. The ads are shown only to logged-in users on free subscription tiers and are not displayed to users under eighteen.

Company representatives stated that advertising systems operate independently from the AI model that generates responses. According to OpenAI, advertisers cannot influence or modify the content produced by ChatGPT.

The company also said it does not share user conversations or personal chat histories with advertisers. However, advertisements may still be personalised based on user queries, which has prompted discussions about how conversational interfaces could shape consumer decisions.

OpenAI indicated that it is adopting a cautious, phased approach before considering any wider rollout of ChatGPT advertising features in other markets.

Young investors warned on crypto and AI advice

Australia’s financial regulator has warned young investors to be cautious with social media influencers and AI chatbots. A survey by the Australian Securities and Investments Commission found one in four Gen Z Australians invest in crypto, often guided by online content.

The survey of 1,127 participants aged 18 to 28 showed 63% use social media for financial information, 18% rely on AI platforms, and 30% consult YouTube. AI was the most trusted source at 64%, but over half still trust influencers and social media despite possible misinformation.

ASIC previously issued warnings to 18 influencers suspected of promoting high-risk products without a licence. Commissioner Alan Kirkland said some social media marketing promotes crypto scams or risky super switches that threaten young people’s key assets.

The regulator is also watching AI financial guidance. Personalised advice from unlicensed sources is illegal, and young investors should carefully check sources, especially as crypto exchanges increasingly use AI bots for trading guidance.

Deepfake attacks push organisations to rethink cybersecurity strategies

Organisations are strengthening their cybersecurity strategies as deepfake attacks become more convincing and easier to produce using generative AI.

Security experts warn that enterprises must move beyond basic detection tools and adopt layered security strategies to defend against the growing threat of deepfake attacks targeting communications and digital identity.

Many existing tools for identifying manipulated media are still imperfect. Digital forensics expert Hany Farid estimates that some systems used to detect deepfake attacks are only about 80 percent effective and often fail to explain how they determine whether an image, video, or audio recording is authentic. The lack of explainability also raises challenges for legal investigations and public verification of suspicious media.

Cybersecurity companies are creating new technologies to improve the detection of deepfake attacks by analysing slight signals that are difficult for humans to notice. Firms such as GetReal Security, Reality Defender, Deep Media, and Sensity AI examine lighting consistency, shadow angles, voice patterns, and facial movements. Environmental indicators such as device location, metadata, and IP information can also help security teams spot potential deepfake attacks.

However, experts say detection alone cannot fully protect organisations from deepfake attacks. Companies are increasingly conducting internal red-team exercises that simulate impersonation scenarios to expose weaknesses in verification procedures. Multi-factor authentication techniques can reduce the risk of employees responding to fraudulent communications.

Another emerging defence involves digital provenance systems designed to track the origin and modification history of digital content. Initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) embed cryptographically signed metadata into media files, allowing organisations to verify whether content linked to suspected deepfake attacks has been altered.
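The provenance idea can be illustrated in miniature: bind a hash of the content to its claimed edit history with a signature, so any later tampering invalidates the check. This is a deliberately simplified sketch using an HMAC with a shared key; real C2PA manifests use certificate-based signatures and a standardised container format, and all names below are hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical publisher key; C2PA itself uses X.509 certificate chains,
# not a shared secret.
SIGNING_KEY = b"publisher-secret-key"

def sign_manifest(content: bytes, metadata: dict) -> str:
    """Bind metadata (origin, edit history) to a hash of the content."""
    payload = hashlib.sha256(content).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_manifest(content: bytes, metadata: dict, signature: str) -> bool:
    """Return False if the content or its claimed history was altered."""
    return hmac.compare_digest(sign_manifest(content, metadata), signature)

video = b"\x00original-frames\x00"
meta = {"creator": "Newsroom A", "edits": ["crop"]}
sig = sign_manifest(video, meta)

print(verify_manifest(video, meta, sig))                       # True
print(verify_manifest(b"\x00tampered-frames\x00", meta, sig))  # False
```

The point of the sketch is the failure mode: a deepfake substituted for the original no longer matches the signed hash, so verification fails even when the forgery is visually convincing.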

Recent experiments highlight how challenging these threats can be to counter. In February, cybersecurity company Reality Defender conducted an exercise with NATO by introducing deepfake media into a simulated military scenario. The findings showed that even experienced officials can struggle to identify manipulated communications, reinforcing calls for automated systems capable of detecting deepfake attacks across critical infrastructure.

As generative AI tools continue to advance, organisations are expected to combine detection technologies, stronger verification procedures, and provenance tracking to reduce the risks posed by deepfake attacks.
