Study warns AI chatbots may reinforce delusional thinking

A new scientific review has raised concerns that AI chatbots could reinforce delusional thinking, particularly among people already vulnerable to psychosis. The review, published in The Lancet Psychiatry, summarises emerging evidence suggesting that chatbot interactions may validate or amplify delusional beliefs in certain users.

The study examined reports and research on what some have described as ‘AI-associated delusions’. Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analysed media accounts and existing evidence to explore how chatbot responses might interact with psychotic symptoms.

Psychotic delusions generally fall into three categories: grandiose, romantic, and paranoid. Researchers say chatbots may unintentionally reinforce such beliefs because they often respond in ways that are supportive or affirming. In some reported cases, users received responses suggesting spiritual significance or implying that a higher entity was communicating through the chatbot.

Researchers emphasise that there is currently no clear evidence that AI systems can independently cause psychosis in individuals without prior vulnerability. However, interactions with chatbots could strengthen existing beliefs or accelerate the progression of delusional thinking in people already at risk.

Experts say the interactive nature of chatbots may intensify the effect. Unlike static sources of information such as videos or articles, chatbots can engage users directly and repeatedly, potentially reinforcing problematic beliefs more quickly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google Earth AI supports disease forecasting and public health planning

Researchers are increasingly combining geospatial data with predictive modelling to anticipate health risks.

In that context, Google has introduced new capabilities within Google Earth AI designed to help public health experts forecast outbreaks and identify vulnerable communities.

The system integrates environmental information such as weather patterns, flooding and air quality with population mobility data and health records.

These insights allow researchers to analyse how environmental conditions influence the spread of diseases, including dengue fever and cholera.
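
As a rough illustration of how such environmental covariates can feed a disease forecast, the minimal Python sketch below trains a regressor on synthetic weekly data. It is not Google’s Earth AI pipeline: the feature names, the made-up data, and the choice of a scikit-learn gradient-boosted model are all assumptions for demonstration only.

```python
# Minimal sketch of environment-driven disease forecasting.
# NOT Google's Earth AI: a generic illustration on synthetic data;
# all feature names and coefficients are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_weeks = 300

# Hypothetical weekly covariates: rainfall (mm), temperature (C),
# a flood indicator, and last week's reported case count.
rainfall = rng.gamma(2.0, 30.0, n_weeks)
temperature = 25 + 5 * np.sin(np.arange(n_weeks) * 2 * np.pi / 52)
flood_index = (rainfall > 90).astype(float)
lagged_cases = rng.poisson(50, n_weeks).astype(float)

# Synthetic target: cases rise with rainfall, flooding, and past cases.
cases = (0.3 * rainfall + 40 * flood_index + 0.8 * lagged_cases
         + rng.normal(0, 10, n_weeks))

X = np.column_stack([rainfall, temperature, flood_index, lagged_cases])
# shuffle=False: forecasting models are validated on later weeks,
# not on a random mix of past and future.
X_train, X_test, y_train, y_test = train_test_split(
    X, cases, shuffle=False, test_size=0.2)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 2))
```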

Several research initiatives are already testing the models. In collaboration with the World Health Organisation Regional Office for Africa, forecasting tools combining Google’s time-series models with geospatial data improved cholera prediction accuracy by more than 35 percent.

Academic researchers are also applying the technology to other diseases. Scientists at the University of Oxford have used Earth AI datasets to improve six-month dengue forecasts in Brazil, helping local authorities prepare preventative responses.

The technology is also being tested for chronic disease analysis. In Australia, partnerships with health organisations are exploring how geospatial models can identify regional health needs and support preventative care strategies.

Combining environmental intelligence with health data could enable public health systems to shift from reactive crisis management to earlier detection and prevention of disease outbreaks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU reviews X compliance proposal under Digital Services Act

X has submitted a compliance proposal to the European Commission outlining how it intends to modify its blue check verification system following regulatory concerns under the Digital Services Act.

EU regulators concluded that the platform’s system allowed users to obtain verification simply by paying for a subscription, without meaningful identity checks, potentially misleading users about the authenticity of accounts.

The Commission imposed a €120 million fine in December and gave the company 60 working days to propose corrective measures. Officials confirmed that X met the deadline for submitting a plan, which regulators will now assess.

The platform, owned by Elon Musk, must also pay the penalty while the Commission evaluates the proposed changes. The company has challenged the enforcement decision before the EU’s General Court.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe aims to tighten AI rules and personal data standards

The Council of the EU has proposed AI Act amendments, banning nudification tools and tightening rules for processing sensitive personal data. The move represents a key step in streamlining the continent’s digital legislation and improving safeguards for citizens.

Council officials highlighted the prohibition of AI systems that generate non-consensual sexual content or child sexual abuse material. The measure matches a European Parliament ban, showing strong support for tighter AI controls amid misuse concerns.

The proposal follows incidents such as the Grok chatbot producing millions of non-consensual intimate images, which sparked a global backlash and prompted an EU probe into the social media platform X and its AI features.

Other amendments reinstate strict rules for processing sensitive data to detect bias and require providers to register high-risk AI systems, even if claiming exemptions. Negotiations between the Council and Parliament will finalise the AI Act’s updated measures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake attacks push organisations to rethink cybersecurity strategies

Organisations are strengthening their cybersecurity strategies as deepfake attacks become more convincing and easier to produce using generative AI.

Security experts warn that enterprises must move beyond basic detection tools and adopt layered security strategies to defend against the growing threat of deepfake attacks targeting communications and digital identity.

Many existing tools for identifying manipulated media are still imperfect. Digital forensics expert Hany Farid estimates that some systems used to detect deepfake attacks are only about 80 percent effective and often fail to explain how they determine whether an image, video, or audio recording is authentic. The lack of explainability also raises challenges for legal investigations and public verification of suspicious media.

Cybersecurity companies are creating new technologies to improve the detection of deepfake attacks by analysing subtle signals that are difficult for humans to notice. Firms such as GetReal Security, Reality Defender, Deep Media, and Sensity AI examine lighting consistency, shadow angles, voice patterns, and facial movements. Environmental indicators such as device location, metadata, and IP information can also help security teams spot potential deepfake attacks.
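
As a small, hedged example of the metadata angle (a generic heuristic, not any vendor’s actual pipeline), the Python sketch below flags images that lack the camera EXIF fields a genuine photograph would normally carry. The field list and the input filename are illustrative assumptions, and stripped metadata alone proves nothing, since EXIF data is easily removed or forged.

```python
# Heuristic metadata screen: genuine camera photos usually carry EXIF
# fields such as Make, Model, and DateTime; many AI-generated images
# carry none. A sketch only; absence of EXIF is a signal for review,
# not proof of forgery.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_fields(path: str) -> dict:
    """Return human-readable EXIF tags for an image file."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

def flag_missing_camera_metadata(path: str) -> bool:
    """Flag files lacking the camera fields a real photo would have."""
    fields = exif_fields(path)
    expected = {"Make", "Model", "DateTime"}  # illustrative choice
    return not expected.intersection(fields)

if __name__ == "__main__":
    # "incoming_photo.jpg" is a placeholder filename.
    suspicious = flag_missing_camera_metadata("incoming_photo.jpg")
    print("flag for manual review:", suspicious)
```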

However, experts say detection alone cannot fully protect organisations from deepfake attacks. Companies are increasingly conducting internal red-team exercises that simulate impersonation scenarios to expose weaknesses in verification procedures. Multi-factor authentication techniques can reduce the risk of employees responding to fraudulent communications.

Another emerging defence involves digital provenance systems designed to track the origin and modification history of digital content. Initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) embed cryptographically signed metadata into media files, allowing organisations to verify whether content linked to suspected deepfake attacks has been altered.
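
To make the idea concrete, the sketch below signs and verifies a provenance manifest with a plain Ed25519 keypair from Python’s cryptography library. It illustrates the principle behind C2PA, signed and tamper-evident metadata, but it is not the real C2PA manifest format or SDK, and the manifest fields are invented for the example.

```python
# Conceptual illustration of signed provenance metadata, in the
# spirit of C2PA: a manifest describing a file's origin and edit
# history is cryptographically signed, so any alteration breaks
# verification. NOT the real C2PA format or SDK.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher signs a manifest when the media file is created.
signing_key = Ed25519PrivateKey.generate()
manifest = json.dumps({
    "source": "newsroom-camera-01",   # hypothetical capture device
    "created": "2025-02-10T09:00:00Z",
    "edit_history": ["crop", "colour-balance"],
}).encode()
signature = signing_key.sign(manifest)

# Anyone holding the publisher's public key can verify it later.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, manifest)
    print("manifest intact: provenance verifiable")
except InvalidSignature:
    print("manifest altered: treat content as suspect")

# A single changed byte in the manifest breaks verification.
tampered = manifest.replace(b"crop", b"swap")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("tampered manifest correctly rejected")
```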

Recent experiments highlight how difficult these threats can be to counter. In February, cybersecurity company Reality Defender conducted an exercise with NATO, introducing deepfake media into a simulated military scenario. The findings showed that even experienced officials can struggle to identify manipulated communications, reinforcing calls for automated systems capable of detecting deepfake attacks across critical infrastructure.

As generative AI tools continue to advance, organisations are expected to combine detection technologies, stronger verification procedures, and provenance tracking to reduce the risks posed by deepfake attacks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hackers target WhatsApp and Signal in global encrypted messaging attacks

Foreign state-backed hackers are targeting accounts on WhatsApp and Signal used by government officials, diplomats, military personnel, and other high-value individuals, according to a security alert issued by the Portuguese Security Intelligence Service (SIS).

Portuguese authorities described the activity as part of a global cyber-espionage campaign aimed at gaining access to sensitive communications and extracting privileged information from Portugal and allied countries. The advisory did not identify the origin of the suspected attackers.

The warning follows similar alerts from other European intelligence agencies. Earlier this week, Dutch authorities reported that hackers linked to Russia were conducting a global campaign targeting the messaging accounts of officials, military personnel, and journalists.

Security agencies say the attackers are not exploiting vulnerabilities in the messaging platforms themselves. Both WhatsApp and Signal rely on end-to-end encryption designed to protect the content of messages from interception.

Instead, the campaign focuses on social engineering tactics that trick users into granting access to their accounts. According to the SIS report, attackers use phishing messages, malicious links, fake technical support requests, QR-code lures, and impersonation of trusted contacts.

The agency also warned that AI tools are increasingly being used to make such attacks more convincing. AI can help impersonate support staff, mimic familiar voices or identities, and conduct more realistic conversations through messages, phone calls, or video.

Once attackers gain access to an account, they may be able to read private messages, group chats, and shared files via WhatsApp and Signal. They can also impersonate the compromised user to launch additional phishing attacks targeting the victim’s contacts.

The alert echoes a previous warning issued by the Cybersecurity and Infrastructure Security Agency (CISA), which reported that encrypted messaging apps are increasingly being used as entry points for spyware and phishing campaigns targeting high-value individuals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU approves signature of global AI framework

The European Parliament has approved the Council of Europe Framework Convention on Artificial Intelligence, the first international legally binding treaty on AI governance.

With 455 votes in favour, 101 against, and 74 abstentions, Parliament endorsed the EU’s signature of the treaty, embedding existing AI legislation within a global framework. The move reinforces the safe and rights-respecting deployment of AI across the EU and worldwide.

The convention sets standards for transparency, documentation, risk management, and oversight, applying to both public authorities and private actors acting on their behalf.

It establishes a global baseline for AI governance while allowing the EU to maintain higher protections under the AI Act, GDPR, and other EU legislation covering product safety, liability, and non-discrimination.

The EU co-rapporteurs highlighted that the agreement demonstrates the EU’s commitment to human-centric AI. By prioritising democracy, accountability, and fundamental rights, the framework aims to ensure AI strengthens open societies while supporting stable economic growth.

Negotiations on the convention began in 2022 with participation from EU member states, international partners, civil society, academia, and industry. Current signatories include the EU, the UK, Ukraine, Canada, Israel, and the United States, with the convention open to additional global partners.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Writer files lawsuit against Grammarly over AI feature using experts’ identities

A journalist has filed a class action lawsuit against Grammarly after the company introduced an AI feature that generated editorial feedback by imitating well-known writers and public figures without their permission.

The legal complaint was submitted by investigative journalist Julia Angwin, who argued that the tool unlawfully used the identities and reputations of authors and commentators.

The feature, known as ‘Expert Review’, produced automated critiques presented as if they came from figures such as Stephen King, Carl Sagan and technology journalist Kara Swisher.

The feature was available to subscribers paying an annual fee and was designed to simulate professional editorial guidance.

Critics quickly questioned both the quality of the generated feedback and the decision to associate the tool with real individuals who had not authorised the use of their names or expertise.

Technology writer Casey Newton tested the system by submitting one of his own articles and received automated feedback attributed to an AI version of Swisher. The response appeared generic, casting doubt on the value of linking such commentary to prominent personalities.

Following criticism from writers and researchers, the feature was disabled. Shishir Mehrotra, chief executive of Grammarly’s parent company Superhuman, issued a public apology while defending the broader concept behind the tool.

The lawsuit reflects growing tensions around AI systems that replicate creative styles or professional expertise.

As generative AI technologies expand across writing and publishing industries, questions surrounding consent, intellectual labour and identity rights are becoming increasingly prominent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DIGITALEUROPE urges changes to EU AI Act rules for industry

European industry representatives are urging policymakers to reconsider parts of the EU AI Act, arguing that the current framework could impose significant compliance costs on companies developing AI tools for industrial and medical technologies.

According to Cecilia Bonefeld-Dahl, director-general of DIGITALEUROPE, manufacturers of high-tech machines, medical devices, and radio equipment are already subject to strict product safety regulations. Adding AI-specific requirements could create unnecessary administrative burdens for companies that are already heavily regulated. She argues that policymakers should aim for balanced AI regulation that encourages innovation while maintaining safety standards.

Industry groups warn that classifying certain AI systems as high-risk under Annex I of the AI Act could be particularly costly for smaller firms. DIGITALEUROPE estimates that a company with around 50 employees developing an AI-based product could incur initial compliance costs of €320,000 to €600,000, followed by annual expenses of up to €150,000. According to the organisation, such costs could reduce profits significantly and discourage smaller companies from pursuing AI innovation.
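
Taken at face value, those figures add up quickly. A back-of-the-envelope calculation in Python, where the five-year horizon is an assumption and the cost figures are DIGITALEUROPE’s quoted estimates:

```python
# Worst-case five-year compliance bill for a ~50-employee firm,
# using DIGITALEUROPE's quoted estimates; the horizon is an assumption.
initial_cost_eur = 600_000   # top of the quoted 320k-600k initial range
annual_cost_eur = 150_000    # quoted annual upper bound
years = 5

total = initial_cost_eur + annual_cost_eur * years
print(f"worst-case {years}-year compliance bill: EUR {total:,}")
# -> worst-case 5-year compliance bill: EUR 1,350,000
```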

Manufacturing and medical technology sectors across Europe employ millions of workers and increasingly rely on AI to improve product performance and safety. Industry representatives argue that many applications, such as AI systems used to enhance industrial equipment safety or improve medical devices, already operate under established regulatory frameworks, and that these frameworks could be adapted rather than supplemented with additional layers of regulation.

The broader regulatory landscape is also contributing to concerns among technology companies. Over the past six years, the EU has introduced nearly 40 new technology-related regulations, some of which overlap or impose similar compliance requirements. DIGITALEUROPE estimates that compliance with the AI Act could cost companies approximately €3.3 billion annually, while cybersecurity and data-sharing regulations add further financial obligations.

Industry leaders warn that rising compliance costs could affect investment in AI development across Europe. Current estimates suggest that the EU accounts for about 7.5% of global AI investment, significantly behind the United States and China.

DIGITALEUROPE has called on the EU institutions to consider postponing parts of the AI Act’s implementation timeline to allow further discussion on how high-risk AI systems should be defined. Supporters of this approach argue that additional consultation could help ensure the regulatory framework protects consumers while also enabling European companies to compete globally in the rapidly evolving AI sector.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers move forward on AI Act changes

Members of the European Parliament have reached a preliminary political agreement on amendments to the EU Artificial Intelligence Act. The compromise will be reviewed by parliamentary committees before a scheduled vote in Brussels.

EU lawmakers agreed to extend compliance deadlines for some high-risk AI systems. The changes aim to give companies and regulators more time to prepare technical standards and enforcement frameworks.

The proposed amendments also include a ban on AI systems that create non-consensual explicit deepfakes. EU officials say the measure aims to strengthen consumer protection and improve online safety for children.

Industry groups have raised concerns about compliance burdens linked to the revised rules, and policymakers continue negotiations as the legislation moves toward committee approval.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!