AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to answer questions related to suicide and self-harm safely.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three ΑΙ chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New WhatsApp features help manage unwanted groups

WhatsApp is expanding its tools to give users greater control over the groups they join and the conversations they take part in.

When someone not saved in a user’s contacts adds them to a group, WhatsApp now provides details about that group so they can immediately decide whether to stay or leave. If a user chooses to exit, they can also report the group directly to WhatsApp.

Privacy settings allow people to decide who can add them to groups. By default, the setting is set to ‘Everyone,’ but it can be adjusted to ‘My contacts’ or ‘My contacts except…’ for more security. Messages within groups can also be reported individually, with users having the option to block the sender.

Reported messages and groups are sent to WhatsApp for review, including the sender’s or group’s ID, the time the message was sent, and the message type.

Although blocking an entire group is impossible, users can block or report individual members or administrators if they are sending spam or inappropriate content. Reporting a group will send up to five recent messages from that chat to WhatsApp without notifying other members.

Exiting a group remains straightforward: users can tap the group name and select ‘Exit group.’ With these tools, WhatsApp aims to strengthen user safety, protect privacy, and provide better ways to manage unwanted interactions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FTC cautions US tech firms over compliance with EU and UK online safety laws

The US Federal Trade Commission (FTC) has warned American technology companies that following European Union and United Kingdom rules on online content and encryption could place them in breach of US legislation.

In a letter sent to chief executives, FTC Chair Andrew Ferguson said that restricting access to content for American users to comply with foreign legal requirements might amount to a violation of Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive commercial practices.

Ferguson cited the EU’s Digital Services Act and the UK’s Online Safety Act, as well as reports of British efforts to gain access to encrypted Apple iCloud data, as examples of measures that could put companies at risk under US law.

Although Section 5 has traditionally been used in cases concerning consumer protection, Ferguson noted that the same principles could apply if companies changed their services for US users due to foreign regulation. He argued that such changes could ‘mislead’ American consumers, who would not reasonably expect their online activity to be governed by overseas restrictions.

The FTC chair invited company leaders to meet with his office to discuss how they intend to balance demands from international regulators while continuing to fulfil their legal obligations in the United States.

Earlier this week, a senior US intelligence official said the British government had withdrawn a proposed legal measure aimed at Apple’s encrypted iCloud data after discussions with US Vice President JD Vance.

The issue has arisen amid tensions over the enforcement of UK online safety rules. Several online platforms, including 4chan, Gab, and Kiwi Farms, have publicly refused to comply, and British authorities have indicated that internet service providers could ultimately be ordered to block access to such sites.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Copilot policy flaw allows unauthorized access to AI agents

Administrators found that Microsoft Copilot’s intended ‘NoUsersCanAccessAgent’ policy, which is designed to prevent user access to certain AI agents, is being ignored. Some agents, including ExpenseTrackerBot and HRQueryAgent, remain installable despite global restrictions.

Microsoft 365 tenants must now use per-agent PowerShell commands to disable access manually. This workaround is both time-consuming and error-prone, particularly in large organisations. The failure to enforce access policies raises concerns regarding operational overhead and audit risk.

The security implications are significant. Unauthorised agents can export data from SharePoint or OneDrive, run RPA workflows without oversight, or process sensitive information without compliance controls. The flaw undermined the purpose of access control settings and exposed the system to misuse.

To mitigate this risk, administrators are urged to audit agent inventories, enforce Conditional Access policies, for example, requiring MFA or device compliance, and consistently monitor agent usage through logs and dashboards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gmail accounts targeted in phishing wave after Google data leak

Hackers linked to the ShinyHunters group have compromised Google’s Salesforce systems, leading to a data leak that puts Gmail and Google Cloud users at risk of phishing attacks.

Google confirmed that customer and company names were exposed, though no passwords were stolen. Attackers are now exploiting the breach with phishing schemes, including fake account resets and malware injection attempts through outdated access points.

With Gmail and Google Cloud serving around 2.5 billion users worldwide, both companies and individuals could be targeted. Early reports on Reddit describe callers posing as Google staff warning of supposed account breaches.

Google urges users to strengthen protections by running its Security Checkup, enabling Advanced Protection, and switching to passkeys instead of passwords. The company emphasised that its staff never initiates unsolicited password resets by phone or email.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

INTERPOL reports over 1,200 arrests in Africa-wide cybercrime operation

INTERPOL has announced that a continent-wide law enforcement initiative targeting cybercrime and fraud networks led to more than 1,200 arrests between June and August 2025. The operation, known as Serengeti 2.0, was carried out across multiple African states and focused on ransomware, online fraud, and business email compromise schemes. Authorities reported the recovery of approximately USD 97.4 million, allegedly stolen from more than 88,000 victims worldwide.

In Angola, police closed 25 unauthorised cryptocurrency mining sites, reportedly operated by 60 Chinese nationals. In Zambia, authorities dismantled a large-scale fraudulent investment scheme involving cryptocurrency platforms, which is estimated to have defrauded around 65,000 individuals of roughly USD 300 million. Fifteen suspects were detained, and assets, including domains, mobile numbers, and bank accounts, were seized.

In a separate raid in Lusaka, police disrupted a suspected human trafficking network and confiscated hundreds of forged passports from seven different countries.

INTERPOL has previously noted that Africa’s rapid uptake of digital technologies, particularly in finance and e-commerce, has increased the scope for cybercriminal activity. At the same time, comparatively weak cybersecurity frameworks have left financial institutions and government systems exposed to data breaches, economic losses, and disruption to trade.

Separately, in June, a Nigerian court sentenced nine Chinese nationals to prison for running an online fraud syndicate that recruited young Nigerians. Following the verdict, China’s ambassador to Nigeria proposed the creation of a joint working group to investigate cybercrime involving Chinese nationals in the region.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud’s new AI tools expand enterprise threat protection

Following last week’s announcements on AI-driven cybersecurity, Google Cloud has unveiled further tools at its Security Summit 2025 aimed at protecting enterprise AI deployments and boosting efficiency for security teams.

The updates build on prior innovations instead of replacing them, reinforcing Google’s strategy of integrating AI directly into security operations.

Vice President and General Manager Jon Ramsey highlighted the growing importance of agentic approaches as AI agents operate across increasingly complex enterprise environments.

Building on the previous rollout, Google now introduces Model Armor protections, designed to shield AI agents from prompt injections, jailbreaking, and data leakage, enhancing safeguards without interrupting existing workflows.

Additional enhancements include the Alert Investigation agent, which automates event enrichment and analysis while offering actionable recommendations.

By combining Mandiant threat intelligence feeds with Google’s Gemini AI, organisations can now detect and respond to incidents across distributed agent networks more rapidly and efficiently than before.

SecOps Labs and updated SOAR dashboards provide early access to AI-powered threat detection experiments and comprehensive visualisations of security operations.

These tools allow teams to continue scaling agentic AI security, turning previous insights into proactive, enterprise-ready protections for real-world deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musicians report surge in AI fakes appearing on Spotify and iTunes

Folk singer Emily Portman has become the latest artist targeted by fraudsters releasing AI-generated music in her name. Fans alerted her to a fake album called Orca appearing on Spotify and iTunes, which she said sounded uncannily like her style but was created without her consent.

Portman has filed copyright complaints, but says the platforms were slow to act, and she has yet to regain control of her Spotify profile. Other artists, including Josh Kaufman, Jeff Tweedy, Father John Misty, Sam Beam, Teddy Thompson, and Jakob Dylan, have faced similar cases in recent weeks.

Many of the fake releases appear to originate from the same source, using similar AI artwork and citing record labels with Indonesian names. The tracks are often credited to the same songwriter, Zyan Maliq Mahardika, whose name also appears on imitations of artists in other genres.

Industry analysts say streaming platforms and distributors are struggling to keep pace with AI-driven fraud. Tatiana Cirisano of Midia Research noted that fraudsters exploit passive listeners to generate streaming revenue, while services themselves are turning to AI and machine learning to detect impostors.

Observers warn the issue is likely to worsen before it improves, drawing comparisons to the early days of online piracy. Artists and rights holders may face further challenges as law enforcement attempts to catch up with the evolving abuse of AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Energy and government sectors in Poland face mounting hacktivist threats

Poland has become the leading global target for politically and socially motivated cyberattacks, recording over 450 incidents in the second quarter of 2025, according to Spain’s Industrial Cybersecurity Center.

The report ranked Poland ahead of Ukraine, the UK, France, Germany, and other European states in hacktivist activity. Government institutions and the energy sector were among the most targeted, with organisations supporting Ukraine described as especially vulnerable.

ZIUR’s earlier first-quarter analysis had warned of a sharp rise in attacks against state bodies across Europe. Pro-Russian groups were identified as among the most active, increasingly turning to denial-of-service campaigns to disrupt critical operations.

Europe accounted for the largest share of global hacktivism in the second quarter, with more than 2,500 successful denial-of-service attacks recorded between April and June, underlining the region’s heightened exposure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global tech competition intensifies as the UK outlines a £1 trillion digital blueprint

The United Kingdom has unveiled a strategy to grow its digital economy to £1 trillion by harnessing AI, quantum computing, and cybersecurity. The plan emphasises public-private partnerships, training, and international collaboration to tackle skills shortages and infrastructure gaps.

The initiative builds on the UK tech sector’s £1.2 trillion valuation, with regional hubs in cities such as Bristol and Manchester fuelling expansion in emerging technologies. Experts, however, warn that outdated systems and talent deficits could stall progress unless workforce development accelerates.

AI is central to the plan, with applications spanning healthcare and finance. Quantum computing also features, with investments in research and cybersecurity aimed at strengthening resilience against supply disruptions and future threats.

The government highlights sustainability as a priority, promoting renewable energy and circular economies to ensure digital growth aligns with environmental goals. Regional investment in blockchain, agri-tech, and micro-factories is expected to create jobs and diversify innovation-driven growth.

By pursuing these initiatives, the UK aims to establish itself as a leading global tech player alongside the US and China. Ethical frameworks and adaptive strategies will be key to maintaining public trust and competitiveness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!