Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that recent cases highlight the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ITU warns global Internet access by 2030 could cost nearly USD 2.8 trillion

Universal Internet connectivity by 2030 could cost up to $2.8 trillion, according to a joint blueprint from the International Telecommunication Union (ITU) and Saudi Arabia’s Communications, Space, and Technology (CST) Commission. The blueprint urges global cooperation to connect the one-third of humanity still offline.

The largest share, up to $1.7 trillion, would be allocated to expanding broadband through fibre, wireless, and satellite networks. Nearly $1 trillion is needed for affordability measures, alongside $152 billion for digital skills programmes.

ITU Secretary-General Doreen Bogdan-Martin emphasised that connectivity is essential for access to education, employment, and vital services. She noted the stark divide between high-income countries, where 93% of people are online, and low-income states, where only 27% use the Internet.

The study shows costs have risen fivefold since ITU’s 2020 Connecting Humanity report, reflecting both higher demand and widening divides. Haytham Al-Ohali from Saudi Arabia said the figures underscore the urgency of investment and knowledge sharing to achieve meaningful connectivity.

The report recommends new business models and stronger cooperation between governments, industry, and civil society. Proposed measures include using schools as Internet gateways, boosting Africa’s energy infrastructure, and improving localised data collection to accelerate digital inclusion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Latvia launches open AI framework for Europe

Language technology company Tilde has released an open AI framework designed for all European languages.

The model, named ‘TildeOpen’, was developed with the support of the European Commission and trained on the LUMI supercomputer in Finland.

According to Tilde’s head Artūrs Vasiļevskis, the project addresses a key gap in US-based AI systems, which often underperform for smaller European languages such as Latvian. By focusing on European linguistic diversity, the framework aims to provide better accessibility across the continent.

Vasiļevskis also suggested that Latvia has the potential to become an exporter of AI solutions. However, he acknowledged that development is at an early stage and that current applications remain relatively simple. The framework and user guidelines are freely accessible online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers develop an AI system to modify the brain’s mental imagery with words

A new AI system named DreamConnect can now translate a person’s brain activity into images and then edit those mental pictures using natural language commands.

Instead of merely reconstructing thoughts from fMRI scans, the breakthrough technology allows users to reshape their imagined scenes actively. For instance, an individual visualising a horse can instruct the system to transform it into a unicorn, with the AI accurately modifying the relevant features.

The system employs a dual-stream framework that interprets brain signals into rough visuals and then refines them based on text instructions.

Developed by an international team of researchers, DreamConnect represents a fundamental shift from passive brain decoding to interactive visual brainstorming.

It marks a significant advance at the frontier of human-AI interaction, moving beyond simple reconstruction to active collaboration.

Potential applications are wide-ranging, from accelerating creative design to offering new tools for therapeutic communication.

However, the researchers caution that such powerful technology necessitates robust ethical safeguards to prevent misuse and protect the privacy of an individual’s most personal data: their thoughts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk’s influence puts Grok at the centre of AI bias debate

Elon Musk’s AI chatbot, Grok, has faced repeated changes to its political orientation, with updates shifting its answers towards more conservative views.

xAI, Musk’s company, initially promoted Grok as neutral and truth-seeking, but internal prompts have steered it on contentious topics. Adjustments included portraying declining fertility as the greatest threat to civilisation and downplaying right-wing violence.

Analyses of Grok’s responses by The New York Times showed that the July updates shifted answers to the right on government and economy, while some social responses remained left-leaning. Subsequent tweaks pulled it back closer to neutrality.

Critics say that system prompts, short instructions such as ‘be politically incorrect’, make it easy to adjust outputs but also leave the model prone to erratic or offensive responses. A July update saw Grok briefly endorse a controversial historical figure before xAI rolled the change back.
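
For illustration, the kind of steering critics describe can be reproduced with any OpenAI-compatible chat API: a single system prompt prepended to the conversation shifts how the model answers the same question. The sketch below is a minimal, hypothetical example; the client library, model name and prompt wording are assumptions, not xAI’s actual configuration.

```python
# Minimal sketch: how a short system prompt steers a chat model's answers.
# Hypothetical example; the model name and prompt text are illustrative,
# not xAI's production configuration.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment


def ask(question: str, system_prompt: str) -> str:
    """Send the same question under a different system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


question = "What are the main drivers of political polarisation?"

# Changing one line of instructions can noticeably shift the tone and framing.
baseline = ask(question, "Answer carefully and neutrally, citing mainstream research.")
steered = ask(question, "Be blunt and politically incorrect in your answer.")

print(baseline)
print(steered)
```

The point is simply that the instruction layer sits outside the model’s training, which is why such adjustments can be made quickly and, as the analyses suggest, sometimes with erratic results.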

The case highlights growing concerns about political bias in AI systems. Researchers argue that all chatbots reflect the worldviews of their training data, while companies increasingly face pressure to align them with user expectations or political demands.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated media must now carry labels in China

China has introduced a sweeping new law that requires all AI-generated content online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.

The law, first announced in March by the Cyberspace Administration of China, mandates that all AI-created text, images, video and audio must carry explicit and implicit markings.

These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.
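
As a rough illustration of the ‘implicit’ marking described above, a generator can write a machine-readable provenance tag into a file’s metadata alongside any visible watermark. The sketch below uses Pillow to embed such a tag in a PNG text chunk; the field names are hypothetical and do not reflect the exact format mandated by the Chinese rules.

```python
# Minimal sketch: embedding an AI-provenance label in image metadata.
# The tag names are illustrative assumptions, not the officially mandated format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for a generated image.
image = Image.new("RGB", (512, 512), color="white")

metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")  # hypothetical model identifier

image.save("labelled_output.png", pnginfo=metadata)

# The embedded label can be read back by platforms checking provenance.
print(Image.open("labelled_output.png").text)
```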

Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.

While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.

Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT safety checks may trigger police action

OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.

The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.

The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.

By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.
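
The two-branch handling can be pictured as a simple triage step. The sketch below is purely illustrative, assuming hypothetical classifier flags; it is not OpenAI’s actual moderation pipeline.

```python
# Illustrative sketch of the escalation logic described above.
# The flags and outcomes are hypothetical, not OpenAI's real pipeline.
from dataclasses import dataclass


@dataclass
class RiskAssessment:
    self_harm: bool        # user appears at risk of harming themselves
    harm_to_others: bool   # user expresses intent to harm someone else
    imminent: bool         # human moderators judge the threat to be imminent


def triage(risk: RiskAssessment) -> str:
    if risk.self_harm:
        # Self-harm is met with professional resources, not law enforcement.
        return "direct user to professional crisis resources"
    if risk.harm_to_others:
        if risk.imminent:
            # Only imminent threats to others may reach the authorities.
            return "alert authorities and suspend the account"
        return "escalate to trained human moderators"
    return "continue the conversation normally"


print(triage(RiskAssessment(self_harm=False, harm_to_others=True, imminent=True)))
```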

The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.

OpenAI is working to strengthen consistency across interactions and developing parental controls, new interventions for risky behaviour, and potential connections to professional help before crises worsen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Age verification law in Mississippi tests the limits of decentralised social media

A new Mississippi law (HB 1126), requiring age verification for all social media users, has sparked controversy over internet freedom and privacy. Bluesky, a decentralised social platform, announced it would block access in the state rather than comply, citing limited resources and concerns about the law’s broad scope.

The law imposes heavy fines, up to $10,000 per user, for non-compliance. Bluesky argued that the required technical changes are too demanding for a small team and raise significant privacy concerns. After the US Supreme Court declined to block the law while legal challenges proceed, platforms like Bluesky are now forced to make difficult decisions.

According to TechCrunch, users in the US state began seeking ways to bypass the restriction, most commonly by using VPNs, which can hide their location and make it appear as though they are accessing the internet from another state or country.

However, some questioned why such measures were necessary. The idea behind decentralised social networks like Bluesky is to reduce control by central authorities, including governments. So if a decentralised platform can still be restricted by state laws or requires workarounds like VPNs, it raises questions about how truly ‘decentralised’ or censorship-resistant these platforms are.

Some users in Mississippi are still accessing Bluesky despite the new law. Many use third-party apps like Graysky or sideload the app via platforms like AltStore. Others rely on forked apps or read-only tools like Anartia.

While decentralisation complicates enforcement, these workarounds may not last, as developers risk legal consequences. Bluesky clients that do not run their own personal data servers (PDS) might not be directly affected, but explaining this in court is complex.

Broader laws tend to favour large platforms that can afford compliance, while smaller services like Bluesky are often left with no option but to block access or withdraw entirely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic reports misuse of its AI tools in cyber incidents

AI company Anthropic has reported that its chatbot Claude was misused in cyber incidents, including attempts to carry out hacking operations and employment-related fraud.

The firm said its technology had been used to help write malicious code and assist threat actors in planning attacks. However, it also stated that it was able to disrupt the activity and notify the authorities. Anthropic said it is continuing to improve its monitoring and detection systems.

In one case, the company reported that AI-supported attacks targeted at least 17 organisations, including government entities. The attackers allegedly relied on the tool to support decision-making, from choosing which data to target to drafting ransom demands.

Experts note that the rise of so-called agentic AI, which can operate with greater autonomy, has increased concerns about potential misuse.

Anthropic also identified attempts to use AI models to support fraudulent applications for remote jobs at major companies. The AI was reportedly used to create convincing profiles, generate applications, and assist in work-related tasks once jobs had been secured.

Analysts suggest that AI can strengthen such schemes, but most cyber incidents still involve long-established techniques like phishing and exploiting software vulnerabilities.

Cybersecurity specialists emphasise the importance of proactive defence as AI tools evolve. They caution that organisations should treat AI platforms as sensitive systems requiring strong safeguards to prevent their exploitation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung and Chinese brands prepare Max rollout

Russia is pushing for its state-backed messenger Max to be pre-installed on all smartphones sold in the country from September 2025. Chinese and South Korean manufacturers, including Samsung and Xiaomi, are reportedly preparing to comply, though official confirmation is still pending.

The Max platform, developed by VK (formerly VKontakte), offers messaging, audio and video calls, file transfers, and payments. It is set to replace VK Messenger on the mandatory app list, signalling a shift away from foreign apps like Telegram and WhatsApp.

Integration may occur via software updates or prompts when inserting a Russian SIM card.

Concerns have arisen over potential surveillance, as Max, which is backed by the Russian government, collects sensitive personal data. Critics fear the platform may monitor users, reflecting Moscow’s push to control encrypted communications.

The rollout reflects Russia’s broader push for digital sovereignty. While companies navigate compliance, the move highlights the increasing tension between state-backed applications and widely used foreign messaging services in Russia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!