The Shanghai Cooperation Organisation (SCO) summit in Tianjin closed with leaders adopting the Tianjin Declaration, highlighting member states’ commitment to multilateralism, sovereignty, and shared security.
The discussions emphasised economic resilience, financial cooperation, and collective responses to security challenges.
Proposals included exploring joint financial mechanisms, such as common bonds and payment systems, to shield member economies from external disruptions.
Leaders also underlined the importance of strengthening cooperation in trade and investment, with China pledging additional funding and infrastructure support across the bloc. Observers noted that these measures reflect growing interest in alternative global finance and economic governance approaches.
Security issues featured prominently, with agreements to enhance counter-terrorism initiatives and expand existing structures such as the Regional Anti-Terrorist Structure. Delegates also called for greater collaboration against cross-border crime, drug trafficking, and emerging security risks.
At the same time, they stressed the need for political solutions to ongoing regional conflicts, including those in Ukraine, Gaza, and Afghanistan.
With its expanding membership and combined economic weight, the SCO continues to position itself as a platform for cooperation beyond traditional regional security concerns.
While challenges remain, including diverging interests among key members, the Tianjin summit indicated the bloc’s growing role in discussions on multipolar governance and collective stability.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Following the agreement, the European Commission conducted further investigations to assess whether it offered adequate safeguards. On 10 July 2023, the Commission adopted an adequacy decision concluding that the USA ensures a sufficient level of protection comparable to that of the EU when transferring data from the EU to the USA, and that there is no need for supplementary data protection measures.
However, on 6 September 2023, Philippe Latombe, a member of the French Parliament, brought an action seeking annulment of the EU–US DPF.
He argued that the framework fails to ensure adequate protection of personal data transferred from the EU to the USA. Latombe also claimed that the Data Protection Review Court (DPRC), which is responsible for reviewing safeguards during such data transfers, lacks impartiality and independence and depends on the executive branch.
Finally, Latombe asserted that ‘the practice of the intelligence agencies of that country of collecting bulk personal data in transit from the European Union, without the prior authorisation of a court or an independent administrative authority, is not circumscribed in a sufficiently clear and precise manner and is, therefore, illegal.’ Nevertheless, the General Court of the EU dismissed the action for annulment, finding that:
The DPRC has sufficient safeguards to ensure judicial independence,
US intelligence agencies’ bulk data collection practices are compatible with EU fundamental rights, and
The decision consolidates the European Commission’s ability to suspend or amend the framework if US legal safeguards change.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The regulatory approaches to AI in the EU and Australia are diverging significantly, creating a complex challenge for the global tech sector.
Instead of a unified global standard, companies must now navigate the EU’s stringent, risk-based AI Act and Australia’s more tentative, phased-in approach. The disparity underscores the necessity for sophisticated cross-border legal expertise to ensure compliance in different markets.
In the EU, the landmark AI Act is now in force, implementing a strict risk-based framework with severe financial penalties for non-compliance.
Conversely, Australia has yet to pass binding AI-specific laws, opting instead for a proposal paper outlining voluntary safety standards and 10 mandatory guardrails for high-risk applications currently under consultation.
This divergence creates a markedly different compliance environment for businesses operating in both regions.
For tech companies, the evolving patchwork of international regulations turns AI governance into a strategic differentiator instead of a mere compliance obligation.
Understanding jurisdictional differences, particularly in areas like data governance, human oversight, and transparency, is becoming essential for successful and lawful global operations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a means to explore various uses of AI, but it did little to mitigate everyone’s uneasiness over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.
To keep up with the rising volume of users and prompts, major tech conglomerates such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.
According to the 2025 Cost of a Data Breach Report by IBM, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to identify and contain an incident and incurring an average cost of nearly USD 7.5 million in the process. Patients’ private information not only ends up in the wrong hands; it also remains exposed for the better part of a year before the breach is contained.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going so far as to dismiss their human partners, profess their love to the chatbot, and even propose. The bond between human and machine puts the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards offer only limited control over the use of AI in mental health, and limited protection for therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.
Can AI ‘therapists’ handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the AI model kept encouraging and validating them to keep Adam engaged and build rapport.
Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.
While AI has proven useful in multiple fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) health care will look like. As things stand, in such a delicate field of healthcare, AI lacks a key component that makes a therapist effective in their job: empathy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The US General Services Administration (GSA) has agreed a significant deal with Microsoft to provide federal agencies with discounted access to its suite of AI and cloud tools.
Instead of managing separate contracts, the government-wide pact offers unified pricing on products including Microsoft 365, the Copilot AI assistant, and Azure cloud services, potentially saving agencies up to $3.1 billion in its first year.
The arrangement is designed to accelerate AI adoption and digital transformation across the federal government. It includes free access to the generative AI chatbot Microsoft 365 Copilot for up to 12 months, alongside discounts on cybersecurity tools and Dynamics 365.
Agencies can opt into any of the offers through September next year.
The deal leverages the federal government’s collective purchasing power to reduce costs and foster innovation.
It delivers on a White House AI action plan and follows similar arrangements the GSA announced last month with other tech giants, including Google, Amazon Web Services, and OpenAI.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have announced a joint effort to clarify spot cryptocurrency trading. Regulators confirmed that US and foreign exchanges can list spot crypto products, including leveraged and margin products.
The guidance follows recommendations from the President’s Working Group on Digital Asset Markets, which called for rules that keep blockchain innovation within the country.
Regulators said they are ready to review filings, address custody and clearing, and ensure spot markets meet transparency and investor protection standards.
Under the new approach, major venues such as the New York Stock Exchange, Nasdaq, CME Group and Cboe Global Markets could seek to list spot crypto assets. Foreign boards of trade recognised by the CFTC may also be eligible.
The move highlights a policy shift under President Donald Trump’s administration, with Congress and the White House pressing for greater regulatory clarity.
In July, the House of Representatives passed the CLARITY Act, a bill on crypto market structure now before the Senate. The moves and the regulators’ statement mark a key step in aligning US digital assets with established financial rules.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A report has highlighted a potential exposure of Apple ID logins after a 47.42 GB database was discovered on an unsecured web server, reportedly affecting up to 184 million accounts.
The database was identified by security researcher Jeremiah Fowler, who indicated it may include unencrypted credentials across Apple services and other platforms.
Security experts recommend users review account security, including updating passwords and enabling two-factor authentication.
The alleged database contains usernames, email addresses, and passwords, which could allow access to iCloud, App Store accounts, and data synced across devices.
Observers note that centralised credential management carries inherent risks, underscoring the importance of careful data handling practices.
Reports suggest that Apple’s email software flaws could theoretically increase risk if combined with exposed credentials.
Apple has acknowledged researchers’ contributions in identifying server issues and has issued security updates, while ongoing vigilance and standard security measures are recommended for users.
The case illustrates the challenges of safeguarding large-scale digital accounts and may prompt continued discussion about regulatory standards and personal data protection.
Users are advised to maintain strong credentials and monitor account activity.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A whistle-blower has reported that the Department of Government Efficiency (DOGE) allegedly transferred a copy of the US Social Security database to an Amazon Web Services cloud environment.
The action placed personal information for more than 300 million individuals in a system outside traditional federal oversight.
Known as NUMIDENT, the database contains information submitted for Social Security applications, including names, dates of birth, addresses, citizenship, and parental details.
DOGE personnel managed the cloud environment and gained administrative access to perform testing and operational tasks.
Federal officials have highlighted that standard security protocols and authorisations, such as those outlined under the Federal Information Security Management Act (FISMA) and the Privacy Act of 1974, are designed to protect sensitive data.
The transfer has prompted internal reviews, raising questions about compliance with established federal security practices.
While DOGE has not fully clarified the purpose of the cloud deployment, observers note that such initiatives may relate to broader federal efforts to improve data accessibility or inter-agency information sharing.
The case is part of ongoing discussions on balancing operational flexibility with information security in government systems.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China has introduced a sweeping new law that requires all AI-generated content online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.
The law, first announced in March by the Cyberspace Administration of China, mandates that all AI-created text, images, video and audio must carry explicit and implicit markings.
These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.
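The dual requirement of a visible label plus an embedded machine-readable marker can be illustrated with a short sketch. The structure below is hypothetical: the regulations mandate both kinds of marking but do not prescribe this exact format, and the field names are invented for the example.

```python
import json

def label_ai_text(content: str, generator: str) -> dict:
    """Attach an explicit (visible) and an implicit (embedded) AI marker.

    Illustrative only: the CAC rules require both marking types,
    but this schema is an assumption, not the mandated format.
    """
    visible = f"[AI-generated] {content}"  # explicit label shown to readers
    metadata = json.dumps({                # implicit, machine-readable marker
        "ai_generated": True,
        "generator": generator,
    })
    return {"text": visible, "meta": metadata}
```

For images, video, and audio, the implicit marker would instead be embedded in the file itself, for example as a watermark or metadata field, while the explicit label appears in the user interface.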
Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.
While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.
Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.
The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.
The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.
By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.
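The two-track policy described above can be sketched as a simple routing function. This is an illustration only: OpenAI's actual classifiers, flag names, and thresholds are not public, so everything here is an assumption made for the example.

```python
from enum import Enum

class Route(Enum):
    RESOURCES = "direct the user to professional resources"
    MODERATOR = "escalate to trained human moderators"
    NONE = "no safety action"

def triage(conversation_flags: set) -> Route:
    """Hypothetical sketch of the two-track safeguard described above;
    the flag names are invented for illustration."""
    if "self_harm" in conversation_flags:
        return Route.RESOURCES   # suicidal intent: resources, not law enforcement
    if "threat_to_others" in conversation_flags:
        return Route.MODERATOR   # harm to others: human review, possible escalation
    return Route.NONE
```

The key design point the policy reflects is asymmetry: self-harm signals route to support resources, while threats to third parties route to human review, where escalation to authorities remains a possibility.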
The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.
OpenAI is working to strengthen consistency across interactions and developing parental controls, new interventions for risky behaviour, and potential connections to professional help before crises worsen.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!