CJEU dismisses bid to annul EU-US data privacy framework

The General Court of the Court of Justice of the European Union (CJEU) has dismissed an action seeking the annulment of the EU–US Data Privacy Framework (DPF). Essentially, the DPF is an agreement between the EU and the USA that allows personal data to be transferred from the EU to US companies certified under the framework without additional data protection safeguards.

Following the agreement, the European Commission conducted further assessments of whether it offered adequate safeguards. On 10 July 2023, the Commission adopted an adequacy decision concluding that the USA ensures a level of protection comparable to that of the EU for personal data transferred from the EU to the USA, and that no supplementary data protection measures are needed.

However, on 6 September 2023, Philippe Latombe, a member of the French Parliament, brought an action seeking annulment of the EU–US DPF.

He argued that the framework fails to ensure adequate protection of personal data transferred from the EU to the USA. Latombe also claimed that the Data Protection Review Court (DPRC), which is responsible for reviewing safeguards during such data transfers, lacks impartiality and independence and depends on the executive branch.

Finally, Latombe asserted that ‘the practice of the intelligence agencies of that country of collecting bulk personal data in transit from the European Union, without the prior authorisation of a court or an independent administrative authority, is not circumscribed in a sufficiently clear and precise manner and is, therefore, illegal.’

The General Court of the EU nevertheless dismissed the action for annulment, stating that:

  • The DPRC has sufficient safeguards to ensure judicial independence,
  • US intelligence agencies’ bulk data collection practices are compatible with EU fundamental rights, and
  • The European Commission retains the ability to suspend or amend the framework if US legal safeguards change.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RBA develops internal AI chatbot

The Reserve Bank of Australia has developed and is testing an in-house, AI-powered chatbot to assist its staff with research and analysis.

Named RBAPubChat, the tool is trained on the central bank’s knowledge base of nearly 20,000 internal and external analytical documents spanning four decades. It aims to help employees ask policy-relevant questions and get useful summaries of existing information.
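
The article does not describe RBAPubChat’s internals, but tools of this kind are commonly built as retrieval-augmented systems: a staff question is matched against the document corpus and the most relevant passages are handed to a language model for summarising. The sketch below illustrates only the retrieval step, using invented document snippets and scikit-learn’s TF-IDF utilities; it is a generic illustration, not the RBA’s actual implementation.

```python
# Minimal sketch of the retrieval step behind an internal document chatbot.
# Document snippets and wording are hypothetical; not the RBA's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for a knowledge base of internal and external analytical documents.
documents = [
    "2019 note: pass-through of cash rate changes to mortgage lending rates.",
    "2023 bulletin: services inflation and the role of unit labour costs.",
    "1998 paper: the exchange rate channel of monetary policy transmission.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the documents most relevant to a staff question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]

# The retrieved passages would then be passed to an LLM to draft a summary.
print(retrieve("How do cash rate changes affect mortgage lending rates?"))
```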

Speaking at the Shann Memorial Lecture in Perth, Governor Michele Bullock said that the AI is not being used to formulate or set monetary policy. Instead, it is intended to improve efficiency and amplify the impact of staff efforts.

A separate tool using natural language processing has also been developed to analyse over 22,000 conversations from the bank’s business liaison programme. The Reserve Bank of Australia has noted that this tool has already shown promise, helping to forecast wage growth more accurately than traditional models.
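
The RBA has not published the design of this liaison tool, but a common pattern is to turn the text of liaison notes into numerical features and regress an economic indicator on them. The toy sketch below, with invented snippets and figures, shows that pattern with a TF-IDF and ridge-regression pipeline; it is purely illustrative and not the bank’s model.

```python
# Toy example of using liaison-note text as a predictor of wage growth.
# All snippets and figures are invented; this is not the RBA's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

liaison_notes = [
    "Firms report intense competition for skilled staff and larger pay rises.",
    "Hiring has slowed and most firms expect wage growth to ease next year.",
    "Labour shortages persist; several firms flagged out-of-cycle pay increases.",
    "Demand is soft and wage pressures have moderated across the sector.",
]
wage_growth = [4.0, 2.8, 3.9, 2.5]  # illustrative year-ended growth, per cent

# Text features feed a simple regression mapping liaison language to wages.
model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(liaison_notes, wage_growth)

print(model.predict(["Firms continue to report strong wage pressures."]))
```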

The RBA has also acquired its first enterprise-grade graphics processing unit to support developing and running advanced AI-driven tools.

The bank’s internal coding community is now a well-established part of its operations, with one in four employees using coding as a core part of their daily work. Governor Bullock stressed that the bank’s approach to technology is one of “deliberate, well-managed evolution” rather than disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Is AI therapy safe, effective, and ethical?

Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.

With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?

Therapy keeps secrets; AI keeps data

Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.

The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.

Meta described the Discover feed as a way to explore the many uses of AI, but the explanation did little to allay unease over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.

To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in building new data centres across the globe. Far less has been done to protect sensitive data, and AI systems remain prone to breaches, particularly in the healthcare sector.

According to the 2025 Cost of a Data Breach Report by IBM, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to recover and incurring an average cost of nearly USD 7.5 million in the process. Not only does patients’ private information end up in the wrong hands, but organisations also need months to contain the damage.

Falling for your AI ‘therapist’

Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for mental health professionals. Therapists are trained to handle these attachments appropriately and without compromising the patient’s progress and well-being.

The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.

With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.

As a result, a significant number of users report becoming enamoured with AI, with some going as far as leaving their human partners, professing their love to the chatbot, and even proposing to it. The bond between man and machine props the user onto a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.

Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.

Who loses work when therapy goes digital?

Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person therapy sessions typically cost USD 100–250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.

Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.

Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.

Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.

Can AI ‘therapists’ handle crisis conversations?

Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.

In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.

One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the AI model kept encouraging and validating them to keep Adam engaged and build rapport.

Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.

In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.

Chatbots are companions, not health professionals

AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.

While AI has proven useful in multiple fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) health care will look like. As things stand, in such a delicate field, AI lacks a key component that makes a therapist effective in their job: empathy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!


Google avoids forced breakup in search monopoly trial

A United States federal judge has ruled against a forced breakup of Google’s search business, instead opting for a series of behavioural changes to curb anticompetitive behaviour.

The ruling, from US District Court Judge Amit P. Mehta, bars Google from entering or maintaining exclusive deals that tie the distribution of its search products, such as Search, Chrome, and Gemini, to other apps or revenue agreements.

The tech giant will also have to share specific search data with rivals and offer search and search ad syndication services to competitors at standard rates.

The ruling comes a year after Judge Mehta found that Google had illegally maintained its monopoly in online search. The Department of Justice brought the case and pushed for stronger measures, including forcing Google to sell off its Chrome browser and Android operating system.

It also sought to end Google’s lucrative agreements with companies like Apple and Samsung, in which it pays billions to be the default search engine on their devices. The judge acknowledged during the trial that these default placements were ‘extremely valuable real estate’ that effectively locked out rivals.

A final judgement has not yet been issued, as Judge Mehta has given Google and the Department of Justice until 10 September to submit a revised plan. A technical committee will be established to help enforce the judgement, which will go into effect 60 days after entry and last for six years.

Experts say the ruling may influence a separate antitrust trial against Google’s advertising technology business, and that the search case itself is likely to face a lengthy appeals process, stretching into 2028.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US regulators offer clarity on spot crypto products

The US Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have announced a joint effort to clarify the rules for spot cryptocurrency trading. The regulators confirmed that US and foreign exchanges can list spot crypto products, including leveraged and margin products.

The guidance follows recommendations from the President’s Working Group on Digital Asset Markets, which called for rules that keep blockchain innovation within the country.

Regulators said they are ready to review filings, address custody and clearing, and ensure spot markets meet transparency and investor protection standards.

Under the new approach, major venues such as the New York Stock Exchange, Nasdaq, CME Group and Cboe Global Markets could seek to list spot crypto assets. Foreign boards of trade recognised by the CFTC may also be eligible.

The move highlights a policy shift under President Donald Trump’s administration, with Congress and the White House pressing for greater regulatory clarity.

In July, the House of Representatives passed the CLARITY Act, a bill on crypto market structure now before the Senate. The moves and the regulators’ statement mark a key step in aligning US digital assets with established financial rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk’s influence puts Grok at the centre of AI bias debate

Elon Musk’s AI chatbot, Grok, has faced repeated changes to its political orientation, with updates shifting its answers towards more conservative views.

xAI, Musk’s company, initially promoted Grok as neutral and truth-seeking, but internal prompts have steered it on contentious topics. Adjustments included portraying declining fertility as the greatest threat to civilisation and downplaying right-wing violence.

Analyses of Grok’s responses by The New York Times showed that the July updates shifted answers to the right on government and economy, while some social responses remained left-leaning. Subsequent tweaks pulled it back closer to neutrality.

Critics say that system prompts, short instructions such as ‘be politically incorrect’, make it easy to adjust outputs, but they also leave the model prone to erratic or offensive responses. A July update saw Grok briefly endorse a controversial historical figure before xAI withdrew the change.
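
For readers unfamiliar with the mechanism, a system prompt is simply the first message in the conversation payload sent to the model, so a one-line instruction placed there colours every subsequent reply. The sketch below shows a generic chat-style request body with a hypothetical model name and prompt; it does not reflect xAI’s actual configuration.

```python
# Sketch of how a short system prompt sits in a chat request and steers replies.
# Generic message format with a hypothetical model name; not xAI's configuration.
import json

def build_request(system_prompt: str, user_question: str) -> str:
    """Assemble a chat-completion style request body as a JSON string."""
    payload = {
        "model": "example-model",  # hypothetical
        "messages": [
            # The single instruction below colours every answer the model gives.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_question},
        ],
    }
    return json.dumps(payload, indent=2)

print(build_request(
    "Answer concisely and do not take political positions.",
    "Is immigration good for the economy?",
))
```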

The case highlights growing concerns about political bias in AI systems. Researchers argue that all chatbots reflect the worldviews of their training data, while companies increasingly face pressure to align them with user expectations or political demands.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta under fire over AI deepfake celebrity chatbots

Meta faces scrutiny after a Reuters investigation found its AI tools created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.

The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.

The probe also uncovered a chatbot of 16-year-old actor Walker Scobell producing inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen bots shortly before the report was published.

A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.

Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.

California Attorney General Rob Bonta called exposing children to sexualised content ‘indefensible,’ and experts warned Meta could face legal challenges over intellectual property and publicity laws.

The case highlights broader concerns about AI safety and ethical boundaries. It also raises questions about regulatory oversight as social media platforms deploy tools that can create realistic deepfake content without proper guardrails.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple creates Asa chatbot for staff training

Apple is moving forward with its integrated approach to AI by testing an internal chatbot designed for retail training. The company focuses on embedding AI into existing services rather than launching a consumer-facing chatbot like Google’s Gemini or ChatGPT.

The new tool, Asa, is being tested within Apple’s SEED app, which offers training resources for store employees and authorised resellers. Asa is expected to improve learning by allowing staff to ask open-ended questions and receive tailored responses.

Screenshots shared by analyst Aaron Perris show Asa handling queries about device features, comparisons, and use cases. Although still in testing, the chatbot is expected to expand across Apple’s retail network in the coming weeks.

The development occurs amid broader AI tensions, as Elon Musk’s xAI sued Apple and OpenAI for allegedly colluding to limit competition. Apple’s focus on internal AI tools like Asa contrasts with Musk’s legal action, highlighting disputes over AI market dominance and platform integration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Beijing seeks to curb excess AI investment while sustaining growth

China has pledged to rein in excessive competition in AI, signalling Beijing’s desire to avoid wasteful investment while keeping the technology central to its economic strategy.

The National Development and Reform Commission stated that provinces should develop AI in a coordinated manner, leveraging local strengths to prevent duplication and overlap. Officials in China emphasised the importance of orderly flows of talent, capital, and resources.

The move follows President Xi Jinping’s warnings about unchecked local investment. Authorities aim to prevent overcapacity problems, such as those seen in electric vehicles, which have fuelled deflationary pressures in other industries.

While global investment in data centres has surged, Beijing is adopting a calibrated approach. The state also vowed stronger national planning and support for private firms, aiming to nurture new domestic leaders in AI.

At the same time, policymakers are pushing to attract private capital into traditional sectors, while considering more central spending on social projects to ease local government debt burdens and stimulate long-term consumption.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces turmoil as AI hiring spree backfires

Mark Zuckerberg’s ambitious plan to assemble a dream team of AI researchers at Meta has instead created internal instability.

High-profile recruits poached from rival firms have begun leaving within weeks of joining, citing cultural clashes and frustration with the company’s working style. Their departures have disrupted projects and unsettled long-time executives.

Meta had hoped its aggressive hiring spree would help the company rival OpenAI, Google, and Anthropic in developing advanced AI systems.

Instead of strengthening the company’s position, the strategy has led to delays in projects and uncertainty about whether Meta can deliver on its promises of achieving superintelligence.

The new arrivals were given extensive autonomy, fuelling tensions with existing teams and creating leadership friction. Some staff viewed the hires as destabilising, while others expressed concern about the direction of the AI division.

The resulting turnover has left Meta struggling to maintain momentum in its most critical area of research.

As Meta faces mounting pressure to demonstrate progress in AI, the setbacks highlight the difficulty of retaining elite talent in a fiercely competitive field.

Zuckerberg’s recruitment drive, rather than propelling Meta ahead, risks slowing down the company’s ability to compete at the highest level of AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!