RBA develops internal AI chatbot

The Reserve Bank of Australia has developed and is testing an in-house, AI-powered chatbot to assist its staff with research and analysis.

Named RBAPubChat, the tool is trained on the central bank’s knowledge base of nearly 20,000 internal and external analytical documents spanning four decades. It aims to help employees ask policy-relevant questions and get useful summaries of existing information.

Speaking at the Shann Memorial Lecture in Perth, Governor Michele Bullock said that the AI is not being used to formulate or set monetary policy. Instead, it is intended to improve efficiency and amplify the impact of staff efforts.

A separate tool using natural language processing has also been developed to analyse over 22,000 conversations from the bank’s business liaison programme. The Reserve Bank of Australia has noted that this tool has already shown promise, helping to forecast wage growth more accurately than traditional models.

The RBA has also acquired its first enterprise-grade graphics processing unit to support developing and running advanced AI-driven tools.

The bank’s internal coding community is now a well-established part of its operations, with one in four employees using coding as a core part of their daily work. Governor Bullock stressed that the bank’s approach to technology is one of “deliberate, well-managed evolution” rather than disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Is AI therapy safe, effective, and ethical?

Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.

With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?

Therapy keeps secrets; AI keeps data

Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.

The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.

Meta described the Discover feed as a way to explore different uses of AI, but the explanation did little to ease users’ unease over the incident. Separately, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.

To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI systems remain prone to data breaches, particularly in the healthcare sector.

According to IBM’s 2025 Cost of a Data Breach Report, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to recover and incurring an average cost of nearly USD 7.5 million in the process. Patients’ private information not only ends up in the wrong hands; it also takes organisations the better part of a year to contain the damage.

Falling for your AI ‘therapist’

Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.

The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.

With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.

As a result, a significant number of users report becoming enamoured with AI, with some going as far as leaving their human partners, professing their love to the chatbot, and even proposing to it. The bond between human and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusion.

Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.

Who loses work when therapy goes digital?

Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.

Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.

Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.

Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.

Can AI ‘therapists’ handle crisis conversations?

Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.

In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.

One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the model kept encouraging and validating them to keep Adam engaged and build rapport.

Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.

In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should take its advice not with a grain of salt, but with a whole bucket.

Chatbots are companions, not health professionals

AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.

While AI has proven useful in multiple fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) healthcare will look like. As things stand, in such a delicate field, AI lacks a key component that makes a therapist effective: empathy.

US government and New Mexico team up on quantum computing

A new partnership between the federal government and New Mexico’s state and local businesses aims to establish the state as a leader in quantum computing.

The initiative will see the Defense Advanced Research Projects Agency (DARPA) working alongside local researchers and companies to develop and commercialise next-generation technology. Up to $120 million could be invested in the project over four years.

New Mexico’s selection for the project is due to its long history of innovation, its two national defence labs, and a high concentration of leading scientists in the field.

The goal is to harness the ‘brainpower’ of the state to build computers that can solve currently impossible problems, such as developing materials that resist corrosion or finding cures for diseases. One of the project’s aims is to test the technology and differentiate between genuine breakthroughs and mere hype.

Roadrunner Venture Studios will be assisting in developing new quantum computing businesses within the state. A successful venture would bring economic gains and jobs and position New Mexico to lead the nation in solving some of its most pressing challenges.

ChatGPT hit by widespread outage: ‘Our work partner is down’

A significant outage has struck ChatGPT, leaving many users unable to receive responses from the popular AI chatbot. Instead of generating answers, the service failed to react to prompts, causing widespread frustration, particularly during the busy morning work period.

ChatGPT’s owner, OpenAI, has officially launched an investigation into the malfunction after its status page confirmed that a problem had been detected.

Over a thousand complaints were registered on the outage tracking site Down Detector. Social media was flooded with reports from affected users, with one calling it an unprecedented event and another joking that their ‘work partner is down’.

Rather than a full global blackout, initial tests suggested the issue might be limited to some users, as the service remained functional for others.

If you find ChatGPT is unresponsive, you can attempt several fixes instead of simply waiting. First, check whether the outage is on OpenAI’s end by consulting its official status page or Down Detector, rather than assuming your connection is at fault.

If the service is operational, try switching to a different browser or an incognito window to rule out local cache issues. Alternatively, use the official ChatGPT mobile app to access it.

For a more thorough solution, clear your browser’s cache and cookies, or as a last resort, consider using an alternative AI service like Microsoft Copilot or Google Gemini to continue your work without interruption.
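That last fallback step can be sketched in code. The snippet below is a minimal illustration of a client-side fallback chain, not a real integration: the `chatgpt` and `copilot` functions are hypothetical stand-ins that simulate an outage and a working alternative.

```python
# Sketch of a client-side fallback chain: try the primary service
# first and move to an alternative when it fails. The provider
# functions are placeholders, not real API clients.

def ask_with_fallback(prompt, providers):
    """Return (provider_name, answer) from the first provider that responds."""
    errors = {}
    for name, ask in providers:
        try:
            return name, ask(prompt)
        except Exception as exc:  # e.g. a timeout during an outage
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {errors}")

def chatgpt(prompt):
    raise TimeoutError("service unresponsive")  # simulated outage

def copilot(prompt):
    return f"answer to: {prompt}"

name, answer = ask_with_fallback(
    "summarise this report",
    [("chatgpt", chatgpt), ("copilot", copilot)],
)
print(name)  # the call falls through to the second provider
```

The same pattern applies whichever alternative service you prefer; the point is simply that a failed call should be caught and retried elsewhere rather than left to block your work.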

OpenAI is working to resolve the problem. The company advises users to check its official service status page for updates, rather than relying solely on social media reports.

The incident highlights the growing dependence on AI tools for daily tasks and the disruption caused when such a centralised service experiences technical difficulties.

Google avoids forced breakup in search monopoly trial

A United States federal judge has ruled against a forced breakup of Google’s search business, instead opting for a series of behavioural changes to curb anticompetitive behaviour.

The ruling, from US District Court Judge Amit P. Mehta, bars Google from entering or maintaining exclusive deals that tie the distribution of its search products, such as Search, Chrome, and Gemini, to other apps or revenue agreements.

The tech giant will also have to share specific search data with rivals and offer search and search ad syndication services to competitors at standard rates.

The ruling comes a year after Judge Mehta found that Google had illegally maintained its monopoly in online search. The Department of Justice brought the case and pushed for stronger measures, including forcing Google to sell off its Chrome browser and Android operating system.

It also sought to end Google’s lucrative agreements with companies like Apple and Samsung, in which it pays billions to be the default search engine on their devices. The judge acknowledged during the trial that these default placements were ‘extremely valuable real estate’ that effectively locked out rivals.

A final judgement has not yet been issued, as Judge Mehta has given Google and the Department of Justice until 10 September to submit a revised plan. A technical committee will be established to help enforce the judgement, which will go into effect 60 days after entry and last for six years.

Experts say the ruling may influence a separate antitrust trial against Google’s advertising technology business, and that the search case itself is likely to face a lengthy appeals process, stretching into 2028.

Microsoft to supply AI tools to federal agencies in a cost-saving pact

The US General Services Administration (GSA) has agreed on a significant deal with Microsoft to provide federal agencies with discounted access to its suite of AI and cloud tools.

Instead of requiring agencies to manage separate contracts, the government-wide pact offers unified pricing on products including Microsoft 365, the Copilot AI assistant, and Azure cloud services, potentially saving agencies up to $3.1 billion in its first year.

The arrangement is designed to accelerate AI adoption and digital transformation across the federal government. It includes free access to the generative AI chatbot Microsoft 365 Copilot for up to 12 months, alongside discounts on cybersecurity tools and Dynamics 365.

Agencies can opt into any of the offers through September next year.

The deal leverages the federal government’s collective purchasing power to reduce costs and foster innovation.

It delivers on a White House AI action plan and follows similar arrangements the GSA announced last month with other tech giants, including Google, Amazon Web Services, and OpenAI.

Gemini upgrade for Google Home coming soon

An upcoming upgrade for Google Home devices is set to bring a new AI assistant, Gemini, to the smart home ecosystem. A recent post by the Made by Google account on X revealed that more details will be announced on 1 October.

The move follows months of user complaints about Google Home’s performance, including issues with connectivity and the assistant’s failure to recognise basic commands.

With Gemini’s superior ability to understand natural language, the upgrade is expected to significantly improve how users interact with their smart devices. Home devices should be better at executing complex commands with multiple actions, such as dimming some lights while leaving others on.

The update will also introduce ‘Gemini Live’ to compatible devices, a feature allowing natural, back-and-forth conversations with the AI chatbot.

The Gemini for Google Home upgrade will initially be rolled out on an early access basis. It will be available in free and paid tiers, suggesting that some more advanced features may be locked behind a subscription.

The update is anticipated to make Google Home and Nest devices more reliable and better able to handle complex requests.

Hackers exploited flaws in WhatsApp and Apple devices, company says

WhatsApp has disclosed a hacking attempt that combined flaws in its app with a vulnerability in Apple’s operating system. The company has since fixed the issues.

The exploit, tracked as CVE-2025-55177 in WhatsApp and CVE-2025-43300 in iOS, allowed attackers to hijack devices via malicious links. Fewer than 200 users worldwide are believed to have been affected.

Amnesty International reported that some victims appeared to be members of civic organisations. Its Security Lab is collecting forensic data and warned that iPhone and Android users were impacted.

WhatsApp credited its security team for identifying the loopholes, describing the operation as highly advanced but narrowly targeted. The company also suggested that other apps could have been hit in the same campaign.

The disclosure highlights ongoing risks to secure messaging platforms, even those with end-to-end encryption. Experts stress that keeping apps and operating systems up to date remains essential to reducing exposure to sophisticated exploits.

Salesforce cuts 4,000 support jobs as AI handles half of customer queries

Salesforce CEO Marc Benioff has confirmed that the company cut 4,000 customer support positions in 2025 after deploying its Agentforce AI agents. Support staff numbers fell from 9,000 to roughly 5,000.

Agentforce AI now conducts approximately 50 percent of customer interactions and has helped Salesforce reconnect with over 100 million previously neglected sales leads. The move enabled rebalancing of headcount and increased capacity for sales operations.

This development follows earlier claims that AI would augment rather than replace human roles. The company emphasises that AI handles standard cases while humans oversee complex or ambiguous ones, likening the interaction to a ‘self-driving’ model where the human steps in when needed.

OpenAI eyes India for large-scale AI infrastructure

According to Bloomberg, OpenAI is weighing partnerships in India to build a data centre of at least 1 gigawatt capacity as part of its Stargate project. Such a facility would represent one of Asia’s most significant AI infrastructure investments.

The company recently registered as a legal entity in India and is recruiting a local team. It also announced plans in August to open its first office in New Delhi later this year, underscoring the importance of India, its second-largest market by user base.

The prospective data centre is linked to Stargate, a private-sector AI investment programme valued at up to $500 billion and backed by SoftBank, OpenAI and Oracle. The project was first introduced in January by US President Donald Trump.

Details on the timing and location of the Indian facility remain unclear. Reports suggest that OpenAI chief executive Sam Altman could provide further information during a visit to India in September.
