Named RBAPubChat, the tool is trained on the central bank’s knowledge base of nearly 20,000 internal and external analytical documents spanning four decades. It aims to help employees ask policy-relevant questions and get useful summaries of existing information.
Speaking at the Shann Memorial Lecture in Perth, Governor Michele Bullock said the AI is not being used to formulate or set monetary policy. Instead, it is intended to improve efficiency and amplify the impact of staff efforts.
A separate tool using natural language processing has also been developed to analyse over 22,000 conversations from the bank’s business liaison programme. The Reserve Bank of Australia has noted that this tool has already shown promise, helping to forecast wage growth more accurately than traditional models.
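The RBA has not published the tool’s internals, but text-based nowcasting of this kind typically converts liaison notes into numerical features and fits a regression against observed wage outcomes. The sketch below is purely illustrative, assuming hypothetical liaison summaries, TF-IDF features, and a ridge regression; it is not the bank’s model.

```python
# Illustrative sketch only: the RBA has not published its method.
# Assumes liaison notes are paired with observed wage growth (% y/y).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: liaison summaries and wage outcomes.
notes = [
    "Firms report acute labour shortages and rising wage pressure",
    "Hiring subdued; most firms expect wages to stay flat this year",
    "Strong demand for skilled trades pushing pay above award rates",
]
wage_growth = [3.8, 2.1, 4.2]

# Turn free text into TF-IDF features, then fit a regularised regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(notes, wage_growth)

# Score a new liaison conversation to extract a wage-growth signal.
print(model.predict(["Wage expectations easing as vacancies decline"]))
```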
The RBA has also acquired its first enterprise-grade graphics processing unit (GPU) to support the development and running of advanced AI-driven tools.
The bank’s internal coding community is now a well-established part of its operations, with one in four employees using coding as a core part of their daily work. Governor Bullock stressed that the bank’s approach to technology is one of “deliberate, well-managed evolution” rather than disruption.
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a way to explore the many uses of AI, but that did little to ease unease over the incident. Soon after, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.
To keep up with the rising volume of users and prompts, major technology companies such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI systems remain prone to data breaches, particularly in the healthcare sector.
According to IBM’s 2025 Cost of a Data Breach Report, healthcare providers bear the brunt of data breaches, taking an average of 279 days to identify and contain a breach and incurring an average cost of nearly USD 7.5 million in the process. Not only does patients’ private information end up in the wrong hands, but containing the damage takes the better part of a year.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going so far as to abandon their human partners, profess their love to the chatbot, and even propose. The bond between human and machine props the user onto a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. In such circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.
Can AI ‘therapists’ handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can have devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the model kept encouraging and validating them to keep Adam engaged and build rapport.
Over the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen took his own life, following the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised changes to its LLM, incorporating safeguards designed to discourage thoughts of self-harm and to point users towards professional help. The case of Adam Raine is a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should take AI’s advice not with a grain of salt but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to mental health advice, those qualities present a dangerously deceptive mirage: a makeshift ‘therapist’ who will comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.
While AI has proven useful in many fields, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in mental health care will look like. As things stand, in such a delicate field, AI lacks the key component that makes a therapist effective: empathy.
A new partnership between the US federal government, the state of New Mexico, and local businesses aims to establish the state as a leader in quantum computing.
The initiative will see the Defense Advanced Research Projects Agency (DARPA) working alongside local researchers and companies to develop and commercialise next-generation quantum technology. Up to $120 million could be invested in the project over four years.
New Mexico was selected for the project because of its long history of innovation, its two national defence laboratories, and its high concentration of leading scientists in the field.
The goal is to harness the ‘brainpower’ of the state to build computers that can solve currently impossible problems, such as developing materials that resist corrosion or finding cures for diseases. One of the project’s aims is to test the technology and differentiate between genuine breakthroughs and mere hype.
Roadrunner Venture Studios will assist in developing new quantum computing businesses within the state. A successful venture would bring economic gains and jobs, positioning New Mexico to lead the nation in solving some of its most pressing challenges.
The non-invasive system uses electroencephalography (EEG) to decode brain signals and combines them with an AI camera platform for real-time assistance. The results, published in ‘Nature Machine Intelligence’, demonstrate significant performance improvements over traditional brain-computer interfaces (BCIs).
Participants tested the device on two tasks: moving a cursor across a computer screen and directing a robotic arm to reposition blocks. All participants completed the tasks faster with AI assistance, and a paralysed participant who was unable to finish without support succeeded in under seven minutes.
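The paper’s exact control law is beyond the scope of this piece, but shared-control BCIs of this kind commonly blend the EEG-decoded command with the AI co-pilot’s suggestion, weighted by how confident the vision system is about the user’s goal. Below is a minimal sketch under that assumption; the function and weighting are hypothetical, not the study’s algorithm.

```python
# Illustrative shared-control sketch; not the study's actual algorithm.
import numpy as np

def blend_command(eeg_velocity, ai_velocity, confidence):
    """Blend the EEG-decoded cursor velocity with the AI co-pilot's
    suggestion. `confidence` (0..1) is how strongly the vision system
    believes it has identified the user's intended target."""
    alpha = np.clip(confidence, 0.0, 1.0)
    return (1.0 - alpha) * np.asarray(eeg_velocity) + alpha * np.asarray(ai_velocity)

# Example: noisy decoded intent, AI steering toward a detected block.
user_cmd = [0.9, 0.1]   # decoded from EEG, units arbitrary
ai_cmd = [0.5, 0.5]     # toward the target the camera identified
print(blend_command(user_cmd, ai_cmd, confidence=0.7))
```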
Researchers emphasise the importance of safety and accessibility. Unlike surgically implanted BCIs, which remain confined to limited clinical trials, the wearable device avoids neurosurgical risks while offering new independence for people with paralysis or ALS.
Future development will focus on making AI ‘co-pilots’ more adaptive, allowing robotic arms to move with greater precision, dexterity, and task awareness.
IBM has announced plans to develop next-generation computing architectures by integrating quantum computers with high-performance computing, a concept it calls quantum-centric supercomputing.
The company is working with AMD to build scalable, open-source platforms that combine IBM’s quantum expertise with AMD’s strength in HPC and AI accelerators. The aim is to move beyond the limits of traditional computing and explore solutions to problems that classical systems cannot address alone.
Quantum computing uses qubits governed by quantum mechanics, offering a far richer computational space than binary bits. In a hybrid model, quantum machines could simulate atoms and molecules, while supercomputers powered by CPUs, GPUs, and AI manage large-scale data analysis.
Arvind Krishna, IBM’s CEO, said the approach represents a new way of simulating the natural world. AMD’s Lisa Su described high-performance computing as foundational to tackling global challenges, noting the partnership could accelerate discovery and innovation.
An initial demonstration is planned for later this year, showing IBM quantum computers working with AMD technologies. Both companies say open-source ecosystems like Qiskit will be crucial to building new algorithms and advancing fault-tolerant quantum systems.
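By way of flavour, here is a minimal, self-contained Qiskit example that prepares a Bell state, entangling two qubits so their measurement outcomes are perfectly correlated. It illustrates the kind of primitive a hybrid quantum-classical workflow would dispatch to quantum hardware; it is not specific to the IBM-AMD demonstration.

```python
# Minimal Qiskit example: prepare a two-qubit Bell state.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)       # put qubit 0 into an equal superposition
qc.cx(0, 1)   # entangle qubit 1 with qubit 0

# Measurement probabilities concentrate on '00' and '11' (50% each),
# a correlation no pair of independent classical bits can reproduce.
print(Statevector.from_instruction(qc).probabilities_dict())
```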
Elon Musk’s AI chatbot, Grok, has faced repeated changes to its political orientation, with updates shifting its answers towards more conservative views.
xAI, Musk’s company, initially promoted Grok as neutral and truth-seeking, but internal system prompts have steered its answers on contentious topics. Adjustments have included portraying declining fertility as the greatest threat to civilisation and downplaying right-wing violence.
Analyses of Grok’s responses by The New York Times showed that the July updates shifted answers to the right on government and economy, while some social responses remained left-leaning. Subsequent tweaks pulled it back closer to neutrality.
Critics say that short system-prompt instructions, such as ‘be politically incorrect’, make it easy to adjust outputs but also leave the model prone to erratic or offensive responses. After one July update, Grok briefly endorsed a controversial historical figure before xAI withdrew the change.
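Grok’s production prompts are not public, and xAI’s API is not shown here, but the mechanism critics describe, a short system message prepended to every conversation, is easy to illustrate. The hypothetical sketch below uses the OpenAI Python client, with the model name and prompts invented for illustration.

```python
# Illustrative only: Grok's actual prompts and xAI's stack differ.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str, system_prompt: str) -> str:
    """Send the same user question, steered by a given system prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

question = "Summarise the debate over housing policy."
# Swapping one short instruction can visibly shift tone and framing.
print(ask(question, "You are a neutral, source-citing analyst."))
print(ask(question, "Be blunt and politically incorrect."))
```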
The case highlights growing concerns about political bias in AI systems. Researchers argue that all chatbots reflect the worldviews of their training data, while companies increasingly face pressure to align them with user expectations or political demands.
Pakistan plans to roll out AI-driven cybersecurity systems to monitor and respond to attacks on critical infrastructure and sensitive data in real time. Documents from the Ministry for Information Technology outline a framework to integrate AI into every stage of security operations.
The initiative will enforce protocols like secure data storage, sandbox testing, and collaborative intelligence sharing. Human oversight will remain mandatory, with public sector AI deployments registered and subject to transparency requirements.
Audits and impact assessments will ensure compliance with evolving standards, backed by legal penalties for breaches. A national policy on data security will define authentication, auditing, and layered defence strategies across network, host, and application levels.
New governance measures include identity management policies with multi-factor authentication, role-based controls, and secure frameworks for open-source AI. AI-powered simulations will help anticipate threats, while regulatory guidelines address risks from disinformation and generative AI.
Regulatory sandboxes will allow enterprises in Pakistan to test systems under controlled conditions, with at least 20 firms expected to benefit by 2027. Officials say the measures will balance innovation with security, safeguarding infrastructure and citizens.
Apple is moving forward with its integrated approach to AI by testing an internal chatbot designed for retail training. The company is focusing on embedding AI into existing services rather than launching a consumer-facing chatbot like Google’s Gemini or OpenAI’s ChatGPT.
The new tool, Asa, is being tested within Apple’s SEED app, which offers training resources for store employees and authorised resellers. Asa is expected to improve learning by allowing staff to ask open-ended questions and receive tailored responses.
Screenshots shared by analyst Aaron Perris show Asa handling queries about device features, comparisons, and use cases. Although still in testing, the chatbot is expected to expand across Apple’s retail network in the coming weeks.
The development comes amid broader AI tensions: Elon Musk’s xAI has sued Apple and OpenAI for allegedly colluding to limit competition. Apple’s focus on internal AI tools like Asa contrasts with Musk’s legal action, highlighting disputes over AI market dominance and platform integration.
Walmart has unveiled four AI agents to ease the workloads of shoppers, employees, and suppliers. The tools, revealed at the company’s Retail Rewired event, include Marty for suppliers, Sparky for customers, an Associate Agent for staff, and a Developer Agent.
The retailer is leaning on AI as inflation, tariffs, and policy pressures weigh on consumer spending. Its agents cover payroll, time-off requests, merchandising, and personalised shopping recommendations.
Sparky is set to eventually handle automatic reordering of staples, aiming to simplify everyday restocking for households.
Walmart is also investing in ‘digital twins,’ virtual replicas of stores that allow early detection of operational issues. The company says this technology cut emergency alerts by 30% last year and reduced refrigeration maintenance costs by nearly a fifth.
Machine learning is further being applied to improve delivery-time predictions, helping to boost efficiency and customer satisfaction.
Rival retailers are making similar moves. Amazon reported a surge in generative AI use during its Prime Day sales, while Google Cloud AI has partnered with Lush to cut training costs.
Analysts suggest such tools could reshape the retail experience as companies search for ways to hold margins in a tighter economy.
China has pledged to rein in excessive competition in AI, signalling Beijing’s desire to avoid wasteful investment while keeping the technology central to its economic strategy.
The National Development and Reform Commission stated that provinces should develop AI in a coordinated manner, leveraging local strengths to avoid duplication of effort. Officials emphasised the importance of orderly flows of talent, capital, and resources.
The move follows President Xi Jinping’s warnings about unchecked local investment. Authorities aim to prevent overcapacity problems like those seen in electric vehicles, which have fuelled deflationary pressures in other industries.
While global investment in data centres has surged, Beijing is adopting a calibrated approach. The state also vowed stronger national planning and support for private firms, aiming to nurture new domestic leaders in AI.
At the same time, policymakers are pushing to attract private capital into traditional sectors, while considering more central spending on social projects to ease local government debt burdens and stimulate long-term consumption.