Apple is confronting a significant exodus of AI talent, with key researchers departing for rival firms instead of advancing projects in-house.
The company lost its lead robotics researcher, Jian Zhang, to Meta’s Robotics Studio, alongside several core Foundation Models team members responsible for the Apple Intelligence platform. The brain drain has triggered internal concerns about Apple’s strategic direction and declining staff morale.
Instead of relying entirely on its own systems, Apple is reportedly considering a shift towards using external AI models. The departures include experts like Ruoming Pang, who accepted a multi-year package from Meta reportedly worth $200 million.
Other AI researchers are set to join leading firms like OpenAI and Anthropic, highlighting a fierce industry-wide battle for specialised expertise.
At the centre of the talent war is Meta CEO Mark Zuckerberg, who is offering lucrative packages worth up to $100 million to secure leading researchers for Meta's ambitious AI and robotics initiatives.
The aggressive recruitment strategy is strengthening Meta’s capabilities while simultaneously weakening the internal development efforts of competitors like Apple.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a way to explore the many uses of AI, but the explanation did little to calm unease over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder's hard drive.
To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.
According to the 2025 Cost of a Data Breach Report by IBM, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to recover and incurring an average cost of nearly USD 7.5 million in the process. Patients' private information not only ends up in the wrong hands; containing the damage also takes the better part of a year.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular occurrence for much of the mental health workforce. Therapists are trained to handle these attachments appropriately and without compromising the patient's progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going so far as to leave their human partners, profess their love to the chatbot, and even propose. The bond between man and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While 'AI psychosis' is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one's mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards have limited power to curb the use of AI in mental health or to protect therapists' jobs. Even so, they signal a clear resolve to define AI's role and address unintended harms.
Can AI 'therapists' handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the AI model kept encouraging and validating them to keep Adam engaged and build rapport.
Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should take its advice not with a grain of salt, but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.
While AI has proven useful in multiple fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) health care will look like. As things stand, in such a delicate field of healthcare, AI lacks a key component that makes a therapist effective in their job: empathy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A significant outage has struck ChatGPT, leaving many users unable to receive responses from the popular AI chatbot. Instead of generating answers, the service failed to react to prompts, causing widespread frustration, particularly during the busy morning work period.
OpenAI, the chatbot's owner, has launched an investigation into the malfunction after its status page confirmed that a problem had been detected.
Over a thousand complaints were registered on the outage tracking site Down Detector. Social media was flooded with reports from affected users, with one calling it an unprecedented event and another joking that their ‘work partner is down’.
Rather than a full global blackout, initial tests suggested the issue might be limited to some users, as the service remained functional for others.
If you find ChatGPT unresponsive, there are several fixes you can attempt instead of simply waiting. First, confirm whether the outage is on OpenAI's end by checking the company's official status page or Down Detector, rather than assuming your own connection is at fault.
If the service is operational, try switching to a different browser or an incognito window to rule out local cache issues. Alternatively, use the official ChatGPT mobile app to access it.
For a more thorough solution, clear your browser's cache and cookies or, as a last resort, consider using an alternative AI service such as Microsoft Copilot or Google Gemini to continue your work without interruption.
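For readers who prefer to script that first check, below is a minimal sketch in Python. It assumes OpenAI's status page follows the common Statuspage convention of publishing a JSON summary at /api/v2/status.json; the endpoint and field names are assumptions rather than details confirmed in the article, so adjust them if the page differs.

```python
# Minimal sketch: query OpenAI's public status page before assuming a local fault.
# The /api/v2/status.json endpoint follows the common Statuspage convention and is
# an assumption here, not an officially documented OpenAI API.
import json
import urllib.request

STATUS_URL = "https://status.openai.com/api/v2/status.json"  # assumed endpoint

def chatgpt_status(timeout: float = 5.0) -> str:
    """Return the service-wide status description, e.g. 'All Systems Operational'."""
    with urllib.request.urlopen(STATUS_URL, timeout=timeout) as resp:
        payload = json.load(resp)
    return payload.get("status", {}).get("description", "unknown")

if __name__ == "__main__":
    try:
        print("OpenAI status:", chatgpt_status())
    except Exception as exc:  # a network error here does not rule out a local issue
        print("Could not reach the status page:", exc)
```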
OpenAI is working to resolve the problem. The company advises users to check its official service status page for updates, rather than relying solely on social media reports.
The incident highlights the growing dependence on AI tools for daily tasks and the disruption caused when such a centralised service experiences technical difficulties.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google is preparing to bring its Gemini AI to Google Home devices in place of the existing assistant. The move follows months of user complaints about Google Home's performance, including issues with connectivity and the assistant's failure to recognise basic commands.
With Gemini's superior ability to understand natural language, the upgrade is expected to significantly improve how users interact with their smart devices. Home devices should be better at executing complex, multi-action commands, such as dimming some lights while leaving others on.
The update will also introduce 'Gemini Live' to compatible devices, a feature allowing natural, back-and-forth conversations with the AI chatbot.
The Gemini for Google Home upgrade will initially be rolled out on an early access basis. It will be available in free and paid tiers, suggesting that some more advanced features may be locked behind a subscription.
The update is anticipated to make Google Home and Nest devices more reliable and to handle complex requests easily.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have announced a joint effort to clarify spot cryptocurrency trading. Regulators confirmed that US and foreign exchanges can list spot crypto products, including leveraged and margin products.
The guidance follows recommendations from the President's Working Group on Digital Asset Markets, which called for rules that keep blockchain innovation within the country.
Regulators said they are ready to review filings, address custody and clearing, and ensure spot markets meet transparency and investor protection standards.
Under the new approach, major venues such as the New York Stock Exchange, Nasdaq, CME Group and Cboe Global Markets could seek to list spot crypto assets. Foreign boards of trade recognised by the CFTC may also be eligible.
The move highlights a policy shift under President Donald Trump’s administration, with Congress and the White House pressing for greater regulatory clarity.
In July, the House of Representatives passed the CLARITY Act, a bill on crypto market structure now before the Senate. The moves and the regulators’ statement mark a key step in aligning US digital assets with established financial rules.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
SK Telecom has expanded its partnership with Schneider Electric to develop an AI Data Centre (AIDC) in Ulsan.
Under the deal, Schneider Electric will supply mechanical, electrical and plumbing equipment, such as switchgear, transformers, automated control systems and Uninterruptible Power Supply units.
The agreement builds on a partnership announced at Mobile World Congress 2025 and includes using Schneider’s Electrical Transient Analyser Program within SK Telecom’s data centre management system.
It will allow operations to be optimised through a digital twin model instead of relying only on traditional monitoring tools.
Both companies have also agreed on prefabricated solutions to shorten construction times, reference designs for new facilities, and joint efforts to grow the Energy-as-a-Service business.
A Memorandum of Understanding extends the partnership to other SK Group affiliates, combining battery technologies with Uninterruptible Power Supply and Energy Storage Systems.
Executives said the collaboration would help set new standards for AI data centres and create synergies across the SK Group. It is also expected to support SK Telecom’s broader AI strategy while contributing to sustainable and efficient infrastructure development.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI data centres face growing pressure as computing demands exceed the capacity of single facilities. Traditional Ethernet networks face high latency and inconsistent transfers, forcing companies to build larger centres or risk performance issues.
NVIDIA aims to tackle these challenges with its new Spectrum-XGS Ethernet technology, introducing ‘scale-across’ capabilities. The system links multiple AI data centres using distance-adaptive algorithms, congestion control, latency management, and end-to-end telemetry.
NVIDIA claims the improvements can nearly double GPU communication performance, supporting what it calls ‘giga-scale AI super-factories.’
CoreWeave plans to be among the first adopters, connecting its facilities into a single distributed supercomputer. The deployment will test whether Spectrum-XGS can deliver fast, reliable AI performance across multiple sites without the need for massive single-location centres.
While the technology promises greater efficiency and distributed computing power, its effectiveness depends on real-world infrastructure, regulatory compliance, and data synchronisation.
If successful, it could reshape AI data centre design, enabling faster services and potentially lower operational costs across industries.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The FBI, US agencies, and international partners have issued a joint advisory on a cyber campaign called ‘Salt Typhoon.’
The operation is said to have affected more than 200 US companies, with activity spanning 80 countries.
The advisory, co-released by the FBI, the National Security Agency, the Cybersecurity and Infrastructure Security Agency, and the Department of Defence Cyber Crime Centre, was also supported by agencies in the UK, Canada, Australia, Germany, Italy and Japan.
According to the statement, Salt Typhoon has focused on exploiting network infrastructure such as routers, virtual private networks and other edge devices.
The group has previously been linked to campaigns targeting US telecommunications networks in 2024 and has also been connected with activity involving a US National Guard network. The advisory names three Chinese companies allegedly providing products and services used in the group's operations.
Telecommunications, defence, transportation and hospitality organisations are advised to strengthen cybersecurity measures. Recommended actions include patching vulnerabilities, adopting zero-trust approaches and using the technical details included in the advisory.
Salt Typhoon, also known as Earth Estries and GhostEmperor, has been observed since at least 2019 and is reported to maintain long-term access to compromised devices.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Alphabet’s Google has confirmed plans to invest $9 billion in Virginia by 2026, strengthening the state’s role as a hub for data infrastructure in the US.
The focus will be on AI and cloud computing, positioning Virginia at the forefront of global technological competition.
The plan includes a new Chesterfield County facility and expansion at existing campuses in Loudoun and Prince William counties. These centres are part of the digital backbone that supports cloud services and AI workloads.
Dominion Energy will supply power for the new Chesterfield project, which may take up to seven years before it is fully operational.
The rapid growth of data centres in Virginia has heightened concerns about energy demand. Google said it is working with partners on efficiency and power-management solutions, and is funding community development.
Earlier in August, the company announced a $1 billion initiative to provide every college student in Virginia with one year of free access to its AI Pro plan and training opportunities.
Google’s move follows a broader trend in the technology sector. Microsoft, Amazon, Alphabet, and Meta are expected to spend hundreds of billions of dollars on AI-related projects, with much dedicated to new data centres.
Northern Virginia remains the boom's epicentre, with Loudoun County earning the name 'Data Centre Alley' because of its dense concentration of facilities.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft's AI chief and DeepMind co-founder, Mustafa Suleyman, has warned that society is unprepared for AI systems that convincingly mimic human consciousness, cautioning that 'seemingly conscious' AI could lead the public to treat machines as sentient.
Suleyman highlighted potential risks including demands for AI rights, welfare, and even AI citizenship. Since the launch of ChatGPT in 2022, AI developers have increasingly designed systems to act ‘more human’.
Experts caution that such technology could intensify mental health problems and distort perceptions of reality. The phenomenon known as 'AI psychosis' sees users forming intense emotional attachments or believing AI to be conscious or divine.
Suleyman called for clear boundaries in AI development, emphasising that these systems should be tools for people rather than digital persons. He urged careful management of human-AI interaction without calling for a halt to innovation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!