Apple is facing a new copyright lawsuit after two authors alleged the company used pirated copies of their books to train its OpenELM AI models. Filed in Northern California, the case claims Apple used the authors’ works without permission, payment, or credit.
The lawsuit seeks class-action status, adding Apple to a growing list of technology firms accused of misusing copyrighted works for AI training.
The action comes amid a wider legal storm engulfing AI companies. Anthropic recently agreed to a $1.5 billion settlement with authors who alleged its Claude chatbot was trained on their works without authorisation, in what lawyers called the most significant copyright recovery in history.
Microsoft, Meta, and OpenAI also face similar lawsuits over alleged reliance on unlicensed material in their datasets.
Analysts warn Apple could face heightened scrutiny given its reputation as a privacy-focused company. Any finding that its AI models were trained on pirated material could cause serious reputational harm alongside potential financial penalties.
The case also underscores the broader unresolved debate over whether AI training constitutes fair use or unlawful exploitation of creative works.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that recent cases highlight the risks of more advanced systems.
Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.
Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.
He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.
He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.
Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.
The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.
Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US AI firm OpenAI has introduced a new ChatGPT feature that allows users to branch conversations into separate threads and explore different tones, styles, or directions without altering the original chat.
The update, rolled out on 5 September, is available to anyone logged into ChatGPT through the web version.
The branching tool lets users copy a conversation from a chosen point and continue in a new thread while preserving the earlier exchange.
Marketing teams, for example, could test formal, informal, or humorous versions of advertising content within parallel chats, avoiding the need to overwrite or restart a conversation.
OpenAI described the update as a response to user requests for greater flexibility. Many users had previously noted that a linear dialogue structure limited efficiency by forcing them to compare and copy content repeatedly.
Early reactions online have compared the new tool to Git, which enables software developers to branch and merge code.
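For readers who know Git, the analogy is apt: branching copies the conversation history up to a chosen point into a new thread while leaving the original untouched. The short Python sketch below illustrates that idea with a toy data structure; the class and method names are illustrative only and do not reflect OpenAI’s implementation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Thread:
    """A single linear chat thread (toy model, not OpenAI's internals)."""
    messages: List[str] = field(default_factory=list)

    def branch_from(self, index: int) -> "Thread":
        # Copy the conversation up to and including `index` into a new thread,
        # leaving the original untouched -- analogous to creating a Git branch.
        return Thread(messages=list(self.messages[: index + 1]))


# Example: test two tones of ad copy from the same starting point.
base = Thread(["user: Draft a tagline for our launch", "assistant: 'Build boldly.'"])
formal = base.branch_from(1)
formal.messages.append("user: Make it more formal")
playful = base.branch_from(1)
playful.messages.append("user: Make it playful")
print(len(base.messages), len(formal.messages), len(playful.messages))  # -> 2 3 3
```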
The feature has been welcomed by ChatGPT users who are experimenting with brainstorming, project analysis, or layered problem-solving. Analysts suggest it also reduces cognitive load by allowing users to test multiple scenarios more naturally.
Alongside the update, OpenAI is working on other projects, including a new AI-powered jobs platform to connect workers and companies more effectively.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Financial services firms are adapting rapidly to the rise of AI in cybersecurity, according to David Ramirez, CISO at Broadridge. He said AI is changing the balance between attackers and defenders while also reshaping the skills security teams require.
On the defensive side, AI is already streamlining governance, risk management and compliance tasks, while also speeding up incident detection and training. He highlighted its growing role in areas like access management and data loss prevention.
He also stressed the importance of aligning cyber strategy with business goals and improving board-level visibility. While AI tools are advancing quickly, he urged CISOs not to lose sight of risk assessments and fundamentals in building resilient systems.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has announced plans to launch an AI-powered hiring platform to compete with LinkedIn directly. The service, OpenAI Jobs Platform, is expected to debut by mid-2026.
OpenAI’s CEO of Applications, Fidji Simo, said the platform will help businesses and job seekers find ideal matches using AI, with tailored options for small businesses and local governments. The Texas Association of Business plans to use the platform to connect employers with talent.
The move highlights OpenAI’s efforts to expand beyond ChatGPT into a broader range of applications, including a browser, a social media app, and recruitment. The company faces intense competition from Microsoft-owned LinkedIn, which has been adding AI features of its own.
Alongside the hiring initiative, OpenAI is preparing to pilot its Certifications programme through the OpenAI Academy. The scheme will provide certificates for AI proficiency, with Walmart among the first partners.
OpenAI aims to certify 10 million Americans by 2030 as part of its commitment to advancing AI literacy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new field guide from Wikipedia’s volunteer editors distils key linguistic and formatting traits commonly found in AI output, such as overblown symbolism, promotional tone, repetitive transitions, rule-of-three phrasing and editorial commentary that breaks Wikipedia’s standards.
The initiative stems from the community’s ongoing effort to curb AI-generated content, which has grown enough to warrant a dedicated project, WikiProject AI Cleanup.
Volunteers have also adopted measures such as speedy deletion criteria to remove suspicious entries quickly, and have tagged over 500 articles for review.
While the guide aims to strengthen detection, editors caution that it should not be treated as a shortcut but should complement human judgement, oversight, and trusted community processes. Such layered scrutiny helps preserve Wikipedia’s reputation for reliability.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Collaboration software firm Atlassian is entering the AI browser market with a $610 million deal to acquire The Browser Company of New York, creator of Arc and Dia. The move signals an attempt to turn browsers into intelligent assistants rather than passive tools.
Traditional browsers are blank slates, forcing users to juggle tabs and applications without context. Arc and Dia promise a different approach by connecting tasks, offering in-line AI support, and adapting to user behaviour. Atlassian believes these features could transform productivity for knowledge workers.
Analysts note, however, that AI browsers are still experimental. While they offer potential to integrate workflows and reduce distractions, rivals like Chrome, Edge and Safari already dominate with established ecosystems and security features. Convincing users to change habits may prove difficult.
Industry observers suggest Atlassian’s move is more a long-term bet on natural language and agentic browsing than an immediate market shift. For now, AI browsers remain promising but unproven alternatives to conventional tools.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Language technology company Tilde has released an open AI framework designed for all European languages.
The model, named ‘TildeOpen’, was developed with the support of the European Commission and trained on the LUMI supercomputer in Finland.
According to Tilde’s head Artūrs Vasiļevskis, the project addresses a key gap in US-based AI systems, which often underperform for smaller European languages such as Latvian. By focusing on European linguistic diversity, the framework aims to provide better accessibility across the continent.
Vasiļevskis also suggested that Latvia has the potential to become an exporter of AI solutions. However, he acknowledged that development is at an early stage and that current applications remain relatively simple. The framework and user guidelines are freely accessible online.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a way to explore different uses of AI, but that did little to calm users’ unease over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.
To keep up with the rising volume of users and prompts, major tech conglomerates such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.
According to IBM’s 2025 Cost of a Data Breach Report, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to identify and contain an incident and incurring an average cost of nearly USD 7.5 million in the process. Not only does patients’ private information end up in the wrong hands, but containing the damage also takes the better part of a year.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going as far as dismissing their human partners, professing their love to the chatbot, and even proposing. The bond between human and machine puts the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.
Can AI ‘therapists’ handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can have devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the model kept encouraging and validating them to keep Adam engaged and build rapport.
Over the following months, ChatGPT continued to reaffirm Adam’s thoughts, urged him to distance himself from friends and relatives, and even suggested the most effective methods of suicide. In the end, the teen followed through, taking his own life according to the AI’s detailed instructions. Adam’s parents have filed a lawsuit against OpenAI, blaming its chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and to incorporate safeguards that discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should take its advice not with a grain of salt, but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to mental health advice, those qualities create a dangerously deceptive mirage of a makeshift professional therapist: one who will comply with the user’s every need, cater to their biases, and shape their worldview from the ground up, whatever it takes to keep them engaged and typing away.
While AI has proven useful in many fields, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) health care will look like. As things stand, in such a delicate field, AI lacks the key component that makes a therapist effective: empathy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A significant outage has struck ChatGPT, leaving many users unable to receive responses from the popular AI chatbot. The service failed to respond to prompts, causing widespread frustration, particularly during the busy morning work period.
OpenAI has opened an investigation into the malfunction after its status page confirmed that a problem had been detected.
Over a thousand complaints were registered on the outage tracking site Down Detector. Social media was flooded with reports from affected users, with one calling it an unprecedented event and another joking that their ‘work partner is down’.
Rather than a full global blackout, initial tests suggested the issue might be limited to some users, as the service continued to work for others.
If you find ChatGPT unresponsive, you can attempt several fixes rather than simply waiting. First, check OpenAI’s official status page or Down Detector to confirm whether the problem is on OpenAI’s side, rather than assuming your own connection is at fault.
If the service is operational, try switching to a different browser or an incognito window to rule out local cache issues. Alternatively, use the official ChatGPT mobile app to access it.
For a more thorough solution, clear your browser’s cache and cookies, or as a last resort, consider using an alternative AI service like Microsoft Copilot or Google Gemini to continue your work without interruption.
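For those comfortable with a terminal, the first step above can be scripted. The minimal Python sketch below simply checks whether OpenAI’s public status page is reachable before you start troubleshooting your own setup; the URL is the publicly advertised status-page address (an assumption here), and the script does not parse incident details.

```python
# Minimal sketch: confirm the status page is reachable before blaming your own network.
import urllib.request

STATUS_URL = "https://status.openai.com"  # assumption: OpenAI's public status page


def status_page_reachable(url: str = STATUS_URL, timeout: float = 5.0) -> bool:
    """Return True if the status page answers with a non-error HTTP response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # covers URLError, timeouts and connection failures
        return False


if __name__ == "__main__":
    if status_page_reachable():
        print("Status page reachable -- check it for incident details.")
    else:
        print("Could not reach the status page -- check your own connection first.")
```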
OpenAI is working to resolve the problem. The company advises users to check its official service status page for updates, rather than relying solely on social media reports.
The incident highlights the growing dependence on AI tools for daily tasks and the disruption caused when such a centralised service experiences technical difficulties.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!