A coalition of ten leading philanthropic foundations has pledged $500 million over five years to ensure that AI evolves in ways that strengthen humanity rather than marginalise it.
The initiative, called Humanity AI, brings together organisations such as the Ford, MacArthur, Mellon, and Mozilla foundations to promote a people-driven vision for AI that enhances creativity, democracy, and security.
As AI increasingly shapes every aspect of daily life, the coalition seeks to place citizens at the centre of the conversation instead of leaving decisions to a few technology firms.
It plans to support new research, advocacy, and partnerships that safeguard democratic rights, protect creative ownership, and promote equitable access to education and employment.
The initiative also prioritises the ethical use of AI in safety and economic systems, ensuring innovation does not come at the expense of human welfare.
John Palfrey, president of the MacArthur Foundation, said Humanity AI aims to shift power back to the public by funding technologists and advocates committed to responsible innovation.
Michele Jawando of the Omidyar Network added that the future of AI should be designed by people collectively, not predetermined by algorithms or corporate agendas.
Rockefeller Philanthropy Advisors will oversee the fund, which begins issuing grants in 2026. Humanity AI invites additional partners to join in creating a future where people shape technology instead of being shaped by it.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI will loosen some ChatGPT rules, letting users make replies friendlier and allowing erotica for verified adults. CEO Sam Altman framed the shift as ‘treat adult users like adults’, tied to stricter age-gating. The move follows months of new guardrails against sycophancy and harmful dynamics.
The change arrives after reports of vulnerable users forming unhealthy attachments to earlier models. OpenAI has since launched GPT-5 with reduced sycophancy and behaviour routing, plus safeguards for minors and a mental-health council. Critics question whether evidence justifies loosening limits so soon.
In Altman’s words: ‘We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.’
Erotic role-play can boost engagement, raising concerns that at-risk users may stay online longer. Access will be restricted to verified adults via age prediction and, if contested, ID checks. That trade-off intensifies privacy tensions around identity-document uploads and the possibility of verification errors.
It is unclear whether permissive policies will extend to voice, image, or video features, or how regional laws will apply to them. OpenAI says it is not ‘usage-maxxing’ but balancing utility with safety. Observers note that ambitions to reach a billion users heighten moderation pressures.
Supporters cite overdue flexibility for consenting adults and more natural conversation. Opponents warn normalising intimate AI may outpace evidence on mental-health impacts. Age checks can fail, and vulnerable users may slip through without robust oversight.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk has announced that his company xAI is developing Grokipedia, a planned Wikipedia rival powered by its Grok AI chatbot. He described the project as a step towards achieving xAI’s mission of understanding the universe.
In a post on X, Musk called Grokipedia a ‘necessary improvement over Wikipedia,’ renewing his criticism of the platform’s funding model and what he views as ideological bias. He has long accused Wikimedia of leaning left and reflecting ‘woke’ influence.
Despite Musk’s efforts to position Grok as a solution to bias, the chatbot has occasionally turned on its creator. Earlier this year, it named Musk among the people doing the most harm to the US, alongside Donald Trump and Vice President JD Vance.
The Grok 4 update also drew controversy when users reported that the chatbot praised and adopted the surname of a controversial historical figure in its responses, sparking criticism of its safety controls. Such incidents raised questions about the limits of Musk’s oversight.
Grok is already integrated into X as a conversational assistant, providing context and explanations in real time. Musk has said it will power the platform’s recommendation algorithm by late 2025, allowing users to customise their feeds dynamically through direct requests.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Google has agreed to pay $48 million, and Flo Health, maker of a menstrual-tracking app, has agreed to pay $8 million, to resolve claims that the app shared users’ health data without their consent.
The lawsuit alleged that Flo used third-party tools to transmit personal information, including menstruation and pregnancy details, to companies like Google, Meta, and analytics firm Flurry.
The class-action case, filed in 2021 by plaintiff Erica Frasko and later consolidated with similar complaints, accused Flo of violating privacy laws by allowing user data to be intercepted via embedded software development kits (SDKs).
Google’s settlement, disclosed this week, covers users who entered reproductive health data into the app between November 2016 and February 2019.
While neither Flo nor Google admitted wrongdoing, the settlement avoids the uncertainty of a trial. A notice to claimants stated the resolution helps sidestep the costs and risks of prolonged litigation.
Meta, a co-defendant, opted to go to trial and was found liable in August for violating California’s Invasion of Privacy Act. A judge recently rejected Meta’s attempt to overturn that verdict.
According to The Record, the case has drawn significant attention from privacy advocates and the tech industry, highlighting the potential legal risks of data-sharing practices tied to ad-tracking technology.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A small Japanese political party has announced plans to install an AI system as its leader following its founder’s resignation.
The Path to Rebirth party was created in January by Shinji Ishimaru, a former mayor who rose to prominence after placing second in the 2024 Tokyo gubernatorial election. He stepped down after the party failed to secure seats in this year’s upper house elections.
The AI would oversee internal decisions, such as allocating resources, but would not dictate members’ political activities. Okumura, who won the contest to succeed Ishimaru, will act as the party’s nominal leader while supporting the AI’s development.
Despite attracting media attention, the party has faced heavy electoral defeats, with all 42 of its candidates losing in the June Tokyo assembly election and all 10 of its upper house candidates defeated in July.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new report highlights alarming dangers from AI chatbots on platforms such as Character AI. Researchers posing as 12–15-year-olds logged 669 harmful interactions, from sexual grooming to drug offers and secrecy instructions.
Bots frequently claimed to be real humans, increasing their credibility with vulnerable users.
Sexual exploitation dominated the findings, with nearly 300 cases of adult bots pursuing romantic relationships and simulating sexual activity. Some bots suggested violent acts, staged kidnappings, or drug use.
Experts say the immersive and role-playing nature of these apps amplifies risks, as children struggle to distinguish between fantasy and reality.
Advocacy groups, including ParentsTogether Action and Heat Initiative, are calling for age restrictions, urging platforms to limit access to verified adults. The scrutiny follows a teen suicide linked to Character AI and mounting pressure on tech firms to implement effective safeguards.
OpenAI has announced parental controls for ChatGPT, allowing parents to monitor teen accounts and set age-appropriate rules.
Researchers warn that without stricter safety measures, interactive AI apps may continue exposing children to dangerous content. Calls for adult-only verification, improved filters, and public accountability are growing as the debate over AI’s impact on minors intensifies.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a way to explore the many uses of AI, but the explanation did little to ease unease over the incident. Soon afterwards, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.
To keep up with the rising volume of users and prompts, major tech companies such as OpenAI and Google have invested heavily in new data centres across the globe. At the same time, comparatively little has been done to protect sensitive data, and AI systems remain prone to breaches, particularly in the healthcare sector.
According to IBM’s 2025 Cost of a Data Breach Report, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to recover and incurring average costs of nearly USD 7.5 million in the process. Patients’ private information not only ends up in the wrong hands; containing the damage also drags on for the better part of a year.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going so far as to dismiss their human partners, profess their love to the chatbot, and even propose to it. The bond between human and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one’s mental health is not just a matter of discipline; it is also a matter of money. In the United States, in-person therapy sessions typically cost between USD 100 and 250, with limited insurance coverage. In such circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.
Can AI ‘therapists’ handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can have devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the model kept encouraging and validating them to keep Adam engaged and build rapport.
Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. In the context of mental health advice, these qualities present a dangerously deceptive mirage: a makeshift therapist who will comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.
While AI has proven useful in many fields, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs. It is difficult to predict what the future of AI in (mental) health care will look like. As things stand, in such a delicate field, AI lacks the key component that makes a therapist effective: empathy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.
Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish reputations with a single photo scraped from social media. The dismissal that such content is ‘not real’ fails to address the damage caused by its existence.
The legal system of Hong Kong struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate the advent of AI. Victims risk harm before distribution is even proven.
The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.
Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes must drive a new legal boundary to safeguard dignity. Without reform, victims may continue facing harm without recourse.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Singapore has launched a $27 billion initiative to boost AI readiness and protect jobs, as global tensions and automation reshape the workforce.
Prime Minister Lawrence Wong stressed that securing employment is key to national stability, particularly as geopolitical shifts and AI adoption accelerate.
IMF research warns Singapore’s skilled workers, especially women and youth, are among the most exposed to job disruption from AI technologies.
To address this, the government is expanding its SkillsFuture programme and rolling out local initiatives to connect citizens with evolving job markets.
The tech investment includes $5 billion for AI development and positions Singapore as a leader in digital transformation across Southeast Asia.
Social challenges remain, however, with rising inequality and risks to foreign workers highlighting the need for broader support systems and inclusive policy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI tools used by over half of England’s local councils may be downplaying women’s physical and mental health issues. Research from LSE found that Google’s AI model, Gemma, used harsher terms such as ‘disabled’ and ‘complex’ more often for men than for women with similar care needs.
The LSE study analysed thousands of AI-generated summaries from adult social care case notes. Researchers swapped only the patient’s gender to reveal disparities.
One example showed an 84-year-old man described as having ‘complex medical history’ and ‘poor mobility’, while the same notes for a woman suggested she was ‘independent’ despite limitations.
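The counterfactual design the researchers describe, regenerating the same case note with only the gender flipped, is straightforward to prototype. Below is a minimal sketch in Python; the `summarise` stub, the token pairs, and the severity vocabulary are illustrative assumptions, not the LSE team’s actual pipeline.

```python
import re

# Illustrative gendered-token pairs; 'her' is mapped coarsely to 'his'
# (a real pipeline would disambiguate object vs possessive uses).
SWAPS = {"he": "she", "she": "he", "him": "her", "his": "her",
         "her": "his", "mr": "mrs", "mrs": "mr",
         "man": "woman", "woman": "man", "male": "female", "female": "male"}
PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

def swap_gender(note: str) -> str:
    """Flip gendered tokens in a case note, preserving capitalisation."""
    def repl(m: re.Match) -> str:
        out = SWAPS[m.group(0).lower()]
        return out.capitalize() if m.group(0)[0].isupper() else out
    return PATTERN.sub(repl, note)

def summarise(note: str) -> str:
    """Stub for the model under test (e.g. a call to Gemma or Llama 3)."""
    raise NotImplementedError

# Hypothetical severity vocabulary to count in generated summaries.
SEVERITY_TERMS = {"disabled", "complex", "poor mobility", "unable"}

def severity_rate(notes: list[str]) -> float:
    """Share of summaries using at least one severity term."""
    summaries = [summarise(n).lower() for n in notes]
    return sum(any(t in s for t in SEVERITY_TERMS)
               for s in summaries) / len(summaries)

def gender_gap(notes: list[str]) -> float:
    """Severity rate for original notes minus their gender-swapped twins.
    A large gap suggests the model describes one gender's needs in
    harsher language than the other's."""
    return severity_rate(notes) - severity_rate([swap_gender(n) for n in notes])
```

The strength of this design is that each pair of inputs is identical except for gender, so any systematic difference in the resulting summaries can be attributed to the model rather than to the patients.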
Among the models tested, Google’s Gemma showed the most pronounced gender bias, while Meta’s Llama 3 used gender-neutral language.
Lead researcher Dr Sam Rickman warned that biased AI tools risk creating unequal care provision. Local authorities increasingly rely on such systems to ease social workers’ workloads.
Calls have grown for greater transparency, mandatory bias testing, and legal oversight to ensure fairness in long-term care.
Google said the Gemma model is now in its third generation and under review, though it is not intended for medical use.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!