Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that recent cases highlight the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, the systems can still produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google hit with $3.5 billion EU fine

The European Commission fined Google nearly $3.5 billion after ruling that the company had abused its dominance in digital advertising. Regulators found that Google unfairly favoured its own ad exchange, AdX, in its publisher ad server and ad-buying tools, in violation of EU antitrust rules.

Officials ordered Google to end these practices within 60 days and to address what they described as ‘inherent conflicts of interest’ across the adtech supply chain. Teresa Ribera, the Commission’s executive vice president, said the case showed the need to ensure that digital markets serve the public fairly, warning that more potent remedies would follow if Google failed to comply.

Google announced it would appeal, arguing that its advertising services remain competitive and that businesses have more alternatives than ever. The fine marks the EU’s second-largest competition penalty, following a record $5 billion action against Google in 2018.

The ruling drew criticism from US President Donald Trump, who accused Europe of unfairly targeting American tech firms and threatened retaliatory measures.

Trump hosted a dinner with industry executives, including Google CEO Sundar Pichai and co-founder Sergey Brin, where he won praise for his policies on AI.

Meanwhile, Google secured partial relief in a separate antitrust case in the United States when a judge declined to impose sweeping remedies such as forcing the sale of Chrome or Android.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ChatGPT feature enables multi-threaded chats

The US AI firm OpenAI has introduced a new ChatGPT feature that allows users to branch conversations into separate threads and explore different tones, styles, or directions without altering the original chat.

The update, rolled out on 5 September, is available to anyone logged into ChatGPT through the web version.

The branching tool lets users copy a conversation from a chosen point and continue in a new thread while preserving the earlier exchange.

Marketing teams, for example, could test formal, informal, or humorous versions of advertising content within parallel chats, avoiding the need to overwrite or restart a conversation.

OpenAI described the update as a response to user requests for greater flexibility. Many users had previously noted that a linear dialogue structure limited efficiency by forcing them to compare and copy content repeatedly.

Early reactions online have compared the new tool to Git, which enables software developers to branch and merge code.
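The Git comparison captures the underlying idea. As a rough illustration only (not OpenAI's implementation, and the helper names below are hypothetical), a conversation can be modelled as a list of messages, and a branch is simply a copy of the history up to a chosen point that then continues on its own; the `ask_model` stub stands in for a real model call so the sketch runs offline.

```python
from copy import deepcopy

def ask_model(messages):
    # Placeholder for a real model call via an API client;
    # here it just echoes the last user message so the example is self-contained.
    return f"[model reply to: {messages[-1]['content']}]"

def branch(history, up_to):
    """Copy the conversation up to and including message index `up_to`,
    so the new thread can diverge without altering the original."""
    return deepcopy(history[: up_to + 1])

# Original thread: a draft request and the model's first answer.
main_thread = [{"role": "user", "content": "Draft a product announcement."}]
main_thread.append({"role": "assistant", "content": ask_model(main_thread)})

# Two branches from the same point: one formal, one humorous.
formal = branch(main_thread, up_to=1)
formal.append({"role": "user", "content": "Rewrite it in a formal tone."})
formal.append({"role": "assistant", "content": ask_model(formal)})

humorous = branch(main_thread, up_to=1)
humorous.append({"role": "user", "content": "Rewrite it as a joke."})
humorous.append({"role": "assistant", "content": ask_model(humorous)})

# The original thread is unchanged; each branch keeps its own continuation.
assert len(main_thread) == 2
print(formal[-1]["content"])
print(humorous[-1]["content"])
```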

The feature has been welcomed by ChatGPT users who are experimenting with brainstorming, project analysis, or layered problem-solving. Analysts suggest it also reduces cognitive load by allowing users to test multiple scenarios more naturally.

Alongside the update, OpenAI is working on other projects, including a new AI-powered jobs platform to connect workers and companies more effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic settles $1.5 billion copyright case with authors

The AI startup Anthropic has agreed to pay $1.5 billion to settle a copyright lawsuit accusing the company of using pirated books to train its Claude AI chatbot.

The proposed deal, one of the largest of its kind, comes after a group of authors claimed the startup deliberately downloaded unlicensed copies of around 500,000 works.

According to reports, Anthropic will pay about $3,000 per book plus interest, while agreeing to destroy datasets containing the material. A California judge will review the settlement terms on 8 September before finalising them.

Lawyers for the plaintiffs described the outcome as a landmark, warning that using content from pirate sites for AI training is unlawful.

The case reflects mounting legal pressure on the AI industry, with companies such as OpenAI and Microsoft also facing copyright disputes. The settlement followed a June ruling in which a judge found that using the books to train Claude was ‘transformative’ and qualified as fair use, while allowing claims over the pirated copies themselves to proceed.

Anthropic said the deal resolves legacy claims while affirming its commitment to safe AI development.

Despite the legal challenges, Anthropic continues to grow rapidly. Earlier in August, the company secured $13 billion in funding at a valuation of $183 billion, underlining its rise as one of the fastest-growing players in the global technology sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google avoids breakup as court ruling fuels AI Mode expansion

A US district judge has declined to order a breakup of Google, softening the blow of a 2024 ruling that found the company had illegally monopolised online search.

The decision means Google can press ahead with its shift from a search engine into an answer engine, powered by generative AI.

Google’s AI Mode replaces traditional blue links with direct responses to queries, echoing the style of ChatGPT. While the feature is optional for now, it could become the default.

That alarms publishers, who depend on search traffic for advertising revenue. Studies suggest chatbots reduce referral clicks by more than 90 percent, leaving many sites at risk of collapse.

Google is also experimenting with inserting ads into AI Mode, though it remains unclear how much revenue will flow to content creators. Websites can block their data from being scraped, but doing so would also remove them from Google search entirely.

Despite these concerns, Google argues that competition from ChatGPT, Perplexity, and other AI tools shows that new rivals are reshaping the search landscape.

The judge even cited the emergence of generative AI as a factor that altered the case against Google, underlining how the rise of AI has become central to the future of the internet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia moves to block AI nudify apps

Australia has announced plans to curb AI tools that generate nude images and enable online stalking. The government said it would introduce new legislation requiring tech companies to block apps designed to abuse and humiliate people.

Communications Minister Anika Wells said such AI tools are fuelling sextortion scams and putting children at risk. So-called ‘nudify’ apps, which digitally strip clothing from images, have spread quickly online.

A Save the Children survey found one in five young people in Spain had been targeted by deepfake nudes, showing how widespread the abuse has become.

Canberra pledged to use every available measure to restrict access, while ensuring that legitimate AI services are not harmed. Australia has already passed strict laws banning under-16s from social media, with the new measures set to build on its reputation as a leader in online safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI boss Sam Altman fuels debate over dead internet theory

Sam Altman, chief executive of OpenAI, has suggested that the so-called ‘dead internet theory’ may hold some truth. The idea, long dismissed as a conspiracy theory, claims much of the online world is now dominated by computer-generated content rather than real people.

Altman noted on X that he had not previously taken the theory seriously but believed there were now many accounts run by large language models.

His remark drew criticism from users who argued that OpenAI itself had helped create the problem by releasing ChatGPT in 2022, which triggered a surge of automated content.

The spread of AI systems has intensified debate over whether online spaces are increasingly filled with artificially generated voices.

Some observers also linked Altman’s comments to his work on World Network, formerly Worldcoin, a project launched in 2019 to verify human identity online through biometric scans. That initiative has been promoted as a potential safeguard against the growing influence of AI-driven systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud study shows AI agents driving global business growth

A new Google Cloud study indicates that more than half of global enterprises are already using AI agents, with many reporting consistent revenue growth and faster return on investment.

The research, based on a survey of 3,466 executives across 24 countries, suggests agentic AI is moving from trial projects to large-scale deployment.

The findings by Google Cloud reveal that 52% of executives said their organisations actively use AI agents, while 39% reported running more than ten agents. A group of early adopters, representing 13% of respondents, has gone further by dedicating at least half of their future AI budgets to agentic AI.

These companies are embedding agents across operations and are more likely to report returns in customer service, marketing, cybersecurity and software development.

The report also highlights how industries are tailoring adoption. Financial services focus on fraud detection, retail uses agents for quality control, and telecom operators apply them for network automation.

Regional variations are notable: European companies prioritise tech support, Latin American firms lean on marketing, while Asia-Pacific enterprises emphasise customer service.

Although enthusiasm is strong, challenges remain. Executives cited data privacy, security and integration with existing systems as key concerns.

Google Cloud executives said that early adopters are not only automating tasks but also reshaping business processes, with 2025 expected to mark a shift towards embedding AI intelligence directly into operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity AI teams up with PayPal for fintech expansion

PayPal has partnered with Perplexity AI to provide PayPal and Venmo users in the US and select international markets with a free 12-month Perplexity Pro subscription and early access to the AI-powered Comet browser.

The subscription, normally $200 a year, includes unlimited queries, file uploads and advanced search features, while Comet offers natural-language browsing to simplify complex tasks.

Industry analysts see the initiative as a way for PayPal to strengthen its position in fintech by integrating AI into everyday digital payments.

By linking their accounts, users gain access to AI tools, cashback incentives and subscription management features, signalling a push toward what some describe as agentic commerce, where AI assistants guide financial and shopping decisions.

The deal also benefits Perplexity AI, a rising challenger in the search and browser markets. Exposure to millions of PayPal customers could accelerate the adoption of its technology and provide valuable data for refining its models.

Analysts suggest the partnership reflects a broader trend of payment platforms evolving into service hubs that combine transactions with AI-driven experiences.

While enthusiasm is high among early users, concerns remain about data privacy and regulatory scrutiny over AI integration in finance.

Market reaction has been positive, with PayPal shares edging upward following the announcement. Observers believe such alliances will shape the next phase of digital commerce, where payments, browsing, and AI capabilities converge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood’s Warner Bros. Discovery challenges an AI firm over copyright claims

Warner Bros. Discovery has filed a lawsuit against AI company Midjourney, accusing it of large-scale infringement of its intellectual property. The move follows similar actions by Disney and Universal, signalling growing pressure from major studios on AI image and video generators.

The filing includes examples of Midjourney-produced images featuring DC Comics, Looney Tunes and Rick and Morty characters. Warner Bros. Discovery argues that such output undermines its business model, which relies heavily on licensed images and merchandise.

The studio also claims Midjourney profits from copyright-protected works through its subscription services and the ‘Midjourney TV’ platform.

A central question in the case is whether AI-generated material reproducing copyrighted characters constitutes infringement under US law. US courts have yet to rule definitively on the issue, making the outcome uncertain.

Warner Bros. Discovery is also challenging how Midjourney trains its models, pointing to past statements from company executives suggesting vast quantities of material were indiscriminately collected to build its systems.

With three major Hollywood studios now pursuing lawsuits, the outcome of these cases could establish a precedent for how courts treat AI-generated content.

Warner Bros. Discovery seeks damages that could reach $150,000 per infringed work, or Midjourney’s profits linked to the alleged violations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!