Teens turn to AI chatbots for support, raising mental health concerns

Mental health experts in Iowa have warned that teenagers are increasingly turning to AI chatbots instead of seeking human connection, raising concerns about misinformation and harmful advice.

The issue comes into focus on National Suicide Prevention Day, shortly after a lawsuit was filed against OpenAI, the maker of ChatGPT, over a teenager’s suicide.

Jessica Bartz, a therapy supervisor at Vera French Duck Creek, said young people are at a vulnerable stage of identity formation while family communication often breaks down.

She noted that some teens use chatbot tools like ChatGPT, Gemini and Copilot to self-diagnose, which can reinforce inaccurate or damaging ideas.

‘Sometimes AI can validate the wrong things,’ Bartz said, stressing that algorithms only reflect the limited information users provide.

Without human guidance, young people risk misinterpreting results and worsening their struggles.

Experts recommend that parents and trusted adults engage directly with teenagers, offering empathy and open communication instead of leaving them dependent on technology.

Bartz emphasised that nothing can replace a caring person noticing warning signs and intervening to protect a child’s well-being.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI export rules tighten as the US opens global opportunities

President Trump has signed an Executive Order to promote American leadership in AI exports, marking a significant policy shift. The move creates new global opportunities for US businesses but also introduces stricter compliance responsibilities.

The order establishes the American AI Exports Program, overseen by the Department of Commerce, to develop and deploy ‘full-stack’ AI export packages.

These packages cover everything from chips and cloud infrastructure to AI models and cybersecurity safeguards. Industry consortia will be invited to submit proposals, outlining hardware origins, export targets, business models, and federal support requests.

A central element of the initiative is ensuring compliance with US export control regimes. Companies must align with the Export Control Reform Act and the Export Administration Regulations, with special attention to restrictions on advanced computing chips.

New guidance warns against potential violations linked to hardware and highlights red flags for illegal diversion of sensitive technology.

Commerce stresses that participation requires robust export compliance plans and rigorous end user screening.

Legal teams are urged to review policies on AI exports, as regulators focus on preventing misuse of advanced computing systems in military or weapons programmes abroad.

International search widens for ransomware fugitive on EU Most Wanted

A Ukrainian cybercrime suspect has been added to the EU’s Most Wanted list for his role in the 2019 LockerGoga ransomware attack against a major Norwegian aluminium company and other global incidents.

The fugitive is considered a high-value target and is wanted by multiple countries. The US Department of Justice has offered up to USD 10 million for information leading to his arrest.

Europol stated that the identification of the suspect followed a lengthy, multinational investigation supported by Eurojust, with damages from the network estimated to be in the billions. Several members of the group have already been detained in Ukraine.

Investigators have mapped the network’s operations, tracing its hierarchy from malware developers and intrusion experts to money launderers who processed illicit proceeds. The wanted man is accused of directly deploying LockerGoga ransomware.

Europol has urged the public to visit the EU Most Wanted website and share information that could assist in locating the fugitive. The suspect’s profile is now live on the platform.

Anthropic AI faces legal setback in authors’ piracy lawsuit

A federal judge has rejected the $1.5 billion settlement Anthropic agreed to in a piracy lawsuit filed by authors.

Judge William Alsup expressed concerns that the deal was ‘nowhere close to complete’ and could be forced on writers without proper input.

The lawsuit involves around 500,000 authors whose works were allegedly used without permission to train Anthropic’s large language models. The proposed settlement would have granted $3,000 per work, a sum far exceeding previous copyright recoveries.

However, the judge criticised the lack of clarity regarding the list of works, authors, notification process, and claim forms.

Alsup instructed the lawyers to provide clear notice to class members and allow them to opt in or out. He also emphasised that Anthropic must be shielded from future claims on the same issue. The court set deadlines for a final list of works by September 15 and approval of all related documents by October 10.

The ruling highlights ongoing legal challenges for AI companies using copyrighted material for training large language models instead of relying solely on licensed or public-domain data.

Trilateral quantum talks highlight innovation and security priorities

The United States, Japan, and South Korea held two Trilateral Quantum Cooperation meetings this week in Seoul and Tokyo. Officials and experts from government and industry gathered to discuss securing quantum ecosystems against cyber, physical, and intellectual property threats.

The US State Department stressed that joint efforts will ensure breakthroughs in quantum computing benefit citizens while safeguarding innovation. Officials said cooperation is essential as quantum technologies could reshape industries, global power balances, and economic prosperity.

The President of South Korea, Lee Jae Myung, described the partnership as entering a ‘golden era’, noting that Seoul, Washington, and Tokyo must work together both to address North Korea and to drive technological progress.

The talks come as Paul Dabbar, the former CEO of Bohr Quantum Technology, begins his role as US Deputy Secretary of Commerce. Dabbar brings experience in deploying emerging quantum network technologies to the new trilateral framework.

North Korea has also signalled interest in quantum computing for economic development. Analysts note that quantum’s lower energy demand compared to supercomputers could appeal to a country plagued by chronic power shortages.

Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that such cases highlight the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.

Anthropic settles $1.5 billion copyright case with authors

AI startup Anthropic has agreed to pay $1.5 billion to settle a copyright lawsuit accusing the company of using pirated books to train its Claude AI chatbot.

The proposed deal, one of the largest of its kind, comes after a group of authors claimed the startup deliberately downloaded unlicensed copies of around 500,000 works.

According to reports, Anthropic will pay about $3,000 per book and add interest while agreeing to destroy datasets containing the material. A California judge will review the settlement terms on 8 September before finalising them.

Lawyers for the plaintiffs described the outcome as a landmark, warning that using pirated websites for AI training is unlawful.

The case reflects mounting legal pressure on the AI industry, with companies such as OpenAI and Microsoft also facing copyright disputes. The settlement followed a June ruling in which a judge found that using the books to train Claude was ‘transformative’ and qualified as fair use, while allowing claims over the pirated copies themselves to proceed.

Anthropic said the deal resolves legacy claims while affirming its commitment to safe AI development.

Despite the legal challenges, Anthropic continues to grow rapidly. Earlier in August, the company secured $13 billion in funding at a valuation of $183 billion, underlining its rise as one of the fastest-growing players in the global technology sector.

Google avoids breakup as court ruling fuels AI Mode expansion

A US district judge has declined to order a breakup of Google, softening the blow of a 2024 ruling that found the company had illegally monopolised online search.

The decision means Google can press ahead with its shift from a search engine into an answer engine, powered by generative AI.

Google’s AI Mode replaces traditional blue links with direct responses to queries, echoing the style of ChatGPT. While the feature is optional for now, it could become the default.

That alarms publishers, who depend on search traffic for advertising revenue. Studies suggest chatbots reduce referral clicks by more than 90 percent, leaving many sites at risk of collapse.

Google is also experimenting with inserting ads into AI Mode, though it remains unclear how much revenue will flow to content creators. Websites can block their data from being scraped, but doing so would also remove them from Google search entirely.

Despite these concerns, Google argues that competition from ChatGPT, Perplexity, and other AI tools shows that new rivals are reshaping the search landscape.

The judge even cited the emergence of generative AI as a factor that altered the case against Google, underlining how the rise of AI has become central to the future of the internet.

Hollywood’s Warner Bros. Discovery challenges an AI firm over copyright claims

Warner Bros. Discovery has filed a lawsuit against AI company Midjourney, accusing it of large-scale infringement of its intellectual property. The move follows similar actions by Disney and Universal, signalling growing pressure from major studios on AI image and video generators.

The filing includes examples of Midjourney-produced images featuring DC Comics, Looney Tunes and Rick and Morty characters. Warner Bros. Discovery argues that such output undermines its business model, which relies heavily on licensed images and merchandise.

The studio also claims Midjourney profits from copyright-protected works through its subscription services and the ‘Midjourney TV’ platform.

A central question in the case is whether AI-generated material reproducing copyrighted characters constitutes infringement under US law. US courts have not yet ruled definitively on this issue, making the outcome uncertain.

Warner Bros. Discovery is also challenging how Midjourney trains its models, pointing to past statements from company executives suggesting vast quantities of material were indiscriminately collected to build its systems.

With three major Hollywood studios now pursuing lawsuits, the outcome of these cases could establish a precedent for how courts treat AI-generated content.

Warner Bros. Discovery seeks damages that could reach $150,000 per infringed work, or Midjourney’s profits linked to the alleged violations.

China and India adopt contrasting approaches to AI governance

As AI becomes central to business strategy, questions of corporate governance and regulation are gaining prominence. A study by Akshaya Kamalnath and Lin Lin examines how China and India are addressing these issues through law, policy, and corporate practice.

The paper focuses on three questions: how regulations are shaping AI and data protection in corporate governance, how companies are embedding technological expertise into governance structures, and how institutional differences influence each country’s response.

Findings suggest a degree of convergence in governance practices. Both countries have seen companies create chief technology officer roles, establish committees to manage technological risks, and disclose information about their use of AI.

In China, these measures are largely guided by central and provincial authorities, while in India, they reflect market-driven demand.

China’s approach is characterised by a state-led model that combines laws, regulations, and soft-law tools such as guidelines and strategic plans. The system is designed to encourage innovation while addressing risks in an adaptive manner.

India, by contrast, has fewer binding regulations and relies on a more flexible, principles-based model shaped by judicial interpretation and self-regulation.

Broader themes also emerge. In China, state-owned enterprises are using AI to support environmental, social, and governance (ESG) goals, while India has framed its AI strategy under the principle of ‘AI for All’ with a focus on the role of public sector organisations.

Together, these approaches underline how national traditions and developmental priorities are shaping AI governance in two of the world’s largest economies.