AI chatbot shows promise in mental health assistance

Dartmouth College researchers have trialled an AI chatbot, Therabot, designed to assist with mental health care. In a groundbreaking clinical trial, the app was tested on individuals with major depressive disorder (MDD) or generalised anxiety disorder (GAD), and on those at risk of eating disorders.

The results showed encouraging improvements, with users reporting up to a 51% reduction in depressive symptoms and a 31% decrease in anxiety symptoms. These outcomes were comparable to those of traditional outpatient therapy.

The trial also revealed that Therabot helped individuals at risk of eating disorders, producing a 19% reduction in harmful thoughts about body image and weight.

Researchers noted that after eight weeks of engagement with the app, participants showed significant symptom reduction, marking progress comparable to standard cognitive therapy.

While Therabot’s success offers hope, experts highlight the importance of balancing AI with human oversight, especially in sensitive mental health applications.

The study’s authors emphasised that while AI can help improve access to therapy, particularly for those unable to access in-person care, generative AI tools must be used cautiously, as errors could have serious consequences for individuals at risk of self-harm.

For more information on these topics, visit diplomacy.edu.

NHS contractor fined after ransomware attack

The tech firm Advanced, which provides services to the NHS, has been fined over £3 million by the UK data watchdog following a major ransomware attack in 2022.

The breach disrupted NHS systems and exposed the personal data of tens of thousands of people across the country.

Originally facing a £6 million penalty, Advanced saw the fine halved after settling with the Information Commissioner’s Office.

Regulators said the firm failed to implement multi-factor authentication, allowing hackers to access systems using stolen login details.

The LockBit ransomware attack caused widespread outages, including disrupted access to UK patient data. Advanced acknowledged the resolution but declined to comment further or name a spokesperson when contacted.

For more information on these topics, visit diplomacy.edu.

Mobile coverage from space may soon be reality

Satellite-based mobile coverage could arrive in the UK by the end of 2025, with Ofcom launching a consultation on licensing direct-to-smartphone services.

The move would allow users to stay connected in areas without mast coverage using an ordinary mobile phone.

The proposal favours mobile networks teaming up with satellite operators to share frequencies in unserved regions, offering limited services like text messaging at first, with voice and data to follow.

Ofcom plans strict interference controls, and Vodafone is among those preparing to roll out such technology.

If approved, the service would be available across the UK mainland and surrounding seas, but not yet in places like the Channel Islands.

The public has until May to respond, as Ofcom seeks to modernise mobile access and help close the digital divide.

For more information on these topics, visit diplomacy.edu.

AI agents take centre stage in Oracle Fusion

Oracle has launched its AI Agent Studio, a new platform designed to let businesses orchestrate and customise AI agents within its Fusion Applications suite.

Announced during the Oracle CloudWorld Tour in London, the studio enables companies to coordinate teams of AI agents that handle tasks across enterprise resource planning, HR, supply chain, and customer experience systems.

The AI Agent Studio allows businesses to adapt prebuilt Oracle agents to suit their own processes. Users can modify agents by adjusting logic, integrating external tools, or adding custom prompts.

It also offers flexibility in choosing from a range of large language models optimised for Oracle or industry-specific use cases, including Llama and Cohere models.
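To give a feel for the kind of orchestration described above, here is a minimal, hypothetical Python sketch. The class names, model labels, and workflow are illustrative assumptions only, not Oracle's actual AI Agent Studio API, which is configured within the Fusion Applications suite.

```python
# Hypothetical sketch of multi-agent orchestration; names and models are
# illustrative assumptions, not Oracle's actual AI Agent Studio API.
from dataclasses import dataclass, field


@dataclass
class Agent:
    """A single task-focused agent built on a chosen LLM."""
    name: str
    model: str           # e.g. a Llama or Cohere model
    system_prompt: str   # custom prompt layered on a prebuilt agent
    tools: list = field(default_factory=list)  # external tool integrations

    def run(self, task: str) -> str:
        # A real agent would call the selected LLM with its prompt and
        # tools; here we just trace the hand-off for illustration.
        return f"[{self.name}/{self.model}] handled: {task}"


@dataclass
class AgentTeam:
    """Coordinates a team of agents across a multi-step workflow."""
    agents: list

    def execute(self, steps: list) -> None:
        for agent, task in zip(self.agents, steps):
            print(agent.run(task))


# Example: an invoice workflow spanning ERP-style tasks.
team = AgentTeam(agents=[
    Agent("invoice-matcher", "llama-3", "Match invoices to purchase orders."),
    Agent("payment-approver", "cohere-command", "Flag exceptions for review."),
])
team.execute(["match invoice #1042", "approve payment batch 7"])
```

The point of the sketch is the division of labour: each agent pairs one model with one narrow prompt and toolset, and a coordinator sequences them across a workflow.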

Oracle’s move builds on earlier AI deployments in its cloud applications, where agents have been embedded to manage routine operations like invoice processing or recruitment steps.

The new platform advances that effort by allowing these agents to operate collaboratively and be tailored to more complex workflows.

Industry leaders including Accenture, Deloitte, and PwC have praised the development, calling it a significant step toward smarter enterprise automation.

Analysts echo this sentiment, noting that Oracle’s approach allows businesses to maximise AI efficiency across departments without added cost, offering a powerful edge in today’s rapidly evolving digital workplace.

For more information on these topics, visit diplomacy.edu.

Apple accused of misleading AI advertising

Apple is facing a class-action lawsuit in the United States over delays in delivering its much-promoted Apple Intelligence features.

The legal action, filed in federal court in San Jose, California, claims the company misled customers by advertising advanced AI tools that have yet to materialise on supported devices.

The complaint argues that buyers of new iPhones and other Apple products were promised ‘transformative’ AI capabilities at launch, only to find these features were either severely limited or completely absent.

According to the plaintiffs, Apple’s marketing created a ‘reasonable consumer expectation’ that was ultimately not met.

This legal challenge adds to mounting pressure on the company, which has struggled to roll out its next-generation AI tools.

A recent Bloomberg report suggested internal tensions, revealing that CEO Tim Cook has reportedly lost confidence in AI chief John Giannandrea’s ability to deliver on the company’s ambitions.

The case reflects growing scrutiny of tech firms’ promises around AI, especially as consumer trust becomes more closely tied to the reality behind flashy announcements.

For more information on these topics, visit diplomacy.edu.

US judge says Social Security unlawfully shared data with Musk’s aides

A federal judge has ruled that the Social Security Administration (SSA) likely violated privacy laws by granting Elon Musk’s Department of Government Efficiency (DOGE) unrestricted access to millions of Americans’ personal data.

The ruling halts further data sharing and requires DOGE to delete unlawfully accessed records. United States District Judge Ellen Lipton Hollander stated that while tackling fraud is important, government agencies must not ignore privacy laws to achieve their goals.

The case has drawn attention to the extent of DOGE’s access to sensitive government databases, including Numident, which contains detailed personal information on Social Security applicants.

The SSA’s leadership allowed DOGE staffers to review vast amounts of data in an effort to identify fraudulent payments. Critics, including advocacy groups and labour unions, argue that the process lacked proper oversight and risked compromising individuals’ privacy.

The ruling marks a major legal setback for DOGE, which has been expanding its influence across multiple federal agencies. The White House condemned the decision, calling it judicial overreach, while SSA officials indicated they would comply with the order.

The controversy highlights growing concerns over government data security and the limits of executive power in managing public records.

For more information on these topics, visit diplomacy.edu.

ChatGPT wrongly accuses man of murder

A Norwegian man has lodged a complaint against OpenAI after ChatGPT falsely claimed he had murdered his two sons and was serving a 21-year prison sentence.

Arve Hjalmar Holmen, who has never been accused of any crime, says the chatbot’s response was deeply damaging, leading him to seek action from the Norwegian Data Protection Authority.

Digital rights group Noyb, representing Holmen, argues the incident violates European data protection laws regarding the accuracy of personal data.

The error highlights a growing concern over AI ‘hallucinations,’ where chatbots generate false information and present it as fact.

Holmen received the incorrect response when searching for his own name, with ChatGPT fabricating a detailed and defamatory account of a crime that never occurred.

Although the chatbot carries a disclaimer about potential inaccuracies, Noyb insists this is not enough, arguing that spreading false information cannot be justified by a simple warning label.

AI-generated hallucinations have plagued multiple platforms, including those from Apple and Google, with some errors being bizarre but others causing real harm.

Experts remain uncertain about the underlying causes of these inaccuracies in large language models, making them a key focus of ongoing research.

While OpenAI has since updated ChatGPT’s model to incorporate current news sources, the case raises questions about accountability and the transparency of AI-generated content.

For more information on these topics, visit diplomacy.edu.

OpenAI and Google face lawsuits while advocating for AI copyright exceptions

OpenAI and Google have urged the US government to allow AI models to be trained on copyrighted material under fair use.

The companies submitted feedback to the White House’s ‘AI Action Plan,’ arguing that restrictions could slow AI progress and give countries like China a competitive edge. Google stressed the importance of copyright and privacy exceptions, stating that text and data mining provisions are critical for innovation.

Anthropic also responded to the White House’s request but focused more on AI risks to national security and infrastructure rather than copyright concerns.

Meanwhile, OpenAI and Google are facing multiple lawsuits from news organisations and content creators, including Sarah Silverman and George R.R. Martin, who allege their works were used without permission for AI training.

Other companies, including Apple and Nvidia, have also been accused of improperly using copyrighted material, such as YouTube subtitles, to train AI models.

As legal challenges continue, major tech firms remain committed to pushing for regulations that support AI development while navigating the complexities of intellectual property rights.

For more information on these topics, visit diplomacy.edu.

New AI model by Stability AI creates 3D videos from images

Stability AI has unveiled its latest AI model, Stable Virtual Camera, designed to convert 2D images into dynamic 3D video scenes. Announced in a company blog post, the model enables users to create immersive videos with realistic depth and perspective using up to 32 input images. It generates ‘novel views’ of a scene, offering various preset camera movements, including Spiral, Dolly Zoom, Move, and Pan.

The tool is currently available as a research preview and allows users to generate videos in square (1:1), portrait (9:16), and landscape (16:9) formats, with a maximum length of 1,000 frames. However, Stability AI warns that certain images, such as those featuring people, animals, or complex textures like water, may produce lower-quality results. Highly ambiguous or irregularly shaped objects may also lead to visual artefacts.

Stable Virtual Camera is available for research use under a non-commercial licence and can be downloaded from the AI development platform Hugging Face. The launch follows a turbulent period for Stability AI, which has recently undergone leadership changes, secured new investments, and expanded into new AI applications, including generative audio. With this latest innovation, the company aims to solidify its position in the competitive AI market.
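For readers who want to try the research preview, the sketch below shows how model weights are typically fetched from Hugging Face using the huggingface_hub library. The repository id is an assumption based on the product name; check Stability AI’s announcement for the exact id, and note that access may require accepting the non-commercial licence first.

```python
# Minimal sketch of fetching a research-preview model from Hugging Face.
# The repo id below is an assumption; confirm it against Stability AI's
# announcement, and accept the non-commercial licence if the repo is gated.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="stabilityai/stable-virtual-camera")
print(f"Model files downloaded to: {local_dir}")
```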

For more information on these topics, visit diplomacy.edu.

California’s attempt to regulate online platforms faces legal setback

A federal judge in California has blocked a state law requiring online platforms to take extra measures to protect children, ruling it imposes unconstitutional burdens on tech companies.

The law, signed by Governor Gavin Newsom in 2022, aimed to prevent harm to young users by requiring businesses to assess risks, adjust privacy settings, and estimate users’ ages. Companies faced fines of up to $7,500 per child for intentional violations.

Judge Beth Freeman ruled that the law was too broad and infringed on free speech, siding with NetChoice, a group representing major tech firms, including Amazon, Google, Meta, and Netflix.

NetChoice argued the legislation effectively forced companies to act as government censors under the pretext of protecting privacy.

The ruling marks a victory for the tech industry, which has repeatedly challenged state-level regulations on content moderation and user protections.

California Attorney General Rob Bonta expressed disappointment in the decision and pledged to continue defending the law. The legal battle is expected to continue, as a federal appeals court had previously ordered a reassessment of the injunction.

The case highlights the ongoing conflict between government efforts to regulate online spaces and tech companies’ claims of constitutional overreach.

For more information on these topics, visit diplomacy.edu.