AI Mode in Google Search adds support for Hindi and four more languages

Google has announced an expansion of AI Mode in Search to five new languages: Hindi, Indonesian, Japanese, Korean and Brazilian Portuguese. The feature was first introduced in English in March and aims to compete with AI-powered search platforms such as ChatGPT Search and Perplexity AI.

The company highlighted that building a global search experience requires more than translation. Google’s custom version of Gemini 2.5 uses advanced reasoning and multimodal capabilities to provide locally relevant and useful search results instead of offering generic answers.

AI Mode now also supports agentic tasks such as booking restaurant reservations, with plans to include local service appointments and event ticketing.

Currently, these advanced functions are available to Google AI Ultra subscribers in the US, while the rollout reached India in July.

These developments reinforce Google’s strategy to integrate AI deeply into its search ecosystem, enhancing user experience across diverse regions instead of limiting sophisticated AI tools to English-language users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Social media authenticity questioned as Altman points to bot-like behaviour

Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.

Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.

The comments follow the backlash OpenAI faced over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws in a Reddit AMA, but the fallout left lasting scepticism and lower enthusiasm among AI users.

Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot admitted to hundreds of millions of bots on the platform.

Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.


Anthropic AI faces legal setback in authors’ piracy lawsuit

A federal judge has rejected the $1.5 billion settlement Anthropic agreed to in a piracy lawsuit filed by authors.

Judge William Alsup expressed concerns that the deal was ‘nowhere close to complete’ and could be forced on writers without proper input.

The lawsuit involves around 500,000 authors whose works were allegedly used without permission to train Anthropic’s large language models. The proposed settlement would have granted $3,000 per work, a sum far exceeding previous copyright recoveries.

However, the judge criticised the lack of clarity regarding the list of works, authors, notification process, and claim forms.

Alsup instructed the lawyers to provide clear notice to class members and allow them to opt in or out. He also emphasised that Anthropic must be shielded from future claims on the same issue. The court set deadlines for a final list of works by September 15 and approval of all related documents by October 10.

The ruling highlights ongoing legal challenges for AI companies using copyrighted material for training large language models instead of relying solely on licensed or public-domain data.


Orson Welles lost film reconstructed with AI

More than 80 years after Orson Welles’ The Magnificent Ambersons was cut and lost, AI is being used to restore 43 missing minutes of the film.

Amazon-backed Showrunner, led by Edward Saatchi, is experimenting with AI technology to rebuild the destroyed sequences as part of a broader push to reimagine how Hollywood might use AI in storytelling.

The project is not intended for commercial release, since Showrunner has not secured rights from Warner Bros. or Concord, but instead aims to explore what could have been the director’s original vision.

The initiative marks a shift in the role of AI in filmmaking. Rather than serving only as a tool for effects, dubbing or storyboarding, it is being positioned as a foundation for long-form narrative creation.

Showrunner is developing AI models capable of sustaining complex plots, with the goal of eventually generating entire films. Saatchi envisions the platform as a type of ‘Netflix of AI,’ where audiences might one day interact with intellectual property and generate their own stories.

To reconstruct The Magnificent Ambersons, the company is combining traditional techniques with AI tools. New sequences will be shot with actors, while AI will be used for face and pose transfer to replicate the original cast.

Thousands of archival set photographs are being used to digitally recreate the film’s environments.

Filmmaker Brian Rose, who has rebuilt 30,000 missing frames over five years, has reconstructed set movements and timing to match the lost scenes, while VFX expert Tom Clive will assist in refining the likenesses of the original actors.

The project underlines both the creative possibilities and the ethical tensions surrounding AI in cinema. While the reconstructed footage will not be commercially exploited, it raises questions about the use of copyrighted material in training AI and the risk of replacing human creators.

For many, however, the experiment offers a glimpse of what Welles’ ambitious work might have looked like had it survived intact.


OpenAI study links AI hallucinations to flawed testing incentives

OpenAI researchers say large language models continue to hallucinate because current evaluation methods encourage them to guess rather than admit uncertainty.

Hallucinations, defined as confident but false statements, persist despite advances in models such as GPT-5. Low-frequency facts, like specific dates or names, are particularly vulnerable.

The study argues that although pretraining teaches models to predict the next word without true or false labels, the real problem lies in accuracy-based testing: evaluations that reward lucky guesses discourage models from saying ‘I don’t know’.

Researchers suggest penalising confident errors more heavily than uncertainty, and awarding partial credit when AI models acknowledge limits in knowledge. They argue that only by reforming evaluation methods can hallucinations be meaningfully reduced.
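The incentive problem the researchers describe can be illustrated with a toy scoring rule. The function below is a hypothetical sketch, not the study’s actual benchmark: the penalty and credit values are illustrative.

```python
def score(answer: str, correct: str,
          penalty: float = 2.0, abstain_credit: float = 0.3) -> float:
    """Toy evaluation rule: full credit for a correct answer, partial
    credit for admitting uncertainty, a heavier cost for confident errors."""
    if answer == "I don't know":
        return abstain_credit  # partial credit for acknowledging limits
    return 1.0 if answer == correct else -penalty  # confident errors cost more

# Under plain accuracy scoring, guessing is always rational; under this
# rule, a model unsure of a low-frequency fact does better by abstaining.
print(score("1912", "1912"))          # correct answer
print(score("I don't know", "1912"))  # abstention
print(score("1913", "1912"))          # confident error
```

With plain accuracy, a wrong guess and an abstention both score zero, so guessing dominates; once confident errors cost more than abstaining, the incentive flips.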


Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that recent cases foreshadow the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.


Google hit with $3.5 billion EU fine

The European Commission fined Google nearly $3.5 billion after ruling that the company had abused its dominance in digital advertising. Regulators found that Google unfairly preferred its ad exchange, AdX, in its publisher ad server and ad-buying tools, which violated EU antitrust rules.

Officials ordered Google to end these practices within 60 days and to address what they described as ‘inherent conflicts of interest’ across the adtech supply chain. Teresa Ribera, the Commission’s executive vice president, said the case showed the need to ensure that digital markets serve the public fairly, warning that more potent remedies would follow if Google failed to comply.

Google announced it would appeal, arguing that its advertising services remain competitive and that businesses have more alternatives than ever. The fine marks the EU’s second-largest competition penalty, following a record $5 billion action against Google in 2018.

The ruling drew criticism from US President Donald Trump, who accused Europe of unfairly targeting American tech firms and threatened retaliatory measures.

Trump hosted a dinner with industry executives, including Google CEO Sundar Pichai and co-founder Sergey Brin, where he won praise for his policies on AI.

Meanwhile, Google secured partial relief in a separate antitrust case in the United States when a judge declined to impose sweeping remedies such as forcing the sale of Chrome or Android.


New ChatGPT feature enables multi-threaded chats

The US AI firm OpenAI has introduced a new ChatGPT feature that allows users to branch conversations into separate threads and explore different tones, styles, or directions without altering the original chat.

The update, rolled out on 5 September, is available to anyone logged into ChatGPT through the web version.

The branching tool lets users copy a conversation from a chosen point and continue in a new thread while preserving the earlier exchange.

Marketing teams, for example, could test formal, informal, or humorous versions of advertising content within parallel chats, avoiding the need to overwrite or restart a conversation.
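Conceptually, branching behaves like copying a message list up to a chosen point and continuing independently. The sketch below is an illustrative model of that behaviour, not OpenAI’s implementation:

```python
from copy import deepcopy

def branch(thread: list[dict], at: int) -> list[dict]:
    """Start a new thread by copying messages up to and including
    index `at`; the original thread is left untouched."""
    return deepcopy(thread[:at + 1])

original = [
    {"role": "user", "content": "Draft an ad for our new app."},
    {"role": "assistant", "content": "Here is a formal draft..."},
]

# Branch from the assistant's reply and steer the copy in a new direction.
humorous = branch(original, 1)
humorous.append({"role": "user", "content": "Now make it humorous."})

print(len(original))   # the original thread keeps its two messages
print(len(humorous))   # the branch continues with a third
```

Because the branch is a deep copy, edits in one thread never leak into the other, which is what preserves the earlier exchange.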

OpenAI described the update as a response to user requests for greater flexibility. Many users had previously noted that a linear dialogue structure limited efficiency by forcing them to compare and copy content repeatedly.

Early reactions online have compared the new tool to Git, which enables software developers to branch and merge code.

The feature has been welcomed by ChatGPT users who are experimenting with brainstorming, project analysis, or layered problem-solving. Analysts suggest it also reduces cognitive load by allowing users to test multiple scenarios more naturally.

Alongside the update, OpenAI is working on other projects, including a new AI-powered jobs platform to connect workers and companies more effectively.


Anthropic settles $1.5 billion copyright case with authors

AI startup Anthropic has agreed to pay $1.5 billion to settle a copyright lawsuit accusing the company of using pirated books to train its Claude AI chatbot.

The proposed deal, one of the largest of its kind, comes after a group of authors claimed the startup deliberately downloaded unlicensed copies of around 500,000 works.

According to reports, Anthropic will pay about $3,000 per book and add interest while agreeing to destroy datasets containing the material. A California judge will review the settlement terms on 8 September before finalising them.

Lawyers for the plaintiffs described the outcome as a landmark, warning that sourcing AI training data from pirate websites is unlawful.

The case reflects mounting legal pressure on the AI industry, with companies such as OpenAI and Microsoft also facing copyright disputes. The settlement followed a June ruling in which a judge said using the books to train Claude was ‘transformative’ and qualified as fair use, while allowing claims over the pirated copies themselves to proceed.

Anthropic said the deal resolves legacy claims while affirming its commitment to safe AI development.

Despite the legal challenges, Anthropic continues to grow rapidly. Earlier in August, the company secured $13 billion in funding at a valuation of $183 billion, underlining its rise as one of the fastest-growing players in the global technology sector.


Google avoids breakup as court ruling fuels AI Mode expansion

A US district judge has declined to order a breakup of Google, softening the blow of a 2024 ruling that found the company had illegally monopolised online search.

The decision means Google can press ahead with its shift from a search engine into an answer engine, powered by generative AI.

Google’s AI Mode replaces traditional blue links with direct responses to queries, echoing the style of ChatGPT. While the feature is optional for now, it could become the default.

That alarms publishers, who depend on search traffic for advertising revenue. Studies suggest chatbots reduce referral clicks by more than 90 percent, leaving many sites at risk of collapse.

Google is also experimenting with inserting ads into AI Mode, though it remains unclear how much revenue will flow to content creators. Websites can block their data from being scraped, but doing so would also remove them from Google search entirely.

Despite these concerns, Google argues that competition from ChatGPT, Perplexity, and other AI tools shows that new rivals are reshaping the search landscape.

The judge even cited the emergence of generative AI as a factor that altered the case against Google, underlining how the rise of AI has become central to the future of the internet.
