AI Mode in Google Search adds support for Hindi and four more languages

Google has announced an expansion of AI Mode in Search to five new languages: Hindi, Indonesian, Japanese, Korean and Brazilian Portuguese. The feature was first introduced in English in March and aims to compete with AI-powered search platforms such as ChatGPT Search and Perplexity AI.

The company highlighted that building a global search experience requires more than translation. Google’s custom version of Gemini 2.5 uses advanced reasoning and multimodal capabilities to provide locally relevant and useful search results instead of offering generic answers.

AI Mode now also supports agentic tasks such as booking restaurant reservations, with plans to include local service appointments and event ticketing.

Currently, these advanced functions are available to Google AI Ultra subscribers in the US, while the rollout reached India in July.

These developments reinforce Google’s strategy to integrate AI deeply into its search ecosystem, enhancing the user experience across diverse regions rather than limiting sophisticated AI tools to English-language users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI adoption drops at large US companies for the first time since 2023

Despite the hype surrounding AI, new data suggests corporate adoption of AI is slowing.

A biweekly survey by the US Census Bureau found AI use among firms with over 250 employees dropped from nearly 14 percent in mid-June to under 12 percent in August, marking the largest decline since the survey began in November 2023.

Smaller companies with fewer than four workers saw a slight increase, but mid-sized businesses largely reported flat or falling AI adoption. The findings are worrying for tech investors and CEOs, who have invested heavily in enterprise AI in the hope of boosting productivity and revenue across industries.

So far, up to 95 percent of companies using AI have not generated new income from the technology.

The decline comes amid underwhelming performance from high-profile AI releases. OpenAI’s GPT-5, expected to revolutionise enterprise AI, underperformed in benchmark tests, while some companies are rehiring human staff after previously reducing headcount based on AI promises.

Analysts warn that AI innovations may have plateaued, leaving enterprise adoption struggling to justify prior investments.

Unless enterprise AI starts delivering measurable results, corporate usage could continue to decline, signalling a potential slowdown in the broader AI-driven growth many had anticipated.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic AI faces legal setback in authors’ piracy lawsuit

A federal judge has rejected the $1.5 billion settlement Anthropic agreed to in a piracy lawsuit filed by authors.

Judge William Alsup expressed concerns that the deal was ‘nowhere close to complete’ and could be forced on writers without proper input.

The lawsuit covers around 500,000 works that were allegedly used without permission to train Anthropic’s large language models. The proposed settlement would have granted $3,000 per work, a sum far exceeding previous copyright recoveries.

However, the judge criticised the lack of clarity regarding the list of works, authors, notification process, and claim forms.

Alsup instructed the lawyers to provide clear notice to class members and allow them to opt in or out. He also emphasised that Anthropic must be shielded from future claims on the same issue. The court set deadlines for a final list of works by September 15 and approval of all related documents by October 10.

The ruling highlights ongoing legal challenges for AI companies that train large language models on copyrighted material rather than relying solely on licensed or public-domain data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI study links AI hallucinations to flawed testing incentives

OpenAI researchers say large language models continue to hallucinate because current evaluation methods encourage them to guess rather than admit uncertainty.

Hallucinations, defined as confident but false statements, persist despite advances in models such as GPT-5. Low-frequency facts, like specific dates or names, are particularly vulnerable.

The study argues that while pretraining predicts the next word without true or false labels, the real problem lies in accuracy-based testing. Evaluations that reward lucky guesses discourage models from saying ‘I don’t know’.

Researchers suggest penalising confident errors more heavily than uncertainty, and awarding partial credit when AI models acknowledge limits in knowledge. They argue that only by reforming evaluation methods can hallucinations be meaningfully reduced.
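
As a rough illustration of the incentive the researchers describe, the short Python sketch below (a hypothetical example with made-up numbers, not code or figures from the study) compares a model’s expected score when it guesses versus when it abstains, first under plain accuracy scoring and then under a scheme that penalises confident errors and gives partial credit for admitting uncertainty.

```python
# Hypothetical sketch of the evaluation-incentive argument.
# Numbers and scoring weights are illustrative, not from the OpenAI study.

def expected_score(p_correct: float, guess: bool,
                   wrong_penalty: float, abstain_credit: float) -> float:
    """Expected score on one question.

    p_correct      -- model's chance of answering correctly if it guesses
    guess          -- True if the model answers, False if it says "I don't know"
    wrong_penalty  -- points deducted for a confident wrong answer
    abstain_credit -- partial credit for admitting uncertainty
    """
    if not guess:
        return abstain_credit
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.3  # a low-frequency fact the model is unsure about

# Plain accuracy scoring: wrong answers cost nothing and abstaining earns
# nothing, so guessing always has the higher expected score.
print(expected_score(p, guess=True,  wrong_penalty=0.0, abstain_credit=0.0))   # 0.30
print(expected_score(p, guess=False, wrong_penalty=0.0, abstain_credit=0.0))   # 0.00

# Reformed scoring: confident errors are penalised and abstention earns
# partial credit, so saying "I don't know" becomes the better strategy.
print(expected_score(p, guess=True,  wrong_penalty=1.0, abstain_credit=0.25))  # -0.40
print(expected_score(p, guess=False, wrong_penalty=1.0, abstain_credit=0.25))  # 0.25
```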

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI threatens the future of entry-level jobs

The rise of AI puts traditional entry-level roles under pressure, raising concerns that career ladders may no longer function as they once did. Industry leaders, including Anthropic CEO Dario Amodei, warn that AI could replace half of all entry-level jobs as machines operate nonstop.

Venture capital firm SignalFire found that hiring of graduates with under one year of experience at major tech firms fell by 50 percent between 2019 and 2024. The decline has been consistent across business functions, from sales and marketing to engineering and operations.

Analysts argue that career pathways are being reshaped rather than eliminated, but the ladder’s bottom rung is disappearing, forcing graduates to acquire skills independently before entering the workforce.

Experts stress that the shift does not mean careers are over for new graduates, but it does signal a more challenging transition. Universities are already adapting by striking partnerships with AI companies, while some economists point out that past technological revolutions took decades to reshape employment.

Yet others warn that unchecked AI could eventually threaten not just entry-level roles but all levels of work, raising questions about the future stability of corporate structures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mistral AI pushes growth with new funding and global deals

Founded in 2023 by ex-Google DeepMind and Meta researchers, Mistral has quickly gained global attention with its open-source models and consumer app, which hit one million downloads within two weeks of launch.

Mistral AI is now seeking fresh funding at a reported $14 billion valuation, more than double its worth just a year ago. Its investors include Microsoft, Nvidia, Cisco, and Bpifrance, and it has signed partnerships with AFP, Stellantis, Orange, and France’s army.

Its growing suite of models spans large language, audio, coding, and reasoning systems, while its enterprise tools integrate with services such as Asana and Google Drive. French President Emmanuel Macron has openly endorsed the firm, framing it as a strategic alternative to US dominance in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI backs AI-generated film Critterz for 2026 release

OpenAI is supporting the production of Critterz, an AI-assisted animated film set for a global theatrical release in 2026. The project aims to show that AI can streamline filmmaking, cutting costs and production time.

Produced in partnership with Vertigo Films and Native Foreign, the film is being made in nine months, far faster than the usual three years for animated features.

The film, budgeted under $30 million, combines OpenAI’s GPT-5 and DALL·E with traditional voice acting and hand-drawn elements. Building on the acclaimed 2023 short, Critterz will debut at the Cannes Film Festival and expand on a story where humans and AI creatures share the same world.

Writers James Lamont and Jon Foster, known for Paddington in Peru, have been brought in to shape the screenplay.

While producers highlight AI’s creative potential, concerns remain about authenticity and job security in the industry. Some fear AI films could feel impersonal, while major studios continue to defend intellectual property.

Warner Bros., Disney, and Universal are in court with Midjourney over alleged copyright violations.

Despite the debate, OpenAI remains committed to its role in pushing generative storytelling. The company is also expanding its infrastructure, forecasting spending of $115 billion by 2029, with $8 billion planned for this year alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack forces Jaguar Land Rover to halt production

Production at Jaguar Land Rover (JLR) is to remain halted until at least next week after a cyberattack crippled the carmaker’s operations. Disruption is expected to last through September and possibly into October.

The UK’s largest car manufacturer, owned by Tata, has suspended activity at its plants in Halewood, Solihull, and Wolverhampton. Thousands of staff have been told to stay at home on full pay, ‘banking’ hours that are to be recovered later.

Suppliers, including Evtec, WHS Plastics, SurTec, and OPmobility, which employ more than 6,000 people in the UK, have also paused their operations. The Sunday Times reported speculation that the outage could drag on for most of September.

While there is no evidence of a data breach, JLR has notified the Information Commissioner’s Office about potential risks. Dozens of internal systems, including spare parts databases, remain offline, forcing dealerships to revert to manual processes.

Hackers linked to the groups Scattered Spider, Lapsus$, and ShinyHunters have claimed responsibility for the incident. JLR stated that it was collaborating with cybersecurity experts and law enforcement to restore systems in a controlled and safe manner.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic settles $1.5 billion copyright case with authors

AI startup Anthropic has agreed to pay $1.5 billion to settle a copyright lawsuit accusing the company of using pirated books to train its Claude AI chatbot.

The proposed deal, one of the largest of its kind, comes after a group of authors claimed the startup deliberately downloaded unlicensed copies of around 500,000 works.

According to reports, Anthropic will pay about $3,000 per book and add interest while agreeing to destroy datasets containing the material. A California judge will review the settlement terms on 8 September before finalising them.

Lawyers for the plaintiffs described the outcome as a landmark, warning that using works from pirate websites for AI training is unlawful.

The case reflects mounting legal pressure on the AI industry, with companies such as OpenAI and Microsoft also facing copyright disputes. The settlement followed a June ruling in which a judge said using the books to train Claude was ‘transformative’ and qualified as fair use.

Anthropic said the deal resolves legacy claims while affirming its commitment to safe AI development.

Despite the legal challenges, Anthropic continues to grow rapidly. Earlier in August, the company secured $13 billion in funding at a valuation of $183 billion, underlining its rise as one of the fastest-growing players in the global technology sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google avoids breakup as court ruling fuels AI Mode expansion

A US district judge has declined to order a breakup of Google, softening the blow of a 2024 ruling that found the company had illegally monopolised online search.

The decision means Google can press ahead with its shift from a search engine into an answer engine, powered by generative AI.

Google’s AI Mode replaces traditional blue links with direct responses to queries, echoing the style of ChatGPT. While the feature is optional for now, it could become the default.

That alarms publishers, who depend on search traffic for advertising revenue. Studies suggest chatbots reduce referral clicks by more than 90 percent, leaving many sites at risk of collapse.

Google is also experimenting with inserting ads into AI Mode, though it remains unclear how much revenue will flow to content creators. Websites can block their data from being scraped, but doing so would also remove them from Google search entirely.

Despite these concerns, Google argues that competition from ChatGPT, Perplexity, and other AI tools shows that new rivals are reshaping the search landscape.

The judge even cited the emergence of generative AI as a factor that altered the case against Google, underlining how the rise of AI has become central to the future of the internet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!