Claude AI gains powerful file editing tools for documents and spreadsheets

Anthropic’s Claude has expanded its role as a leading AI assistant by adding advanced tools for creating and editing files. Instead of manually working with different programs, users can now describe their needs in plain language and let the AI produce or update Word, Excel, PowerPoint, and PDF files.

The feature supports uploads of CSV and TSV data and can generate charts, graphs, or images where needed, with a 30MB size limit applying to both uploads and downloads.

The real breakthrough lies in editing. Instead of opening a document or spreadsheet, users can simply type instructions such as replacing text, changing currencies, or updating job titles. Claude processes the prompt and makes all the changes in one pass, preserving the original formatting.
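
For a sense of what is being automated, a bulk edit like a currency change would otherwise require a script along the following lines. This is a minimal sketch using the python-docx library, not Anthropic's implementation; the filename and the GBP-to-EUR replacement are hypothetical examples.

```python
# Manual alternative to a conversational edit: scripted find-and-replace
# in a Word document. "report.docx" and the GBP -> EUR swap are
# hypothetical examples, not taken from the article.
from docx import Document  # pip install python-docx

doc = Document("report.docx")
for paragraph in doc.paragraphs:
    for run in paragraph.runs:
        # Editing run text in place preserves each run's character formatting.
        run.text = run.text.replace("GBP", "EUR")
doc.save("report_updated.docx")
```

Tables, headers, and footers would each need the same loop; Claude's feature collapses all of that into a single prompt.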

The capability positions Claude as more efficient than rivals such as Gemini, which can export reports but cannot directly modify existing files.

The feature preview is available on web and desktop for subscribers on Max, Team, or Enterprise plans. Analysts suggest the update could reshape productivity tools, especially after reports that Microsoft has partnered with Anthropic to explore using Claude for Office 365 functions.

By removing repetitive tasks and making file handling conversational, Claude is pushing productivity software into a new phase of automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NotebookLM turns notes into flashcards, podcasts and quizzes

Google’s learning-focused AI tool NotebookLM has gained a major update, making studying and teaching more interactive.

Instead of offering only static summaries, it now generates flashcards that condense key information into easy-to-remember notes, helping users recall knowledge more effectively.

Reports can also be transformed into quizzes with customisable topics and difficulty, which can then be shared with friends or colleagues through a simple link.

The update extends to audio learning, where NotebookLM’s podcast-style Audio Overviews are evolving with new formats. Instead of a single style, users can now create Brief, Debate, or Critique episodes, giving greater flexibility in how material is explained or discussed.

Google is also strengthening its teaching tools. A new Blog Post format joins contextually suggested report types, such as strategy papers or explainers, while the ability to create custom report formats lets users design study resources tailored to their needs.

The most significant addition, however, is the Learning Guide. Acting like a personal tutor, it promotes deeper understanding by asking open-ended questions, breaking problems into smaller steps, and adapting explanations to suit each learner.

With these features, NotebookLM is moving closer to becoming a comprehensive learning assistant, offering a mix of interactive study aids and adaptable teaching methods that go beyond simple note-taking.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing concern over AI fatigue among students and teachers

Experts say growing exposure to AI is leaving many people exhausted, a phenomenon increasingly described as ‘AI fatigue’.

Educators and policymakers note that AI adoption surged before society had time to thoroughly weigh its ethical or social effects. The technology now underpins tasks from homework writing to digital art, leaving some feeling overwhelmed or displaced.

University students are among those most affected, with many relying heavily on AI for assignments. Teachers say it has become challenging to identify AI-generated work, as detection tools often produce inconsistent results.

Some educators are experimenting with low-tech classrooms, banning phones and requiring handwritten work. They report deeper conversations and stronger engagement when distractions are removed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teens turn to AI chatbots for support, raising mental health concerns

Mental health experts in Iowa have warned that teenagers are increasingly turning to AI chatbots instead of seeking human connection, raising concerns about misinformation and harmful advice.

The issue comes into focus on National Suicide Prevention Day, shortly after a lawsuit was filed against OpenAI, the maker of ChatGPT, over a teenager’s suicide.

Jessica Bartz, a therapy supervisor at Vera French Duck Creek, said young people are at a vulnerable stage of identity formation, often just as family communication breaks down.

She noted that some teens use chatbot tools like ChatGPT, Genius and Copilot to self-diagnose, which can reinforce inaccurate or damaging ideas.

‘Sometimes AI can validate the wrong things,’ Bartz said, stressing that algorithms only reflect the limited information users provide.

Without human guidance, young people risk misinterpreting results and worsening their struggles.

Experts recommend that parents and trusted adults engage directly with teenagers, offering empathy and open communication instead of leaving them dependent on technology.

Bartz emphasised that nothing can replace a caring person noticing warning signs and intervening to protect a child’s well-being.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI export rules tighten as the US opens global opportunities

President Trump has signed an Executive Order to promote American leadership in AI exports, marking a significant policy shift. The move creates new global opportunities for US businesses but also introduces stricter compliance responsibilities.

The order establishes the American AI Exports Program, overseen by the Department of Commerce, to develop and deploy ‘full-stack’ AI export packages.

These packages cover everything from chips and cloud infrastructure to AI models and cybersecurity safeguards. Industry consortia will be invited to submit proposals, outlining hardware origins, export targets, business models, and federal support requests.

A central element of the initiative is ensuring compliance with US export control regimes. Companies must align with the Export Control Reform Act and the Export Administration Regulations, with special attention to restrictions on advanced computing chips.

New guidance warns against potential violations linked to hardware and highlights red flags for illegal diversion of sensitive technology.

Commerce stresses that participation requires robust export compliance plans and rigorous end user screening.

Legal teams are urged to review policies on AI exports, as regulators focus on preventing misuse of advanced computing systems in military or weapons programmes abroad.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Mode in Google Search adds support for Hindi and four more languages

Google has announced an expansion of AI Mode in Search to five new languages: Hindi, Indonesian, Japanese, Korean and Brazilian Portuguese. The feature was first introduced in English in March and aims to compete with AI-powered search platforms such as ChatGPT Search and Perplexity AI.

The company highlighted that building a global search experience requires more than translation. Google’s custom version of Gemini 2.5 uses advanced reasoning and multimodal capabilities to provide locally relevant and useful search results instead of offering generic answers.

AI Mode now also supports agentic tasks such as booking restaurant reservations, with plans to include local service appointments and event ticketing.

Currently, these advanced functions are available to Google AI Ultra subscribers in the US, while AI Mode itself began rolling out in India in July.

These developments reinforce Google’s strategy to integrate AI deeply into its search ecosystem, enhancing user experience across diverse regions instead of limiting sophisticated AI tools to English-language users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic AI faces legal setback in authors’ piracy lawsuit

A federal judge has rejected the $1.5 billion settlement Anthropic agreed to in a piracy lawsuit filed by authors.

Judge William Alsup expressed concerns that the deal was ‘nowhere close to complete’ and could be forced on writers without proper input.

The lawsuit covers around 500,000 works allegedly used without permission to train Anthropic’s large language models. The proposed settlement would have paid about $3,000 per work (the $1.5 billion total divided across those 500,000 works), a sum far exceeding previous copyright recoveries.

However, the judge criticised the lack of clarity regarding the list of works, authors, notification process, and claim forms.

Alsup instructed the lawyers to provide clear notice to class members and allow them to opt in or out. He also emphasised that Anthropic must be shielded from future claims on the same issue. The court set deadlines for a final list of works by September 15 and approval of all related documents by October 10.

The ruling highlights ongoing legal challenges for AI companies using copyrighted material for training large language models instead of relying solely on licensed or public-domain data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple sued over use of pirated books in AI training

Apple is facing a new copyright lawsuit after two authors alleged the company used pirated copies of their books to train its OpenELM AI models. Filed in Northern California, the case claims Apple used the authors’ works without permission, payment, or credit.

The lawsuit seeks class-action status, adding Apple to a growing list of technology firms accused of misusing copyrighted works for AI training.

The action comes amid a wider legal storm engulfing AI companies. Anthropic recently agreed to a $1.5 billion settlement with authors who alleged its Claude chatbot was trained on their works without authorisation, in what lawyers called the most significant copyright recovery in history.

Microsoft, Meta, and OpenAI also face similar lawsuits over alleged reliance on unlicensed material in their datasets.

Analysts warn Apple could face heightened scrutiny given its reputation as a privacy-focused company. Any finding that its AI models were trained on pirated material could cause serious reputational harm alongside potential financial penalties.

The case also underscores the broader unresolved debate over whether AI training constitutes fair use or unlawful exploitation of creative works.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that recent cases highlight the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, the systems can still produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ChatGPT feature enables multi-threaded chats

The US AI firm OpenAI has introduced a new ChatGPT feature that allows users to branch conversations into separate threads and explore different tones, styles, or directions without altering the original chat.

The update, rolled out on 5 September, is available to anyone logged into ChatGPT through the web version.

The branching tool lets users copy a conversation from a chosen point and continue in a new thread while preserving the earlier exchange.

Marketing teams, for example, could test formal, informal, or humorous versions of advertising content within parallel chats, avoiding the need to overwrite or restart a conversation.

OpenAI described the update as a response to user requests for greater flexibility. Many users had previously noted that a linear dialogue structure limited efficiency by forcing them to compare and copy content repeatedly.

Early reactions online have compared the new tool to Git, which enables software developers to branch and merge code.
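
The comparison is a useful mental model: a branched conversation is essentially a tree of messages in which any turn can have several alternative continuations, each preserving the shared history above it. Here is a minimal Python sketch of that structure, assumed for illustration rather than drawn from OpenAI’s implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Message:
    """One turn in a chat; children hold alternative continuations."""
    role: str  # "user" or "assistant"
    text: str
    parent: Optional["Message"] = field(default=None, repr=False)
    children: list["Message"] = field(default_factory=list, repr=False)

    def branch(self, role: str, text: str) -> "Message":
        """Continue from this point in a new thread; siblings stay intact."""
        child = Message(role, text, parent=self)
        self.children.append(child)
        return child

    def transcript(self) -> str:
        """Walk back to the root and render this branch as a linear chat."""
        turns, node = [], self
        while node is not None:
            turns.append(f"{node.role}: {node.text}")
            node = node.parent
        return "\n".join(reversed(turns))

# Test two tones in parallel from the same starting point.
root = Message("user", "Draft an ad for our coffee brand.")
draft = root.branch("assistant", "Here is a first draft...")
formal = draft.branch("user", "Rewrite it in a formal tone.")
funny = draft.branch("user", "Make it humorous instead.")
print(formal.transcript())  # shares history with `funny` up to `draft`
```

Like a Git branch, each thread carries the full history up to the fork point without copying or overwriting the other.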

The feature has been welcomed by ChatGPT users who are experimenting with brainstorming, project analysis, or layered problem-solving. Analysts suggest it also reduces cognitive load by allowing users to test multiple scenarios more naturally.

Alongside the update, OpenAI is working on other projects, including a new AI-powered jobs platform to connect workers and companies more effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!