Google’s AI Edge Gallery boosts privacy with on-device model use

Google has released an experimental app called AI Edge Gallery, allowing Android users to run AI models directly on their devices without needing an internet connection.

The app supports several publicly available models from Hugging Face, including Google’s own lightweight Gemma 3n, and offers tools for image generation, Q&A, and code assistance.

The key feature of the app is its local processing capability, which means data never leaves the user’s device.

This addresses rising concerns over privacy and data security, particularly when interacting with AI tools. By running models locally, users benefit from faster response times and greater control over their data.

AI Edge Gallery includes features such as ‘AI Chat,’ ‘Ask Image,’ and a ‘Prompt Lab,’ where users can experiment with tasks like text summarisation and single-turn AI interactions.

While the app is optimised for lighter models like Gemma 3—just 529MB in size—Google notes that performance will depend on the hardware of the user’s device, with more powerful phones delivering faster results.
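AI Edge Gallery sits on top of Google’s AI Edge stack, and developers curious about what local generation looks like in practice can try the same route themselves. The sketch below is a minimal, illustrative example using the MediaPipe LLM Inference API on Android; the model file name, path, and token limit are placeholder assumptions rather than values taken from the app.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch of on-device text generation with the MediaPipe LLM Inference API.
// The model path is a placeholder: a Gemma model downloaded separately (e.g. from
// Hugging Face) and copied to the device. No network call is made at inference time.
fun runLocalPrompt(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma3-1b-it-int4.task") // placeholder path
        .setMaxTokens(512) // combined limit on prompt and response tokens
        .build()

    // Loads the model from local storage; larger models need more RAM and a faster chip,
    // which is why results vary with the phone's hardware.
    val llm = LlmInference.createFromOptions(context, options)

    // Generation runs entirely on the device, so the prompt and output never leave it.
    return llm.generateResponse(prompt)
}
```

Because everything happens locally, code like this keeps working with no connection at all, which is the property the app’s privacy claims rest on.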

Currently in Alpha, the app is open-source and available under the Apache 2.0 licence via GitHub, encouraging developers to explore and contribute. Google is also inviting feedback to shape future updates and improvements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic Intelligence set to automate complex tasks with human oversight

Thomson Reuters has unveiled a new AI platform, Agentic Intelligence, designed to automate complex workflows for professionals in tax, legal, and compliance sectors.

The platform integrates directly with existing professional tools, enabling AI to plan, reason, and act on tasks while maintaining audit trails and data control to meet regulatory standards.

A key component of the launch is CoCounsel for Tax, a tool aimed at tax, audit, and accounting professionals. It consolidates firm-specific data, internal knowledge, and regulatory materials into a unified workspace.

Early adopters have reported significant productivity gains, with one accounting firm, BLISS 1041, cutting time spent on residency and filing code reviews from several days to under an hour.

Agentic Intelligence leverages over 20 billion proprietary and public documents and is supported by a network of 4,500 subject matter experts.

Built on partnerships with OpenAI, Anthropic, Google Cloud, and AWS, the platform reflects Thomson Reuters’ strategic shift towards embedding AI across sectors traditionally dependent on manual expertise.

David Wong, chief product officer at Thomson Reuters, said the new platform represents more than a technological upgrade. ‘Agentic AI isn’t a marketing buzzword. It’s a new blueprint for how complex work gets done,’ he said.

‘These systems don’t just assist — they operate within professional workflows, break down tasks, act independently, and escalate where needed, all under human oversight.’

Following CoCounsel for Tax, the next product — Ready to Review — will focus on automating tax return preparation.

The platform is expected to expand into legal, compliance, and risk sectors throughout 2025, building on previous acquisitions such as Materia and Casetext, which have helped lay the foundation for Thomson Reuters’ AI-centric growth strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk’s xAI seeks billions to expand AI data centres

Elon Musk is raising $5 billion in debt for his AI company xAI Corp., in a move that signals a renewed focus on his business ventures after stepping away from a prominent political role.

Investment bank Morgan Stanley is leading the offering, which includes a floating-rate loan, a fixed-rate loan, and senior secured notes — all priced with double-digit interest rates, according to people familiar with the deal.

Proceeds will be used for general corporate purposes, including accelerating development of xAI’s infrastructure, such as its vast Memphis-based data centre, Colossus.

The site currently houses 200,000 GPUs and could soon expand to over one million as Musk ramps up efforts to train advanced AI models. The debt package has already attracted over $3.5 billion in early demand, with commitments due by 17 June.

Musk’s move to raise capital for xAI comes after a string of fundraising rounds across his companies, including $650 million for Neuralink and a $300 million secondary stock sale in xAI.

He has also merged xAI with his social media platform X into a new entity, XAI Holdings, further aligning his ventures in AI, communications, and computing.

Musk’s focus on his business empire follows a controversial period in politics. As a senior adviser and key backer of Donald Trump during the 2024 election, Musk faced scrutiny both personally and in relation to the performance of Tesla, whose stock has dropped 20% since the presidential inauguration.

Morgan Stanley’s continued involvement underscores the bank’s deep ties with Musk, having previously advised on his $44 billion acquisition of Twitter (now X).

While that deal initially left lenders with billions in risky debt, recent improvements in Musk’s business standing helped the bank clear the remaining liabilities earlier this year.

The latest xAI debt sale is another indicator of investor appetite for AI ventures, especially when tied to high-profile figures like Musk. If successful, it will also strengthen the infrastructure needed to support Musk’s vision of AI leadership through xAI and its associated platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI turns ChatGPT into AI gateway

OpenAI plans to reinvent ChatGPT as an all-in-one ‘super assistant’ that knows its users and becomes their primary gateway to the internet.

Details emerged from a partly redacted internal strategy document shared during the US government’s antitrust case against Google.

Rather than limiting ChatGPT to existing apps and websites, OpenAI envisions a future where the assistant supports everyday life—from suggesting recipes at home to taking notes at work or guiding users while travelling.

The company says the AI should evolve into a reliable, emotionally intelligent helper capable of handling a wide range of personal and professional tasks.

OpenAI also believes hardware will be key to this transformation. It recently acquired io, a start-up founded by former Apple designer Jony Ive, for $6.4 billion to develop AI-powered devices.

The company’s strategy outlines how upcoming models like o2 and o3, alongside tools like multimodality and generative user interfaces, could make ChatGPT capable of taking meaningful action instead of simply offering responses.

The document also reveals OpenAI’s intention to back a regulation requiring tech platforms to allow users to set ChatGPT as their default assistant. Confident in its fast growth, research lead, and independence from ads, the company aims to maintain its advantage through bold decisions, speed, and self-disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp fixes deleted message privacy gap

WhatsApp is rolling out a privacy improvement that ensures deleted messages no longer linger in quoted replies, addressing a long-standing issue that exposed partial content users had intended to remove.

The update applies automatically, with no toggle required, and has begun reaching iOS users through version 25.12.73, with wider availability expected soon.

Until now, deleting a message for everyone in a chat has not removed it from quoted replies. That allowed fragments of deleted content to remain visible, undermining the purpose of deletion.

With the update, WhatsApp removes the associated quoted message from conversation threads entirely, rather than keeping it visible, even in group and community chats.

WABetaInfo, which first spotted the update, noted that users delete messages for privacy or personal reasons, and that leaving quoted traces behind conflicted with those intentions.

The change ensures conversations reflect user expectations by entirely erasing deleted content, not only from the original message but also from any references.

Meta continues to develop new features for WhatsApp. Recent additions include voice chat in groups and a native interface for iPad. The company is also testing tools like AI-generated wallpapers, message summaries, and more refined privacy settings to enhance user control and experience further.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek claims R1 model matches OpenAI

Chinese AI start-up DeepSeek has announced a major update to its R1 reasoning model, claiming it now performs on par with leading systems from OpenAI and Google.

The R1-0528 version, released following the model’s initial launch in January, reportedly surpasses Alibaba’s Qwen3, which debuted only weeks earlier in April.

According to DeepSeek, the upgrade significantly enhances reasoning, coding, and creative writing while cutting hallucination rates by half.

These improvements stem largely from greater computational resources applied after the training phase, allowing the model to outperform domestic rivals in benchmark tests involving maths, logic, and programming.

Unlike many Western competitors, DeepSeek takes an open-source approach. The company recently shared eight GitHub projects detailing methods to optimise computing, communication, and storage efficiency during training.

Its transparency and resource-efficient design have attracted attention, especially since its smaller distilled model rivals Alibaba’s Qwen3-235B while being nearly 30 times lighter.

Major Chinese tech firms, including Tencent, Baidu and ByteDance, plan to integrate R1-0528 into their cloud services for enterprise clients. DeepSeek’s progress signals China’s continued push into globally competitive AI, driven by a young team determined to offer high performance with fewer resources.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Courts consider limits on AI evidence

A newly proposed rule by the Federal Judicial Conference could reshape how AI-generated evidence is treated in court. Dubbed Rule 707, it would allow such machine-generated evidence to be admitted only if it meets the same reliability standards required of expert testimony under Rule 702.

However, it would not apply to outputs from simple scientific instruments or widely used commercial software. The rule aims to address concerns about the reliability and transparency of AI-driven analysis, especially when used without a supporting expert witness.

Critics argue that the limitation to non-expert presentation renders the rule overly narrow, as the underlying risks of bias and interpretability persist regardless of whether an expert is involved. They suggest that all machine-generated evidence in US courts should be subject to robust scrutiny.

The Advisory Committee is also considering the scope of terminology such as ‘machine learning’ to prevent Rule 707 from encompassing more than intended. Meanwhile, a separate proposed rule regarding deepfakes has been shelved because courts already have tools to address such forgeries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Shoppers can now let AI find and buy deals

Tech giants are pushing deeper into e-commerce with AI-powered digital aides that can understand shoppers’ tastes, try on clothes virtually, hunt for bargains, and even place orders independently.

These so-called ‘AI agents’ mark a new phase in retail, combining personalisation with automation to reshape how people shop online.

Google recently introduced a suite of tools under its new AI Mode, allowing users to upload a photo and preview how clothing would look on their own body. The AI adjusts sizes and fabric drape, enhancing realism.

Shoppers can also set a target price and let the AI search for the best deal, alerting them when one is found and offering to complete the purchase using Google’s payment platform.

OpenAI, Perplexity AI, and Amazon have also added shopping features to their platforms, while Walmart and other retailers are working to ensure their products remain visible to AI shoppers.

Payment giants Visa and Mastercard have upgraded their systems to allow AI agents to process transactions autonomously, cementing the role of digital agents in the online shopping journey.

Experts say this growing ‘agent economy’ offers powerful convenience but raises questions about consumer privacy, trust, and control.

While AI shoppers are unlikely to disrupt e-commerce overnight, analysts note that companies like Google and Meta are particularly well-positioned due to their vast user data and AI leadership.

The next evolution of shopping may not depend on what consumers choose, but on whether they trust machines to choose for them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IIT Bombay and BharatGen lead AI push with cultural datasets

In a landmark effort to support AI research grounded in Indian knowledge systems, IIT Bombay has digitised 30 ancient textbooks covering topics such as astronomy, medicine and mathematics—some written as long as 18 centuries ago.

The initiative, part of the government-backed AIKosh portal, has produced a dataset comprising approximately 218,000 sentences and 1.5 million words, now available to researchers across the country.

Launched in March, AIKosh serves as a national repository for datasets, models and toolkits to foster home-grown AI innovation.

Alongside BharatGen—a consortium led by IIT Bombay and comprising IIT Kanpur, IIT Madras, IIT Hyderabad, IIT Mandi, IIM Indore and IIIT Hyderabad—the institute has contributed 37 diverse models and datasets to the platform.

These contributions include 16 culturally significant datasets from IIT Bombay alone, as well as 21 AI models from BharatGen, which is supported by the Department of Science and Technology.

Professor Ganesh Ramakrishnan, who leads the initiative, said the team is developing sovereign AI models for India, trained from scratch and not merely fine-tuned versions of existing tools.

These models aim to be data- and compute-efficient while being culturally and linguistically relevant. The collection also includes datasets for audio-visual learning—such as tutorials on organic farming and waste-to-toy creation—mathematical reasoning in Hindi and English, image-based question answering, and video-text recognition.

One dataset even features question-answering derived from the works of historian Dharampal. ‘This is about setting benchmarks for the AI ecosystem in India,’ said Ramakrishnan, noting that the resources are openly available to researchers, enterprises and academic institutions alike.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China creates AI to detect real nuclear warheads

Chinese scientists have created the world’s first AI-based system capable of identifying real nuclear warheads from decoys, marking a significant step in arms control verification.

The breakthrough, developed by the China Institute of Atomic Energy (CIAE), could strengthen Beijing’s hand in stalled disarmament talks, although it also raises difficult questions about AI’s growing role in managing weapons of mass destruction.

The technology builds on a long-standing US–China proposal but faced key obstacles: how to train AI using sensitive nuclear data, gain military approval without risking secret leaks, and persuade sceptical nations like the US to move past Cold War-era inspection methods.

So far, only the AI training has been completed, with the rest of the process still pending international acceptance.

The AI system uses deep learning and cryptographic protocols to analyse scrambled radiation signals from warheads behind a polythene wall, ensuring the weapons’ internal designs remain hidden.

The machine can verify a warhead’s chain-reaction potential without accessing classified details. According to CIAE, repeated randomised tests reduce the chance of deception to nearly zero.
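CIAE has not published the statistics behind that claim, but the logic of repeated randomised challenges can be illustrated with a simple, assumed model: if a single randomised test exposes a decoy with probability p, and the tests are independent, the odds of a decoy slipping through fall geometrically with the number of rounds n.

```latex
% Illustrative only: p and n below are assumed values, not figures from CIAE.
P(\text{decoy survives } n \text{ independent tests}) = (1 - p)^{n}
% For example, with p = 0.5 and n = 20:
(1 - 0.5)^{20} = 2^{-20} \approx 9.5 \times 10^{-7}
```

On such assumptions, even a modest per-test detection rate pushes the chance of successful deception towards zero after a few dozen rounds, which is the effect the institute describes.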

While both China and the US have pledged not to let AI control nuclear launch decisions, the new system underlines AI’s expanding role in national defence.

Beijing insists the AI can be jointly trained and sealed before use to ensure transparency, but sceptics remain wary of trust, backdoor access and growing militarisation of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!