Google’s AI Edge Gallery boosts privacy with on-device model use

Google has released an experimental app called AI Edge Gallery, allowing Android users to run AI models directly on their devices without needing an internet connection.

The app supports several publicly available models from Hugging Face, including Google’s own lightweight Gemma 3n, and offers tools for image generation, Q&A, and code assistance.

The key feature of the app is its local processing capability, which means data never leaves the user’s device.

This addresses rising concerns over privacy and data security, particularly when interacting with AI tools. By running models locally, users benefit from faster response times and greater control over their data.

AI Edge Gallery includes features such as ‘AI Chat,’ ‘Ask Image,’ and a ‘Prompt Lab,’ where users can experiment with tasks like text summarisation and single-turn AI interactions.

While the app is optimised for lighter models like Gemma 3—just 529MB in size—Google notes that performance will depend on the hardware of the user’s device, with more powerful phones delivering faster results.

Currently in Alpha, the app is open-source and available under the Apache 2.0 licence via GitHub, encouraging developers to explore and contribute. Google is also inviting feedback to shape future updates and improvements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic Intelligence set to automate complex tasks with human oversight

Thomson Reuters has unveiled a new AI platform, Agentic Intelligence, designed to automate complex workflows for professionals in tax, legal, and compliance sectors.

The platform integrates directly with existing professional tools, enabling AI to plan, reason, and act on tasks while maintaining audit trails and data control to meet regulatory standards.

A key component of the launch is CoCounsel for Tax, a tool aimed at tax, audit, and accounting professionals. It consolidates firm-specific data, internal knowledge, and regulatory materials into a unified workspace.

Early adopters have reported significant productivity gains, with one accounting firm, BLISS 1041, cutting time spent on residency and filing code reviews from several days to under an hour.

Agentic Intelligence leverages over 20 billion proprietary and public documents and is supported by a network of 4,500 subject matter experts.

Built on partnerships with OpenAI, Anthropic, Google Cloud, and AWS, the platform reflects Thomson Reuters’ strategic shift towards embedding AI across sectors traditionally dependent on manual expertise.

David Wong, chief product officer at Thomson Reuters, said the new platform represents more than a technological upgrade. ‘Agentic AI isn’t a marketing buzzword. It’s a new blueprint for how complex work gets done,’ he said.

‘These systems don’t just assist — they operate within professional workflows, break down tasks, act independently, and escalate where needed, all under human oversight.’

Following CoCounsel for Tax, the next product — Ready to Review — will focus on automating tax return preparation.

The platform is expected to expand into legal, compliance, and risk sectors throughout 2025, building on previous acquisitions such as Materia and Casetext, which have helped lay the foundation for Thomson Reuters’ AI-centric growth strategy.

Australia tightens rules for crypto ATMs

Australia has imposed stricter rules on crypto ATM operators to curb scams and ensure compliance with anti-money laundering laws. A $5,000 AUD limit now applies to cash deposits and withdrawals, with scam warnings required on all machines.

Operators must also step up customer verification and improve transaction monitoring. These measures follow an AUSTRAC-led investigation that revealed older Australians, particularly those aged 60 to 70, account for a large share of crypto ATM activity.

Authorities noted that some victims were tricked into handing over life savings via these machines.

AUSTRAC has already denied registration renewal to one provider, Harro’s Empires, due to ongoing misuse risks.

The agency warned that other non-compliant operators could face similar penalties. It also urged broader adoption of cash limits across exchanges to reduce financial crime exposure.

To strengthen awareness, AUSTRAC and the federal police have released educational materials to be displayed near ATMs. The move comes amid rising scam reports, with 150 confirmed cases and over $3.1 million AUD in losses reported within a year.

TikTok bans ‘SkinnyTok’ hashtag worldwide

TikTok has globally banned the hashtag ‘SkinnyTok’ after pressure from the French government, which accused the platform of promoting harmful eating habits among young users. The decision comes as part of the platform’s broader effort to improve user safety, particularly around content linked to unhealthy weight loss practices.

The move was hailed as a win by France’s Digital Minister, Clara Chappaz, who led the charge and called it a ‘first collective victory.’ She, along with other top French digital and data protection officials, travelled to Dublin to engage directly with TikTok’s Trust and Safety team. Notably, no representatives from the European Commission were present during these discussions, raising questions about the EU’s role and influence in enforcing digital regulations.

While the European Commission had already opened a broader investigation into TikTok over child protection issues in early 2024 under the Digital Services Act (DSA), it has yet to comment on the SkinnyTok case specifically. Despite this, the Commission says it is still coordinating with French authorities on matters related to DSA enforcement.

The episode has spotlighted national governments’ power in pushing for online safety reforms and the uncertain role of the EU institutions in urgent digital policy actions.

WhatsApp fixes deleted message privacy gap

WhatsApp is rolling out a privacy improvement that ensures deleted messages no longer linger in quoted replies, addressing a long-standing issue that exposed partial content users had intended to remove.

The update applies automatically, with no toggle required, and has begun reaching iOS users through version 25.12.73, with wider availability expected soon.

Until now, deleting a message for everyone in a chat has not removed it from quoted replies. That allowed fragments of deleted content to remain visible, undermining the purpose of deletion.

With this change, WhatsApp removes the associated quoted message entirely instead of leaving it in conversation threads, even in group or community chats.

WABetaInfo, which first spotted the update, noted that users delete messages for privacy or personal reasons, and that leaving behind quoted traces conflicted with those intentions.

The change ensures conversations reflect user expectations by entirely erasing deleted content, not only from the original message but also from any references.

Meta continues to develop new features for WhatsApp. Recent additions include voice chat in groups and a native interface for iPad. The company is also testing tools like AI-generated wallpapers, message summaries, and more refined privacy settings to enhance user control and experience further.

NSO asks court to overturn WhatsApp verdict

Israeli spyware company NSO Group has requested a new trial after a US jury ordered it to pay $168 million in damages to WhatsApp.

The company, which has faced mounting legal and financial troubles, filed a motion in a California federal court last week seeking to reduce the verdict or secure a retrial.

The May verdict awarded WhatsApp $444,719 in compensatory damages and $167.25 million in punitive damages. Jurors found that NSO exploited vulnerabilities in the encrypted platform and sold the exploit to clients who allegedly used it to target journalists, activists and political rivals.

WhatsApp, owned by Meta, filed the lawsuit in 2019.

NSO claims the punitive award is unconstitutional, arguing it is roughly 376 times the compensatory damages and far exceeds the US Supreme Court’s general guidance of a 4:1 ratio.
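As a quick sanity check, the ratio NSO cites follows directly from the two damage awards reported above:

```python
# Jury awards reported in the case (USD)
compensatory = 444_719
punitive = 167_250_000

# The ratio NSO argues exceeds the Supreme Court's ~4:1 guidance
ratio = punitive / compensatory
print(round(ratio))  # roughly 376
```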

The firm also said it cannot afford the penalty, citing losses of $9 million in 2023 and $12 million in 2024. Its CEO testified that the company is ‘struggling to keep our heads above water’.

WhatsApp, responding to TechCrunch in a statement, said NSO was once again trying to evade accountability. The company vowed to continue its legal campaign, including efforts to secure a permanent injunction that would prevent NSO from ever targeting WhatsApp or its users again.

184 million passwords exposed in massive data breach

A major data breach has exposed over 184 million user credentials, including emails, passwords, and account details for platforms such as Google, Microsoft and government portals. It is still unclear whether this was due to negligence or deliberate criminal activity.

The unencrypted, unprotected database was discovered online by cybersecurity researcher Jeremiah Fowler, who confirmed many of the credentials were current and accurate. The breach highlights ongoing failures by data handlers to apply even the most basic security measures.

Fowler believes the data was gathered using infostealer malware, which silently extracts login information from compromised devices and sells it on the dark web. After the database was reported, the hosting provider took it offline, but the source remains unknown.

Security experts urge users to update passwords across all platforms, enable two-factor authentication, and use password managers and data removal services. In today’s hyper-connected world, the exposure of such critical information without encryption is seen as both avoidable and unacceptable.
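On the advice to update passwords, a minimal sketch of generating a strong random password with Python’s standard `secrets` module (an illustrative approach, not a tool recommended by the researchers cited here):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # output varies on each run
```

Unlike the `random` module, `secrets` draws from a cryptographically secure source, which is what password generation requires.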

Courts consider limits on AI evidence

A newly proposed rule by the Federal Judicial Conference could reshape how AI-generated evidence is treated in court. Dubbed Rule 707, it would allow such machine-generated evidence to be admitted only if it meets the same reliability standards required of expert testimony under Rule 702.

However, it would not apply to outputs from simple scientific instruments or widely used commercial software. The rule aims to address concerns about the reliability and transparency of AI-driven analysis, especially when used without a supporting expert witness.

Critics argue that the limitation to non-expert presentation renders the rule overly narrow, as the underlying risks of bias and interpretability persist regardless of whether an expert is involved. They suggest that all machine-generated evidence in US courts should be subject to robust scrutiny.

The Advisory Committee is also considering the scope of terminology such as ‘machine learning’ to prevent Rule 707 from encompassing more than intended. Meanwhile, a separate proposed rule on deepfakes has been shelved because courts already have tools to address such forgeries.

Shoppers can now let AI find and buy deals

Tech giants are pushing deeper into e-commerce with AI-powered digital aides that can understand shoppers’ tastes, try on clothes virtually, hunt for bargains, and even place orders independently.

These so-called ‘AI agents’ mark a new phase in retail, combining personalisation with automation to reshape how people shop online.

Google recently introduced a suite of tools under its new AI Mode, allowing users to upload a photo and preview how clothing would look on their own body. The AI adjusts sizes and fabric drape, enhancing realism.

Shoppers can also set a target price and let the AI search for the best deal, alerting them when one is found and offering to complete the purchase using Google’s payment platform.

OpenAI, Perplexity AI, and Amazon have also added shopping features to their platforms, while Walmart and other retailers are working to ensure their products remain visible to AI shoppers.

Payment giants Visa and Mastercard have upgraded their systems to allow AI agents to process transactions autonomously, cementing the role of digital agents in the online shopping journey.

Experts say this growing ‘agent economy’ offers powerful convenience but raises questions about consumer privacy, trust, and control.

While AI shoppers are unlikely to disrupt e-commerce overnight, analysts note that companies like Google and Meta are particularly well-positioned due to their vast user data and AI leadership.

The next evolution of shopping may not depend on what consumers choose, but on whether they trust machines to choose for them.

AI copyright clash stalls UK data bill

A bitter standoff over AI and copyright has returned to the House of Lords, as ministers and peers clash over how to protect creative workers while fostering technological innovation.

At the centre of the debate is the proposed Data (Use and Access) Bill, which was expected to pass smoothly but is now stuck in parliamentary limbo due to growing resistance.

The bill would allow AI firms to access copyrighted material unless rights holders opt out, a proposal that many artists and peers believe threatens the UK’s £124bn creative industry.

Nearly 300 Lords have called for AI developers to disclose what content they use and seek licences instead of relying on blanket access. Former film director Baroness Kidron described the policy as ‘state-sanctioned theft’ and warned it would sacrifice British talent to benefit large tech companies.

Supporters of the bill, like former Meta executive Sir Nick Clegg, argue that forcing AI firms to seek individual permissions would severely damage the UK’s AI sector. The Department for Science, Innovation and Technology insists it will only consider changes if they are proven to benefit creators.

If no resolution is found, the bill risks being shelved entirely. That would also scrap unrelated proposals bundled into it, such as new NHS data-sharing rules and plans for a nationwide map of underground pipes and cables.

Despite the bill’s wide scope, the fight over copyright remains its most divisive and emotionally charged feature.
