WhatsApp fixes deleted message privacy gap

WhatsApp is rolling out a privacy improvement that ensures deleted messages no longer linger in quoted replies, addressing a long-standing issue that exposed partial content users had intended to remove.

The update applies automatically, with no toggle required, and has begun reaching iOS users through version 25.12.73, with wider availability expected soon.

Until now, deleting a message for everyone in a chat has not removed it from quoted replies. That allowed fragments of deleted content to remain visible, undermining the purpose of deletion.

With the update, WhatsApp strips the quoted message from conversation threads entirely rather than leaving it in place, even in group or community chats.

WABetaInfo, which first spotted the update, noted that users delete messages for privacy or personal reasons, and that the quoted traces left behind conflicted with those intentions.

The change ensures conversations reflect user expectations by entirely erasing deleted content, not only from the original message but also from any references.

Meta continues to develop new features for WhatsApp. Recent additions include voice chat in groups and a native interface for iPad. The company is also testing tools like AI-generated wallpapers, message summaries, and more refined privacy settings to enhance user control and experience further.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NSO asks court to overturn WhatsApp verdict

Israeli spyware company NSO Group has requested a new trial after a US jury ordered it to pay $168 million in damages to WhatsApp.

The company, which has faced mounting legal and financial troubles, filed a motion in a California federal court last week seeking to reduce the verdict or secure a retrial.

The May verdict awarded WhatsApp $444,719 in compensatory damages and $167.25 million in punitive damages. Jurors found that NSO exploited vulnerabilities in the encrypted platform and sold the exploit to clients who allegedly used it to target journalists, activists and political rivals.

WhatsApp, owned by Meta, filed the lawsuit in 2019.

NSO claims the punitive award is unconstitutional, arguing that it is roughly 376 times the compensatory damages and far exceeds the US Supreme Court’s general guidance of a 4:1 ratio.
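The ratio NSO cites follows directly from the two awards reported above, as a quick back-of-the-envelope check shows:

```python
# Sanity check of the punitive-to-compensatory ratio using the verdict figures above.
compensatory = 444_719        # compensatory damages, USD
punitive = 167_250_000        # punitive damages, USD

ratio = punitive / compensatory
print(f"punitive-to-compensatory ratio: {ratio:.0f}:1")  # prints "punitive-to-compensatory ratio: 376:1"
```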

The firm also said it cannot afford the penalty, citing losses of $9 million in 2023 and $12 million in 2024. Its CEO testified that the company is ‘struggling to keep our heads above water’.

WhatsApp, responding to TechCrunch in a statement, said NSO was once again trying to evade accountability. The company vowed to continue its legal campaign, including efforts to secure a permanent injunction that would prevent NSO from ever targeting WhatsApp or its users again.

184 million passwords exposed in massive data breach

A major data breach has exposed over 184 million user credentials, including emails, passwords, and account details for platforms such as Google, Microsoft and government portals. It is still unclear whether this was due to negligence or deliberate criminal activity.

The unencrypted, unprotected database was discovered online by cybersecurity researcher Jeremiah Fowler, who confirmed many of the credentials were current and accurate. The breach highlights ongoing failures by data handlers to apply even the most basic security measures.

Fowler believes the data was gathered using infostealer malware, which silently extracts login information from compromised devices and sells it on the dark web. After the database was reported, the hosting provider took it offline, but the source remains unknown.

Security experts urge users to update passwords across all platforms, enable two-factor authentication, and use password managers and data removal services. In today’s hyper-connected world, the exposure of such critical information without encryption is seen as both avoidable and unacceptable.
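The password advice above is straightforward to act on. For instance, a strong random password can be generated with Python’s standard `secrets` module (a minimal sketch, not tied to any particular password manager):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets uses a cryptographically secure random source, unlike the random module
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 20-character password on every run
```

Pairing unique generated passwords with two-factor authentication limits the damage when any single credential leaks.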

Courts consider limits on AI evidence

A rule newly proposed within the US Judicial Conference could reshape how AI-generated evidence is treated in court. Dubbed Rule 707, it would allow such machine-generated evidence to be admitted only if it meets the same reliability standards required of expert testimony under Rule 702.

However, it would not apply to outputs from simple scientific instruments or widely used commercial software. The rule aims to address concerns about the reliability and transparency of AI-driven analysis, especially when used without a supporting expert witness.

Critics argue that the limitation to non-expert presentation renders the rule overly narrow, as the underlying risks of bias and interpretability persist regardless of whether an expert is involved. They suggest that all machine-generated evidence in US courts should be subject to robust scrutiny.

The Advisory Committee is also weighing the scope of terminology such as ‘machine learning’ to prevent Rule 707 from encompassing more than intended. Meanwhile, a separate proposed rule on deepfakes has been shelved, as courts already have tools to address such forgeries.

Shoppers can now let AI find and buy deals

Tech giants are pushing deeper into e-commerce with AI-powered digital aides that can understand shoppers’ tastes, try on clothes virtually, hunt for bargains, and even place orders independently.

These so-called ‘AI agents’ mark a new phase in retail, combining personalisation with automation to reshape how people shop online.

Google recently introduced a suite of tools under its new AI Mode, allowing users to upload a photo and preview how clothing would look on their own body. The AI adjusts sizes and fabric drape, enhancing realism.

Shoppers can also set a target price and let the AI search for the best deal; the agent alerts them when one is found and offers to complete the purchase using Google’s payment platform.

OpenAI, Perplexity AI, and Amazon have also added shopping features to their platforms, while Walmart and other retailers are working to ensure their products remain visible to AI shoppers.

Payment giants Visa and Mastercard have upgraded their systems to allow AI agents to process transactions autonomously, cementing the role of digital agents in the online shopping journey.

Experts say this growing ‘agent economy’ offers powerful convenience but raises questions about consumer privacy, trust, and control.

While AI shoppers are unlikely to disrupt e-commerce overnight, analysts note that companies like Google and Meta are particularly well-positioned due to their vast user data and AI leadership.

The next evolution of shopping may not depend on what consumers choose, but on whether they trust machines to choose for them.

AI copyright clash stalls UK data bill

A bitter standoff over AI and copyright has returned to the House of Lords, as ministers and peers clash over how to protect creative workers while fostering technological innovation.

At the centre of the debate is the proposed Data (Use and Access) Bill, which was expected to pass smoothly but is now stuck in parliamentary limbo due to growing resistance.

The bill would allow AI firms to access copyrighted material unless rights holders opt out, a proposal that many artists and peers believe threatens the UK’s £124bn creative industry.

Nearly 300 Lords have called for AI developers to disclose what content they use and seek licences instead of relying on blanket access. Film director Baroness Kidron described the policy as ‘state-sanctioned theft’ and warned it would sacrifice British talent to benefit large tech companies.

Supporters of the bill, like former Meta executive Sir Nick Clegg, argue that forcing AI firms to seek individual permissions would severely damage the UK’s AI sector. The Department for Science, Innovation and Technology insists it will only consider changes if they are proven to benefit creators.

If no resolution is found, the bill risks being shelved entirely. That would also scrap unrelated proposals bundled into it, such as new NHS data-sharing rules and plans for a nationwide underground map.

Despite the bill’s wide scope, the fight over copyright remains its most divisive and emotionally charged feature.

AI takes over eCommerce tasks as Visa and Mastercard adapt

Visa and Mastercard have announced major AI initiatives that could reshape the future of e-commerce, marking a significant step in the evolution of retail technology.

The initiatives—Visa’s Intelligent Commerce and Mastercard’s Agent Pay—move beyond traditional recommendation engines to empower AI agents to make purchases directly on behalf of consumers.

Visa is partnering with leading tech firms, including Anthropic, IBM, Microsoft, OpenAI, and Stripe, to build a system where AI agents shop according to user preferences.

Meanwhile, Mastercard’s Agent Pay integrates payment functionality into AI-driven conversational platforms, blending commerce and conversation into a seamless user experience.

These announcements follow years of AI integration into retail, with adoption growing at 40% annually and the market projected to surpass $8 billion by 2024. Retailers initially used AI for backend optimisation, but nearly 87% now apply it in customer-facing roles.

The next phase, where AI doesn’t just suggest but acts, is rapidly taking shape—backed by consumer demand for hyper-personalisation and efficiency.

Research suggests 71% of consumers want generative AI embedded in their shopping journeys, with 58% already turning to AI tools over traditional search engines for recommendations. However, consumer trust remains a challenge.

Satisfaction with AI dropped slightly last year, highlighting concerns over privacy and implementation quality—especially critical for financial transactions.

Visa and Mastercard’s moves reflect both opportunity and necessity. With 75% of retailers viewing AI agents as essential within the next year, and AI expected to handle 20% of eCommerce tasks, the payment giants are positioning themselves as indispensable infrastructure in a fast-changing market.

Their broad alliances across AI, payments, and tech underline a shared goal: to stay central as shopping behaviours evolve in the AI era.

How AI could quietly sabotage critical software

When Google’s Jules AI agent added a new feature to a live codebase in under ten minutes, it initially seemed like a breakthrough. But the same capabilities that allow AI tools to scan, modify, and deploy code rapidly also introduce new, troubling possibilities—particularly in the hands of malicious actors.

Experts are now voicing concern over the risks posed by hostile agents deploying AI tools with coding capabilities. If weaponised by rogue states or cybercriminals, the tools could be used to quietly embed harmful code into public or private repositories, potentially affecting millions of lines of critical software.

Even a single unnoticed line among hundreds of thousands could trigger back doors, logic bombs, or data leaks. The risk lies in how AI can slip past human vigilance.

From modifying update mechanisms to exfiltrating sensitive data or weakening cryptographic routines, the threat is both technical and psychological.

Developers must catch every mistake; an AI only needs to succeed once. As such tools become more advanced and publicly available, the conversation around safeguards, oversight, and secure-by-design principles is becoming urgent.

New York Times partners with Amazon on AI integration

The New York Times Company and Amazon have signed a multi-year licensing agreement that will allow Amazon to integrate editorial content from The New York Times, NYT Cooking, and The Athletic into a range of its AI-powered services, the companies announced Wednesday.

Under the deal, Amazon will use licensed content for real-time display in consumer-facing products such as Alexa, as well as for training its proprietary foundation models. The agreement marks an expansion of the firms’ existing partnership.

‘The agreement expands the companies’ existing relationship, and will deliver additional value to Amazon customers while bringing Times journalism to broader audiences,’ the companies said in a joint statement.

According to the announcement, the licensing terms include ‘real-time display of summaries and short excerpts of Times content within Amazon products and services’ alongside permission to use the content in AI model development. Amazon platforms will also feature direct links to full Times articles.

Both companies described the partnership as a reflection of a shared commitment to delivering global news and information across Amazon’s AI ecosystem. Financial details of the agreement were not made public.

The announcement comes amid growing industry debate about the role of journalistic material in training AI systems.

By entering a formal licensing arrangement, The New York Times positions itself as one of the first major media outlets to publicly align with a technology company for AI-related content use.

The companies have yet to name additional Amazon products that will feature Times content, and no timeline has been disclosed for the rollout of the new integrations.

Croatia urged to embed human rights into AI law

Politiscope recently held an event at the Croatian Journalists’ Association to highlight the human rights risks of AI.

As Croatia begins drafting a national law to implement the EU AI Act, the event aimed to push for stronger protections and transparency instead of relying on vague promises of innovation.

Croatia’s working group is still settling key elements of the law, such as who will enforce it, making this an important moment for public input.

Experts warned that AI systems could increase surveillance, discrimination, and exclusion. Speakers presented troubling examples, including inaccurate biometric tools and algorithms that deny benefits or profile individuals unfairly.

Campaigners from across Europe, including EDRi, showcased how civil society has already stopped invasive AI tools in places like the Netherlands and Serbia. They argued that ‘values’ embedded in corporate AI systems often lack accountability and harm marginalised groups instead of protecting them.

Rather than presenting AI as a distant threat or a miracle cure, the event focused on current harms and the urgent need for safeguards. Speakers called for a public register of AI use in state institutions, a ban on biometric surveillance in public, and full civil society participation in shaping AI rules.

A panel urged Croatia to go beyond the EU Act’s baseline by embracing more transparent and citizen-led approaches.

Despite having submitted recommendations, Politiscope and other civil society organisations remain excluded from the working group drafting the law. While business groups and unions often gain access through social dialogue rules, CSOs are still sidelined.

Politiscope continues to demand an open and inclusive legislative process, arguing that democratic oversight is essential for AI to serve people instead of controlling them.
