NSO asks court to overturn WhatsApp verdict

Israeli spyware company NSO Group has requested a new trial after a US jury ordered it to pay $168 million in damages to WhatsApp.

The company, which has faced mounting legal and financial troubles, filed a motion in a California federal court last week seeking to reduce the verdict or secure a retrial.

The May verdict awarded WhatsApp $444,719 in compensatory damages and $167.25 million in punitive damages. Jurors found that NSO exploited vulnerabilities in the encrypted platform and sold the exploit to clients who allegedly used it to target journalists, activists and political rivals.

WhatsApp, owned by Meta, filed the lawsuit in 2019.

NSO claims the punitive award is unconstitutional, arguing that it is roughly 376 times the compensatory damages, far beyond the 4:1 ratio the US Supreme Court has cited as general guidance.

The firm also said it cannot afford the penalty, citing losses of $9 million in 2023 and $12 million in 2024. Its CEO testified that the company is ‘struggling to keep our heads above water’.

In a statement to TechCrunch, WhatsApp said NSO was once again trying to evade accountability. The company vowed to continue its legal campaign, including efforts to secure a permanent injunction that would bar NSO from ever targeting WhatsApp or its users again.

AI copyright clash stalls UK data bill

A bitter standoff over AI and copyright has returned to the House of Lords, as ministers and peers clash over how to protect creative workers while fostering technological innovation.

At the centre of the debate is the proposed Data (Use and Access) Bill, which was expected to pass smoothly but is now stuck in parliamentary limbo due to growing resistance.

The bill would allow AI firms to access copyrighted material unless rights holders opt out, a proposal that many artists and peers believe threatens the UK’s £124bn creative industry.

Nearly 300 Lords have called for AI developers to disclose what content they use and seek licences instead of relying on blanket access. Former film director Baroness Kidron described the policy as ‘state-sanctioned theft’ and warned it would sacrifice British talent to benefit large tech companies.

Supporters of the bill, like former Meta executive Sir Nick Clegg, argue that forcing AI firms to seek individual permissions would severely damage the UK’s AI sector. The Department for Science, Innovation and Technology insists it will only consider changes if they are proven to benefit creators.

If no resolution is found, the bill risks being shelved entirely. That would also scrap unrelated proposals bundled into it, such as new NHS data-sharing rules and plans for a national map of underground pipes and cables.

Despite the bill’s wide scope, the fight over copyright remains its most divisive and emotionally charged feature.

Gmail adds automatic AI summaries

Gmail on mobile now displays AI-generated summaries by default, marking a shift in how Google’s Gemini assistant operates within inboxes.

Instead of relying on users to request a summary, Gemini will now decide when it’s useful—typically for long email threads with multiple replies—and present a brief summary card at the top of the message.

These summaries update automatically as conversations evolve, aiming to save users from scrolling through lengthy discussions.

The feature is currently limited to mobile devices and available only to users with Google Workspace accounts, Gemini Education add-ons, or a Google One AI Premium subscription. For the moment, summaries are confined to emails written in English.

Google expects the rollout to take around two weeks, though it remains unclear when, or if, the tool will extend to standard Gmail accounts or desktop users.

Anyone wanting to opt out must disable Gmail’s smart features entirely—giving up tools like Smart Compose, Smart Reply, and package tracking in the process.

While some may welcome the convenience, others may feel uneasy about their emails being analysed by large language models, especially since this process could contribute to further training of Google’s AI systems.

The move reflects a wider trend across Google’s products, where AI is becoming central to everyday user experiences.

Additional user controls and privacy commitments

According to Google, users retain some control over the summary cards: a collapsed Gemini summary card stays collapsed for that specific email thread.

Gmail will soon go further, automatically collapsing future summary cards for users who consistently dismiss them, until they choose to expand them again. For emails that don’t display automatic summaries, Gmail still offers manual options.

Users can tap the ‘summarise this email’ chip at the top of the message or use the Gemini side panel to trigger a summary manually. Google also reaffirms its commitment to data protection and user privacy. All AI features in Gmail adhere to its privacy principles, with more details available on the Privacy Hub.

Gemini AI can now summarise videos in Google Drive

Google is expanding Gemini AI’s capabilities in Drive by enabling it to analyse video files and respond to user questions or generate concise summaries.

The new feature aims to save users time by providing quick insights from lengthy content such as meetings, classes or announcements, instead of requiring them to watch the entire video. Until now, Gemini could only summarise documents and PDFs stored in Drive.

According to a blog post published on 28 May 2025, the feature will support prompts like ‘Summarise the video’ or ‘List action items from the meeting.’ Users can access Gemini’s functionality either through Drive’s overlay previewer or a standalone viewer in a separate browser tab.

However, captions must be enabled within the user’s domain for the feature to work properly.

The update is being gradually rolled out and is expected to be available to all eligible users by 19 June. At the moment, it is limited to English and accessible only to users of Google Workspace and Google One AI Premium, or those with Gemini Business or Enterprise add-ons.

For administrators, smart features and personalisation settings must be activated to grant access.

To use the new function, users can double-click on a video file in Drive and select the ‘Ask Gemini’ option marked by a star icon in the top right corner. Google says the upgrade reflects a broader effort to integrate AI seamlessly into everyday workflows by making content easier to navigate and understand.

Meta faces backlash over open source AI claims

Meta is under renewed scrutiny for what critics describe as ‘open washing’ after sponsoring a Linux Foundation whitepaper on the benefits of open source AI.

The paper highlights how open models help reduce enterprise costs—claiming companies using proprietary AI tools spend over three times more. However, Meta’s involvement has raised questions, as its Llama AI models are presented as open source despite industry experts insisting otherwise.

Amanda Brock, head of OpenUK, argues that Llama does not meet accepted definitions of open source due to licensing terms that restrict commercial use.

She referenced the Open Source Initiative’s (OSI) standards, which Llama fails to meet, pointing to commercial restrictions that contradict open source principles. Brock noted that open source should allow unrestricted use, which Llama’s licence does not permit.

Meta has long branded its Llama models as open source, but the OSI and other stakeholders have repeatedly pushed back, stating that the company’s licensing undermines the very foundation of open access.

While Brock acknowledged Meta’s contribution to the broader open source conversation, she also warned that such mislabelling could have serious consequences—especially as lawmakers and regulators increasingly reference open source in crafting AI legislation.

Other firms have faced similar allegations, including Databricks with its DBRX model in 2024, which was also criticised for failing to meet OSI standards. As the AI sector continues to evolve, the line between truly open and merely accessible models remains a point of growing tension.

New York Times partners with Amazon on AI integration

The New York Times Company and Amazon have signed a multi-year licensing agreement that will allow Amazon to integrate editorial content from The New York Times, NYT Cooking, and The Athletic into a range of its AI-powered services, the companies announced Wednesday.

Under the deal, Amazon will use licensed content for real-time display in consumer-facing products such as Alexa, as well as for training its proprietary foundation models. The agreement marks an expansion of the firms’ existing partnership.

‘The agreement expands the companies’ existing relationship, and will deliver additional value to Amazon customers while bringing Times journalism to broader audiences,’ the companies said in a joint statement.

According to the announcement, the licensing terms include ‘real-time display of summaries and short excerpts of Times content within Amazon products and services’ alongside permission to use the content in AI model development. Amazon platforms will also feature direct links to full Times articles.

Both companies described the partnership as a reflection of a shared commitment to delivering global news and information across Amazon’s AI ecosystem. Financial details of the agreement were not made public.

The announcement comes amid growing industry debate about the role of journalistic material in training AI systems.

By entering a formal licensing arrangement, The New York Times has for the first time publicly agreed to let a technology company use its journalism for generative AI.

The companies have yet to name additional Amazon products that will feature Times content, and no timeline has been disclosed for the rollout of the new integrations.

Croatia urged to embed human rights into AI law

Politiscope recently held an event at the Croatian Journalists’ Association to highlight the human rights risks of AI.

As Croatia begins drafting a national law to implement the EU AI Act, the event aimed to push for stronger protections and transparency instead of relying on vague promises of innovation.

Croatia’s working group is still shaping key elements of the law, such as which body will enforce it, making this an important moment for public input.

Experts warned that AI systems could increase surveillance, discrimination, and exclusion. Speakers presented troubling examples, including inaccurate biometric tools and algorithms that deny benefits or profile individuals unfairly.

Campaigners from across Europe, including EDRi, showcased how civil society has already stopped invasive AI tools in places like the Netherlands and Serbia. They argued that ‘values’ embedded in corporate AI systems often lack accountability and harm marginalised groups instead of protecting them.

Rather than presenting AI as a distant threat or a miracle cure, the event focused on current harms and the urgent need for safeguards. Speakers called for a public register of AI use in state institutions, a ban on biometric surveillance in public, and full civil society participation in shaping AI rules.

A panel urged Croatia to go beyond the EU Act’s baseline by embracing more transparent and citizen-led approaches.

Despite having submitted recommendations, Politiscope and other civil society organisations remain excluded from the working group drafting the law. While business groups and unions often gain access through social dialogue rules, CSOs are still sidelined.

Politiscope continues to demand an open and inclusive legislative process, arguing that democratic oversight is essential for AI to serve people instead of controlling them.

EU says US tech firms censor more

Far more online content is removed under US tech firms’ terms and conditions than under the EU’s Digital Services Act (DSA), according to Tech Commissioner Henna Virkkunen.

Her comments respond to criticism from American tech leaders, including Elon Musk, who have labelled the DSA a threat to free speech.

In an interview with Euractiv, Virkkunen said recent data show that 99% of content removals in the EU between September 2023 and April 2024 were carried out by platforms like Meta and X based on their own rules, not due to EU regulation.

Only 1% of cases involved ‘trusted flaggers’ — vetted organisations that report illegal content to national authorities. Just 0.001% of those reports led to an actual takedown decision by authorities, she added.

The DSA’s transparency rules made those figures available. ‘Often in the US, platforms have more strict rules with content,’ Virkkunen noted.

She gave examples such as discussions about euthanasia and nude artworks, which are often removed under US platform policies but remain online under European guidelines.

Virkkunen recently met with US tech CEOs and lawmakers, including Republican Congressman Jim Jordan, a prominent critic of the DSA and the DMA.

She said the data helped clarify how EU rules actually work. ‘It is important always to underline that the DSA only applies in the European territory,’ she said.

While pushing back against American criticism, Virkkunen avoided direct attacks on individuals like Elon Musk or Mark Zuckerberg. She suggested platform resistance reflects business models and service design choices.

Asked about delays in final decisions under the DSA — including open cases against Meta and X — Virkkunen stressed the need for a strong legal basis before enforcement.

Google Drive adds AI video summaries

Google Drive is gaining a new AI-powered tool that allows Workspace users to summarise and interact with video content using Gemini, Google’s generative AI assistant.

Instead of manually skipping through videos, users can now click the ‘Ask Gemini’ button to get instant summaries, key highlights, or action items from uploaded recordings.

The tool builds on Gemini 2.5 Pro’s strong video analysis capabilities, which recently scored 84.8% on the VideoMME benchmark. Gemini’s side panel, already used for summarising documents and folders, can now handle natural language prompts like ‘Summarise this video’ or ‘List key points from this meeting’.

However, the feature only works in English and requires captions to be enabled by the Workspace admin.

Google is rolling out the feature across various Workspace plans, including Business Standard and Enterprise tiers, with access available through Drive’s overlay preview or a new browser tab.

Instead of switching between windows or scrubbing through videos, users can now save time by letting Gemini handle the heavy lifting.

Telegram partners with Musk’s xAI

Elon Musk’s AI company, xAI, is partnering with Telegram to bring its AI assistant, Grok, to the messaging platform’s more than one billion users.

Telegram founder Pavel Durov announced that Grok will be integrated into Telegram’s apps and distributed directly through the service.

Instead of a simple tech integration, the arrangement includes a significant financial deal. Telegram is set to receive $300 million in cash and equity from xAI, along with half of the revenue from any xAI subscriptions sold through the platform. The agreement is expected to last one year.

The move mirrors Meta’s recent rollout of AI features on WhatsApp, which drew criticism from users concerned about the changing nature of private messaging.

Analysts like Hanna Kahlert of Midia Research argue that users still prefer using social platforms to connect with friends, and that adding AI tools could erode trust and shift focus away from what made these apps popular in the first place.

The partnership also links two controversial tech figures. Durov was arrested in France in 2024 over allegations that Telegram failed to curb criminal activity, though he denies obstructing law enforcement.

Meanwhile, Musk has been pushing into AI development after falling out with OpenAI, using xAI to challenge the industry’s giants. In March, xAI acquired X, formerly known as Twitter, in an all-stock deal that valued the AI firm at $80 billion.
