Lawmakers at IGF 2025 call for global digital safeguards

At the Internet Governance Forum (IGF) 2025 in Norway, a high‑level parliamentary roundtable convened global lawmakers to tackle the pressing challenge of digital threats to democracy. Led by moderator Nikolis Smith, the discussion included Martin Chungong, Secretary‑General of the Inter‑Parliamentary Union (via video), and legislators from Norway, Kenya, California, Barbados, and Tajikistan. The central concern was how AI, disinformation, deepfakes, and digital inequality jeopardise truth, electoral integrity, and public trust.

Grunde Almeland, Member of the Norwegian Parliament, warned: ‘Truth is becoming less relevant … it’s harder and harder to pierce [confirmation‑bias] bubbles with factual debate and … facts.’ He championed strong, independent media, noting Norway’s success as ‘number one on the press freedom index’ due to its editorial independence and extensive public funding. Almeland emphasised that legislation exists, but that practical implementation and international coordination are key.

Kenyan Senator Catherine Mumma described a comprehensive legal framework—including cybercrime, data protection, and media acts—but admitted gaps in tackling misinformation. ‘We don’t have a law that specifically addresses misinformation and disinformation,’ she said, adding that social‑media rumours ‘[sometimes escalate] to violence’, especially around elections. Mumma called for balanced regulation that safeguards innovation, human rights, and investment in digital infrastructure and inclusion.

California Assembly Member Rebecca Bauer‑Kahan outlined her state’s trailblazing privacy and AI regulations. She highlighted a new law mandating watermarking of AI‑generated content and requiring political‑advert disclosures, although these face legal challenges as potentially ‘forced speech.’ Bauer‑Kahan stressed the need for ‘technology for good,’ including funding universities to develop watermarking and authentication tools—like Adobe’s system for verifying official content—emphasising that visual transparency restores trust.
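
The mechanics behind such authentication tools are worth a brief illustration. The sketch below is a minimal, generic example of content signing (not Adobe’s system or anything mandated by the California law): a publisher signs a piece of content together with provenance metadata, so anyone holding the matching public key can detect tampering. All names and fields here are illustrative assumptions.

```python
# A minimal, illustrative content-authentication sketch (hypothetical names;
# not any vendor's actual system): the publisher signs content plus provenance
# metadata, and a verifier checks the signature with the publisher's public key.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical publisher key pair; in practice the private key never leaves the publisher.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()


def sign_content(content: bytes, metadata: dict) -> bytes:
    """Sign the content together with its provenance metadata."""
    payload = content + json.dumps(metadata, sort_keys=True).encode()
    return private_key.sign(payload)


def verify_content(content: bytes, metadata: dict, signature: bytes) -> bool:
    """Return True only if content and metadata match the publisher's signature."""
    payload = content + json.dumps(metadata, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


metadata = {"source": "official-channel", "created": "2025-06-23", "ai_generated": False}
content = b"statement text or raw image bytes"
signature = sign_content(content, metadata)

print(verify_content(content, metadata, signature))                # True: untouched
print(verify_content(b"tampered statement", metadata, signature))  # False: altered
```

Production provenance standards such as C2PA embed comparably signed claims inside the media file itself, but the underlying trust model is the same: a verifiable signature over the content and its metadata.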

Barbados MP Marsha Caddle recounted a recent deepfake falsely attributed to her prime minister, warning that it threatened to ‘put at risk … global engagement.’ She promoted democratic literacy and transparency, explaining that parliamentary meetings are broadcast live to encourage public trust. She also praised local tech platforms such as Zindi in Africa, saying they foster home‑grown solutions to combat disinformation.

Tajikistan MP Zafar Alizoda highlighted regional disparities in data protections, noting that while EU citizens benefit from GDPR, users in Central Asia remain vulnerable. He urged platforms to adopt uniform global privacy standards: ‘Global platforms … must improve their policies for all users, regardless of the country of the user.’

Several participants—including John K.J. Kiarie, MP from Kenya—raised the crucial issue of ‘technological dumping,’ whereby wealthy nations and tech giants export harmful practices to vulnerable regions. Kiarie warned: ‘My people will be condemned to digital plantations… just like … slave trade.’ The consensus called for global digital governance treaties akin to nuclear or climate accords, alongside enforceable codes of conduct for Big Tech.

Despite challenges—such as balancing child protection, privacy, and platform regulation—parliamentarians reaffirmed shared goals: strengthening independent media, implementing watermarking and authentication technologies, increasing public literacy, ensuring equitable data protections, and fostering global cooperation. As Grunde Almeland put it: ‘We need to find spaces where we work together internationally… to find this common ground, a common set of rules.’ Their unified message: safeguarding democracy in the digital age demands national resilience and collective global action.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

FC Barcelona documents leaked in ransomware breach

A recent cyberattack on French insurer SMABTP’s Spanish subsidiary, Asefa, has led to the leak of over 200GB of sensitive data, including documents related to FC Barcelona.

The ransomware group Qilin has claimed responsibility for the breach, highlighting the growing threat posed by such actors. With high-profile victims now in the spotlight, the reputational damage could be substantial for Asefa and its clients.

The incident comes amid growing concern among UK small and medium-sized enterprises (SMEs) about cyber threats. According to GlobalData’s UK SME Insurance Survey 2025, more than a quarter of SMEs have been influenced by media reports of cyberattacks when purchasing cyber insurance.

Meanwhile, nearly one in five cited a competitor’s victimisation as a motivating factor.

Over 300 organisations have fallen victim to Qilin in the past year alone, reflecting the broader rise of AI-enabled cybercrime.

AI allows cybercriminals to refine their methods, making attacks more effective and challenging to detect. As a result, companies are increasingly recognising the importance of robust cybersecurity measures.

With threats escalating, there is an urgent call for insurers to offer more tailored cyber coverage and proactive services. The breach involving FC Barcelona is a stark reminder that no organisation is immune and that better risk assessment and resilience planning are now business essentials.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU adviser backs Android antitrust ruling against Google

An adviser to the Court of Justice of the European Union has supported the EU’s antitrust ruling against Google, recommending the dismissal of its appeal over a €4.1bn fine. The case concerns Google’s use of its Android mobile system to limit competition through pre-installed apps and contractual restrictions.

The original €4.34bn fine was imposed by the European Commission in 2018 and later reduced by the General Court.

Google then appealed to the EU’s top court, but Advocate-General Juliane Kokott concluded that Google’s practices gave it unfair market advantages.

Kokott rejected Google’s argument that its actions should be assessed against an equally efficient competitor, noting Google’s dominance in the Android ecosystem and the robust network effects it enjoys.

She argued that bundling Google Search and Chrome with the Play Store created barriers for competitors.

The final court ruling is expected in the coming months and could shape Google’s future regulatory obligations in Europe. Google has already incurred over €8 billion in EU antitrust fines across several investigations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp ad rollout in EU slower than global pace amid privacy scrutiny

Meta is gradually rolling out advertising features on WhatsApp globally, starting with the Updates tab, where users follow channels and may see sponsored content.

Although the global rollout remains on track, the Irish Data Protection Commission has indicated that a full rollout across the EU will not occur before 2026. The delay reflects ongoing regulatory scrutiny, particularly over privacy compliance.

Concerns have emerged regarding how user data from Meta platforms like Facebook, Instagram, and Messenger might be used to target ads on WhatsApp.

Privacy group NOYB had previously criticised such cross-platform data use, but Meta says those concerns do not apply directly to the current WhatsApp ad model.

According to Meta, integrating WhatsApp with the Meta Accounts Center—which allows cross-app ad personalisation—is optional and off by default.

If users do not link their WhatsApp accounts, only limited data sourced from WhatsApp (such as city, language, followed channels, and ad interactions) will be used for ad targeting in the Updates tab.

Meta maintains that this approach aligns with EU privacy rules. Nonetheless, regulators are expected to carefully assess Meta’s implementation, especially in light of recent judgments against the company’s ‘pay or consent’ model under the Digital Markets Act.

Meta recently reduced the cost of its ad-free subscriptions in the EU, signalling a willingness to adapt—but the company continues to prioritise personalised advertising globally as part of its long-term strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oakley unveils smart glasses featuring Meta technology

Meta has partnered with Oakley to launch a new line of smart glasses designed for active lifestyles. The flagship model, Oakley Meta HSTN, will be available for preorder from 11 July for $499.

Additional Oakley models featuring Meta technology are set to launch later in the summer, starting at $399.

https://twitter.com/1Kapisch/status/1936045567626617315

The glasses include a front-facing camera, open-ear speakers, and microphones embedded in the frame, much like the Meta Ray-Bans. When paired with a smartphone, users can listen to music, take calls, and interact with Meta AI.

With built-in cameras and microphones, Meta AI can also describe surroundings, answer visual questions, and translate languages.

With their sleek, sports-ready design and IPX4 water resistance, the glasses are geared toward athletes. They offer 8 hours of battery life—twice that of the Meta Ray-Bans—and come with a charging case that extends usage to 48 hours. Video capture quality has also improved, now supporting 3K resolution.


Customers can choose from five frame and lens combinations, with prescription lenses available at extra cost. Colours include warm grey, black, brown smoke, and clear, while lens options include Oakley’s PRIZM and Transitions lenses.

The $499 limited-edition version features gold accents and gold PRIZM lenses. Sales will cover major markets across North America, Europe, and Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!


Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full resumes, private conversations and medical queries without realising they’re visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there’s no clear indication during setup that chats will be made public unless manually changed in settings.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often mistaken as referring to a user’s private chat history rather than public content.

Users must navigate deep into the app’s settings to make chats private. There, they can restrict who sees their AI prompts, stop sharing to Facebook and Instagram, and delete previous interactions.

Critics argue the app’s lack of clarity burdens users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Massive data leak exposes 16 billion login credentials from Google, Facebook, and more

One of the largest-ever leaks of stolen login data has come to light, exposing more than 16 billion records across widely used services, including Facebook, Google, Telegram, and GitHub. The breach, uncovered by researchers at Cybernews, highlights a growing threat to individuals and organisations.

The exposed data reportedly originated from info stealer malware, previous leaks, and credential-stuffing tools. A total of 30 separate datasets were identified, some containing over 3.5 billion entries.

These were briefly available online due to unsecured cloud storage before being removed. Despite the swift takedown, the data had already been collected and analysed.

Experts have warned that the breach could lead to identity theft, phishing, and account takeovers. Smaller websites and users with poor cybersecurity practices are especially vulnerable. Many users continue to reuse passwords or minor variations of them, increasing the risk of exploitation.

While the leak is severe, users employing two-factor authentication (2FA), password managers, or passkeys are less likely to be affected.

Passkeys, increasingly adopted by companies like Google and Apple, offer a phishing-resistant login method that bypasses the need for passwords altogether.
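
For readers who want to gauge their own exposure, the short sketch below is one way to check a password against the public Pwned Passwords range API, which uses k-anonymity: only the first five characters of the password’s SHA-1 hash are sent, never the password itself. The endpoint is real, but the script is an illustrative assumption on our part rather than anything recommended in the Cybernews report.

```python
# Illustrative check against the public Pwned Passwords range API.
# Only the first five characters of the SHA-1 hash leave your machine.
import hashlib

import requests


def breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Response lines look like "HASH_SUFFIX:COUNT"; find our suffix, if present.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    hits = breach_count("correct horse battery staple")
    print(f"Seen in breaches {hits} times" if hits else "Not found in known breaches")
```

A non-zero count is a strong signal to change that password everywhere it is reused and to enable 2FA or a passkey wherever the service supports it.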

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Washington city orders removal of crypto ATMs over rising scams 

The Spokane City Council in Washington State has unanimously voted to ban virtual currency kiosks across the city, including crypto ATMs. The ordinance targets approximately 50 machines found at convenience stores, gas stations, and major retailers such as Safeway and Walgreens.

Operators must remove their kiosks within 60 days or risk fines and potential loss of business licences.

Council members highlighted the growing threat these kiosks pose to vulnerable residents, particularly seniors, who have fallen victim to scams. Council Member Paul Dillon described the machines as ‘preferred tools’ for fraudsters exploiting the decentralised nature of cryptocurrency and limited tracking options for stolen funds.

The council initially sought state-level regulation, but after legislative delays, Spokane chose local action to address the issue.

The FBI estimates $5.6 billion of the $6.5 billion lost nationwide to fraud, scams, and extortion in 2023 involved crypto kiosks. Seniors accounted for nearly half of these losses despite being a smaller percentage of the population.

Spokane Police Detective Tim Schwering reported numerous cases where victims were deceived into buying crypto through kiosks after being contacted by scammers impersonating law enforcement or tax officials. Tragically, several local suicides have been linked to these scams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Episource data breach impacts patients at Sharp Healthcare

Episource, a UnitedHealth Group-owned health analytics firm, has confirmed that patient data was compromised during a ransomware attack earlier this year.

The breach affected customers, including Sharp Healthcare and Sharp Community Medical Group, who have started notifying impacted patients. Although electronic health records and patient portals remained untouched, sensitive data such as health plan details, diagnoses and test results were exposed.

The cyberattack, which occurred between 27 January and 6 February, involved unauthorised access to Episource’s internal systems.

A forensic investigation verified that cybercriminals viewed and copied files containing personal information, including insurance plan data, treatment plans, and medical imaging. Financial details and payment card data, however, were mostly unaffected.

Sharp Healthcare confirmed that it was informed of the breach on 24 April and has since worked closely with Episource to identify which patients were impacted.

Compromised information may include names, addresses, insurance ID numbers, doctors’ names, prescribed medications, and other protected health data.

The breach follows a troubling trend of ransomware attacks targeting healthcare-related businesses, including Change Healthcare in 2024, which disrupted services for months. Comparitech reports at least three confirmed ransomware attacks on healthcare firms already in 2025, with 24 more suspected.

Given the scale of patient data involved, experts warn of growing risks tied to third-party healthcare service providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UBS employee data leaked after Chain IQ ransomware attack

UBS Group AG has confirmed a serious data breach affecting around 130,000 of its employees, following a cyberattack on its third-party supplier, Chain IQ Group AG.

The exposed information included employee names, emails, phone numbers, roles, office locations, and preferred languages. No client data has been impacted, according to UBS.

Chain IQ, a procurement services firm spun off from UBS in 2013, was reportedly targeted by the cybercrime group World Leaks, previously known as Hunters International.

Unlike traditional ransomware operators, World Leaks avoids encryption and instead steals data, threatening public release if ransoms are not paid.

While Chain IQ has acknowledged the breach, it has not disclosed the extent of the stolen data or named all affected clients. Its client list includes Swiss Life, AXA, FedEx, IBM, KPMG, Swisscom, and Pictet; so far, only Pictet has confirmed it was impacted.

Cybersecurity experts warn that the breach may have long-term implications for the Swiss banking sector. Leaked employee data could be exploited for impersonation, fraud, phishing scams, or even blackmail.

The increasing availability of generative AI may further amplify the risks through voice and video impersonation, potentially aiding in money laundering and social engineering attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!