Privacy lawsuit targets Meta AI glasses after reports of footage review

Meta is facing a new lawsuit in the US over privacy concerns tied to its AI smart glasses.

The legal complaint follows investigative reporting indicating that contractors working for a Kenya-based subcontractor reviewed footage captured by users’ devices, including sensitive personal scenes.

The lawsuit alleges that some of the reviewed material included nudity and other intimate activities recorded by the glasses’ cameras.

According to the complaint, the footage formed part of a data review process designed to improve the AI system integrated into the wearable device.

Plaintiffs claim Meta marketed the product as prioritising user privacy, citing advertisements suggesting that the glasses were ‘designed for privacy’ and that users remained in control of their personal data.

The complaint argues that such messaging could mislead consumers if the footage were subject to human review without clear disclosure.

The legal action also names eyewear manufacturer Luxottica, which partnered with Meta to produce the glasses.

Meanwhile, the UK’s Information Commissioner’s Office has begun examining the issue after reports that face-blurring safeguards may not have consistently protected individuals captured in the recordings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU Commission’s new guidance to push Cyber Resilience Act

The EU Commission has opened a public consultation on draft guidance to help companies apply the EU’s Cyber Resilience Act (CRA), a regulation that sets baseline cybersecurity requirements for hardware and software ‘products with digital elements’ to reduce vulnerabilities and improve security throughout a product’s life cycle. The guidance is framed as practical help, especially for microenterprises and SMEs, and the consultation runs until 31 March 2026.

The CRA is designed to make ‘secure by design’ the default for connected products people use every day, from consumer devices to business software, while giving users clearer information about a product’s security properties. In timeline terms, the Act entered into force on 10 December 2024. The incident reporting duties start on 11 September 2026, and the main obligations apply from 11 December 2027, giving industry a runway but also a clear countdown.

What the Commission is trying to nail down now are the parts companies have found hardest to interpret: how the rules apply to remote data processing solutions (cloud-linked features), how they treat free and open-source software, what ‘support periods’ mean in practice (i.e. how long security upkeep is expected), and how the CRA fits alongside other EU laws. In other words, this is less about announcing new rules and more about reducing legal grey zones before enforcement ramps up.

The guidance push also lands amid a broader policy drive, as on 20 January 2026, the Commission proposed a new EU cybersecurity package, built around a revised Cybersecurity Act and targeted NIS2 amendments. The package aims to harden ICT supply chains, including a framework to jointly identify and mitigate risks across 18 critical sectors, and would enable mandatory ‘de-risking’ of EU mobile telecom networks away from high‑risk third‑country suppliers. It also proposes a revamped EU cybersecurity certification system with simpler procedures, giving a default 12‑month timeline to develop certification schemes, while cutting red tape for tens of thousands of firms and strengthening ENISA’s role, including early warnings, ransomware support, and a major budget boost.

Taken together, these measures show the EU moving from strategy documents to operational details: product security on one side (CRA) and ecosystem-level resilience on the other (supply chains, certification, incident reporting and supervision). For companies, that can be both reassuring and demanding: clearer guidance should reduce uncertainty, but the compliance reality may still be layered, especially for businesses spanning devices, software, cloud features, and cross-border operations. The Commission’s stakeholder feedback window is essentially a test of whether these rules can be made workable without diluting their bite.

Why does it matter?

Beyond technical risk, this is increasingly about sovereignty: who sets the rules for digital products, who can be trusted in supply chains, and how much dependency is acceptable in critical infrastructure. Digital governance expert Jovan Kurbalija argues that full ‘stack’ digital sovereignty, that is to say control over infrastructure, services, data, and AI knowledge, is concentrated in very few states, while most countries must balance openness with autonomy. The EU’s current wave of cybersecurity governance fits that pattern: it’s an attempt to turn security standards, certification, and supply-chain choices into a practical form of strategic control, not just to prevent hacks, but to protect democratic institutions, economic competitiveness, and trust in the digital tools people rely on.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU competition scrutiny pushes Meta to reopen WhatsApp AI access

Meta has announced that third-party AI chatbots will again be allowed to operate through WhatsApp in Europe, reversing restrictions introduced earlier this year.

The decision follows pressure from the European Commission, which had warned it could impose interim competition measures.

Earlier in 2026, Meta limited access to rival chatbot services on the messaging platform, prompting regulators to examine whether the move unfairly restricted competition in the rapidly expanding AI market.

WhatsApp remains one of the most widely used messaging applications across European countries, making platform access critical for emerging AI services.

Under the new arrangement, companies will be able to distribute general-purpose AI chatbots via the WhatsApp Business API for 12 months.

The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform ecosystem.

Meta has also indicated that businesses offering chatbots through WhatsApp will be required to pay fees to access the system.

The European Commission is now assessing whether these adjustments sufficiently address competition concerns surrounding the integration of AI services inside major digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Calls grow to strengthen New Zealand privacy law

Pressure is growing in New Zealand to strengthen the Privacy Act following several high-profile data breaches. Debate intensified after a cyberattack exposed medical records from the Manage My Health patient portal.

The breach affected about 120,000 patients and involved threats to release documents on the dark web. Another incident forced the MediMap medication platform offline after unauthorised changes were detected in patient records.

Privacy specialists argue that current enforcement powers are too weak to deter serious failures. The Privacy Act allows only limited financial penalties, with fines generally capped at NZD 10,000.

Officials are now considering reforms, including stronger penalties for privacy violations. Policymakers also warn that failure to strengthen the law could threaten the country’s EU data adequacy status.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU launches panel on child safety online and social media age rules

The European Commission has convened a new expert panel tasked with examining how children can be better protected across digital platforms, including social media, gaming environments and AI tools.

The initiative reflects growing concern across Europe regarding the psychological and safety risks associated with young users’ online behaviour.

Announced during the 2025 State of the Union Address by Commission President Ursula von der Leyen, the panel will evaluate evidence on both the opportunities and harms linked to children’s digital engagement.

Specialists from health, computer science, child rights and digital literacy will work alongside youth representatives to assess current research and policy responses.

Discussions during the first meeting centred on platform responsibility, including age-appropriate safety-by-design features, algorithmic amplification and addictive product design.

The initiative also addresses digital literacy for children, parents and educators, while considering how regulatory measures can reduce risks without undermining the benefits of online participation.

The panel’s work complements the enforcement of the Digital Services Act and related European policies designed to strengthen protections for minors online.

Among the tools under development is an EU age-verification application currently tested in several member states, intended to support privacy-preserving checks compatible with the future EU digital identity framework.

The panel is expected to deliver policy recommendations to the Commission by summer 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI explains 5 AI value models transforming enterprise strategy

AI is beginning to reshape corporate strategy as organisations shift from isolated technology experiments to broader operational transformation.

According to OpenAI, businesses that treat AI as a collection of disconnected pilots risk missing the bigger structural change that the technology enables.

A new framework describes five value models through which AI can gradually reshape companies. The first stage focuses on workforce empowerment, where tools such as ChatGPT spread AI capabilities across teams and improve everyday productivity.

Once employees develop fluency, organisations can introduce AI-native distribution models that transform how customers discover products and interact with digital services.

More advanced stages involve specialised systems. Expert capability integrates AI into research, creative production, and domain-specific analysis, allowing professionals to explore a wider range of ideas and experiments.

Meanwhile, systems and dependency management introduces AI tools capable of safely updating interconnected digital environments, including codebases, documentation, and operational processes.

The final stage involves full process re-engineering through autonomous agents. In such environments, AI systems coordinate complex workflows across departments while maintaining governance, accountability, and auditability.

Organisations that successfully progress through these stages may eventually redesign their business models rather than merely improving efficiency within existing structures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Coruna exploit kit targets iPhones running older iOS versions

The Google Threat Intelligence Group (GTIG) has identified a powerful exploit toolkit, Coruna, that targets Apple iPhones running iOS versions 13.0 to 17.2.1.

The toolkit contains five complete exploit chains and 23 exploits designed to compromise devices using previously unseen techniques and mitigation bypasses.

Parts of the exploit chain were first detected in early 2025, when a client of a commercial surveillance vendor used them. Later investigations revealed the same framework in highly targeted attacks against Ukrainian users linked to a suspected Russian espionage group.

Toward the end of the year, the toolkit resurfaced in large-scale campaigns linked to financially motivated actors operating from China.

Coruna relies on a sophisticated JavaScript framework that identifies iPhone models and their iOS versions before delivering the appropriate WebKit remote code execution exploit and additional bypass techniques.
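GTIG’s description points to a selector pattern common to browser-based exploit kits: fingerprint the device from its user-agent string, then serve a payload matched to that iOS build. The sketch below is purely illustrative of that selection logic, with hypothetical names (parseIOSVersion, pickChain) and made-up version ranges; it is not Coruna’s code and contains no exploit functionality.

```typescript
// Illustrative sketch only: how an exploit kit's delivery script can map a user-agent
// string to a version-specific payload choice. Hypothetical names and ranges; not Coruna's code.

interface IOSVersion {
  major: number;
  minor: number;
  patch: number;
}

// Extract the iOS version from a Safari user-agent string, e.g.
// "Mozilla/5.0 (iPhone; CPU iPhone OS 16_5_1 like Mac OS X) ..."
function parseIOSVersion(userAgent: string): IOSVersion | null {
  const m = userAgent.match(/iPhone OS (\d+)_(\d+)(?:_(\d+))?/);
  if (!m) return null;
  return { major: Number(m[1]), minor: Number(m[2]), patch: Number(m[3] ?? 0) };
}

// Hypothetical selection table: each chain advertises the version range it supports.
const chains = [
  { name: "chain-legacy", min: [13, 0], max: [14, 8] },
  { name: "chain-modern", min: [15, 0], max: [17, 2] },
];

// Pick the first chain whose range covers the fingerprinted version (patch level ignored here).
function pickChain(v: IOSVersion): string | null {
  for (const c of chains) {
    const aboveMin = v.major > c.min[0] || (v.major === c.min[0] && v.minor >= c.min[1]);
    const belowMax = v.major < c.max[0] || (v.major === c.max[0] && v.minor <= c.max[1]);
    if (aboveMin && belowMax) return c.name;
  }
  return null; // unsupported build: nothing is delivered
}

// Example: an iPhone on iOS 16.5.1 would be routed to the "modern" chain.
const v = parseIOSVersion("Mozilla/5.0 (iPhone; CPU iPhone OS 16_5_1 like Mac OS X)");
console.log(v ? pickChain(v) : "not an iPhone");
```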

Several vulnerabilities exploited by the toolkit had previously been treated as zero-day flaws, highlighting the growing circulation of advanced cyber-attack tools among multiple threat actors.

Google warned that the payload can steal sensitive data, including financial and cryptocurrency wallet information, and allows attackers to deploy additional modules remotely.

The company has added related malicious domains to Safe Browsing and urged users to install the latest iOS updates, noting that the exploit kit does not affect the newest version of Apple’s operating system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Passkey login comes to Windows 11 via Bitwarden vault

Bitwarden has announced support for logging into Windows 11 devices using passkeys stored in its encrypted vault, enabling phishing-resistant authentication directly at the operating system login screen.

The feature is available across all Bitwarden plans, including the free tier, and is believed to be a first for a third-party password manager.

During the login process, Windows 11 displays a QR code that users scan with their mobile device running the Bitwarden app, which then confirms access to the stored passkey and completes authentication.

Unlike device-bound passkey implementations, passkeys are synchronised across devices via Bitwarden’s end-to-end encrypted vault, meaning users can still regain access even if their phone is lost.

The feature builds on Microsoft’s introduction of native support for external passkey managers in Windows 11 in November 2025. It requires the device to be joined to Microsoft Entra ID with FIDO2 security key sign-in enabled.
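For context, the credential exchange behind a passkey sign-in is a FIDO2/WebAuthn assertion; the Windows lock screen simply reaches the authenticator (here, the Bitwarden app on the phone) over the QR-code cross-device transport rather than a browser prompt. The hedged sketch below shows the equivalent request through the standard browser WebAuthn API, purely as a point of comparison; it is not Bitwarden’s or Microsoft’s implementation, and the rpId and challenge values are placeholders.

```typescript
// Minimal browser-side passkey assertion via the standard WebAuthn API.
// Comparison sketch only; OS-level sign-in performs the same FIDO2 assertion over a different transport.

async function signInWithPasskey(): Promise<void> {
  // In a real deployment the challenge is issued by the server; a random value stands in here.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  const credential = (await navigator.credentials.get({
    publicKey: {
      challenge,                    // nonce the authenticator signs, preventing replay
      rpId: "example.com",          // placeholder relying-party ID; binding to it is what resists phishing
      userVerification: "required", // the authenticator must verify the user (PIN or biometric)
      timeout: 60_000,
    },
  })) as PublicKeyCredential | null;

  if (credential) {
    // A real flow sends the signed assertion back to the server for verification.
    console.log("Passkey assertion obtained for credential:", credential.id);
  }
}
```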

Microsoft says the passkey-based login will roll out throughout March, depending on an organisation’s Entra ID configuration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China strengthens online safeguards for minors

Chinese authorities have introduced new rules to classify online content that could affect the health and well-being of minors. Set to take effect on 1 March, the measures aim to adapt to a rapidly evolving internet landscape.

Top government bodies, including those responsible for cyberspace, education, publishing, film, culture, tourism, public security, and radio and television, jointly released the initiative. Together, they outlined four categories of content that could negatively impact minors and specified their key characteristics.

Recent issues, such as the misuse of minors’ images, have been integrated into the regulatory framework. Authorities also established preventive guidelines to manage risks from emerging technologies, including algorithmic recommendations and generative AI.

Internet platforms and content producers are now required to take both proactive and corrective measures against harmful content. The rules emphasise that platforms must monitor, block, or remove information that could affect minors’ well-being.

The Cyberspace Administration of China pledged to continue purifying the online environment. Authorities will urge platforms to assume their primary responsibilities and strengthen governance of content affecting young users, aiming to create a safer and healthier digital space for children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Sovereign AI becomes a strategic question for governments

Governments across the world are increasingly treating AI as a strategic capability that shapes economic development, public services and national security. Momentum behind the idea of ‘sovereign AI’ is growing as countries reassess who controls the chips, cloud infrastructure, data and models powering modern technology.

Complete control over the entire AI stack remains unrealistic for most economies because of the enormous financial and technological costs involved. Global infrastructure continues to rely heavily on US technology firms, which still operate a large share of data centres and AI systems worldwide.

Policy makers are therefore exploring different approaches to sovereignty across the AI ecosystem rather than pursuing total independence. Strategies range from building domestic computing capacity to adapting global AI models for national languages, regulations and public services.

Several countries already illustrate different approaches. The EU is investing billions in AI infrastructure, Canada protects sensitive computing resources while using global models, and India prioritises applications that serve its multilingual population through public digital systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot