Pressure is growing in New Zealand to strengthen the Privacy Act following several high-profile data breaches. Debate intensified after a cyberattack exposed medical records from the Manage My Health patient portal.
The breach affected about 120,000 patients and involved threats to release documents on the dark web. Another incident forced the MediMap medication platform offline after unauthorised changes were detected in patient records.
Privacy specialists argue that current enforcement powers are too weak to deter serious failures. The Privacy Act allows only limited financial penalties, with fines generally capped at NZD 10,000.
Officials are now considering reforms, including stronger penalties for privacy violations. Policymakers also warn that failure to strengthen the law could threaten the country’s EU data adequacy status.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Bitwarden has announced support for logging into Windows 11 devices using passkeys stored in its encrypted vault, enabling phishing-resistant authentication directly at the operating system login screen.
The feature is available across all Bitwarden plans, including the free tier, and is believed to be a first for a third-party password manager.
During the login process, Windows 11 displays a QR code that users scan with their mobile device running the Bitwarden app, which then confirms access to the stored passkey and completes authentication.
Unlike device-bound passkey implementations, passkeys are synchronised across devices via Bitwarden’s end-to-end encrypted vault, meaning users can still regain access even if their phone is lost.
The feature builds on Microsoft’s introduction of native support for external passkey managers in Windows 11 in November 2025. It requires the device to be joined to Microsoft Entra ID with FIDO2 security key sign-in enabled.
Microsoft says the passkey-based login will roll out throughout March, depending on an organisation’s Entra ID configuration.
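The QR handoff described above boils down to a challenge-response in which the authenticator signs the server's challenge together with the origin it was invoked from, which is what makes the login phishing-resistant. The sketch below is a toy illustration of that origin-binding idea only: real passkeys use asymmetric WebAuthn/FIDO2 signatures, and the shared HMAC key here is a stand-in for the demo, not how Bitwarden or Windows implement it.

```python
import hashlib, hmac, json, secrets

# Toy sketch of passkey phishing resistance: the signed payload includes
# the origin, so a response produced on a look-alike domain fails
# verification. (HMAC with a shared key stands in for the real
# asymmetric WebAuthn signature.)

credential_key = secrets.token_bytes(32)

def sign_assertion(origin: str, challenge: bytes) -> bytes:
    client_data = json.dumps({"origin": origin, "challenge": challenge.hex()},
                             sort_keys=True).encode()
    return hmac.new(credential_key, client_data, hashlib.sha256).digest()

def verify_assertion(expected_origin: str, challenge: bytes,
                     claimed_origin: str, sig: bytes) -> bool:
    if claimed_origin != expected_origin:
        return False                          # origin-binding check
    expected = sign_assertion(claimed_origin, challenge)
    return hmac.compare_digest(expected, sig)

challenge = secrets.token_bytes(16)
good = sign_assertion("https://login.example.com", challenge)
bad = sign_assertion("https://login.examp1e.com", challenge)   # phishing page
assert verify_assertion("https://login.example.com", challenge,
                        "https://login.example.com", good)
assert not verify_assertion("https://login.example.com", challenge,
                            "https://login.examp1e.com", bad)
```

The same binding is why a credential scoped to the genuine site cannot be replayed by a fraudulent one, regardless of how convincing the page looks.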
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
TikTok will not adopt end-to-end encryption for direct messages. The company explained that using this technology could hinder safety teams’ and law enforcement’s efforts to detect harmful content in private messages, which the company believes could make users less safe online.
Encrypted messaging ensures that only the sender and recipient can read a conversation and is widely used across the social media industry. Rivals including Facebook, Instagram, Messenger, and X have adopted the technology, saying protecting private communication is central to user privacy.
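The end-to-end principle can be illustrated with a toy relay: the platform carries only ciphertext it cannot read, which is precisely the visibility TikTok says it wants to keep. The snippet below is a deliberately simplified sketch (an SHA-256-based XOR stream with a pre-shared key), not real messaging cryptography such as the Signal protocol.

```python
import hashlib, secrets

# Toy sketch of end-to-end encryption: only holders of the shared
# secret can decrypt, so a relaying platform sees ciphertext only.
# Illustrative code, not production cryptography.

shared_secret = secrets.token_bytes(32)   # in practice agreed via key exchange

def keystream(secret: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(secret: bytes, plaintext: bytes):
    nonce = secrets.token_bytes(16)
    cipher = bytes(a ^ b for a, b in zip(plaintext, keystream(secret, nonce, len(plaintext))))
    return nonce, cipher

def decrypt(secret: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ciphertext, keystream(secret, nonce, len(ciphertext))))

nonce, wire = encrypt(shared_secret, b"meet at 6")   # what the server relays
assert decrypt(shared_secret, nonce, wire) == b"meet at 6"
```

Without the shared secret, the relay can store and forward `wire` but cannot recover the plaintext, which is the property platform safety teams lose under end-to-end encryption.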
The issue has become more sensitive because the platform has long faced scrutiny over possible links between its parent company, ByteDance, and the government of the People’s Republic of China, something the company has repeatedly denied. Reflecting these concerns, earlier this year, US lawmakers ordered the separation of TikTok’s US operations from its global business.
The company told the BBC that encrypted messaging would make it impossible for police and platform safety teams to read direct messages when needed. TikTok emphasised that this decision was made to enhance user protection, with a particular focus on the safety of younger users, and that it sees monitoring capabilities as crucial for addressing harmful behaviour.
Industry analyst Matt Navarra said the platform’s decision to ‘swim against the tide’ is ‘notable’ but presents ‘challenging optics’. He noted, ‘Grooming and harassment risks are present in DMs [direct messages], so TikTok can state it is prioritising proactive safety over privacy absolutism,’ though he added that the decision ‘places TikTok out of alignment with global privacy expectations’.
Businesses across the US and Europe are confronting new privacy risks as AI transcription tools spread through workplaces. Tools that automatically record and transcribe meetings increasingly capture sensitive conversations without clear consent.
Privacy specialists note that organisations previously focused on rules controlling what employees upload into AI systems; governance efforts are now shifting towards monitoring what AI tools record during daily work.
AI services such as Otter, Zoom transcription and Microsoft Copilot can record discussions involving performance reviews, health information and legal matters. Companies face legal exposure when third-party platforms store recordings without strict controls.
Governance teams are being urged to introduce clear rules on meeting recordings and transcript retention. Stronger policies may include consent requirements, limits on recording sensitive meetings and stricter oversight of data storage.
Ransomware attacks are increasingly relying on stolen passwords rather than traditional malware, according to Cloudflare’s latest annual threat report. Attackers now exploit legitimate account credentials to blend into regular traffic, making breaches harder to detect and contain.
Manufacturing and critical infrastructure organisations account for over half of targeted attacks, reflecting their high operational stakes.
Cloudflare highlighted that AI is enabling attackers to prioritise speed and scale over technical sophistication. Generative AI lets criminals automate fraud, hijacking email threads and targeting a ~$49,000 sweet spot to maximise profit while avoiding scrutiny.
Nation-state actors also leverage legitimate platforms for command-and-control operations, with Russia, China, Iran, and North Korea each following distinct cyber strategies.
Researchers warned that modern ransomware is less a malware crisis and more an identity and access challenge. Attackers using authorised credentials can bypass defences and execute high-impact extortion, marking a significant shift in global threat vectors.
The report urges businesses to strengthen identity security, monitor access, and defend against AI-driven attacks that exploit impersonation and automation at scale.
The civil liberties committee failed to secure majority backing for its amended report on extending the EU’s temporary chat-scanning rules, leaving Parliament without a clear negotiating position.
Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.
At stake is a proposal to extend the current derogation that allows tech companies to voluntarily scan their services for Child Sexual Abuse Material (CSAM).
The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.
The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.
With trilogue talks finally underway, institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose their ability to scan services under the EU law.
The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments, an outcome that will determine whether the temporary regime remains in place while negotiations on the permanent system continue.
Social platform X has released a standalone version of its private messaging service, X Chat, via Apple’s TestFlight. The initial beta reached capacity within two hours, reflecting strong early demand among iOS users eager to trial the new app.
Michael Boswell confirmed that the first 1,000 places were quickly expanded to 5,000, with further growth expected. Development has been ongoing for several months, and testers have been urged to stress-test the product and submit detailed feedback.
Early screenshots suggest a cleaner interface and possible rebranding to ‘xChat’.
Security claims remain under scrutiny, as experts question whether X Chat’s encryption matches established platforms such as Signal. Clear evidence addressing those concerns in the standalone build has yet to emerge.
The launch of the separate app marks a notable shift from Elon Musk’s earlier ambition to integrate messaging, payments, and content into a single ‘everything app’.
Chats will synchronise across X, its web platform chat.x.com, and the new iOS app, while an Android version is expected soon.
A deepfake video of Bombay Stock Exchange chief executive Sundararaman Ramamurthy circulated on social media in India, falsely offering stock advice to investors. The exchange moved quickly to report and remove the content, warning the public not to trust fake investment clips.
Cybersecurity experts say such cases are rising sharply, with one US firm estimating a 3,000 percent increase in deepfake incidents over two years. Executives in the US and the UK have also been impersonated using AI-generated audio and video.
In Hong Kong, police said a UK engineering firm lost $25m after an employee joined a video call featuring deepfake versions of senior colleagues. The transfer was made to multiple accounts before the fraud was discovered.
Security companies in the US and the UK are developing detection tools that analyse facial movement and blood flow patterns to identify AI-generated footage. Analysts warn that as costs fall and tools improve, businesses in India, Hong Kong and beyond face an escalating arms race against digital fraud.
Google has outlined a plan to strengthen Chrome’s HTTPS security against future quantum-computing threats. Rather than expanding traditional X.509 certificate chains in Chrome with post-quantum cryptography, the company is developing a new model based on Merkle Tree Certificates (MTCs).
The proposal from the PLANTS working group seeks to modernise the web public key infrastructure. Under the MTC model, a Certification Authority signs a single ‘Tree Head’ covering many certificates. Browsers receive a lightweight proof instead of a full certificate chain.
Google said this structure reduces authentication data exchanged during TLS handshakes while supporting post-quantum algorithms. By decoupling cryptographic strength from certificate size, the approach seeks to preserve performance as stronger security standards are adopted.
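The core mechanism is a Merkle inclusion proof: the CA signs only the tree root (the ‘Tree Head’), and each certificate is validated against it with a short path of sibling hashes rather than a full chain. The sketch below illustrates the idea in a few lines; the names and layout are illustrative and do not follow the actual MTC wire format.

```python
import hashlib

# Minimal Merkle inclusion proof: sign one root, prove membership of
# any leaf with log2(n) sibling hashes. Illustrative sketch only.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of the tree, bottom (leaf hashes) to top (root)."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                      # duplicate last node if odd
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def proof_for(levels, index):
    """Collect the sibling hash (and side) at each level for one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))   # sibling, am-I-right-child
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

certs = [f"cert-{i}".encode() for i in range(5)]
levels = build_tree(certs)
root = levels[-1][0]                          # the signed "Tree Head"
assert verify(certs[3], proof_for(levels, 3), root)
```

This is why authentication data shrinks: the browser needs only the signed root plus a handful of hashes, however many certificates the tree covers.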
The company is already testing MTCs with real internet traffic. Phase one involves feasibility studies with Cloudflare, while phase two, in early 2027, will invite selected Certificate Transparency log operators to support initial public deployment.
By the third quarter of 2027, Google plans to establish requirements for onboarding certificate authorities to the quantum-resistant Chrome Root Store, which exclusively supports MTCs. The company described the initiative as foundational to maintaining long-term web security resilience.
Hundreds of academics have urged governments to halt plans for mandatory age checks on social media, warning against accelerating deployment before the risks have been assessed.
Researchers argue that, instead of offering meaningful protection, current systems expose people to privacy breaches, security vulnerabilities and malicious sites that ignore verification rules.
They say scientific consensus has not yet formed on the benefits or harms of age-assurance technologies, making large-scale implementation premature and potentially discriminatory.
The letter stresses that any credible system would require cryptographic safeguards for every query, protecting data in transit rather than leaving identity checks to platforms without robust technical guarantees.
Academics believe such infrastructure would be complex to build globally and would create friction that many providers may refuse to adopt.
Concern escalated after early deployments in Italy and France, where verification is already mandatory.
Signatories, including Ronald Rivest and Bart Preneel, warn that governments risk introducing a socially unacceptable system that increases exposure to data misuse instead of ensuring children’s safety online.