OpenAI privacy model sets new standard for AI data protection

US AI research company OpenAI has introduced the OpenAI Privacy Filter, a specialised AI system designed to detect and redact personally identifiable information (PII) in text with high accuracy.

The model is part of broader efforts to strengthen privacy-by-design practices in AI development, offering developers a practical tool to embed data protection directly into workflows rather than relying on external processing systems.

Unlike traditional rule-based systems, the model applies contextual language understanding to identify sensitive information in unstructured text. It processes inputs in a single pass and supports long-context analysis, enabling efficient handling of large documents.

Local deployment further reduces exposure risks, allowing sensitive data to remain on-device rather than being transmitted to external servers.

Performance benchmarks indicate near frontier-level capability, with strong precision and recall scores across standard evaluation datasets.

The system detects multiple categories of private data, including personal identifiers, financial information, and confidential credentials, while allowing developers to adjust detection thresholds according to operational requirements.
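
To make this concrete, the sketch below shows how such a filter might be used locally: a token-classification model flags spans of sensitive text, and per-category confidence thresholds decide what is redacted. OpenAI has not published the Privacy Filter's actual interface, so the snippet substitutes a generic open-source NER checkpoint served through the Hugging Face transformers pipeline, and the categories and thresholds are illustrative assumptions.

```python
# Minimal sketch: local PII-style redaction with per-category score thresholds.
# "dslim/bert-base-NER" is a generic public NER checkpoint standing in for the
# actual Privacy Filter, whose interface OpenAI has not published; the
# categories and thresholds below are illustrative.
from transformers import pipeline

# Running the pipeline locally keeps the raw text on-device.
detector = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",  # merge sub-word tokens into whole spans
)

# Hypothetical per-category confidence thresholds a developer might tune.
THRESHOLDS = {"PER": 0.50, "ORG": 0.70, "LOC": 0.70}

def redact(text: str) -> str:
    """Replace detected spans with [CATEGORY] placeholders in a single pass."""
    spans = [
        s for s in detector(text)
        if s["score"] >= THRESHOLDS.get(s["entity_group"], 0.5)
    ]
    # Replace from the end of the string so earlier offsets stay valid.
    for s in sorted(spans, key=lambda s: s["start"], reverse=True):
        text = text[: s["start"]] + f"[{s['entity_group']}]" + text[s["end"]:]
    return text

print(redact("Contact Jane Doe at Acme Corp in Berlin."))
# e.g. "Contact [PER] at [ORG] in [LOC]."
```

Because the pipeline runs entirely on the local machine, no text leaves the device, matching the on-device deployment described above.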

Despite its capabilities, the model is positioned as one component within a wider privacy framework instead of a standalone compliance solution.

Human oversight remains necessary in high-risk domains such as legal or financial processing.

The release reflects a shift towards smaller, specialised AI systems designed to address targeted challenges in real-world deployments while maintaining adaptability and transparency.

Canada reviews Privacy Act to modernise data protection and digital governance

The Government of Canada has launched a formal review of the Privacy Act, opening a broader effort to modernise how the federal public sector governs personal data in an increasingly digital administrative environment.

Led by the Treasury Board of Canada Secretariat and announced by Shafqat Ali, President of the Treasury Board, the process will reassess how more than 250 government institutions collect, use, share, and protect personal information.

The review places particular emphasis on improving how data is managed across government programmes, with reform proposals focused on more secure information-sharing, less duplication, and greater accuracy in public administration. Canadian authorities say the aim is to introduce designated official data sources while ensuring that any reuse of personal information serves individuals directly or delivers a clear public benefit.

The process also points to more structural changes, including recognising privacy as a fundamental right and aligning legal definitions more closely with international standards. It is further intended to harmonise procedures for accessing personal information and to update the federal privacy framework to support a more connected digital state.

Consultations will continue through mid-2026, with feedback expected to feed into a final report in winter 2026–27. Taken together, the review suggests that Canada is rethinking how privacy protection, public-sector data sharing, and institutional accountability should operate in a modern digital governance system.

Amnesty International warns EU tech law reforms could weaken GDPR and AI Act protections

Amnesty International has warned that proposed EU reforms, presented as a way to simplify digital regulation and boost competitiveness, could weaken core safeguards for privacy and fundamental rights.

At the centre of these concerns is the European Commission’s ‘Digital Omnibus’ initiative, which would affect major pieces of legislation, including the General Data Protection Regulation (GDPR) and the AI Act.

Amnesty and other civil society groups argue that the package risks reopening key protections in the EU’s digital rulebook under the banner of regulatory simplification.

Among the most controversial proposals are changes to how personal data is defined, along with exceptions that could make it easier for companies to retain or reuse data for AI systems. Critics say that such changes would weaken safeguards intended to limit excessive data collection and to preserve accountability in how personal information is processed.

Concerns also extend to the AI Act, where proposed adjustments could reduce obligations for high-risk systems. According to Amnesty, companies may be given greater discretion in how they assess and disclose risks, potentially lowering transparency and limiting external scrutiny.

Delays in implementation, the organisation argues, could also allow harmful systems to remain in use without full regulatory oversight.

The broader reform agenda may reach beyond privacy and AI rules. Future ‘fitness checks’ could also affect frameworks such as the Digital Services Act and the Digital Markets Act, raising wider concerns about whether the EU’s digital regulatory model is being softened in the name of competitiveness.

For critics, the cumulative risk is that the balance of the EU digital framework could begin to shift away from rights protection and public accountability, and towards greater corporate flexibility in areas linked to surveillance, discrimination, and market power.

EU child safety rules lapse amid ongoing debate over privacy and enforcement

The European Union has been unable to reach an agreement on extending temporary rules that allow online platforms to detect child sexual abuse material, leaving the current framework set to expire in April.

Discussions between the European Parliament and the Council of the European Union concluded without reaching a consensus on how to proceed with such measures.

The existing rules permit technology companies to voluntarily scan their services for harmful content, supporting efforts to identify and remove illegal material.

The European Commission had proposed a temporary extension while negotiations continue on a permanent framework under the Child Sexual Abuse Regulation, but differing views on scope and safeguards prevented agreement.

Stakeholders across sectors have highlighted the importance of maintaining effective tools to address online harms, while also emphasising the need to respect fundamental rights.

Previous periods of legal uncertainty have shown that detection capabilities may be affected when such frameworks are absent, although assessments of effectiveness remain subject to ongoing debate.

At the same time, concerns have been raised regarding the broader implications of monitoring digital communications. Some perspectives stress that any approach should carefully consider privacy protections, particularly in relation to secure and encrypted services.

Attention now turns to ongoing negotiations on a long-term regulatory solution.

The outcome will shape how the EU approaches the challenge of addressing harmful online content while safeguarding rights and ensuring proportional and transparent enforcement.

EU calls on US tech firms to respect rules on handling staff data

Concerns over data protection have intensified as the European Commission calls on major technology companies to apply EU standards when handling sensitive staff information linked to digital regulation.

Pressure follows requests from the US House Judiciary Committee seeking access to communications between US firms and EU officials involved in enforcing laws such as the Digital Services Act and the Digital Markets Act.

EU officials emphasise that formal exchanges with companies take place through official channels, including documented correspondence, rather than informal messaging platforms. Internal communication practices may involve encrypted tools, reflecting growing concerns about data security and external scrutiny.

Debate surrounding the issue reflects wider tensions between the EU and the US over digital governance, privacy protections and regulatory authority. Questions over jurisdiction and access to sensitive communications are likely to remain central as transatlantic tech policy evolves.

EU privacy watchdogs warn over US plans to expand traveller data collection

European privacy authorities have raised concerns about proposed changes to the Electronic System for Travel Authorisation that could require travellers to the US to disclose extensive personal information, including social media activity.

The European Data Protection Board, which coordinates national data protection authorities across the EU, sent a letter to the European Commission asking whether the institution plans to intervene or respond to the updated requirements.

The proposal would apply to visitors entering the US through the visa-waiver programme for short stays of up to 90 days.

Under the proposed changes, travellers may be required to provide details about their social media accounts covering the previous five years.

Authorities could also request personal data about family members, including addresses, phone numbers and dates of birth. Privacy regulators argue that such information is unrelated to travel authorisation.

Watchdogs also questioned how EU citizens could exercise their data protection rights once such information is transferred to US authorities, particularly regarding storage periods and potential misuse.

Parallel negotiations between the EU and the US have also attracted attention.

Discussions around a potential Enhanced Border Security Partnerships framework could allow US authorities to seek access to biometric databases held by European countries, including facial scans and fingerprint records.

European privacy regulators warned that such measures could raise significant concerns regarding fundamental rights and personal data protection for travellers from the EU.

Hackers can use AI to de-anonymise social media accounts

AI technology behind platforms like ChatGPT is making it significantly easier for hackers to identify anonymous social media users, a new study warns. The researchers found that large language models (LLMs) can match anonymised accounts to real identities by analysing users’ posts across platforms.

Researchers Simon Lermen and Daniel Paleka warned that AI enables cheap, highly personalised privacy attacks, urging a rethink of what counts as private online. The study highlighted risks from government surveillance to hackers exploiting public data for scams.
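
To make the risk concrete, the sketch below shows one simplified way such matching can work: embed the public posts of an anonymous account and of known candidate accounts, then rank candidates by cosine similarity. It uses the open-source sentence-transformers library with placeholder post texts; this is an illustration of the general technique, not the study’s actual method.

```python
# Simplified illustration of embedding-based account matching; not the exact
# method from the Lermen & Paleka study. Post texts here are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def profile(posts: list[str]) -> np.ndarray:
    """Average normalised post embeddings into one style/content profile."""
    vecs = model.encode(posts, normalize_embeddings=True)
    mean = vecs.mean(axis=0)
    return mean / np.linalg.norm(mean)

# Posts from the account we are trying to identify.
anonymous = profile(["placeholder post from the anonymous account"])

# Public posts from known candidate identities.
candidates = {
    "alice": profile(["placeholder public post by alice"]),
    "bob": profile(["placeholder public post by bob"]),
}

# Rank candidates by cosine similarity (dot product of unit vectors).
ranked = sorted(candidates,
                key=lambda name: float(anonymous @ candidates[name]),
                reverse=True)
print(ranked)  # most likely match first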

Experts caution that AI-driven de-anonymisation is not flawless. Errors in linking accounts could wrongly implicate individuals, while public datasets beyond social media, such as hospital or statistical records, may be exposed to unintended analysis.

Users are urged to reconsider what information they share, and platforms are encouraged to limit bulk data access and detect automated scraping.
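
As one concrete example of the platform-side mitigations mentioned above, a service might enforce a sliding-window rate limit on profile reads so that bulk scraping trips a throttle. The sketch below is hypothetical: the window size, read budget, and allow_request helper are assumptions, not any platform’s real API.

```python
# Illustrative sliding-window rate limit on profile reads; the window size,
# budget, and allow_request helper are hypothetical, not a real platform API.
import time
from collections import defaultdict, deque

WINDOW_S = 60    # look-back window in seconds
MAX_READS = 100  # profile reads allowed per client within the window

_history: dict[str, deque] = defaultdict(deque)

def allow_request(client_id: str) -> bool:
    """Return False once a client exceeds its per-window read budget."""
    now = time.monotonic()
    q = _history[client_id]
    while q and now - q[0] > WINDOW_S:
        q.popleft()  # drop timestamps that have left the window
    if len(q) >= MAX_READS:
        return False  # likely bulk scraping: throttle or challenge
    q.append(now)
    return True

print(allow_request("client-42"))  # True until the budget is exhausted
```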

The study underscores growing concerns about AI-driven surveillance. While the technology cannot guarantee complete de-anonymisation, its rapidly improving capabilities demand stronger safeguards to protect privacy online.

Privacy lawsuit targets Meta AI glasses after reports of footage review

Meta is facing a new lawsuit in the US over privacy concerns tied to its AI smart glasses.

The legal complaint follows investigative reporting indicating that contractors working for a Kenya-based subcontractor reviewed footage captured by users’ devices, including sensitive personal scenes.

The lawsuit alleges that some of the reviewed material included nudity and other intimate activities recorded by the glasses’ cameras.

According to the complaint, the footage formed part of a data review process designed to improve the AI system integrated into the wearable device.

Plaintiffs claim Meta marketed the product as prioritising user privacy, citing advertisements suggesting that the glasses were ‘designed for privacy’ and that users remained in control of their personal data.

The complaint argues that such messaging could mislead consumers if the footage were subject to human review without clear disclosure.

The legal action also names eyewear manufacturer Luxottica, which partnered with Meta to produce the glasses.

Meanwhile, the UK’s Information Commissioner’s Office has begun examining the issue after reports that face-blurring safeguards may not have consistently protected individuals captured in the recordings.

Debate grows over the future of privacy

Experts gathered in London, UK, to examine how the concept of privacy has evolved over centuries. Discussions highlighted that privacy was only widely recognised as a legal and social norm after the Second World War.

Speakers noted that earlier societies often viewed privacy with suspicion or did not recognise it at all. Historical examples discussed included practices from Roman society and the French monarchy.

Modern legal protections expanded rapidly in recent decades, with privacy laws now covering about 80 percent of the global population. Scholars said the concept remains relatively new despite its central role in modern democracies.

The debate also explored whether privacy will remain a stable social value as technology evolves. Analysts said emerging technologies such as AI are reshaping debates over personal data and surveillance.

Calls grow to strengthen New Zealand privacy law

Pressure is growing in New Zealand to strengthen the Privacy Act following several high-profile data breaches. Debate intensified after a cyberattack exposed medical records from the Manage My Health patient portal.

The breach affected about 120,000 patients and involved threats to release documents on the dark web. Another incident forced the MediMap medication platform offline after unauthorised changes were detected in patient records.

Privacy specialists argue that current enforcement powers are too weak to deter serious failures. The Privacy Act allows only limited financial penalties, with fines generally capped at NZD10,000.

Officials are now considering reforms, including stronger penalties for privacy violations. Policymakers also warn that failure to strengthen the law could threaten the country’s EU data adequacy status.
