OpenAI privacy model sets new standard for AI-data protection

The US AI research company OpenAI has introduced the OpenAI Privacy Filter, a specialised AI system designed to detect and redact personally identifiable information in text with high accuracy.

The model is part of broader efforts to strengthen privacy-by-design practices in AI development, offering developers a practical tool to embed data protection directly into workflows rather than relying on external processing systems.

Unlike traditional rule-based systems, the model applies contextual language understanding to identify sensitive information in unstructured text. It processes inputs in a single pass and supports long-context analysis, enabling efficient handling of large documents.

Local deployment further reduces exposure risks, allowing sensitive data to remain on-device rather than being transmitted to external servers.

Performance benchmarks indicate near frontier-level capability, with strong precision and recall scores across standard evaluation datasets.

The system detects multiple categories of private data, including personal identifiers, financial information, and confidential credentials, while allowing developers to fine-tune detection thresholds according to operational requirements.
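
Per-category threshold tuning of the kind described can be illustrated with a minimal sketch; the categories, confidence scores, and threshold values below are hypothetical:

```python
# Hypothetical detections: (text span, category, model confidence score).
detections = [
    ("jane.doe@example.com", "EMAIL", 0.99),
    ("4111 1111 1111 1111", "FINANCIAL", 0.91),
    ("Jane", "NAME", 0.55),
]

# Lower thresholds favour recall (fewer leaks, more false redactions);
# higher ones favour precision. Operators tune these per category.
thresholds = {"EMAIL": 0.80, "FINANCIAL": 0.50, "NAME": 0.70}

def filter_detections(detections, thresholds, default=0.90):
    """Keep detections scoring at or above their category's threshold."""
    return [d for d in detections
            if d[2] >= thresholds.get(d[1], default)]

kept = filter_detections(detections, thresholds)
# kept retains the EMAIL and FINANCIAL spans; NAME (0.55 < 0.70) is dropped.
```

Lowering the FINANCIAL threshold relative to NAME, as here, reflects a choice to redact aggressively where a missed detection is costliest.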

Despite its capabilities, the model is positioned as one component within a wider privacy framework instead of a standalone compliance solution.

Human oversight remains necessary in high-risk domains such as legal or financial processing.

The release reflects a broader shift towards smaller, specialised AI systems designed to address targeted challenges in real-world deployments while maintaining adaptability and transparency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online safety agreement signed by eSafety and OAIC in Australia

Australia’s eSafety Commissioner and the Office of the Australian Information Commissioner have signed a memorandum of understanding to strengthen cooperation on issues where online safety and privacy intersect.

The agreement formalises communication pathways between the two regulators and builds on existing collaboration. It covers matters including age-assurance requirements under Australia’s online industry codes and standards, as well as compliance by age-restricted platforms with Social Media Minimum Age obligations.

eSafety Commissioner Julie Inman Grant stated: ‘Both regulators have always recognised that combatting certain harms requires privacy and safety to go hand in hand. For example, at eSafety we knew from the outset our implementation of the Social Media Minimum Age would need to recognise important rights, including the right to privacy.’

She added: ‘Our commitment to continue working collaboratively with the OAIC gives formal recognition to that principle and sets out how we will balance and promote privacy and safety for everyone.’

Inman Grant also linked the agreement to emerging risks associated with new technologies and wider regulatory requirements around age assurance. She continued: ‘It comes at an important time, when the proliferation of new technologies like artificial intelligence is amplifying risks and we are increasingly requiring industry to deploy age-assurance technologies that meet their regulatory obligations and respect privacy in the Australian context.’

Australian Information Commissioner Elizabeth Tydd said the memorandum would support the OAIC’s work in monitoring and responding to emerging online privacy risks and help both agencies deliver their statutory functions under the Online Safety Act.

Tydd added: ‘With this memorandum, we’re not only formalising cooperation, but building a foundation where privacy protections and online safety initiatives can better address specific harms side by side, ensuring Australians can be protected when interacting online.’

Why does it matter?

A growing number of online safety measures now depend on systems that also raise privacy questions, especially age-assurance tools and other platform controls involving personal data. The agreement gives both regulators a clearer basis for coordinating oversight as Australia expands enforcement around child safety, platform obligations, and emerging technologies such as AI.

Ofcom releases public 4chan decision under UK online safety rules

Ofcom has published a non-confidential version of its confirmation decision against 4chan, giving a fuller public account of one of the UK regulator’s early enforcement actions under the Online Safety Act.

The decision concerns 4chan.org and sets out Ofcom’s findings that the platform failed to comply with several duties under the Act. According to the regulator, those failures included failing to carry out a suitable and sufficient illegal content risk assessment, failing to clearly set out in its terms of service how users are to be protected from illegal content, and failing to use highly effective age assurance to prevent children from encountering pornographic content.

Ofcom said 4chan must now take a series of corrective steps, including completing an illegal content risk assessment, updating its terms of service, and implementing robust age assurance measures. The regulator also imposed separate financial penalties linked to each breach, including a substantially larger penalty connected to the child protection requirement.

The case is significant because it shows the Online Safety Act moving from general compliance expectations into concrete enforcement. Rather than only warning platforms about their duties, Ofcom is now publicly setting out what it considers to be specific operational failures and attaching financial consequences to them.

The decision also underlines the regulator’s broader approach to compliance. Ofcom has indicated that further daily penalties can apply after the relevant deadlines if required actions are not taken, showing that enforcement is not limited to one-off fines but can escalate where platforms continue to fall short.

More broadly, the publication of the decision provides platforms with a clearer signal of what enforcement under the Act is likely to look like. The 4chan case suggests that Ofcom is focusing not only on the presence of harmful or illegal content itself, but also on whether platforms have the systems, rules, and protective measures in place that the law requires.

Employee monitoring grows at Meta as AI overhaul accelerates

Meta has introduced a new internal tool to track employee activity, including keystrokes and mouse movements, as part of efforts to train its AI systems. The company says the data will help improve AI models designed to perform everyday digital tasks.

According to company statements, the tracking is limited to Meta-owned devices and applications, with safeguards in place to protect sensitive information. The initiative reflects a broader strategy to gather real-world usage data to enhance the performance and accuracy of AI tools.

The move has raised concerns among employees, some of whom view the monitoring as intrusive, particularly amid ongoing job cuts and reduced hiring. Reports indicate that Meta has significantly scaled back recruitment while increasing investment in AI development.

The company has committed substantial resources to AI, with plans to expand spending and accelerate model development. Internal tracking is positioned as part of a broader shift toward automation, as firms seek to reshape workflows and productivity through AI.

The development highlights growing tensions between AI innovation and workplace privacy. Increased reliance on employee data to train AI systems may reshape labour practices, raising questions about surveillance, consent, and the balance between technological advancement and workers’ rights.

Law Society conference highlights GDPR’s role in regulating AI tools

GDPR obligations remain ‘fundamental’ when addressing data protection issues linked to AI tools, according to legal experts speaking on 20 April at a conference organised by the Law Society of Ireland’s Intellectual Property and Data Protection Commission. The event reviewed recent legislative developments, case law, and the use of AI tools in the workplace.

Olivia Mullooly, partner at Arthur Cox, said regulation in the area remains a ‘moving feast’ amid ongoing negotiations on the EU Digital Omnibus. She added that GDPR has been effective in regulating new and novel activities by AI companies, and continues to overlap with other regulatory frameworks.

In a panel discussion, Bird & Bird partner Deirdre Kilroy said firms should not ignore fundamental GDPR principles when using AI. She also noted that organisations should not delay compliance actions despite shifting regulatory conditions.

Speakers also discussed uncertainty around evolving EU rules and the increasing complexity of compliance. The Data Protection Commission reported a rise in AI-related engagements, which accounted for one in four cases last year, up from one in 35 in 2021.

UK’s ICO outlines personal data use in elections

The UK Information Commissioner’s Office has issued guidance on the use of personal data during the upcoming local elections. The publication aims to inform voters about their rights and expectations.

According to the Office, personal data plays a central role in political campaigning, helping parties communicate with voters and understand public concerns. The regulator emphasises that trust depends on lawful and transparent data use.

The guidance states that voters should expect clear explanations of how their data is used, including when profiling or targeted advertising is involved. Political organisations must provide accessible privacy information and follow data protection rules.

The Information Commissioner’s Office also highlights that individuals have the right to question or object to data use, reinforcing accountability during election campaigns in the UK.

NOYB highlights access request failures

NOYB and the European Centre for Digital Rights report that a large share of data access requests are not properly handled under existing rules. The findings are presented as part of a broader assessment of compliance with data protection obligations.

According to the report, 83.5 percent of access requests reviewed were either ignored or inadequately answered. This raises concerns about how organisations implement user rights under the GDPR in practice.

The analysis suggests that individuals face barriers when trying to obtain information about how their personal data is processed. These issues may undermine transparency and accountability in data handling.

Both organisations call for stronger enforcement and clearer obligations to ensure organisations comply with access rights requirements across the European Union.

Tax Practitioners Board of Australia ends submissions on AI draft for tax agents

Australia’s Tax Practitioners Board has closed submissions on its exposure draft on the use of AI and the Code of Professional Conduct. The draft information sheet, TPB(I) D62/2026, was issued on 23 March 2026 and invited comments within 28 days.

According to the exposure draft, the guidance is intended to help registered tax agents and BAS agents understand their obligations under the Tax Agent Services Act 2009 of Australia when using AI in the provision of tax agent services. The document says it focuses in particular on obligations under the Code of Professional Conduct and the Tax Agent Services (Code of Professional Conduct) Determination 2024.

The draft says tax practitioners remain ultimately responsible for the services they provide and must understand the capabilities and limitations of AI tools, assess outputs, and supplement them with professional judgement. It adds that AI outputs should inform, not replace, tax knowledge, experience, or expertise.

On competency, the draft says tax practitioners must ensure services are provided competently, maintain relevant knowledge and skills, take reasonable care in ascertaining a client’s state of affairs, and take reasonable care to ensure taxation laws are applied correctly. It also says practitioners should verify AI-generated content for accuracy and establish processes to understand and contest AI decisions or outputs.

The exposure draft also addresses confidentiality. It says tax practitioners must not disclose information relating to a client’s affairs to a third party without the client’s permission, and notes that this may include entering client information into AI chatbots or copilots, depending on how those tools are configured and used. It also says practitioners should review commercial AI tools to ensure client information will be kept secure and that Privacy Act 1988 requirements are met.

EDPB adopts scientific research data guidelines and Europrivacy opinions

The European Data Protection Board (EDPB) has adopted guidelines on the processing of personal data for scientific research purposes during its latest plenary, and opened them for public consultation until 25 June. The Board also created a dedicated ‘sprint team’ to complete its upcoming guidelines on anonymisation by the summer.

According to the EDPB, the new guidelines are intended to provide researchers with greater clarity on how the General Data Protection Regulation (GDPR) applies to scientific research while protecting individuals’ fundamental rights. The Board says the text clarifies the meaning of ‘scientific research’ under the GDPR and sets out six indicative factors to help determine whether processing is carried out for scientific research purposes.

The guidelines also explain that further processing for scientific research purposes is presumed to be compatible with the initial purpose for collecting personal data, meaning controllers do not need to carry out the GDPR purpose compatibility test. The EDPB says controllers must still ensure that the legal basis for the initial processing is also suitable for the further processing of personal data for scientific research purposes.

EDPB Chair Anu Talus said: ‘Scientific research can drive societal progress and improve our daily lives. Our guidelines facilitate innovative research by helping researchers to navigate the GDPR. The EDPB is committed to supporting the scientific community and unlocking the full potential of scientific research in the EU while upholding data protection rights.’

On consent, the Board says controllers may rely on ‘broad consent’ when research purposes are not fully known at the time of data collection, provided appropriate safeguards are in place. It also says controllers may seek consent separately for individual research projects once their purposes become known, and that a combination of broad and dynamic consent is possible.

The guidelines also address the rights of individuals, including the rights to erasure and to object, and explain when limitations may apply in the context of scientific research. The EDPB says the text also clarifies how responsibilities should be allocated when several entities are involved in processing, and outlines safeguards such as anonymisation or pseudonymisation, secure processing environments, privacy-enhancing technologies, confidentiality arrangements, and conditions for further use.

In addition, the Board adopted two opinions on two sets of Europrivacy certification criteria for approval as European Data Protection Seals. One opinion approves an updated set of criteria whose scope now includes controllers and processors established outside Europe that are subject to Article 3(2) GDPR.

The second, adopted for the first time, recognises Europrivacy certification criteria as a European Data Protection Seal that can be used as a tool for transfers under Articles 42 and 46 GDPR. According to the EDPB, this will allow data importers outside Europe that are not subject to the GDPR to apply to the Europrivacy certification scheme for transferred data they receive.

Researchers flag risks in EU AI changes

A research paper by Hannah van Kolfschooten, Barry Solaiman and Daria Onitiu examines how recent European Union policy proposals could affect safeguards for medical AI under the EU AI Act. The study focuses on changes linked to broader simplification initiatives.

According to the authors, the reforms could maintain the classification of AI-enabled medical devices as high risk while removing key obligations tied to that classification. These include requirements on data governance, risk management and human oversight.

The paper argues that this shift would separate risk classification from the safeguards that give it practical meaning. It suggests that reliance may move back towards existing medical device laws without equivalent AI-specific protections.

The authors warn that such changes could weaken oversight, increase legal uncertainty and affect patient safety where AI systems influence clinical decisions in the European Union.
