Australia aligns privacy and online safety regulation

Australia’s eSafety Commissioner and the Office of the Australian Information Commissioner have signed a new agreement to strengthen cooperation on online privacy and safety regulation.

The Memorandum of Understanding formalises coordination between the two bodies as digital risks increasingly overlap across their respective mandates.

The agreement focuses on joint oversight of age-assurance technologies and compliance with social media minimum-age requirements. Both regulators say they want to ensure that systems designed to protect children from harmful or inappropriate content also respect privacy obligations under Australian law.

Officials also highlighted the growing complexity of online risks, particularly with the rapid development of AI and other emerging technologies. The framework is intended to support more consistent regulatory responses by improving communication, information sharing, and enforcement coordination.

Why does it matter?

Officials from both agencies said closer collaboration will help address digital harms more effectively while ensuring privacy protections remain central to online safety measures. The initiative reflects a broader shift towards more integrated regulation of technology-driven risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Meta expands parental oversight with new AI conversation insights for teens

Meta has introduced new supervision features that allow parents to see the topics their teenagers discuss with its AI assistant across Facebook, Messenger, and Instagram.

The update provides visibility into activity over the previous seven days, grouping interactions into areas such as education, health and well-being, lifestyle, travel, and entertainment. Parents can review these themes through a new Insights tab, although they will not see the exact prompts their teen sent or Meta AI’s responses.

The feature forms part of Meta’s broader effort to strengthen safeguards for younger users as AI becomes more embedded in everyday digital experiences. For more sensitive issues, including suicide and self-harm, Meta says it is developing additional alerts to notify parents when teens try to engage in those types of conversations with its AI assistant.

Meta has also partnered with external experts, including the Cyberbullying Research Centre, to develop structured conversation prompts to help families talk about AI use. The company says these tools are intended to support informed, non-judgemental dialogue rather than passive monitoring.

Alongside these updates, Meta has created an AI Wellbeing Expert Council to provide input on the development of age-appropriate AI systems for teens. The move reflects a wider shift towards embedding safety, transparency, and parental involvement into AI-driven platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI introduces ChatGPT for Clinicians and HealthBench Professional

OpenAI has launched ChatGPT for Clinicians, a version of ChatGPT designed to support clinical tasks such as documentation, medical research, evidence review, and care consults. The company says the product is now available free to verified physicians, nurse practitioners, physician associates, and pharmacists in the United States.

According to OpenAI, ChatGPT for Clinicians includes trusted clinical search with cited answers, reusable skills for repeatable workflows, deep research across medical literature, optional HIPAA support through a Business Associate Agreement for eligible accounts, and the ability for eligible evidence review to count towards continuing medical education credits. OpenAI also says conversations in the product are not used to train models.

The launch builds on OpenAI’s earlier ChatGPT for Healthcare offering for organisations. OpenAI says clinicians across US health systems are already using that product for administrative work such as medical research and documentation, and describes the free clinician version as the next step in expanding access.

Alongside the launch, OpenAI has introduced HealthBench Professional, which it describes as an open benchmark for real-world clinician chat tasks across care consultation, writing, documentation, and medical research. The company says the benchmark is based on physician-authored conversations, multi-stage physician adjudication, and filtered examples selected for quality, representativeness, and difficulty.

OpenAI also says physician advisers reviewed more than 700,000 model responses in health scenarios, and that before release, clinicians tested 6,924 conversations across clinical care, documentation, and research.

According to the company, physicians rated 99.6% of those responses as safe and accurate, while GPT-5.4 in the ChatGPT for Clinicians workspace outperformed base GPT-5.4, other OpenAI and external models, and human physicians on HealthBench Professional. OpenAI adds that the tool is designed to support clinicians with information rather than replace their judgement or expertise.

The company says the free version is currently limited to verified US clinicians, with plans to expand access to additional countries and groups over time. OpenAI also says it will begin by working with the Better Evidence Network to pilot access for verified clinicians outside the United States, subject to local regulations, and has released a Health Blueprint with recommendations for responsible AI integration in US healthcare.

Why does it matter?

The launch of ChatGPT for Clinicians reflects a shift from general-purpose AI use in healthcare towards clinician-specific products tied to workflow, benchmarking, and compliance. It also shows that competition in medical AI is increasingly centred not only on model capability, but on safety evaluation, evidence retrieval, privacy controls, and integration into real clinical practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Italy issues guidelines requiring consent for email tracking pixels

Italy’s Data Protection Authority has issued new guidelines on tracking pixels used in email communications, requiring organisations to inform users and obtain consent before deploying the hidden monitoring tools.

Published on 17 April 2026 by the Garante per la Protezione dei Dati Personali, the guidelines address the invasive nature of tracking pixels, which silently monitor whether recipients open and read emails without their knowledge.

Tracking pixels are tiny, often invisible images embedded in emails that automatically send information back to the sender when recipients open the message. The pixels can collect data, including device type, IP address, and exact time of access.
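To make the mechanism concrete, the sketch below shows roughly how such a pixel is constructed. The endpoint, parameter names, and recipient identifier are hypothetical illustrations, not drawn from the guidelines:

```python
from urllib.parse import urlencode

def tracking_pixel_html(recipient_id: str, campaign: str) -> str:
    """Build a 1x1 invisible image tag pointing at a tracking endpoint.

    When the recipient's mail client loads the image, the sender's server
    receives the request and can log the recipient ID, the IP address,
    the user agent (device type), and the exact time of access.
    """
    params = urlencode({"r": recipient_id, "c": campaign})
    # example.com stands in for the sender's tracking server
    return (f'<img src="https://example.com/open.gif?{params}" '
            'width="1" height="1" style="display:none" alt="">')

html = tracking_pixel_html("user-4821", "spring-newsletter")
print(html)
```

Because the image is one pixel wide and hidden by CSS, the recipient has no visible indication that opening the email has reported anything back, which is why the guidelines treat the technique as requiring prior information and consent.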

The Authority identified limited exceptions to the consent requirement, including statistical measurements of email open rates, security protocols during user authentication, and mandatory institutional communications such as fraud alerts or contractual notifications.

The guidelines allow organisations six months from publication to achieve compliance with the new standards. Users in Italy must be able to revoke consent easily and granularly, meaning they can withdraw permission for tracking whilst continuing to receive emails.
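The granularity requirement can be illustrated with a minimal sketch: withdrawing tracking consent must leave email delivery untouched. The class and field names here are hypothetical, not taken from the guidelines:

```python
from dataclasses import dataclass

# Hypothetical model of granular, revocable consent for an email service:
# tracking consent can be withdrawn independently of delivery.
@dataclass
class EmailPreferences:
    receive_emails: bool = True
    allow_open_tracking: bool = False  # tracking requires opt-in consent

    def revoke_tracking(self) -> None:
        """Withdraw tracking consent without affecting email delivery."""
        self.allow_open_tracking = False

prefs = EmailPreferences(receive_emails=True, allow_open_tracking=True)
prefs.revoke_tracking()
print(prefs.receive_emails, prefs.allow_open_tracking)  # True False
```

The design point is that the two permissions are separate fields rather than one bundled flag, so revoking one cannot silently revoke the other.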

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Philippines and Bermuda sign data protection agreement to strengthen cross-border cooperation

The National Privacy Commission of the Republic of the Philippines has signed a memorandum of understanding with the Office of the Privacy Commissioner of the Islands of Bermuda to strengthen cooperation on personal data protection.

The agreement focuses on cross-border enforcement and regulatory collaboration, enabling the exchange of information on investigations and mutual assistance in addressing potential violations of data privacy laws. It also supports coordination in cross-border data breach cases.

The agreement outlines cooperation on developing compatible data protection mechanisms, including certification frameworks and trusted data flow systems. It also promotes training, knowledge sharing and collaboration on emerging privacy issues.

The authority states that the partnership between the Philippines and Bermuda aims to strengthen accountability and global data protection standards, and that the agreement was signed during an international privacy summit in Washington.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI privacy model sets new standard for AI-data protection

US R&D company OpenAI has introduced OpenAI Privacy Filter, a specialised AI system designed to detect and redact personally identifiable information in text with high accuracy.

The model is part of broader efforts to strengthen privacy-by-design practices in AI development, offering developers a practical tool to embed data protection directly into workflows rather than relying on external processing systems.

Unlike traditional rule-based systems, the model applies contextual language understanding to identify sensitive information in unstructured text. It processes inputs in a single pass and supports long-context analysis, enabling efficient handling of large documents.

Local deployment further reduces exposure risks, allowing sensitive data to remain on-device rather than being transmitted to external servers.

Performance benchmarks indicate near frontier-level capability, with strong precision and recall scores across standard evaluation datasets.

The system detects multiple categories of private data, including personal identifiers, financial information, and confidential credentials, while allowing developers to fine-tune detection thresholds according to operational requirements.
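By way of contrast with the model's contextual approach, a traditional rule-based detector can be sketched in a few lines. The patterns and category labels below are purely illustrative and unrelated to OpenAI's system:

```python
import re

# A deliberately simple rule-based PII redactor, the kind of approach
# contextual models are contrasted with. Patterns are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with its category placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-867-5309."))
# Contact [EMAIL] or [PHONE].
```

Fixed patterns like these miss context-dependent identifiers, such as a name or address written out in prose, which is the gap that contextual language understanding aims to close.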

Despite its capabilities, the model is positioned as one component within a wider privacy framework instead of a standalone compliance solution.

Human oversight remains necessary in high-risk domains such as legal or financial processing.

Such a release by OpenAI reflects a shift towards smaller, specialised AI systems designed to address targeted challenges in real-world deployments while maintaining adaptability and transparency.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Online safety agreement signed by eSafety and OAIC in Australia

Australia’s eSafety Commissioner and the Office of the Australian Information Commissioner have signed a memorandum of understanding to strengthen cooperation on issues where online safety and privacy intersect.

The agreement formalises communication pathways between the two regulators and builds on existing collaboration. It covers matters including age-assurance requirements under Australia’s online industry codes and standards, as well as compliance by age-restricted platforms with Social Media Minimum Age obligations.

eSafety Commissioner Julie Inman Grant stated: ‘Both regulators have always recognised that combatting certain harms requires privacy and safety to go hand in hand. For example, at eSafety we knew from the outset our implementation of the Social Media Minimum Age would need to recognise important rights, including the right to privacy.’

She added: ‘Our commitment to continue working collaboratively with the OAIC gives formal recognition to that principle and sets out how we will balance and promote privacy and safety for everyone.’

Inman Grant also linked the agreement to emerging risks associated with new technologies and wider regulatory requirements around age assurance. She continued: ‘It comes at an important time, when the proliferation of new technologies like artificial intelligence is amplifying risks and we are increasingly requiring industry to deploy age-assurance technologies that meet their regulatory obligations and respect privacy in the Australian context.’

Australian Information Commissioner Elizabeth Tydd said the memorandum would support the OAIC’s work in monitoring and responding to emerging online privacy risks and help both agencies deliver their statutory functions under the Online Safety Act.

Tydd added: ‘With this memorandum, we’re not only formalising cooperation, but building a foundation where privacy protections and online safety initiatives can better address specific harms side by side, ensuring Australians can be protected when interacting online.’

Why does it matter?

A growing number of online safety measures now depend on systems that also raise privacy questions, especially age-assurance tools and other platform controls involving personal data. The agreement gives both regulators a clearer basis for coordinating oversight as Australia expands enforcement around child safety, platform obligations, and emerging technologies such as AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom releases public 4chan decision under UK online safety rules

Ofcom has published a non-confidential version of its confirmation decision against 4chan, giving a fuller public account of one of the UK regulator’s early enforcement actions under the Online Safety Act.

The decision concerns 4chan.org and sets out Ofcom’s findings that the platform failed to comply with several duties under the Act. According to the regulator, those failures included failing to carry out a suitable and sufficient illegal content risk assessment, failing to clearly set out in its terms of service how users are to be protected from illegal content, and failing to use highly effective age assurance to prevent children from encountering pornographic content.

Ofcom said 4chan must now take a series of corrective steps, including completing an illegal content risk assessment, updating its terms of service, and implementing robust age assurance measures. The regulator also imposed separate financial penalties linked to each breach, including a substantially larger penalty connected to the child protection requirement.

The case is significant because it shows the Online Safety Act moving from general compliance expectations into concrete enforcement. Rather than only warning platforms about their duties, Ofcom is now publicly setting out what it considers to be specific operational failures and attaching financial consequences to them.

The decision also underlines the regulator’s broader approach to compliance. Ofcom has indicated that further daily penalties can apply after the relevant deadlines if required actions are not taken, showing that enforcement is not limited to one-off fines but can escalate where platforms continue to fall short.

For platforms more broadly, the publication of the decision provides a clearer signal of what enforcement under the Act is likely to look like. The 4chan case suggests that Ofcom is focusing not only on the presence of harmful or illegal content itself, but also on whether platforms have the systems, rules, and protective measures in place that the law requires.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Employee monitoring grows at Meta as AI overhaul accelerates

Meta has introduced a new internal tool to track employee activity, including keystrokes and mouse movements, as part of efforts to train its AI systems. The company says the data will help improve AI models designed to perform everyday digital tasks.

According to company statements, the tracking is limited to Meta-owned devices and applications, with safeguards in place to protect sensitive information. The initiative reflects a broader strategy to gather real-world usage data to enhance the performance and accuracy of AI tools.

The move has raised concerns among employees, some of whom view the monitoring as intrusive, particularly amid ongoing job cuts and reduced hiring. Reports indicate that Meta has significantly scaled back recruitment while increasing investment in AI development.

The company has committed substantial resources to AI, with plans to expand spending and accelerate model development. Internal tracking is positioned as part of a broader shift toward automation, as firms seek to reshape workflows and productivity through AI.

The development highlights growing tensions between AI innovation and workplace privacy. Increased reliance on employee data to train AI systems may reshape labour practices, raising questions about surveillance, consent, and the balance between technological advancement and workers’ rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Law Society conference highlights GDPR’s role in regulating AI tools

GDPR obligations remain ‘fundamental’ when addressing data protection issues linked to AI tools, according to legal experts speaking at a conference organised by the Law Society’s Intellectual Property and Data Protection Commission, a committee within the Law Society of Ireland, on 20 April. The event reviewed recent legislative developments, case law and the use of AI tools in the workplace.

Olivia Mullooly, partner at Arthur Cox, said regulation in the area remains a ‘moving feast’ amid ongoing negotiations on the EU Digital Omnibus. She added that GDPR has been effective in regulating new and novel activities by AI companies, and continues to overlap with other regulatory frameworks.

In a panel discussion, Bird & Bird partner Deirdre Kilroy said firms should not ignore fundamental GDPR principles when using AI. She also noted that organisations should not delay compliance actions despite shifting regulatory conditions.

Speakers also discussed uncertainty around evolving EU rules and increasing complexity in compliance. The Data Protection Commission reported a rise in AI-related engagements, which accounted for one in four cases last year, up from one in 35 in 2021.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!