Australian authorities warn of data exploitation through social media platforms

Social media and messaging services pose growing security and privacy risks, with personal data used to build profiles for fraud, espionage, or social engineering. Even routine posts may contribute to broader data collection and unintended exposure.

Platforms typically collect extensive user and device data under evolving privacy policies, sometimes storing it across jurisdictions with varying legal protections. Such conditions increase the risks of identity theft, reputational harm, and the misuse of aggregated personal information.

The Australian Government advises organisations to restrict access to official accounts, train staff, and enforce clear policies on what can be shared. It also highlights the importance of breach response procedures to maintain operational security.

For individuals, the Government's guidance recommends limiting exposure of personal data, using privacy settings, avoiding unknown contacts, and applying strong authentication.

Regular updates, careful app permissions, and device security measures are also encouraged to reduce cyber risks.

Strengthening awareness and applying consistent security practices reduces vulnerability and supports more resilient organisational systems in an increasingly interconnected digital environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Corporate AI governance gaps highlighted in UNESCO report

UNESCO and the Thomson Reuters Foundation have published ‘Responsible AI in practice: 2025 global insights from the AI Company Data Initiative’, presenting findings from what the report describes as the largest global dataset of corporate responsible AI disclosures.

The report analyses 2,972 companies across 11 sectors and multiple regions using publicly available disclosures and company survey responses collected through the AI Company Data Initiative.

The report says AI is being embedded across companies’ products, services, and internal operations faster than governance and disclosure are developing. It states that 43.7% of companies publicly report having an AI strategy or guidelines, but only 13% publicly claim adherence to a formal AI governance framework.

Among those that do cite a framework, 53% refer to the EU AI Act, while the report says 43.6% cite ‘other’ frameworks, which it presents as weakening comparability across the wider AI governance ecosystem.

The publication also says many companies describe AI governance in conceptual terms while providing less evidence on operational controls, accountability pathways, monitoring, and remediation. It states that 40% report board- or committee-level oversight on AI, and 12.4% report having a policy to ensure a human oversees AI systems.

At the same time, the publication says 72% of companies do not report conducting any AI-related impact assessment. Of those that do, 11% report environmental impact assessments and 7% report human rights impact assessments. These findings are summarised visually among the report’s key statistics on page 10.

Regarding labour impacts, the report says companies do not provide adequate protection for workers as AI reshapes jobs. It states that while 31% of companies claim to have AI training programmes, only 12% offer structured training with comprehensive coverage. It also argues that effective worker protection requires stronger evidence of reskilling, retraining, redeployment, transition support, and access to remedy where AI affects workers’ rights.

Why does it matter?

The report further states that ethical issues, including human rights and environmental impacts, are being sidelined in AI governance and risk management, while transparency regarding training data, third-party systems, and user rights remains uneven. It presents the AI Company Data Initiative as a tool to help companies assess their governance practices against UNESCO’s Recommendation on the Ethics of AI and to give investors more comparable information on how AI is governed in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Japan approves APPI amendment bill on personal data, AI training, and fines

Japan’s Cabinet has approved a bill to amend the Act on the Protection of Personal Information, or APPI, for submission to parliament.

The proposed amendments combine stricter enforcement with regulatory easing. They would introduce an administrative fine system, strengthen protections for children’s data and certain biometric data, and allow broader use of personal data for AI training. The bill would also ease some data-breach notification requirements.

Japan’s Digital Minister, Hisashi Matsumoto, said enabling the use of sensitive personal data without consent is important for developing domestic AI models. He said the bill seeks to balance that objective with stronger protections for children’s data and facial-recognition data, as well as the introduction of administrative fines.

The fine mechanism would be introduced in a limited form. Provisions to impose fines for large-scale data breaches resulting from inadequate security measures were removed. Instead, the bill would target improper acquisition or use of personal data, unlawful provision of data to third parties, and misuse of sensitive data beyond stated statistical purposes, including transfers to third parties.

According to the proposal, fines would apply in large-scale cases involving more than 1,000 affected individuals, with amounts linked to profits derived from unlawful data handling. During drafting, the Personal Information Protection Commission also dropped plans to introduce consumer class actions for legal redress, while saying it would continue studying the issue.

The Personal Information Protection Commission is seeking passage during the current parliamentary session. The proposal follows a lengthy amendment process, during which earlier plans faced opposition from business and technology groups.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

French data protection authority sets out 2026 GDPR and AI guidance agenda

The French data protection authority, the Commission nationale de l’informatique et des libertés (CNIL), has outlined the main guidance, consultations, and resources it plans to publish in 2026 to support compliance with the General Data Protection Regulation and certain provisions of the AI Act.

According to the CNIL, the programme is intended to help public and private sector actors prepare for upcoming consultations and anticipate regulatory developments. It says the programme is indicative and may evolve in response to current events.

The CNIL says it will begin work on ‘multi-property’ consent, covering the conditions for obtaining a single consent across several sites or media, particularly where they belong to the same group. It also says it will finalise work on the use of AI in the workplace and in health, including bias risks and safeguards to protect the rights of employees and patients.

The authority also plans to work on transcription and automated analysis tools used in call centres and videoconferencing software, operational content for data protection officers, and clarification of how the GDPR applies to non-anonymous AI models.

In the health sector, it says it will update research reference methodologies, publish its position on how people should be informed when data are reused for research, and issue a consolidated document on the electronic patient record.

On security, the CNIL says it will continue publishing recommendations to improve personal data security, publish the final updated version of its recommendation on remote electronic voting systems, and open public consultations on recommendations covering the security of personal data exchanges, remote identity verification, and endpoint detection and response (EDR) services. It also says it will publish a recommendation on web filtering gateways.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches child safety framework to address AI risks

OpenAI has introduced a new framework to address the risks of AI-enabled child abuse and strengthen protection mechanisms across digital systems.

The initiative reflects growing concern over how emerging technologies can both enable and prevent harm.

The blueprint focuses on modernising legal frameworks to address AI-generated harmful content, improving reporting and coordination among service providers, and embedding safety measures directly into AI systems.

These measures aim to enhance early detection and prevent misuse at scale.

Developed in collaboration with organisations such as the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, the framework promotes shared standards across industry and public authorities.

It emphasises coordinated responses and stronger accountability mechanisms.

The approach combines technical safeguards, human oversight, and legal enforcement, aiming to improve response speed and reduce risks before harm occurs.

Ultimately, the initiative highlights the need for continuous adaptation as AI capabilities evolve and reshape online safety challenges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ICO launches online privacy campaign for parents

New research published by the Information Commissioner’s Office (ICO) found that 24% of primary school-aged children have shared their real name or address online, while 21% of parents and carers have never spoken to them about online privacy. It also found that 22% of children have shared personal information, such as health details, with AI tools.

Research published by the ICO also found that 71% of parents worry that information their child shares today could affect their future. Findings also show that 46% do not feel confident protecting their children’s privacy online, 44% say they try but are not sure they are doing enough, and 42% say they probably do not spend enough time checking privacy settings.

Online privacy is one of the least-discussed online safety topics among parents, according to the ICO. Its research found that 38% discuss it less than once a month, while 90% have discussed screen time in the past month.

Emily Keaney, Deputy Commissioner at the ICO, said: ‘The internet offers amazing opportunities for children – but every click can leave a hidden data trail and these digital footprints can last forever.’ She added: ‘We wouldn’t expect our children to share their birthdays or address with a stranger in a shop, because we’d explain stranger danger to them from a very young age, but kids these days are growing up online.’

Keaney said: ‘We know that where children’s details – like their name, interests and pictures – aren’t protected, the potential risks are serious: unwanted contact from strangers, grooming and radicalisation.’ She said children’s online privacy ‘requires a whole society approach’ and added: ‘We have taken and will continue to take action to hold tech companies accountable for their role.’

Keaney also said: ‘There’s a role for parents too but the problem is that many families have never been shown how to talk to their children about online privacy.’ She added: ‘This is where the ICO comes in. We want parents to feel empowered and children to feel digitally confident, because only then will they be able to start to trust in how their data is used and be part of the whole society solution that is needed for online safety.’

The ICO campaign website outlines three steps for parents: talk regularly with children about online privacy, carefully choose what personal information to share, and check privacy settings on new devices and apps.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New law strengthens protections for healthcare patients in Brazil

Brazil has introduced a new legal framework establishing a nationwide Statute of Patients’ Rights through Law No. 15.378. The law sets out protections and responsibilities for healthcare patients across public, private, and insurance services.

The statute guarantees key patient rights, such as non-discriminatory treatment, access to clear and sufficient medical information, confidentiality of health data, and the requirement of informed consent before treatment decisions.

Additional protections include the right to a companion during care, access to interpreters or accessibility support, and the ability to seek a second medical opinion. Patient responsibilities are also formalised under the law.

Individuals are expected to provide accurate medical history and follow prescribed treatments. They must ask questions when needed, respect healthcare rules, and inform professionals of any changes in their condition or decision to discontinue treatment.

Compliance measures include publicising patients’ rights, assessing healthcare quality, promoting research, and providing complaint channels. Violations are treated as human rights infringements, reinforcing the law’s legal and ethical importance in Brazil’s healthcare system.

By embedding principles such as informed consent, non-discrimination, privacy, and access to information into law, the statute strengthens individual autonomy and dignity in medical decision-making.

In broader terms, it reinforces the idea that access to safe, transparent, and respectful healthcare is an essential component of fundamental human rights, not a discretionary service.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IAPP Global Summit session examines AI, privacy, and the courts with US federal judges

US District Court for the District of Columbia Chief Judge James Boasberg and US District Court for the District of Massachusetts Judge Allison Burroughs discussed AI, privacy, and the courts during the IAPP Global Summit 2026 in Washington, D.C.

The IAPP report said Burroughs pointed to the gap between older legal protections and newer technologies, including debates over how surveillance rules apply to cell-tower data. Burroughs said existing laws and constitutional protections are ‘not keeping up, never have kept up and never will keep up’ with the speed of innovation.

Burroughs commented: ‘The gap is getting bigger for two reasons. One is that there’s so much more data stored electronically that if you even search for someone’s laptop, you’re going to get more data now than you used to get, and the other one is that there is so much more technology, there are just so many ways of gaining access to data.’

Another part of the IAPP report stated that Boasberg referred to a case in which lawyers submitted filings containing hallucinated material generated by AI. According to the report, he required that side to pay the other side’s attorney’s fees as a sanction after discovering that AI had been used in the briefs.

Boasberg noted at the IAPP session: ‘I’m sure lawyers using AI is happening a lot more on the state level, and some judges are referring lawyers to state bars (for possible discipline), but there have been federal judges whose opinions included hallucinatory (citations) and that was obviously embarrassing for them.’ He added: ‘The question is how can it help without compromising privacy issues, sealed cases; there’s just a whole lot that we have to figure out, but I think judges are trying to learn how we can use this constructively.’

Burroughs also remarked at the IAPP event that judges want disclosure when lawyers use AI in filings. She said: ‘We want lawyers to tell us when they’ve used AI. They can use it, but they have to disclose it.’ She added: ‘They can use AI, they can’t use AI, they must disclose when they’re using it, they have to certify that they do citation checks to make sure they don’t have hallucinatory citations — it’s hard to think of what these rules would be going forward today.’

IAPP reported the remarks from the summit discussion, which focused on how AI is affecting legal filings, surveillance questions, and court practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Student AI rights framework unveiled

A newly released ‘Student AI Bill of Rights’ in the US outlines a proposed framework to protect learners as AI tools become increasingly widespread in education. The initiative aims to establish clear standards for fairness, transparency and accountability.

The document highlights the need for students to be informed when AI systems are used in teaching, assessment or administration. It also stresses that students should retain control over their personal data and academic work.

Another central principle is accountability, with students given the right to question and appeal decisions made or influenced by AI systems. The framework also calls for safeguards to prevent bias and ensure equal access to educational opportunities.

While not legally binding, the proposal is designed to guide higher education institutions in developing responsible AI policies. It reflects growing efforts to define ethical standards for AI use in education in the US.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU lapse in child safety rules raises concerns

The expiry of the EU ePrivacy derogation, which allowed platforms to detect child sexual abuse material (CSAM) online, has raised concerns over weaker child safeguards. The lapse is seen as creating legal uncertainty for platforms that rely on established detection tools to prevent ongoing harm.

For years, technology companies have voluntarily used hash-matching to detect and remove CSAM, a widely recognised tool for disrupting abuse and protecting victims.
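For illustration only: hash-matching works by fingerprinting uploaded content and checking that fingerprint against a database of hashes of already-identified abusive material. Real deployments typically rely on perceptual hashes (such as Microsoft’s PhotoDNA) so that re-encoded or slightly altered copies still match; the minimal Python sketch below uses a plain SHA-256 digest and a hypothetical KNOWN_HASHES set purely to show the matching step.

```python
import hashlib

# Hypothetical set of fingerprints of already-identified material,
# standing in for the hash databases that clearinghouses distribute
# to platforms. Real systems use perceptual hashes (e.g. PhotoDNA)
# so that re-encoded or slightly altered copies still match; SHA-256
# is used here only to illustrate the matching step itself.
KNOWN_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(content: bytes) -> str:
    """Return the hex digest used as this content's fingerprint."""
    return hashlib.sha256(content).hexdigest()


def is_known_match(content: bytes) -> bool:
    """Check an upload against the known-hash database before it is served."""
    return fingerprint(content) in KNOWN_HASHES
```

In practice, matches feed into human review and mandated reporting channels rather than automated takedown alone.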

Google, alongside nearly 250 child rights organisations, is calling on the EU institutions to urgently finalise a regulatory framework, warning that reduced detection capacity could affect child safety globally.

The EU institutions face criticism for failing to maintain an interim agreement, with stakeholders saying the lack of continuity undermines child online safety efforts.

Meta, Microsoft, and Snap have reaffirmed their commitment to continue voluntary detection and reporting measures while respecting user privacy. The companies also urge the EU institutions to urgently finalise a regulatory framework for consistent and effective child protection standards.

The absence of a clear framework has been described as creating instability for responsible platforms operating across Europe. Fragmented rules and legal uncertainty can slow detection and reporting systems, weakening coordinated protection efforts across platforms and borders.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!