IAPP updates US state breach notification resource as legal differences persist

The International Association of Privacy Professionals (IAPP) has updated its US State Breach Notification Chart, a resource that summarises state breach notification laws across the United States. In an analysis published on 26 March, the IAPP says the revised chart highlights both nationwide coverage and continuing variation in how states define personal information, apply harm thresholds, and trigger reporting duties.

According to the IAPP, all 50 states, the District of Columbia, Guam, Puerto Rico, and the US Virgin Islands now have breach notification laws. California enacted the first state law in 2002, which took effect in 2003, while Alabama was the last state to adopt such a law in 2018. The IAPP says the result is a de facto nationwide framework, but one marked by significant differences across jurisdictions.

A central point in the analysis is that breach notification laws generally use a narrower definition of personal information than more recent comprehensive privacy laws. The IAPP says the original purpose of breach notification was to alert people to the risks of identity theft and financial fraud after a data breach, so laws tend to focus on identifiers such as names combined with Social Security numbers, driver’s licence details, or financial account credentials.

The article contrasts narrower statutes with broader ones. Hawaii’s law is described as among the narrowest, while Illinois and California are presented as having broader definitions that can extend to medical information, health insurance details, biometric data, genetic data, and, in California’s case, some automated licence plate recognition data.

Even so, the IAPP says many state breach laws still do not cover large categories of digital information, such as browsing history, cookie data, IP addresses, cell phone numbers, purchasing records, or complete financial transaction histories where account credentials were not compromised.

Exemptions and scope also vary. The IAPP says most breach notification laws apply broadly to businesses and often to nonprofit organisations, while privacy laws tend to contain more exclusions. The article notes that some states cover state and local government entities directly, while California has a separate breach notification law for governmental bodies. The IAPP also says its chart is focused on laws applicable to the private sector.

Encryption safe harbours appear across the state laws, according to the analysis, with some states also recognising redaction or other protections that render data unreadable or unusable. Attorney general notification requirements also differ. The IAPP says 34 state laws require notice to the state attorney general once certain thresholds are met, with thresholds ranging from 250 affected residents in North Dakota and Oregon to 1,000 in many other states, while some states, such as Connecticut and New York, require notice regardless of the number affected.

Harm thresholds are another area of divergence. The IAPP says about 30 state laws include a harm standard, meaning notice may not be required unless the breach caused, or is likely to cause, harm to affected individuals.

The article describes substantial differences in wording across states, with some referring to ‘reasonable likelihood’ of harm, others to ‘material risk,’ ‘substantial economic loss,’ or misuse of the data, while some states, including California, Georgia, Illinois, Massachusetts, Minnesota, North Dakota, and Texas, require no harm showing at all.

The practical effect, the IAPP argues, is that organisations holding data on residents of multiple states face a complex compliance problem. A data element that triggers notice in one state may not do so in another, and the article says reconciling the different harm standards is effectively impossible. The analysis notes that some organisations may decide to notify if there is doubt, while others may choose to notify only where clearly required.

The IAPP concludes that the absence of a preemptive federal breach notification law leaves entities to navigate overlapping but inconsistent state rules. Its updated chart is presented as a tool to help practitioners track those differences and build awareness of how US state breach notification laws continue to evolve.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India AI governance faces court, privacy and cyber pressures

An opinion article published by the International Association of Privacy Professionals says India’s data protection and AI governance environment is facing growing pressure as compliance work around the Digital Personal Data Protection Act (DPDPA) unfolds, court challenges continue, and regulators widen oversight into new sectors. The piece, published on 26 March, is labelled as an opinion article and includes an editor’s note stating that the IAPP is policy neutral and publishes contributed opinion pieces to reflect a broad spectrum of views.

The article says several legal and regulatory developments are unfolding simultaneously. One example cited is a public interest litigation filed before India’s Supreme Court by journalist Geeta Seshu and the Software Freedom Law Centre, India, challenging parts of the DPDPA on constitutional and rights-related grounds. According to the piece, the Supreme Court later issued a notice to the Government of India on 12 March.

Concerns outlined in the article include the absence of journalistic exemptions, the lack of compensation for data breach victims when penalties are imposed to the government, broad state powers to exempt departments from the law, and questions about the independence of the Data Protection Board given the government’s control over appointments. The article notes that similar petitions had already been filed, but says this was the first time the court issued notice to the government.

The article also turns to proceedings before the Kerala High Court involving privacy concerns about biometric and personal data collected through Digi Yatra, a not-for-profit foundation that operates airport passenger-processing infrastructure in India. According to the piece, a public interest litigation filed by C R Neelakandan asked for a temporary restraint on the sharing of collected personal data and its commercial use without proper authorisation.

The article says the Kerala High Court issued notice to the Digi Yatra Foundation and sought clarification from the government on whether the Data Protection Board had been established to oversee such matters.

Alongside the litigation, the opinion piece points to government efforts to show legal preparedness for AI-related risks. It says Electronics and Information Technology Minister Ashwini Vaishnaw outlined existing safeguards during the ongoing parliamentary session, referring to the Information Technology Act, the DPDPA, and subordinate rules, along with published guidelines on AI governance, toy safety, harmful content, awareness-building measures, and cyber safety.

Cybersecurity developments also feature in the article. It says the Indian Computer Emergency Response Team, working with the SatCom Industry Association, issued guidelines on 26 February for space, including satellite communications. According to the piece, the framework is intended to strengthen resilience in India’s space ecosystem.

It applies to covered entities, including government agencies, satellite service providers, ground station operators, terminal equipment vendors, and private space entities. Incident reporting within six hours and annual audits are among the measures described.

A further section of the article draws on Thales’ 2026 Data Threat Report. The piece says 64% of surveyed organisations in India identified AI-driven transformation as their biggest security risk, while 55% said they had to deal with reputational damage caused by AI-generated misinformation. It also says 65% reported deepfake-driven attacks, 35% had a complete view of their data, and 36% could fully classify their data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU demands stronger age verification from adult websites

The European Commission has preliminarily found that several major adult platforms, including Pornhub, Stripchat, XNXX, and XVideos, may be in breach of the Digital Services Act for failing to adequately protect minors from accessing harmful content.

These findings highlight concerns that children can easily access such platforms rather than being effectively prevented by robust safeguards.

The Commission’s investigation indicates that the platforms’ risk assessments were insufficient. In several cases, companies focused on reputational or business risks instead of fully addressing societal harms to minors.

Authorities also raised concerns that some platforms did not adequately consider input from civil society organisations specialising in children’s rights and age-assurance technologies, undermining the reliability of their evaluations.

Regarding risk mitigation, the Commission found that existing measures are ineffective. Simple self-declaration systems, in which users confirm they are over 18, were deemed inadequate, while additional features such as warnings, labels, or blurred content failed to prevent minors from accessing content.

The Commission considers that stronger, privacy-preserving age-verification solutions are necessary to ensure meaningful protection of children’s rights and well-being online.

The companies involved now have the opportunity to respond and propose corrective measures, while consultations with the European Board for Digital Services continue.

If the preliminary findings are confirmed, the Commission may impose fines of up to 6 percent of global annual turnover, alongside periodic penalties to enforce compliance.

The case forms part of broader efforts to enforce the Digital Services Act and strengthen online safety across the EU, rather than relying on voluntary measures by platforms.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Europol warns legal gaps could weaken child abuse detection online

Efforts to combat online child sexual exploitation could be severely weakened, Europol has warned, if legal frameworks supporting detection and reporting are disrupted.

Executive Director Catherine De Bolle highlighted growing concerns over the increasing volume of harmful content online and stressed that protecting children remains a top priority for European law enforcement.

Authorities rely heavily on reports submitted by online service providers, which play a central role in identifying victims and supporting investigations, rather than relying solely on traditional policing methods.

Europol processed around 1.1 million CyberTips in a single year, many originating from the National Centre for Missing & Exploited Children and shared across 24 European countries.

These CyberTips include critical evidence such as images, videos, and other digital data used to track criminal activity.

Europol cautioned that removing the legal basis allowing voluntary detection by platforms could significantly reduce the number of reports submitted to authorities. A decline in CyberTips would limit investigative leads, making it harder to identify victims and disrupt online criminal networks.

Such a development could undermine broader security efforts and weaken the protection of minors across the EU instead of strengthening safeguards.

The agency emphasised that maintaining online service providers’ ability to detect and report suspected abuse is essential to effective law enforcement.

Ensuring continued cooperation between platforms and authorities remains a key factor in safeguarding children and addressing the growing threat of online exploitation.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Mistral AI launches open-source voice model for enterprises

Mistral AI has introduced a new open-source text-to-speech model designed to power voice assistants and enterprise applications, rather than relying on proprietary solutions.

The model, named Voxtral TTS, marks the company’s entry into the competitive voice AI market alongside players such as OpenAI and ElevenLabs.

Voxtral TTS supports nine languages, including English, French, German, Spanish, and Arabic, allowing organisations to deploy multilingual voice systems across different markets.

The Mistral AI model is designed to operate efficiently on devices such as smartphones, laptops, and even wearables, reducing infrastructure costs rather than relying on large-scale cloud systems.

It can replicate custom voices using only a few seconds of audio, capturing accents and speech patterns while maintaining consistency across languages.

The system is optimised for real-time performance, delivering rapid response times and enabling applications such as live translation, dubbing, and customer engagement tools.

Built on a compact architecture, it balances efficiency with high-quality output, aiming to produce natural-sounding speech instead of robotic voice synthesis. Earlier releases of transcription models suggest a broader strategy to develop a full suite of voice technologies.

Looking ahead, Mistral AI plans to expand towards end-to-end multimodal systems capable of handling audio, text, and image inputs within a single platform.

The company’s focus on open-source development and customisation is intended to attract enterprises seeking flexible solutions, positioning its technology as an alternative to closed ecosystems in the growing voice AI market.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

UK tightens sanctions on crypto-linked scam networks

The UK has stepped up its crackdown by sanctioning a crypto marketplace tied to major scam centres in Southeast Asia. Measures aim to disrupt the sale of stolen personal data and limit the financial infrastructure enabling online fraud targeting British victims.

Authorities also targeted operators behind ‘#8 Park’, Cambodia’s largest scam compound, believed to house up to 20,000 trafficked workers. Many individuals forced to run scams were lured with false job offers before being coerced into fraudulent activity under severe threats.

Sanctions extend to key entities and individuals connected to the wider network, including those facilitating crypto laundering and cross-border financial flows. Earlier UK action froze over £1 billion in assets and helped shut down platforms used for laundering illicit funds.

Officials said the measures will isolate these operations from the crypto ecosystem and freeze UK-based assets. The measures come ahead of an international summit in June aimed at strengthening global coordination against illicit finance and digital fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google sets 2029 deadline for post-quantum cryptography migration

A transition to post-quantum cryptography by 2029 is being led by Google, aiming to secure digital systems against future quantum computing threats instead of relying on existing encryption standards.

The move reflects growing concern that advances in quantum hardware and algorithms could eventually undermine current cryptographic protections, particularly through attacks that store encrypted data today for decryption in the future.

Quantum computers are expected to challenge widely used encryption and digital signature systems, prompting the need for early transition strategies.

Google has updated its threat model to prioritise authentication services, recognising that digital signatures pose a critical vulnerability if not addressed before the arrival of quantum machines capable of cryptanalysis.

The company is encouraging broader industry action to accelerate migration efforts and reduce long-term security risks.

As part of its strategy, Google is integrating post-quantum cryptography into its products and services.

Android 17 will include quantum-resistant digital signature protection aligned with standards developed by the US’s National Institute of Standards and Technology. At the same time, support has already been introduced in Google Chrome and cloud platforms.

These measures aim to bring advanced security technologies directly to users instead of limiting them to experimental environments.

By setting a clear timeline, Google aims to instil urgency and direction across the wider technology sector.

The transition to post-quantum cryptography is expected to become a critical step in maintaining online security, ensuring that digital infrastructure remains resilient as quantum computing capabilities continue to evolve.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

OpenAI launches a public Safety Bug Bounty programme

OpenAI has introduced a public Safety Bug Bounty programme to identify misuse and safety risks across its AI systems. The initiative expands the company’s existing vulnerability reporting framework by focusing on harms that fall outside traditional security definitions.

The programme covers AI threats such as agentic risks, prompt injection, data exfiltration, and bypassing platform integrity controls. Researchers are encouraged to submit reproducible cases where AI systems perform harmful actions or expose sensitive information.

Unlike standard security reports, the initiative accepts safety issues that pose real-world risk, even if they are not classified as technical vulnerabilities. Dedicated safety and security teams will assess submissions and may be reassigned depending on relevance.

The scheme is open to external researchers and ethical hackers to strengthen AI safety through broader collaboration. OpenAI says the approach is intended to improve resilience against evolving misuse as AI systems become more advanced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being instead of relying solely on existing digital safety measures.

The initiative, led by the Department for Science, Innovation and Technology and supported by Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns rather than introducing changes.

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

A programme that aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, supporting policymakers in shaping future online safety measures instead of relying on limited evidence.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!