IAPP updates US state breach notification resource as legal differences persist

The International Association of Privacy Professionals (IAPP) has updated its US State Breach Notification Chart, a resource that summarises state breach notification laws across the United States. In an analysis published on 26 March, the IAPP says the revised chart highlights both nationwide coverage and continuing variation in how states define personal information, apply harm thresholds, and trigger reporting duties.

According to the IAPP, all 50 states, the District of Columbia, Guam, Puerto Rico, and the US Virgin Islands now have breach notification laws. California enacted the first state law in 2002, which took effect in 2003, while Alabama was the last state to adopt such a law in 2018. The IAPP says the result is a de facto nationwide framework, but one marked by significant differences across jurisdictions.

A central point in the analysis is that breach notification laws generally use a narrower definition of personal information than more recent comprehensive privacy laws. The IAPP says the original purpose of breach notification was to alert people to the risks of identity theft and financial fraud after a data breach, so laws tend to focus on identifiers such as names combined with Social Security numbers, driver’s licence details, or financial account credentials.

The article contrasts narrower statutes with broader ones. Hawaii’s law is described as among the narrowest, while Illinois and California are presented as having broader definitions that can extend to medical information, health insurance details, biometric data, genetic data, and, in California’s case, some automated licence plate recognition data.

Even so, the IAPP says many state breach laws still do not cover large categories of digital information, such as browsing history, cookie data, IP addresses, cell phone numbers, purchasing records, or complete financial transaction histories where account credentials were not compromised.

Exemptions and scope also vary. The IAPP says most breach notification laws apply broadly to businesses and often to nonprofit organisations, while privacy laws tend to contain more exclusions. The article notes that some states cover state and local government entities directly, while California has a separate breach notification law for governmental bodies. The IAPP also says its chart is focused on laws applicable to the private sector.

Encryption safe harbours appear across the state laws, according to the analysis, with some states also recognising redaction or other protections that render data unreadable or unusable. Attorney general notification requirements also differ. The IAPP says 34 state laws require notice to the state attorney general once certain thresholds are met, with thresholds ranging from 250 affected residents in North Dakota and Oregon to 1,000 in many other states, while some states, such as Connecticut and New York, require notice regardless of the number affected.

Harm thresholds are another area of divergence. The IAPP says about 30 state laws include a harm standard, meaning notice may not be required unless the breach caused, or is likely to cause, harm to affected individuals.

The article describes substantial differences in wording across states, with some referring to ‘reasonable likelihood’ of harm, others to ‘material risk,’ ‘substantial economic loss,’ or misuse of the data, while some states, including California, Georgia, Illinois, Massachusetts, Minnesota, North Dakota, and Texas, require no harm showing at all.

The practical effect, the IAPP argues, is that organisations holding data on residents of multiple states face a complex compliance problem. A data element that triggers notice in one state may not do so in another, and the article says reconciling the different harm standards is effectively impossible. The analysis notes that some organisations may decide to notify if there is doubt, while others may choose to notify only where clearly required.

The IAPP concludes that the absence of a preemptive federal breach notification law leaves entities to navigate overlapping but inconsistent state rules. Its updated chart is presented as a tool to help practitioners track those differences and build awareness of how US state breach notification laws continue to evolve.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FCA outlines AI-driven plan to modernise financial regulation

The UK’s Financial Conduct Authority (FCA) has outlined plans to integrate AI and data-driven tools into its regulatory processes as part of its 2026/27 work programme to become a more efficient and effective regulator.

The programme includes developing an internal authorisation tool to speed up approvals and using generative AI to review documents and support supervision, while maintaining human decision-making at the core of regulatory actions.

The FCA said it will also test automated data-sharing in a sandbox environment, expand its Supercharged Sandbox for firms developing AI-based financial products, and invest in analytics to better identify risks and prioritise cases.

Measures to reduce burdens on firms include removing certain data reporting requirements, simplifying digital processes and improving authorisation timelines, alongside efforts to enhance firms’ experience through new tools and feedback mechanisms.

The regulator also plans to support economic growth and consumer protection by advancing measures such as regulating buy now pay later products, speeding up IPO processes, expanding international presence, and addressing emerging risks, including the use of general-purpose AI in financial decision-making.


India AI governance faces court, privacy and cyber pressures

An opinion article published by the International Association of Privacy Professionals says India’s data protection and AI governance environment is facing growing pressure as compliance work around the Digital Personal Data Protection Act (DPDPA) unfolds, court challenges continue, and regulators widen oversight into new sectors. The piece, published on 26 March, carries an editor’s note stating that the IAPP is policy neutral and publishes contributed opinion pieces to reflect a broad spectrum of views.

The article says several legal and regulatory developments are unfolding simultaneously. One example cited is a public interest litigation filed before India’s Supreme Court by journalist Geeta Seshu and the Software Freedom Law Centre, India, challenging parts of the DPDPA on constitutional and rights-related grounds. According to the piece, the Supreme Court later issued a notice to the Government of India on 12 March.

Concerns outlined in the article include the absence of journalistic exemptions, the lack of compensation for data breach victims (penalties are paid to the government rather than to those affected), broad state powers to exempt departments from the law, and questions about the independence of the Data Protection Board given the government’s control over appointments. The article notes that similar petitions had already been filed, but says this was the first time the court issued notice to the government.

The article also turns to proceedings before the Kerala High Court involving privacy concerns about biometric and personal data collected through Digi Yatra, an airport passenger-processing system run by a not-for-profit foundation in India. According to the piece, a public interest litigation filed by C R Neelakandan asked for a temporary restraint on the sharing of collected personal data and its commercial use without proper authorisation.

The article says the Kerala High Court issued notice to the Digi Yatra Foundation and sought clarification from the government on whether the Data Protection Board had been established to oversee such matters.

Alongside the litigation, the opinion piece points to government efforts to show legal preparedness for AI-related risks. It says Electronics and Information Technology Minister Ashwini Vaishnaw outlined existing safeguards during the ongoing parliamentary session, referring to the Information Technology Act, the DPDPA, and subordinate rules, along with published guidelines on AI governance, toy safety, harmful content, awareness-building measures, and cyber safety.

Cybersecurity developments also feature in the article. It says the Indian Computer Emergency Response Team, working with the SatCom Industry Association, issued guidelines on 26 February for the space sector, including satellite communications. According to the piece, the framework is intended to strengthen resilience in India’s space ecosystem.

The framework applies to covered entities, including government agencies, satellite service providers, ground station operators, terminal equipment vendors, and private space entities. Incident reporting within six hours and annual audits are among the measures described.

A further section of the article draws on Thales’ 2026 Data Threat Report. The piece says 64% of surveyed organisations in India identified AI-driven transformation as their biggest security risk, while 55% said they had to deal with reputational damage caused by AI-generated misinformation. It also says 65% reported deepfake-driven attacks, 35% had a complete view of their data, and 36% could fully classify their data.


VTC expands AI training across all programmes in Hong Kong

The Vocational Training Council (VTC) has introduced an ‘AI for All’ strategy to integrate AI training across its programmes, aiming to support Hong Kong’s ambition to strengthen its innovation and technology sector.

The initiative aligns with broader policy priorities, including the ‘AI Plus’ approach outlined in national planning frameworks and Hong Kong’s budget, which emphasise integrating AI across industries while addressing a shortage of skilled professionals.

Under the ‘AI+Professional’ model, all Higher Diploma students are required to study IT modules covering prompt engineering, generative AI, and AI ethics and security, with training adapted to disciplines such as engineering, design, and information technology.

The council has also partnered with technology companies through memorandums of understanding. It provides ongoing training for employees in government and industry, while offering internal AI tools and a ‘Virtual Tutor’ platform to support teaching and learning.


UNESCO advances regional AI in education observatory

A UNESCO-led public–private initiative is advancing the establishment of a Regional AI in Education Observatory for Latin America and the Caribbean. The project aims to strengthen education systems through the ethical and inclusive application of AI technologies.

A roundtable held at UNESCO Headquarters in Paris brought together more than 50 stakeholders from government, academia, industry, and civil society. Participants included universities, development banks, and research institutions providing technical expertise and regional knowledge.

The observatory will act as a shared regional infrastructure supporting evidence-based policy, teacher training, and capacity development. Focus areas include tackling foundational learning challenges in reading and mathematics while ensuring responsible AI integration in classrooms.

The initiative will be officially launched on 14 April 2026 at ECLAC headquarters in Santiago, Chile. Organisers emphasise the need for regional cooperation to guide AI adoption in education, promoting equity, innovation, and long-term learning improvements.


Open letter targets Meta ad practices

A coalition of civil society and industry groups has urged the European Commission to enforce the Digital Markets Act more rigorously, warning that major tech firms continue to exploit compliance gaps. The appeal centres on concerns over data use and online advertising practices.

Organisations including noyb, Check My Ads, and the Irish Council for Civil Liberties argue that current models fail to offer users genuine choice. Critics say consent mechanisms tied to payment or tracking undermine the intent of the EU digital rules.

The letter against Meta calls for clearer standards, including equal options for personalised and non-personalised advertising, as well as stricter limits on design practices that influence user decisions. Campaigners also want stronger coordination between regulators to ensure consistent enforcement.

The push reflects wider frustration among European organisations, with several recent letters demanding faster action against dominant platforms. Observers warn that delayed enforcement risks weakening the credibility of EU digital regulation.


Microsoft launches nonprofit AI training and fellowship initiative

Microsoft has announced a new programme called Microsoft Elevate for Changemakers, aimed at helping nonprofit leaders build AI skills, credentials, and organisational capacity. In a post published on 25 March, the company said the initiative was introduced alongside its Global Nonprofit Leaders Summit, which brought together more than 1,500 nonprofit leaders from around the world.

The company says the programme is designed to help nonprofit organisations adopt AI in ways that reflect their missions and the communities they serve. According to the company, the new initiative includes an AI for Nonprofits credential developed with LinkedIn and NetHope, live and on-demand training on topics such as Copilot, change management, and responsible AI governance, and a Changemaker Fellowship for nonprofit professionals working on AI-related projects.

The AI for Nonprofits credential builds on work across the nonprofit sector, with participants receiving a LinkedIn professional certificate. Microsoft also says the fellowship will provide resources, investment, and expert guidance, while connecting participants to a global cohort and a wider network of nonprofit AI leaders. According to the post, the fellowship is supported by Microsoft and launch partners EY and Caribou.

Microsoft places the announcement within a broader argument about how AI is affecting labour, communities, and service delivery. The company says nonprofits are often closely connected to people seeking new skills, employment pathways, and community support, and that such organisations are well-positioned to help shape AI adoption at the local level. Microsoft also says the programme forms part of its wider Microsoft Elevate commitment and refers to plans to deliver more than $5 billion in discounts, donations, and grants over the next year to support nonprofit organisations and education systems.

Several examples in the post illustrate how Microsoft says AI is already being applied in nonprofit work. Microsoft says ARcare has used AI to reduce administrative work and estimates it has eliminated six to eight hours of manual tasks per day. Opportunity International is cited as using AI to scale a local-language chatbot for farmers, while Head Start Homes is described as using AI to increase organisational bandwidth and attract new funding. The company also points to de Alliantie, saying AI has helped the organisation improve efficiency in housing support operations while maintaining a human-centred approach.


UNESCO and Tecnológico de Monterrey partner on AI in education initiative

UNESCO and Tecnológico de Monterrey have signed an agreement to collaborate on advancing the use of AI in education, as digital transformation reshapes learning systems and workforce skills across Latin America and the Caribbean.

The agreement establishes a framework for joint work on generating evidence, developing standards and formulating public policy recommendations on AI in education, and supports the launch of a Regional Observatory on Artificial Intelligence in Education.

A financial contribution of $90,000 will support the Observatory’s implementation, following months of technical coordination and institutional validation between the two organisations.

After the signing, technical teams reviewed the operational plan for the first year, including methodological frameworks on teachers’ digital competencies and AI ethics, as well as pilot projects in Chile, El Salvador and Mexico.

According to Esther Kuisch Laroche, the initiative aims to ensure AI contributes to more inclusive, ethical and relevant education systems, while moving from principles to practical solutions.


EU court challenges French police data practices

The Court of Justice of the European Union has ruled that aspects of France’s biometric data collection system breach EU law. Judges found that taking fingerprints and photographs of suspects under broad conditions fails to meet strict proportionality standards.

The case examined rules allowing police to collect and store data in the French Traitement des antécédents judiciaires and the Fichier automatisé des empreintes digitales. The court said collection cannot be routine and must meet a threshold of absolute necessity.

Judges also criticised the lack of clear justification for data collection, stating that individuals should receive explanations to exercise their legal rights. Existing rules were found to lack safeguards to ensure the limited and proportionate use of sensitive biometric information in France.

The ruling requires national courts to reassess the framework and could lead to changes in policing practices. It also raises broader questions about large-scale data retention and the balance between security and privacy.


New Mexico wins major case against Meta

A jury has found Meta Platforms liable for misleading consumers and endangering children in a landmark case brought by the New Mexico Department of Justice. The verdict marks the first successful trial by a US state against a major tech firm over child safety concerns.

Jurors awarded civil penalties totalling $375 million after finding violations of consumer protection law. The case focused on claims that platform design choices exposed young users to harmful and exploitative content.

Evidence presented in court included internal company documents and testimony suggesting awareness of risks to children. Allegations centred on failures to prevent exploitation, as well as features linked to addictive behaviour and exposure to harmful material.

Further proceedings are scheduled, with authorities seeking additional penalties and mandated changes to platform safety measures. Proposed actions include stronger age verification and improved protections for minors online.
