IAPP updates US state breach notification resource as legal differences persist

The International Association of Privacy Professionals (IAPP) has updated its US State Breach Notification Chart, a resource that summarises state breach notification laws across the United States. In an analysis published on 26 March, the IAPP says the revised chart highlights both nationwide coverage and continuing variation in how states define personal information, apply harm thresholds, and trigger reporting duties.

According to the IAPP, all 50 states, the District of Columbia, Guam, Puerto Rico, and the US Virgin Islands now have breach notification laws. California enacted the first state law in 2002, which took effect in 2003, while Alabama was the last state to adopt such a law in 2018. The IAPP says the result is a de facto nationwide framework, but one marked by significant differences across jurisdictions.

A central point in the analysis is that breach notification laws generally use a narrower definition of personal information than more recent comprehensive privacy laws. The IAPP says the original purpose of breach notification was to alert people to the risks of identity theft and financial fraud after a data breach, so laws tend to focus on identifiers such as names combined with Social Security numbers, driver’s licence details, or financial account credentials.

The article contrasts narrower statutes with broader ones. Hawaii’s law is described as among the narrowest, while Illinois and California are presented as having broader definitions that can extend to medical information, health insurance details, biometric data, genetic data, and, in California’s case, some automated licence plate recognition data.

Even so, the IAPP says many state breach laws still do not cover large categories of digital information, such as browsing history, cookie data, IP addresses, cell phone numbers, purchasing records, or complete financial transaction histories where account credentials were not compromised.

Exemptions and scope also vary. The IAPP says most breach notification laws apply broadly to businesses and often to nonprofit organisations, while privacy laws tend to contain more exclusions. The article notes that some states cover state and local government entities directly, while California has a separate breach notification law for governmental bodies. The IAPP also says its chart is focused on laws applicable to the private sector.

Encryption safe harbours appear across the state laws, according to the analysis, with some states also recognising redaction or other protections that render data unreadable or unusable. Attorney general notification requirements also differ. The IAPP says 34 state laws require notice to the state attorney general once certain thresholds are met, with thresholds ranging from 250 affected residents in North Dakota and Oregon to 1,000 in many other states, while some states, such as Connecticut and New York, require notice regardless of the number affected.

Harm thresholds are another area of divergence. The IAPP says about 30 state laws include a harm standard, meaning notice may not be required unless the breach caused, or is likely to cause, harm to affected individuals.

The article describes substantial differences in wording across states, with some referring to ‘reasonable likelihood’ of harm, others to ‘material risk,’ ‘substantial economic loss,’ or misuse of the data, while some states, including California, Georgia, Illinois, Massachusetts, Minnesota, North Dakota, and Texas, require no harm showing at all.

The practical effect, the IAPP argues, is that organisations holding data on residents of multiple states face a complex compliance problem. A data element that triggers notice in one state may not do so in another, and the article says reconciling the different harm standards is effectively impossible. The analysis notes that some organisations may decide to notify if there is doubt, while others may choose to notify only where clearly required.

The IAPP concludes that the absence of a preemptive federal breach notification law leaves entities to navigate overlapping but inconsistent state rules. Its updated chart is presented as a tool to help practitioners track those differences and build awareness of how US state breach notification laws continue to evolve.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India AI governance faces court, privacy and cyber pressures

An opinion article published by the International Association of Privacy Professionals says India's data protection and AI governance environment is facing growing pressure as compliance work around the Digital Personal Data Protection Act (DPDPA) unfolds, court challenges continue, and regulators widen oversight into new sectors. The piece, published on 26 March, includes an editor's note stating that the IAPP is policy neutral and publishes contributed opinion pieces to reflect a broad spectrum of views.

The article says several legal and regulatory developments are unfolding simultaneously. One example cited is a public interest litigation filed before India’s Supreme Court by journalist Geeta Seshu and the Software Freedom Law Centre, India, challenging parts of the DPDPA on constitutional and rights-related grounds. According to the piece, the Supreme Court later issued a notice to the Government of India on 12 March.

Concerns outlined in the article include the absence of journalistic exemptions, the fact that penalties are paid to the government rather than as compensation to data breach victims, broad state powers to exempt departments from the law, and questions about the independence of the Data Protection Board given the government's control over appointments. The article notes that similar petitions had already been filed, but says this was the first time the court issued notice to the government.

The article also turns to proceedings before the Kerala High Court involving privacy concerns about biometric and personal data collected through Digi Yatra, an airport passenger-processing system operated in India by a not-for-profit foundation. According to the piece, a public interest litigation filed by C R Neelakandan asked for a temporary restraint on the sharing of collected personal data and its commercial use without proper authorisation.

The article says the Kerala High Court issued notice to the Digi Yatra Foundation and sought clarification from the government on whether the Data Protection Board had been established to oversee such matters.

Alongside the litigation, the opinion piece points to government efforts to show legal preparedness for AI-related risks. It says Electronics and Information Technology Minister Ashwini Vaishnaw outlined existing safeguards during the ongoing parliamentary session, referring to the Information Technology Act, the DPDPA, and subordinate rules, along with published guidelines on AI governance, toy safety, harmful content, awareness-building measures, and cyber safety.

Cybersecurity developments also feature in the article. It says the Indian Computer Emergency Response Team, working with the SatCom Industry Association, issued guidelines on 26 February for the space sector, including satellite communications. According to the piece, the framework is intended to strengthen resilience in India's space ecosystem.

It applies to covered entities, including government agencies, satellite service providers, ground station operators, terminal equipment vendors, and private space entities. Incident reporting within six hours and annual audits are among the measures described.

A further section of the article draws on Thales’ 2026 Data Threat Report. The piece says 64% of surveyed organisations in India identified AI-driven transformation as their biggest security risk, while 55% said they had to deal with reputational damage caused by AI-generated misinformation. It also says 65% reported deepfake-driven attacks, 35% had a complete view of their data, and 36% could fully classify their data.

VTC expands AI training across all programmes in Hong Kong

The Vocational Training Council (VTC) has introduced an ‘AI for All’ strategy to integrate AI training across its programmes, aiming to support Hong Kong’s ambition to strengthen its innovation and technology sector.

The initiative aligns with broader policy priorities, including the ‘AI Plus’ approach outlined in national planning frameworks and Hong Kong’s budget, which emphasise integrating AI across industries while addressing a shortage of skilled professionals.

Under the ‘AI+Professional’ model, all Higher Diploma students are required to study IT modules covering prompt engineering, generative AI, and AI ethics and security, with training adapted to disciplines such as engineering, design, and information technology.

The council has also partnered with technology companies through memorandums of understanding. It provides ongoing training for employees in government and industry, while offering internal AI tools and a ‘Virtual Tutor’ platform to support teaching and learning.

EU demands stronger age verification from adult websites

The European Commission has preliminarily found that several major adult platforms, including Pornhub, Stripchat, XNXX, and XVideos, may be in breach of the Digital Services Act for failing to adequately protect minors from accessing harmful content.

The findings highlight concerns that children can easily access such platforms in the absence of robust, effective safeguards.

The Commission’s investigation indicates that the platforms’ risk assessments were insufficient. In several cases, companies focused on reputational or business risks instead of fully addressing societal harms to minors.

Authorities also raised concerns that some platforms did not adequately consider input from civil society organisations specialising in children’s rights and age-assurance technologies, undermining the reliability of their evaluations.

Regarding risk mitigation, the Commission found that existing measures are ineffective. Simple self-declaration systems, in which users confirm they are over 18, were deemed inadequate, while additional features such as warnings, labels, and blurred content failed to prevent minors from reaching harmful material.

The Commission considers that stronger, privacy-preserving age-verification solutions are necessary to ensure meaningful protection of children’s rights and well-being online.

The companies involved now have the opportunity to respond and propose corrective measures, while consultations with the European Board for Digital Services continue.

If the preliminary findings are confirmed, the Commission may impose fines of up to 6 percent of global annual turnover, alongside periodic penalties to enforce compliance.

The case forms part of broader efforts to enforce the Digital Services Act and strengthen online safety across the EU, rather than relying on voluntary measures by platforms.

Europol warns legal gaps could weaken child abuse detection online

Efforts to combat online child sexual exploitation could be severely weakened, Europol has warned, if legal frameworks supporting detection and reporting are disrupted.

Executive Director Catherine De Bolle highlighted growing concerns over the increasing volume of harmful content online and stressed that protecting children remains a top priority for European law enforcement.

Authorities rely heavily on reports submitted by online service providers, which play a central role in identifying victims and supporting investigations, rather than relying solely on traditional policing methods.

Europol processed around 1.1 million CyberTips in a single year, many originating from the US National Center for Missing & Exploited Children and shared across 24 European countries.

These CyberTips include critical evidence such as images, videos, and other digital data used to track criminal activity.

Europol cautioned that removing the legal basis allowing voluntary detection by platforms could significantly reduce the number of reports submitted to authorities. A decline in CyberTips would limit investigative leads, making it harder to identify victims and disrupt online criminal networks.

Such a development could undermine broader security efforts and weaken, rather than strengthen, the protection of minors across the EU.

The agency emphasised that maintaining online service providers’ ability to detect and report suspected abuse is essential to effective law enforcement.

Ensuring continued cooperation between platforms and authorities remains a key factor in safeguarding children and addressing the growing threat of online exploitation.

Meta unveils TRIBE v2 brain modelling AI

TRIBE v2 is a next-generation AI model introduced by Meta, designed to simulate how the human brain responds to complex stimuli such as images, sounds and language. The system functions as a digital twin of neural activity, enabling high-speed and high-resolution predictions of brain responses.

Built on data from over 700 volunteers, TRIBE v2 analyses fMRI recordings to predict brain responses to media such as videos, podcasts, and text. The model improves significantly on previous approaches, offering higher accuracy and the ability to generalise across new subjects, tasks, and languages.

Meta says the system could enable brain studies without human participants in every experiment, potentially accelerating research into neurological conditions. The approach may also support future AI development by incorporating principles derived from neuroscience.

Alongside the launch, Meta has released a research paper, model code, and interactive demo under a non-commercial licence to encourage wider exploration and collaboration in neuroscience and AI research.

HP reveals advanced AI devices and workflow tools at Imagine 2026

HP has announced a broad set of AI-focused products and workplace tools at HP Imagine 2026, presenting the update as part of a wider effort to simplify work across PCs, collaboration devices, security systems, and workflow platforms.

In a press release published on 24 March, HP said the new portfolio includes AI PCs, collaboration tools, workstations, printers, and software intended for hybrid work and on-device AI use.

HP says the update includes a new intelligence layer called HP IQ, which it describes as a system designed to orchestrate work across AI PCs, workplace devices, and meeting spaces through local AI and proximity-based connectivity.

The company also announced new EliteBook devices, workstation updates, and workflow automation changes through its Workforce Experience Platform and Build Workspace capabilities.

Several sections of the release focus on on-device AI. According to the company, HP IQ will debut on the next generation of EliteBook X G2 AI PCs and will support features such as prompt-based assistance, document analysis, note organisation, and meeting support.

The release also says NearSense is intended to help devices discover one another, connect, and collaborate, including through file sharing and one-click joining of conference room meetings.

Security is another central theme in the release. HP says it has introduced what it describes as the world’s first hardware solution to stop physical TPM bypass attacks, using a cryptographically bound link between the TPM and CPU.

The company also said it is expanding capabilities in HP Wolf Security and introducing HP Wolf Pro Security Next Gen Antivirus, as well as physical intrusion detection designed to protect memory if a device chassis is opened.

The announcement also includes new printers and document tools. HP says the LaserJet Pro 4000 and 4100 series, and the LaserJet Enterprise 5000 and 6000 series, are intended to support AI-powered document processing and quantum-resistant security. The release also highlights scanning shortcuts, editable OCR, reduced management time, and a design intended to improve serviceability.

For higher-performance users, the company says it is launching a new generation of Z workstations and mobile workstations. The release refers to systems such as the Z8 Fury, Max Side Panel for Z8 Fury and Z4 workstations, and updated mobile workstation models. Advanced AI development, visual effects, and simulation workloads are among the uses cited in the announcement.

Beyond enterprise work, the release also extends the same AI and device strategy into gaming. New HyperX and OMEN products are part of the announcement, including desktops, a modular gaming ecosystem, and expanded AI game support through OMEN Gaming Hub and OMEN AI.

Oracle expands Oracle AI Database with new agentic AI tools

Oracle has announced new agentic AI capabilities for Oracle AI Database, presenting them as tools for building, deploying, and scaling production-grade AI applications that work with business data across operational databases and analytic lakehouses. The company says the new features are available across multicloud and on-premises environments.

According to Oracle, the announcement centres on bringing AI and data together within the database so that agents can securely access real-time enterprise data where it resides. Oracle also says customers can choose AI models, agentic frameworks, open data formats, and deployment platforms, while Oracle Exadata users can use Exadata Powered AI Search for high-volume, multi-step agentic workloads.

Oracle’s new product set includes Oracle Autonomous AI Vector Database, which the company says is intended to simplify vector-based application development while preserving the broader database features of Oracle AI Database. Oracle says the service is available in limited capacity through the Oracle Cloud free tier or a low-cost developer tier, with one-click upgrade to full capabilities as requirements expand.

The company also introduced the Oracle AI Database Private Agent Factory, described as a no-code agent builder that can run in public clouds or on-premises without requiring customers to share data with third parties. Oracle says the service includes pre-built agents such as a Database Knowledge Agent, a Structured Data Analysis Agent, and a Deep Data Research Agent. Oracle Unified Memory Core was also announced as a way to store context for AI agents across vector, JSON, graph, relational, text, spatial, and columnar data, all in a single engine with consistent transactions and security.

A separate part of the announcement focuses on what Oracle describes as AI data risk reduction. Oracle says Deep Data Security applies end-user-specific access rules within the database, so that each user or AI agent acting on a user’s behalf can only see the data the user is allowed to access.

Besides the Oracle AI Database, Oracle also announced Private AI Services Container for customers that want to run private model instances without sharing data with third-party AI providers, including in air-gapped environments. Trusted Answer Search was presented as a method for providing answers based on previously created reports rather than relying directly on large language model responses.

Open standards and interoperability form another part of Oracle’s pitch. Oracle says Vectors on Ice adds native support for vector data stored in Apache Iceberg tables, enabling unified search across database and data-lake content. Oracle also announced an Autonomous AI Database MCP Server to allow external AI agents and MCP clients to access Autonomous AI Database capabilities without custom integration code or manual security administration.

Juan Loaiza, executive vice president of Oracle Database Technologies, said: ‘The next wave of enterprise AI will be defined by customers’ ability to use AI in business-critical production systems to safely deliver breakthrough innovations, insights, and productivity.’ He added: ‘With Oracle AI Database, customers don’t just store data, they activate it for AI. By architecting AI and data together, we help customers quickly build and manage agentic AI applications that can securely query and act on real enterprise data with stock exchange-level robustness in every leading cloud and on-premises.’

Steven Dickens, CEO and principal analyst at HyperFRAME Research, said: ‘In the era of agentic AI, a unified memory core is essential for agents to maintain context across diverse data types, such as vector, JSON, graph, columnar, spatial, text, and relational, without the latency or staleness of external syncing.’

Dickens added: ‘Only Oracle AI Database delivers this in a single, mission-critical engine with concurrent transactional and analytical processing, high availability, and ironclad security, enabling real-time reasoning over live business data. Organisations without this foundation will struggle with fragmented, unreliable agents, while those leveraging Oracle gain a decisive edge in scalable AI deployment.’

Microsoft launches nonprofit AI training and fellowship initiative

Microsoft has announced a new programme called Microsoft Elevate for Changemakers, aimed at helping nonprofit leaders build AI skills, credentials, and organisational capacity. In a post published on 25 March, Microsoft said the initiative was introduced alongside its Global Nonprofit Leaders Summit, which it says brought together more than 1,500 nonprofit leaders from around the world.

The company says the programme is designed to help nonprofit organisations adopt AI in ways that reflect their missions and the communities they serve. According to the company, the new initiative includes an AI for Nonprofits credential developed with LinkedIn and NetHope, live and on-demand training on topics such as Copilot, change management, and responsible AI governance, and a Changemaker Fellowship for nonprofit professionals working on AI-related projects.

The AI for Nonprofits credential is built on work across the nonprofit sector, with participants receiving a LinkedIn professional certificate. Microsoft also says the fellowship will provide resources, investment, and expert guidance, while connecting participants to a global cohort and a wider network of nonprofit AI leaders. According to the post, support for the fellowship includes Microsoft and launch partners EY and Caribou.

Microsoft places the announcement within a broader argument about how AI is affecting labour, communities, and service delivery. The company says nonprofits are often closely connected to people seeking new skills, employment pathways, and community support, and that such organisations are well-positioned to help shape AI adoption at the local level. Microsoft also says the programme forms part of its wider Microsoft Elevate commitment and refers to plans to deliver more than $5 billion in discounts, donations, and grants over the next year to support nonprofit organisations and education systems.

Several examples in the post illustrate how Microsoft says AI is already being applied in nonprofit work. Microsoft says ARcare has used AI to reduce administrative work and estimates it has eliminated six to eight hours of manual tasks per day. Opportunity International is cited as using AI to scale a local-language chatbot for farmers, while Head Start Homes is described as using AI to increase organisational bandwidth and attract new funding. The company also points to de Alliantie, saying AI has helped the organisation improve efficiency in housing support operations while maintaining a human-centred approach.
