The International Association of Privacy Professionals (IAPP) has updated its US State Breach Notification Chart, a resource that summarises state breach notification laws across the United States. In an analysis published on 26 March, the IAPP says the revised chart highlights both nationwide coverage and continuing variation in how states define personal information, apply harm thresholds, and trigger reporting duties.
According to the IAPP, all 50 states, the District of Columbia, Guam, Puerto Rico, and the US Virgin Islands now have breach notification laws. California enacted the first state law in 2002, which took effect in 2003, while Alabama was the last state to adopt such a law in 2018. The IAPP says the result is a de facto nationwide framework, but one marked by significant differences across jurisdictions.
A central point in the analysis is that breach notification laws generally use a narrower definition of personal information than more recent comprehensive privacy laws. The IAPP says the original purpose of breach notification was to alert people to the risks of identity theft and financial fraud after a data breach, so laws tend to focus on identifiers such as names combined with Social Security numbers, driver’s licence details, or financial account credentials.
The article contrasts narrower statutes with broader ones. Hawaii’s law is described as among the narrowest, while Illinois and California are presented as having broader definitions that can extend to medical information, health insurance details, biometric data, genetic data, and, in California’s case, some automated licence plate recognition data.
Even so, the IAPP says many state breach laws still do not cover large categories of digital information, such as browsing history, cookie data, IP addresses, cell phone numbers, purchasing records, or complete financial transaction histories where account credentials were not compromised.
Exemptions and scope also vary. The IAPP says most breach notification laws apply broadly to businesses and often to nonprofit organisations, while privacy laws tend to contain more exclusions. The article notes that some states cover state and local government entities directly, while California has a separate breach notification law for governmental bodies. The IAPP also says its chart is focused on laws applicable to the private sector.
Encryption safe harbours appear across the state laws, according to the analysis, with some states also recognising redaction or other protections that render data unreadable or unusable. Attorney general notification requirements also differ. The IAPP says 34 state laws require notice to the state attorney general once certain thresholds are met, with thresholds ranging from 250 affected residents in North Dakota and Oregon to 1,000 in many other states, while some states, such as Connecticut and New York, require notice regardless of the number affected.
Harm thresholds are another area of divergence. The IAPP says about 30 state laws include a harm standard, meaning notice may not be required unless the breach caused, or is likely to cause, harm to affected individuals.
The article describes substantial differences in wording across states, with some referring to 'reasonable likelihood' of harm, others to 'material risk', 'substantial economic loss', or misuse of the data, while some states, including California, Georgia, Illinois, Massachusetts, Minnesota, North Dakota, and Texas, require no harm showing at all.
The practical effect, the IAPP argues, is that organisations holding data on residents of multiple states face a complex compliance problem. A data element that triggers notice in one state may not do so in another, and the article says reconciling the different harm standards is effectively impossible. The analysis notes that some organisations may decide to notify if there is doubt, while others may choose to notify only where clearly required.
The IAPP concludes that the absence of a preemptive federal breach notification law leaves entities to navigate overlapping but inconsistent state rules. Its updated chart is presented as a tool to help practitioners track those differences and build awareness of how US state breach notification laws continue to evolve.
An opinion article published by the International Association of Privacy Professionals says India's data protection and AI governance environment is facing growing pressure as compliance work around the Digital Personal Data Protection Act (DPDPA) unfolds, court challenges continue, and regulators widen oversight into new sectors. The piece, published on 26 March, carries an editor's note stating that the IAPP is policy neutral and publishes contributed opinion pieces to reflect a broad spectrum of views.
The article says several legal and regulatory developments are unfolding simultaneously. One example cited is a public interest litigation filed before India’s Supreme Court by journalist Geeta Seshu and the Software Freedom Law Centre, India, challenging parts of the DPDPA on constitutional and rights-related grounds. According to the piece, the Supreme Court later issued a notice to the Government of India on 12 March.
Concerns outlined in the article include the absence of journalistic exemptions, the lack of compensation for data breach victims since penalties are paid to the government, broad state powers to exempt departments from the law, and questions about the independence of the Data Protection Board given the government's control over appointments. The article notes that similar petitions had already been filed, but says this was the first time the court issued notice to the government.
The article also turns to proceedings before the Kerala High Court involving privacy concerns about biometric and personal data collected through Digi Yatra, an airport passenger-processing system operated by a not-for-profit foundation in India. According to the piece, a public interest litigation filed by C R Neelakandan asked for a temporary restraint on the sharing of collected personal data and its commercial use without proper authorisation.
The article says the Kerala High Court issued notice to the Digi Yatra Foundation and sought clarification from the government on whether the Data Protection Board had been established to oversee such matters.
Alongside the litigation, the opinion piece points to government efforts to show legal preparedness for AI-related risks. It says Electronics and Information Technology Minister Ashwini Vaishnaw outlined existing safeguards during the ongoing parliamentary session, referring to the Information Technology Act, the DPDPA, and subordinate rules, along with published guidelines on AI governance, toy safety, harmful content, awareness-building measures, and cyber safety.
Cybersecurity developments also feature in the article. It says the Indian Computer Emergency Response Team, working with the SatCom Industry Association, issued guidelines on 26 February for the space sector, including satellite communications. According to the piece, the framework is intended to strengthen resilience in India's space ecosystem.
The framework applies to covered entities, including government agencies, satellite service providers, ground station operators, terminal equipment vendors, and private space entities, and its measures include incident reporting within six hours and annual audits.
A further section of the article draws on Thales' 2026 Data Threat Report. The piece says 64% of surveyed organisations in India identified AI-driven transformation as their biggest security risk, while 55% said they had to deal with reputational damage caused by AI-generated misinformation. It also says 65% reported deepfake-driven attacks, while only 35% had a complete view of their data and 36% could fully classify it.
OpenAI is moving to shut down the Sora app, its consumer-facing AI video platform, according to an official X post on 24 March. The move follows months of scrutiny around AI-generated video, including concerns over deepfakes, copyright, and harmful synthetic media.
The reported shutdown comes shortly after OpenAI retired Sora 1 in the United States on 13 March 2026 and replaced it with Sora 2 as the default experience. OpenAI’s help documentation says the older version remains available only in countries where the newer one has not yet launched, while support pages for the standalone Sora app are still live. The product changes also follow the announcement of new copyright settings for the latest video generation model.
That makes the current picture more complex than a simple sunset. Public OpenAI help pages still describe tools on iOS, Android, and the web, while news reports say the company has now decided to wind down the app itself. OpenAI had also recently indicated that it plans to integrate Sora video generation into ChatGPT, which could help explain why the standalone product is being reconsidered.
Sora became one of OpenAI’s most visible consumer media products, but it also drew sustained scrutiny over deepfakes, non-consensual content, and copyrighted characters. Such concerns remained central even as OpenAI added additional controls to the platform, including new consent and traceability measures to enhance AI video safety. AP reported that pressure from advocacy groups, scholars, and entertainment-sector voices formed part of the backdrop to the shutdown decision.
For users, the immediate issue is preservation of existing content. OpenAI’s Sora 1 sunset FAQ says some legacy material may be exportable for a limited period before deletion, but the company has not yet published a detailed standalone help document explaining the full shutdown. Based on the information now available, the clearest distinction is that OpenAI first retired one legacy version in some markets and is now reportedly ending the standalone app more broadly.
Over the past few years, the way data is stored and processed across businesses, organisations, and digital systems has shifted rapidly.
Increasingly, AI itself is changing form as computation moves away from centralised cloud environments to the network edge, a shift that has come to be known as edge AI.
Edge AI refers to the deployment of machine learning models directly on local devices such as smartphones, sensors, industrial machines, and autonomous systems.
Instead of transmitting data to remote servers for processing, analysis is performed on the device itself, enabling faster responses and greater control over sensitive information.
Such a transition marks a significant departure from earlier models of AI deployment, where cloud infrastructure dominated both processing and storage.
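To make the idea concrete, here is a minimal sketch of what on-device inference can look like in Python, assuming a small pre-trained model has already been exported to ONNX and copied onto the device; the file name, input shape, and single-output model are illustrative assumptions, not a specific product's setup:

```python
# Minimal sketch of on-device (edge) inference with ONNX Runtime.
# Assumes a small pre-trained model has been deployed to the device as
# "sensor_model.onnx" (a hypothetical file); no data leaves the device.
import numpy as np
import onnxruntime as ort

# Load the model from local storage -- no cloud round trip involved.
session = ort.InferenceSession("sensor_model.onnx")
input_name = session.get_inputs()[0].name

def classify_reading(reading: np.ndarray) -> np.ndarray:
    """Run inference locally on a single sensor reading."""
    batch = reading.astype(np.float32)[np.newaxis, :]
    (scores,) = session.run(None, {input_name: batch})  # single-output model assumed
    return scores[0]

# Example: a raw reading is processed entirely on the device.
scores = classify_reading(np.random.rand(16))  # 16 input features assumed
print("local prediction:", scores.argmax())
```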
From centralised AI to edge intelligence
Traditional AI systems relied heavily on centralised architectures. Data collected from users or devices would be transmitted to large-scale data centres, where powerful servers would perform computations and generate outputs.
Such a model offered efficiency, scalability, and easier security management, as protection efforts could be concentrated within controlled environments.
Centralisation allowed organisations to enforce uniform security policies, deploy updates rapidly, and monitor threats from a single vantage point. However, reliance on cloud infrastructure also introduced latency, bandwidth constraints, and increased exposure of sensitive data during transmission.
Edge AI introduces a fundamentally different paradigm. Moving computation closer to the data source reduces the reliance on continuous connectivity and enables real-time decision-making.
Such decentralisation represents not merely a technical shift but a reconfiguration of the way digital systems operate and interact with their environments.
Advantages of edge AI
Reduced latency and real-time processing
Latency is significantly reduced when computation occurs locally. Edge systems are particularly valuable in time-sensitive applications such as autonomous vehicles, healthcare monitoring, and industrial automation, where delays can have critical consequences.
Enhanced privacy and data control
Privacy improves when sensitive data remains on-device instead of being transmitted across networks. Such an approach aligns with growing concerns around data protection, regulatory compliance, and user trust.
Operational resilience
Edge systems can continue functioning even when network connectivity is limited or unavailable. In remote environments or critical infrastructure, independence from central servers ensures service continuity.
Bandwidth efficiency and cost reduction
Bandwidth consumption is decreased because only processed insights are transmitted, not raw data. Such efficiency can translate into reduced operational costs and improved system performance.
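As a rough illustration, the sketch below condenses a buffer of raw readings into one compact summary on the device, so only the summary would be uploaded; the payload format and the 3-sigma anomaly rule are illustrative assumptions, not a real telemetry API:

```python
# Sketch: instead of streaming raw samples, the device sends one summary.
# The payload format and the 3-sigma rule are illustrative assumptions.
import json
import statistics

raw_samples = [22.1, 22.3, 22.2, 29.8, 22.4, 22.3]  # e.g. a minute of sensor readings

mean = statistics.mean(raw_samples)
summary = {
    "mean": round(mean, 2),
    "max": max(raw_samples),
    "anomaly": max(raw_samples) > mean + 3 * statistics.stdev(raw_samples),
}

payload = json.dumps(summary)
print(f"raw buffer: {len(json.dumps(raw_samples))} bytes -> summary: {len(payload)} bytes")
# Only `payload` leaves the device; the raw buffer is processed and discarded locally.
```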
Personalisation and context awareness
Devices can adapt to user behaviour in real time, learning from local data without exposing sensitive information externally. In healthcare, personalised diagnostics can be performed directly on wearable devices, while in manufacturing, predictive maintenance can occur on-site.
The dark side of edge AI
However, the shift towards edge computing introduces profound cybersecurity challenges. The most significant of these is the expansion of the attack surface.
Instead of a limited number of well-protected data centres, organisations must secure vast networks of distributed devices. Each endpoint represents a potential entry point for malicious actors.
The scale and diversity of edge deployments complicate efforts to maintain consistent security standards. Security is no longer centralised but dispersed, increasing the likelihood of vulnerabilities and misconfigurations.
Let’s take a closer look at some other challenges of edge AI.
Physical vulnerabilities and device exposure
Edge devices often operate in uncontrolled environments, making physical access a major risk. Attackers may tamper with hardware, extract sensitive information, or reverse engineer AI models.
Model extraction attacks allow adversaries to replicate proprietary algorithms, undermining intellectual property and enabling further exploitation. Such risks are significantly more pronounced than in cloud systems, where physical access is tightly controlled.
Software constraints and patch management challenges
Many edge devices rely on embedded systems with limited computational resources. Such constraints make it difficult to implement robust security measures, including advanced encryption and intrusion detection.
Patch management becomes increasingly complex in decentralised environments. Ensuring that millions of devices receive timely updates is a significant challenge, particularly when connectivity is inconsistent or when devices operate in remote locations.
Breakdown of traditional security models
The decentralised nature of edge AI undermines conventional perimeter-based security frameworks. Without a clearly defined boundary, traditional approaches to network defence lose effectiveness.
Each device must be treated as an independent security domain, requiring authentication, authorisation, and continuous monitoring. Identity management becomes more complex as the number of devices grows, increasing the risk of misconfiguration and unauthorised access.
Data integrity and adversarial threats
As we mentioned before, edge devices rely heavily on local data inputs to make decisions. As a result, manipulated inputs can lead to compromised outcomes. Adversarial attacks, in which inputs are deliberately altered to deceive machine learning models, represent a significant threat.
In safety-critical systems, such manipulation can lead to severe consequences. Altered sensor data in industrial environments may disrupt operations, while compromised vision systems in autonomous vehicles may produce dangerous behaviour.
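A toy example makes the mechanics clearer. The sketch below mounts an FGSM-style attack on a deliberately simple linear classifier, where the gradient of the score with respect to the input is just the weight vector; all numbers are made up, but the same principle drives attacks on deployed edge models:

```python
# Toy FGSM-style attack on a linear classifier, illustrating how small,
# targeted input perturbations can flip a model's decision. All values
# are made up for illustration; no real system is modelled here.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1        # classifier parameters (assumed known)
x = rng.normal(size=8)                # a legitimate sensor input

def predict(v):
    return int(w @ v + b > 0)         # 1 = "normal", 0 = "fault", say

# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping against sign(w) is the fastest way across the boundary.
score = w @ x + b
eps = 1.1 * abs(score) / np.abs(w).sum()       # just enough to flip the sign
x_adv = x - np.sign(score) * eps * np.sign(w)

print("clean prediction:     ", predict(x))      # original class
print("perturbed prediction: ", predict(x_adv))  # flipped class
print("per-feature change:   ", eps)             # uniformly small shift
```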
Supply chain risks in edge AI
Edge AI systems depend on a combination of hardware, software, and pre-trained models sourced from multiple vendors. Each component introduces potential vulnerabilities.
Attackers may compromise supply chains by inserting backdoors during manufacturing, distributing malicious updates, or exploiting third-party software dependencies. The global nature of technology supply chains complicates efforts to ensure trust and accountability.
Energy constraints and security trade-offs
Edge devices are often designed with efficiency in mind, prioritising performance and low power consumption. Security mechanisms such as encryption and continuous monitoring require computational resources that may be limited.
As a result, security features may be simplified or omitted, increasing exposure to cyber threats. Balancing efficiency with robust protection remains a persistent challenge.
Cyber-physical risks and real-world impact
The integration of edge AI into cyber-physical systems elevates the consequences of security breaches. Digital manipulation can directly influence physical outcomes, affecting safety and infrastructure.
Compromised healthcare devices may produce incorrect diagnoses, while disrupted transportation systems may lead to accidents. In energy networks, attacks could impact entire regions, highlighting the broader societal implications of edge AI vulnerabilities.
Regulatory and governance challenges
Existing regulatory frameworks have been largely designed for centralised systems and do not fully address the complexities of decentralised architectures. Questions regarding liability, accountability, and enforcement remain unresolved.
Organisations may struggle to implement effective security practices without clear standards. Policymakers face the challenge of developing regulations that reflect the distributed nature of edge AI systems.
Towards a secure edge AI ecosystem
Addressing all these challenges requires a multi-layered and adaptive approach that reflects the complexity of edge AI environments.
Hardware-level protections, such as secure enclaves and trusted execution environments, play a critical role in safeguarding sensitive operations from physical tampering and low-level attacks.
Encryption and secure boot processes further strengthen device integrity, ensuring that both data and models remain protected and that unauthorised modifications are prevented from the outset.
At the software level, continuous monitoring and anomaly detection are essential for identifying threats in real time, particularly in distributed systems where central oversight is limited.
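As a flavour of what lightweight on-device monitoring can look like, here is a rolling z-score detector that flags readings far outside the recent window; the window size and threshold are illustrative choices, not recommendations:

```python
# Sketch of lightweight on-device anomaly detection: a rolling z-score
# over recent readings, cheap enough for constrained edge hardware.
# Window size and threshold are illustrative assumptions.
from collections import deque
import statistics

class RollingZScore:
    def __init__(self, window=50, threshold=4.0):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous vs. the recent window."""
        anomalous = False
        if len(self.buf) >= 10:                      # need some history first
            mean = statistics.fmean(self.buf)
            stdev = statistics.pstdev(self.buf) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.buf.append(value)
        return anomalous

detector = RollingZScore()
for reading in [1.0, 1.1, 0.9, 1.0] * 5 + [9.5]:
    if detector.observe(reading):
        print("alert: anomalous reading", reading)
```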
Secure update mechanisms must also be prioritised, ensuring that patches and security improvements can be deployed efficiently and reliably across large networks of devices, even in conditions of intermittent connectivity.
Without such mechanisms, vulnerabilities can persist and spread across the ecosystem.
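One common building block is to have the device verify a vendor signature before installing anything. The sketch below uses Ed25519 from the third-party `cryptography` package; key provisioning and the update format are heavily simplified for illustration:

```python
# Sketch of a signed-update check an edge device might run before
# applying a firmware blob. Uses Ed25519 via the `cryptography` package;
# key handling and the update format are simplified illustrations.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side (normally offline): sign the update image.
vendor_key = Ed25519PrivateKey.generate()
firmware = b"firmware-v2.1.0-image-bytes"
signature = vendor_key.sign(firmware)

# Device side: the public key is provisioned at manufacture time.
device_trusted_key = vendor_key.public_key()

def apply_update(blob: bytes, sig: bytes) -> bool:
    """Install the update only if the vendor signature verifies."""
    try:
        device_trusted_key.verify(sig, blob)
    except InvalidSignature:
        return False          # reject tampered or unsigned images
    # ... write blob to the inactive partition, mark for reboot ...
    return True

print("valid image accepted:   ", apply_update(firmware, signature))
print("tampered image rejected:", not apply_update(firmware + b"x", signature))
```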
Rather than relying entirely on decentralised or centralised models, organisations are distributing workloads strategically, keeping latency-sensitive and privacy-critical processes on the edge while maintaining centralised oversight, analytics, and security coordination in the cloud.
Such an approach allows organisations to balance performance and control, while enabling more effective threat detection and response through aggregated intelligence.
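In code, such a hybrid split often reduces to a routing decision: act locally and immediately, and ship only compact records to the cloud on the next sync. The sketch below is illustrative; the threshold, record format, and queue are assumptions rather than a real framework:

```python
# Sketch of a hybrid edge/cloud split: latency-critical decisions happen
# locally, while only compact telemetry records are queued for the cloud.
# The threshold, record format, and queue are illustrative assumptions.
import queue
import time

cloud_queue = queue.Queue()  # drained by a background uploader on each sync

def on_sensor_event(value: float) -> str:
    # Edge path: decide immediately, with no network dependency.
    decision = "shutdown" if value > 90.0 else "ok"

    # Cloud path: enqueue a compact record for batched upload later,
    # feeding centralised monitoring and cross-device threat detection.
    cloud_queue.put({"t": time.time(), "v": value, "d": decision})
    return decision

print(on_sensor_event(42.0))   # handled on-device immediately
print(on_sensor_event(97.5))   # still local; the cloud sees it on next sync
```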
Security must also be embedded into system design from the outset, rather than treated as an additional layer to be applied after deployment. A proactive approach to risk assessment, combined with secure development practices, can significantly reduce vulnerabilities before systems are operational.
In conclusion, we have seen how the rise of edge AI represents a pivotal shift in both AI and cybersecurity. Decentralisation enables faster, more private, and more resilient systems, yet it also creates a fragmented and dynamic attack surface.
The advantages we have outlined are compelling, but they also introduce additional layers of complexity and risk. Addressing these challenges requires a comprehensive approach that combines technological innovation, regulatory development, and organisational awareness.
Only through such coordinated efforts can the benefits of edge AI be realised while ensuring that security, trust, and safety remain intact in an increasingly decentralised digital landscape.
French prosecutors have escalated concerns about deepfakes linked to Elon Musk’s platform X, alerting US authorities to suspicions that manipulated content may have been used to influence the company’s valuation.
According to the Paris prosecutor’s office, the controversy surrounding sexually explicit deepfakes generated by Grok, X’s AI tool, may have been deliberately amplified to artificially boost the value of X and its associated AI entity ahead of a planned stock market listing in June 2026.
Authorities in France confirmed they had contacted the US Department of Justice and legal representatives at the Securities and Exchange Commission to share findings related to the deepfakes investigation and potential financial implications.
The case builds on an ongoing French probe into X, which initially focused on alleged algorithmic interference in domestic politics. Investigations have since expanded to include the spread of Holocaust denial content and the dissemination of sexualised deepfakes through Grok.
French regulators have taken additional steps, including summoning Musk for a voluntary interview and conducting searches at X’s local offices, actions he has described as politically motivated. Parallel investigations have also been launched in the UK and across the European Union into the use of AI tools to generate harmful deepfakes involving women and minors.
A proposal to restrict minors’ online activity is gaining momentum in Ecuador, where lawmakers are considering a social media ban for children under 15 as part of a broader response to rising organised crime.
Under discussion in the National Assembly, the initiative introduced by Assembly member Katherine Pacheco Machuca would amend the Code of Childhood and Adolescence to block access to platforms enabling public interaction, content sharing, and messaging. The proposal defines social networks broadly, covering services that allow users to create accounts, connect with others, and exchange content.
Unlike similar debates elsewhere, the justification for the social media ban is rooted less in mental health or privacy concerns and more in security. Ecuador has experienced a sharp deterioration in public safety, with rising homicide rates, expanding criminal networks, and increasing pressure on state institutions.
Recent findings from Ecuador’s Organised Crime Observatory indicate that around 27% of minors approached by criminal groups report initial contact through social media platforms. Surveys conducted by ChildFund Ecuador further suggest that vulnerable adolescents are increasingly exposed to recruitment tactics that combine economic incentives with normalised portrayals of violence.
In that context, the proposed social media ban is framed as a preventative measure against criminal recruitment rather than solely a child protection tool. The initiative forms part of a wider regulatory shift, including new cybersecurity legislation and draft laws targeting recruitment practices conducted through digital channels.
OpenAI will begin rolling out ChatGPT ads to Free and Go users in the United States in the coming weeks, marking a significant shift in how the company monetises its flagship AI product.
The ads will be shown to logged-in adult users on lower-tier plans, while paid subscriptions, including Plus, Pro, Business, Enterprise, and Education, will remain ad-free. The rollout in the US positions ChatGPT ads as a tiered feature, separating premium experiences from ad-supported access.
To support the initiative, OpenAI has integrated advertising technology firm Criteo into its pilot programme, enabling ad buying and more targeted placements. Advertisers are reportedly being offered entry commitments ranging from $50,000 to $100,000, reflecting early efforts to build a structured advertising marketplace.
The company has also launched a dedicated advertiser page that presents ChatGPT as a platform for reaching users during active research and decision-making. ChatGPT ads are being framed as part of conversational discovery, with OpenAI advising brands to provide multiple variations of creative content to improve performance.
The rollout comes as OpenAI seeks to diversify revenue amid rising compute costs and intensifying competition. Alongside subscriptions and API services, ChatGPT ads are expected to play an increasingly important role in supporting the platform’s long-term business model.
Microsoft is scaling back the presence of Copilot across Windows 11, signalling a shift toward a more selective and user-focused approach to AI integration.
Microsoft said it will reduce Copilot features in several built-in applications, including Photos, Widgets, Notepad and the Snipping Tool. The company described the move as part of a broader effort to integrate AI only where it delivers clear value to users.
The decision follows growing concerns about ‘AI bloat’ and user trust, with recent research indicating rising scepticism around AI. Microsoft is responding by prioritising more practical and reliable use cases rather than widespread deployment.
The change also aligns with earlier adjustments to Copilot plans, including shelving some system-level integrations and delaying features such as Windows Recall due to privacy and security concerns. Even after launch, vulnerabilities in Recall have continued to surface, reinforcing the need for caution.
Beyond AI, Microsoft is introducing several usability improvements to Windows 11. These include allowing users to reposition the taskbar, enhancing File Explorer performance, refining Widgets, and giving users greater control over system updates.
The update signals a broader recalibration, as Microsoft balances innovation with user expectations, aiming to deliver AI features that are both useful and trusted within everyday computing environments.
A new wave of AI development is increasingly relying on real-world human behaviour, with DoorDash moving to tap its gig workforce to generate training data for robotics systems.
DoorDash has launched a standalone app called Tasks, allowing couriers to earn money by recording themselves performing everyday activities such as folding clothes, washing dishes or making a bed. The collected data is used to train AI and robotics models to understand physical environments and human interactions better.
The move reflects a broader shift in AI training, where companies are seeking physical, real-world data rather than relying solely on text and images. Such data is essential for building systems capable of performing tasks in dynamic environments, including humanoid robots and autonomous machines.
Other companies are pursuing similar strategies. Uber and Instawork have tested gig-based data-collection models, while robotics startups are using wearable devices, such as gloves and head-mounted cameras, to capture detailed motion data for training.
The Tasks app is currently being rolled out as a pilot, with DoorDash planning to expand the types of available assignments over time. Some tasks may also be integrated into the main Dasher app, including activities that support navigation or assist autonomous delivery systems.
As competition intensifies, access to large-scale physical data is becoming a critical advantage. DoorDash’s approach highlights how gig-economy platforms are increasingly integrated into the development of next-generation AI systems.
Alibaba has set an ambitious target of $100 billion in annual cloud and AI revenue within five years, as it seeks to counter slowing growth in its once-dominant e-commerce business.
The push follows a sharp deterioration in financial performance, with quarterly earnings plunging and revenue growth missing expectations. The results underscore growing urgency within the company to extract meaningful returns from its AI investments, which have so far required heavy capital outlays.
Central to the strategy is a shift toward monetisation, with the rollout of agentic AI services such as Wukong and price increases of up to 34% across cloud and storage products. Alibaba is positioning its AI and cloud division as its primary growth engine, aiming to replicate the momentum seen in recent quarters, when AI-related revenues expanded by triple digits.
However, competitive pressures are intensifying. Domestic rivals including Tencent are leveraging vast ecosystems such as WeChat to gain an advantage in agentic AI, while a new wave of players like DeepSeek, MiniMax and Zhipu are offering low-cost, open-source models that compress margins across the industry.
At the same time, Alibaba faces structural challenges beyond AI. Core businesses such as e-commerce and food delivery remain under pressure from aggressive competition, while rising operational costs – subsidies and promotions to attract users – continue to weigh on profitability.
Leadership uncertainty and ongoing restructuring add further complexity. With major investment commitments exceeding $50 billion and increasing competition from both domestic and global players, Alibaba’s ability to execute on its AI strategy will be critical in determining whether it can sustain long-term growth and regain market confidence.