IAPP Global Summit session examines AI, privacy, and the courts with US federal judges

US District Court for the District of Columbia Chief Judge James Boasberg and US District Court for the District of Massachusetts Judge Allison Burroughs discussed AI, privacy, and the courts during the IAPP Global Summit 2026 in Washington, D.C.

The IAPP report said Burroughs pointed to the gap between older legal protections and newer technologies, including debates over how surveillance rules apply to cell-tower data. Burroughs said existing laws and constitutional protections are ‘not keeping up, never have kept up and never will keep up’ with the speed of innovation.

Burroughs commented: ‘The gap is getting bigger for two reasons. One is that there’s so much more data stored electronically that if you even search for someone’s laptop, you’re going to get more data now than you used to get, and the other one is that there is so much more technology, there are just so many ways of gaining access to data.’

Another part of the IAPP report stated that Boasberg referred to a case in which lawyers submitted filings containing hallucinated information generated by AI. According to the report, he sanctioned that side by requiring it to pay the other side's attorney's fees after discovering that AI had been used in the briefs.

Boasberg noted at the IAPP session: ‘I’m sure lawyers using AI is happening a lot more on the state level, and some judges are referring lawyers to state bars (for possible discipline), but there have been federal judges whose opinions included hallucinatory (citations) and that was obviously embarrassing for them.’ He added: ‘The question is how can it help without compromising privacy issues, sealed cases; there’s just a whole lot that we have to figure out, but I think judges are trying to learn how we can use this constructively.’

Burroughs also remarked at the IAPP event that judges want disclosure when lawyers use AI in filings. She said: ‘We want lawyers to tell us when they’ve used AI. They can use it, but they have to disclose it.’ She added: ‘They can use AI, they can’t use AI, they must disclose when they’re using it, they have to certify that they do citation checks to make sure they don’t have hallucinatory citations — it’s hard to think of what these rules would be going forward today.’

The discussion at the summit, as reported by IAPP, focused on how AI is affecting legal filings, surveillance questions, and court practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Student AI rights framework unveiled

A newly released ‘Student AI Bill of Rights’ in the US outlines a proposed framework to protect learners as AI tools become increasingly widespread in education. The initiative aims to establish clear standards for fairness, transparency and accountability.

The document highlights the need for students to be informed when AI systems are used in teaching, assessment or administration. It also stresses that students should retain control over their personal data and academic work.

Another central principle is accountability, with students given the right to question and appeal decisions made or influenced by AI systems. The framework also calls for safeguards to prevent bias and ensure equal access to educational opportunities.

While not legally binding, the proposal is designed to guide higher education institutions in developing responsible AI policies. It reflects growing efforts to define ethical standards for AI use in education in the US.


EU lapse in child safety rules raises concerns

The expiry of the EU ePrivacy derogation, which allowed platforms to use detection technologies to identify child sexual abuse material online, has raised concerns over weakened child safeguards. The lapse is seen as creating legal uncertainty for platforms that rely on established detection tools to prevent ongoing harm.

For years, technology companies have voluntarily used hash-matching to detect and remove CSAM, a widely recognised tool for disrupting abuse and protecting victims.

Google, alongside nearly 250 child rights organisations, is calling on the EU institutions to urgently finalise a regulatory framework, warning that reduced detection capacity could impact child safety globally.

The EU institutions face criticism for failing to maintain an interim agreement, with stakeholders saying the lack of continuity undermines child online safety efforts.

Meta, Microsoft, and Snap have reaffirmed their commitment to continue voluntary detection and reporting measures while respecting user privacy. The companies also urge the EU institutions to finalise an urgent regulatory framework for consistent and effective child protection standards.

The absence of a clear framework has been described as creating instability for responsible platforms operating across Europe. Fragmented rules and legal uncertainty can slow detection and reporting systems, weakening coordinated protection efforts across platforms and borders.


Advocates push for transparency rules in student AI systems

Consumer protection advocates have introduced a Student AI Bill of Rights, calling on higher education institutions to formalise safeguards as AI becomes increasingly embedded in academic systems.

The proposal, launched by the National Student Legal Defense Network under its SHAPE AI programme, highlights the growing use of AI across admissions, classroom instruction, and student support services.

The initiative argues that students must not be reduced to data points or treated as subjects for experimental technologies. It warns that while these tools may enable personalised learning, they also introduce risks linked to privacy, bias, and automated decision-making.

The framework sets out five core principles, including transparency in AI use, human oversight for high-stakes decisions, protection of student data and intellectual property, and safeguards against algorithmic bias. It also calls for equitable access to AI tools and education on their use.

Advocates are urging universities to adopt the principles to ensure accountability as AI becomes more deeply integrated into academic environments.

The development reflects a broader shift in higher education, where clear standards are seen as key to building trust, ensuring consistency, and enabling responsible AI integration in academic decision-making.


EU interim ePrivacy derogation for voluntary CSAM detection expires

The EU’s interim ePrivacy derogation, which allowed certain communications services to voluntarily detect child sexual abuse online, expired after 3 April 2026, bringing to an end the temporary legal basis that had permitted some providers to scan private communications for child sexual abuse material under limited conditions.

The exemption applied to number-independent interpersonal communications services such as messaging, webmail, and internet telephony platforms, allowing them to use specific technologies to detect, report, and remove child sexual abuse material in private communications.

Under the temporary framework, providers were also required to make information from reports submitted to authorities and the European Commission available in a structured, machine-readable format.

On 26 March 2026, the European Parliament said the derogation would not be extended after negotiations with the Council of the European Union failed to produce an agreement. Parliament had supported a further extension on 11 March, backing a shorter prolongation until August 2027 and a narrower scope than the European Commission had proposed, but no final deal was reached before the deadline.

The expiry leaves the EU without an updated interim arrangement, while negotiations on a permanent legal framework for addressing online child sexual abuse continue. In practice, that means the bloc still has no settled long-term answer to one of its most difficult digital policy questions: how to reconcile child protection measures with privacy and confidentiality rules governing private communications.

Why does it matter?

Because the lapse removes the temporary EU legal basis that had allowed some messaging and other communications services to voluntarily use detection technologies for online child sexual abuse under a limited exemption from ePrivacy rules. That creates immediate legal and operational uncertainty for providers that had relied on the framework, while also reopening a wider policy conflict the EU has still not resolved: how to support child safety online without undermining privacy, confidentiality of communications, and data protection safeguards in the absence of a permanent legislative solution.


OHCHR seeks inputs on protecting human rights defenders in the digital age

The Office of the UN High Commissioner for Human Rights has issued a call for inputs to support a report on how new and emerging technologies are affecting human rights defenders, including women human rights defenders, in the digital age.

Issued under Human Rights Council resolution 58/23, the call invited submissions by 31 March 2026 and forms part of a wider effort to examine how digital technologies are reshaping the conditions under which defenders work, communicate, and stay safe.

According to the OHCHR, the report will look at how digital and emerging technologies affect the work, privacy, communications, and security of human rights defenders. The call notes that digital tools have transformed both how defenders operate and the threats they face, with consequences for their safety online and offline.

The questions set out in the call are organised into four broad areas: legislative and regulatory measures, digital communications, privacy restrictions, and corporate responses. The OHCHR specifically asks for information on online safety and cybercrime laws, internet shutdowns, platform attacks, content moderation, surveillance tools, biometric surveillance, encryption, AI-related risks, and how companies assess and respond to harms affecting human rights defenders on their services.

The OHCHR invited member states, civil society, industry, and other stakeholders to submit written inputs in English, French, or Spanish. Those submissions will inform online consultations in April and the preparation of a report to the Human Rights Council under resolution 58/23.

Why does it matter?

Because the call treats the digital environment facing human rights defenders as a governance issue in its own right, rather than only as a technical or security concern. It brings together surveillance, platform accountability, encryption, AI, online harassment, and internet shutdowns under a single human rights framework, while signalling that the OHCHR wants evidence not only on state conduct, but also on how private companies shape civic space in the digital age.


Canada reviews Privacy Act to modernise data protection and digital governance

The Government of Canada has launched a formal review of the Privacy Act, opening a broader effort to modernise how the federal public sector governs personal data in an increasingly digital administrative environment.

Led by the Treasury Board of Canada Secretariat and announced by Shafqat Ali, President of the Treasury Board, the process will reassess how more than 250 government institutions collect, use, share, and protect personal information.

The review places particular emphasis on improving how data is managed across government programmes, with reform proposals focused on more secure information-sharing, less duplication, and greater accuracy in public administration. Canadian authorities say the aim is to introduce designated official data sources while ensuring that any reuse of personal information serves individuals directly or delivers a clear public benefit.

The process also points to more structural changes, including recognising privacy as a fundamental right and aligning legal definitions more closely with international standards. It is further intended to harmonise procedures for accessing personal information and to update the federal privacy framework to support a more connected digital state.

Consultations will continue through mid-2026, with feedback expected to feed into a final report in winter 2026–27. Taken together, the review suggests that Canada is rethinking how privacy protection, public-sector data sharing, and institutional accountability should operate in a modern digital governance system.


Amnesty International warns EU tech law reforms could weaken GDPR and AI Act protections

Amnesty International has warned that proposed EU reforms presented as a way to simplify digital regulation and boost competitiveness could weaken core safeguards for privacy and fundamental rights.

At the centre of the concern is the European Commission’s ‘Digital Omnibus’ initiative, which would affect major pieces of legislation, including the General Data Protection Regulation and the AI Act.

Amnesty and other civil society groups argue that the package risks reopening key protections in the EU’s digital rulebook under the banner of regulatory simplification.

Among the most controversial proposals are changes to how personal data is defined, along with exceptions that could make it easier for companies to retain or reuse data for AI systems. Critics say that such changes would weaken safeguards intended to limit excessive data collection and to preserve accountability in how personal information is processed.

Concerns also extend to the AI Act, where proposed adjustments could reduce obligations for high-risk systems. According to Amnesty, companies may be given greater discretion in how they assess and disclose risks, potentially lowering transparency and limiting external scrutiny.

Delays in implementation, the organisation argues, could also allow harmful systems to remain in use without full regulatory oversight.

The broader reform agenda may reach beyond privacy and AI rules. Future ‘fitness checks’ could also affect frameworks such as the Digital Services Act and the Digital Markets Act, raising wider concerns about whether the EU’s digital regulatory model is being softened in the name of competitiveness.

For critics, the cumulative risk is that the balance of the EU digital framework could begin to shift away from rights protection and public accountability, and towards greater corporate flexibility in areas linked to surveillance, discrimination, and market power.


Responsible AI gaps highlighted in UNESCO and Thomson Reuters Foundation report

A new global report from UNESCO and the Thomson Reuters Foundation suggests that companies are adopting AI faster than they are building the internal systems needed to govern it responsibly, exposing significant gaps in oversight, accountability, and risk management. Based on data from 3,000 companies, the report found that 44% have an AI strategy, but only 10% are publicly committed to following an AI governance framework.

The gap, according to the report, is no longer one of awareness but of implementation. Many companies now present responsible AI as a principle or ambition, yet provide far less detail on where AI is used, how risks are managed in practice, who is responsible when systems fail, or how concerns are escalated internally. Governance is often described at a conceptual level, but much less often backed by visible operational mechanisms.

Some of the sharpest weaknesses lie in areas central to public-interest AI governance. Only 11% of companies said they assess environmental impact, while just 7% evaluate the human rights impact of the AI they use. Human oversight also remains limited, with only 12% reporting a policy that ensures human supervision of AI systems.

The report also points to weak accountability and data governance structures. Only a small minority of companies could identify who is responsible for ethical risks across the AI lifecycle, while three-quarters showed no evidence of policies to verify the quality of AI training data.

Fewer than one in five reported conducting privacy or data protection impact assessments specific to AI, and only one in five had policies governing data sharing with third-party AI vendors.

Workforce preparedness appears similarly underdeveloped. While 30% of companies said they offer AI training programmes, only 12% provide structured training with comprehensive coverage. The report argues that many businesses now acknowledge the importance of skills development and workforce transition, but rarely explain how workers are supported in practice or how concerns can be raised and addressed.

Taken together, the findings suggest that the main test for responsible AI is shifting from principle to proof. The issue is no longer whether companies say the right things about ethical AI, but whether they can demonstrate that accountability, oversight, and remedies actually work when AI systems are deployed.


EPO accelerates digital patent shift with paperless system by 2027

The European Patent Office (EPO) is accelerating its transition towards a fully digital patent system, with plans to implement a paperless patent-granting process by 2027.

Discussions at the latest eSACEPO meeting highlighted steady progress and broad stakeholder support for modernising patent workflows.

Electronic filing and communication are set to become the default, with paper-based processes limited to exceptional cases. The shift aims to improve efficiency and accessibility, supported by legal adjustments and the gradual introduction of structured data formats to enhance processing accuracy.

Digital tools continue to evolve, with the MyEPO platform expanding its functionality through interface upgrades, self-service features and new capabilities such as colour drawing submissions.

The rollout of DOCX filing, alongside optional PDF backups, reflects a cautious approach designed to balance innovation with reliability.

AI is increasingly integrated into patent examination processes, supporting tasks such as search and documentation.

However, the EPO maintains a human-centric model, ensuring that decision-making authority remains with patent examiners while AI enhances productivity and consistency.
