How Trust and Safety Drive Innovation and Sustainable Growth

20 Feb 2026 14:00h - 15:00h


Session at a glance: summary, key points, and speakers overview

Summary

The panel, moderated by IAPP’s Trevor Hughes, brought together a civil-society leader (Alexandra Reeve Givens) [2], a tech-industry representative (Amanda Craig, Microsoft) [4], and two regulators (UK Information Commissioner John Edwards [7] and Singapore PDPC Deputy Commissioner Denise Wong) [8] to examine whether trust can act as an engine for AI growth [16-18]. Hughes noted a paradox: while a “deregulatory mood” is evident [28-30], every banner on the summit floor highlighted trust, safety or privacy [32-34], prompting the question of whether the market is truly stepping back from guardrails [36-38].


Alexandra Reeve Givens argued that adoption, and thus innovation, depends on multiple dimensions of trust, from cultural fit to data security [56-63], and that thoughtful regulation can supply the “fuel” for that trust [67-70]. John Edwards explained that, in the UK, existing data-protection law (the UK GDPR) already provides a de-facto regulatory regime that sets common standards such as privacy-by-design and impact assessments, thereby helping businesses demonstrate trust [84-92][94-99][102-108]. Denise Wong described Singapore’s approach of regulating clear harms (e.g., AI-generated deepfakes in elections) while leaving broader issues to sectoral rules and adaptable codes of practice [136-141][144-148][252-258].


A consensus emerged that trust and safety are essential, though additional regulation is only clearly needed for high-risk scenarios [164-169]. When asked for promising innovations, the panel highlighted provenance tools to increase transparency [316-322], the concept of agency to restore user control [328-334], privacy-enhancing technologies such as federated learning [346-351], and well-funded, independent watchdogs to represent the public interest [355-356]. The discussion concluded that ongoing coordination among regulators, industry and civil society is crucial to embed trust and safety into AI’s expanding role in society [412-419][423-425].


Key points


Major discussion points


Trust and safety are seen as the engine of AI adoption and innovation, even amid a “deregulatory” climate.


Trevor notes the paradox of a deregulatory mood while trust-and-safety messaging dominates the summit ([28-39]). Alex expands that trust is the economic driver that makes AI tools usable and that thoughtful regulation can actually fuel innovation ([56-68]). John reinforces that regulatory standards (e.g., data-protection-by-design) provide a common metric for trust ([88-95]).


Existing data-protection regimes are being repurposed for AI, but there is debate over the need for new, AI-specific rules.


The UK relies on the UK-GDPR as a de-facto AI framework and issues guidance that maps AI practices to GDPR principles ([84-107]). Singapore’s approach blends sector-specific regulation for clear harms with broader “codes of practice” that sit alongside the PDPA, showing a preference for flexible, non-prescriptive tools ([136-158][252-258]). Alex points out that even where laws exist, AI’s opacity makes enforcement difficult, highlighting the need for a transparency layer ([150-162]).


Identifying and managing AI harms requires a mix of high-risk taxonomies, sector-agnostic principles, and supply-chain-wide governance.


Denise describes emerging global harm taxonomies (e.g., the International AI Safety Report) and the difficulty of prospectively defining harms, advocating for agile mechanisms like codes of practice ([236-247][250-257]). Amanda outlines Microsoft’s “sensitive-use” categories (impact on life opportunities, psychological/physical harm, and human-rights impacts) and stresses the challenge of governing risk across the entire AI supply chain ([195-204][209-219]).


Promising innovations to strengthen trust and safety were highlighted, ranging from technical tools to institutional capacity.


Amanda cites provenance tools that track dynamic AI components as a way to increase transparency and accountability ([316-322]). John emphasizes “agency”: giving users control beyond consent, such as delete or opt-out mechanisms ([328-334]). Denise points to privacy-enhancing technologies like federated learning that can protect data when law falls short ([346-351]). Alex stresses the importance of well-staffed, independent regulatory bodies and civil-society watchdogs ([355-356]).


Cross-jurisdictional coordination among regulators is essential to avoid fragmented oversight.


John describes active collaboration with Ofcom, the Global Privacy Assembly, and other regulators to share expectations and avoid duplicated effort ([274-301]). Trevor underscores that regulator interaction becomes critical in the absence of a unified AI standard ([272-273]).


Overall purpose / goal of the discussion


The panel, convened by the IAPP, aimed to explore why trust and safety are crucial for AI-driven growth, assess the current patchwork of regulatory and industry governance, identify gaps where new safeguards may be needed, and surface practical ideas, both technical and institutional, that can help align innovation with responsible, trustworthy AI deployment.


Overall tone and its evolution


The conversation begins analytically and slightly skeptical, highlighting a “deregulatory” mood versus pervasive trust messaging ([28-39]). It quickly shifts to a collaborative, solution-focused tone as panelists share examples of existing frameworks, emerging harm taxonomies, and innovative governance tools. By the later “speed-round” and closing remarks, the tone becomes upbeat and even humorous (e.g., playful audience polls, “cheat” comments), while maintaining a constructive optimism about building trustworthy AI ecosystems.


Speakers

Alexandra Reeve Givens – CEO of the Center for Democracy and Technology; expertise in civil rights, civil liberties, AI governance, trust and safety [S1][S2]


Amanda Craig – General Manager for Responsible AI Policy at Microsoft; expertise in responsible AI governance and policy 


Trevor Hughes – Moderator representing the International Association of Privacy Professionals (IAPP); expertise in privacy, data protection, and AI trust & safety 


Denise Wong – Deputy Commissioner of the Personal Data Protection Commission (PDPC) in Singapore and Assistant Chief Executive of IMDA; expertise in data protection, privacy-enhancing technologies, and AI regulatory frameworks [S9]


John Edwards – Information Commissioner of the United Kingdom (ICO); expertise in data protection law, privacy regulation, and AI oversight [S11]


Additional speakers:


– None identified beyond the listed speakers.


Full session report: comprehensive analysis and detailed insights

The session was opened by IAPP’s Trevor Hughes, who introduced the four panelists – civil-society leader Alexandra Reeve Givens, CEO of the Center for Democracy and Technology; Microsoft’s General Manager for Responsible AI Policy, Amanda Craig; the UK Information Commissioner, John Edwards; and the Deputy Commissioner of Singapore’s PDPC, Denise Wong – and noted that the discussion would explore whether “trust can act as an engine for AI growth” [1-12][16-18]. Hughes highlighted a paradox: while a “deregulatory mood” seems to be prevailing in policy circles, virtually every banner on the summit floor referenced trust, safety or privacy, raising the question of whether the market is truly stepping back from guardrails [28-38].


Alexandra Reeve Givens argued that the long-term success of AI depends on adoption, which in turn requires trust across multiple dimensions – fit-for-purpose, linguistic and cultural suitability, privacy protection, data security and the quality of the underlying information [56-63]. She framed trust as the economic driver that will fuel innovation and contended that thoughtful, principle-based regulation can supply the “fuel” for that trust by outsourcing the burden of assurance from individual users to a common standard [67-70].


John Edwards explained that the United Kingdom does not have a dedicated AI statute because the UK-GDPR already provides a de-facto regulatory regime for AI. He described how data-protection-by-design, data-protection impact assessments and other GDPR-derived obligations give businesses a measurable way to demonstrate trust, and how the ICO issues guidance that maps AI practices onto existing GDPR principles, thereby filling any perceived regulatory lacuna [84-108].


Denise Wong outlined Singapore’s layered approach. For harms that are clear and present – such as AI-generated deepfakes used in elections or AI-facilitated scams – the PDPC has enacted specific regulations [136-141]. For the remainder of AI applications, the regulator relies on existing sectoral laws and on “codes of practice” that can be updated quickly, positioning these tools as an “outcome-driven” complement to the more prescriptive PDPA [144-148][252-258].


Alexandra Reeve Givens illustrated a concrete transparency problem: U.S. equal-employment-opportunity statutes prohibit discrimination, yet AI-driven hiring tools can hide bias, making it extremely difficult for a candidate to prove a violation without a disclosure regime that forces providers to reveal model details and conduct impact assessments [150-162]. She therefore stressed that a horizontal transparency layer is essential to give existing anti-discrimination laws practical effect [150-162].


Amanda Craig described Microsoft’s internal governance model, which categorises “sensitive uses” into three high-impact buckets – impacts on life opportunities (e.g., employment, education), psychological or physical harm (especially for vulnerable groups), and human-rights implications [199-204]. She argued that AI risk must be managed across the entire supply chain, drawing on lessons from cybersecurity where risk is addressed holistically rather than focusing on a single component [209-219].


Across the panel there was strong consensus that trust and safety are prerequisites for AI adoption and that transparency, provenance and robust governance are the means to achieve them. All speakers emphasized that trust is a central outcome of both policy and corporate action [56-63][84-92][112-119][128-132]; that provenance tools can make dynamic AI components traceable [316-322]; that the concept of “agency” can restore user control beyond mere consent [328-334]; that privacy-enhancing technologies such as federated learning can protect data where law cannot [346-351]; and that well-staffed, independent watchdogs are essential to represent the public interest [355-356].


Nevertheless, the panel diverged on how far regulation should go. Edwards maintained that the GDPR already supplies sufficient safeguards and that new AI-specific legislation would be redundant [84-107]. Craig counter-argued that internal responsible-AI programmes must be complemented by external regulation to sustain trust [112-119]. Wong advocated a risk-based, “clear-harm-first” approach, limiting formal statutes to obvious threats while using agile codes of practice for the rest [136-147][251-258]. Givens pointed to emerging AI-specific statutes in the EU and several U.S. states as examples of targeted regulation that can actually promote innovation [262-270]. Hughes highlighted the tension between the deregulatory narrative and the ubiquity of trust-focused messaging [28-38].


When asked to name a promising innovation, Craig highlighted “provenance tools” – software-built material that tracks the provenance of dynamic AI components, thereby increasing transparency and accountability [316-322]. Edwards responded with the word “agency”, suggesting that restoring users’ ability to withdraw consent, delete data and control outcomes is a more powerful safeguard than traditional consent mechanisms [328-334]. Wong selected privacy-enhancing technologies, noting that federated learning is already moving from research into production and can secure personal data where legislation falls short [346-351]. Givens concluded with a call for “well-staffed, empowered, independent regulatory bodies and technically informed civil-society organisations” to act as public-interest watchdogs [355-356].


Cross-jurisdictional coordination was repeatedly stressed as essential. Edwards described active collaboration between the ICO, Ofcom and the Global Privacy Assembly to share expectations on AI safety, especially in the ongoing GROK investigation, and warned that fragmented oversight can leave gaps [274-301]. Hughes echoed this, noting that in the absence of a unified AI standard, regulator interaction becomes critical [272-279]. Wong added that an emerging international harm taxonomy – such as the International AI Safety Report – is beginning to provide a common language for regulators worldwide [244-247].


The audience poll on the relationship between innovation and regulation produced a majority thumbs-up, signalling a generally positive view [181-188]. The discussion then turned to the difficulty of prescriptive regulation, illustrated by the long-standing debate over cookie consent mechanisms, which still burden users despite decades of experience [181-188][190-197]. Craig linked this to AI by arguing that identifying harms requires a blend of high-risk taxonomies, sector-agnostic principles and supply-chain-wide governance [195-204][209-219].


In closing, Hughes asked the panel to imagine the name of the AI summit five years hence, prompting playful answers such as “AI Trust Summit”, “Nostalgia”, “Thriving” and “For the people, by the people” [392-399][403-408]. He then reflected that the hard work of embedding trust and safety into AI – likened to the historic challenge of bringing electricity into the White House – is being carried out daily by organisations, regulators and civil-society actors, and urged the audience to thank those who are doing this work [412-419][421-425].


Key take-aways


– Trust and safety are essential drivers of AI adoption; without trust users will not “flip the switch” [45-46][56-63][84-92][112-119][128-132].


– Existing data-protection regimes (UK-GDPR, Singapore PDPA) already provide a baseline regulatory layer for AI, offering tools such as privacy-by-design and impact assessments [84-108][144-148].


– A paradox exists between a deregulatory climate and the pervasive emphasis on trust in industry and policy messaging [28-38].


– Thoughtful, principle-based regulation can act as a catalyst for innovation rather than a brake [67-70].


– High-risk or clearly harmful AI applications (e.g., election deep-fakes, discriminatory hiring tools) merit targeted regulation; broader AI use can be governed through sectoral rules and internal standards [136-141][199-204].


– Identifying AI harms requires a mix of existing law, emerging harm taxonomies, corporate risk taxonomies and supply-chain-wide governance [195-204][244-247].


– Promising innovations include provenance tools, the agency concept, privacy-enhancing technologies and well-resourced independent watchdogs [316-322][328-334][346-351][355-356].


– International coordination (ICO-Ofcom-GPA collaboration, global harm taxonomies) is critical to avoid fragmented oversight [274-301][272-279][244-247].


Unresolved issues and suggested compromises


– The extent to which new AI-specific legislation is needed beyond existing data-protection frameworks remains contested.


– Prospective definition and prioritisation of emerging AI harms must balance cultural specificity with global consistency.


– Achieving a true “Brussels effect” for AI governance is still a work in progress.


– A hybrid approach is proposed: regulate clear, high-impact harms; use sectoral rules and agile codes of practice for the rest; and supplement with internal responsible-AI programmes and technical tools such as provenance and PETs [136-147][251-258][262-270][112-119].


Follow-up questions raised


1. How can effective transparency and disclosure regimes be built for high-risk AI contexts such as hiring to enable enforcement of anti-discrimination laws? [150-162]


2. What mechanisms allow regulators to prospectively identify and classify AI-related harms in culturally specific ways? [236-247]


3. How should international regulator coordination be structured to address cross-jurisdictional AI incidents like the GROK case? [274-301]


4. What is the effectiveness of regulatory sandboxes and codes of practice as less-prescriptive tools, and how can they be evaluated? [251-258]


5. How can provenance tools be standardised to provide traceability for agentic AI systems? [316-322]


6. How can the concept of “agency” be operationalised to shift responsibility back to providers rather than burdening users? [328-334]


7. What is the current state of adoption and impact of privacy-enhancing technologies such as federated learning? [346-351]


8. Why has the EU AI Act not generated a Brussels-effect comparable to the GDPR, and what factors influence global diffusion of AI regulatory models? [232-235][262-270]


9. How can independent regulatory bodies be protected and resourced to effectively represent the public interest in AI governance? [355-356]


10. Are current consent mechanisms adequate for AI-driven data processing, or are new user-centric remedies required? [181-188][210-218]


These points capture the breadth of the discussion, the areas of consensus and contention, and the concrete ideas proposed for advancing trustworthy AI governance.


Session transcript: complete transcript of the session
Trevor Hughes

and then we’re going to dive right into my immediate left. I have Alexandra Reeve Givens, who is the CEO of the Center for Democracy and Technology, one of the leading advocacy organizations in the world, working on civil rights and civil liberties all around the world. She’s based in D.C. To her immediate left is Amanda Craig. Amanda is the General Manager for Responsible AI Policy at Microsoft. To Amanda’s left, we have John Edwards. John Edwards is known to many. He is the Information Commissioner of the United Kingdom. And to John’s left, we have Denise Wong, who is the Deputy Commissioner of the PDPC in Singapore, the Personal Data Protection Commission. Welcome to our panelists. So we have two regulators, an industry representative and a civil society representative.

And I come from the IAPP. If you don’t know the IAPP, we are a global professional association, a not-for-profit that also works on policy, and we’re neutral. We’re not a company, an advocacy or a lobbying body; we bring together the people who do the work. Many of them are in the room right now who do the very hard work of data protection and AI governance all around the world. All right, let’s jump in. The title of the session reflects trust as an engine for growth. Let’s think about that just for a minute. Just a few short years ago, I think it was two and a half, maybe three years ago, this event started in Bletchley Park in England.

And in that iteration of the event, it was named the AI Safety Summit. Right around that time, the EU AI Act was being negotiated. It soon passed after that. But a lot has changed in that two or three years. This event is the AI Impact Summit. The event last year in Paris was the AI Action Summit. More recently, we have seen the not yet fully implemented EU AI Act become subject to an omnibus package where some of the expectations of that original act are being dialed back a little bit. And we’ve seen broad critique of regulatory structures, trust and safety structures that might inhibit growth and innovation in AI. There clearly is a deregulatory mood in the air.

In fact, I think it’s notable that there has not been much discussion of law or regulatory initiatives that might create guardrails to help guide the adoption of AI. So clearly, we’re in an odd moment, and an odd moment for this panel. But as I walked around the campus of this event, this enormous campus, I noted something that was, I think, quite significant. Just about every second banner or poster, just about every large printout, printed word on the show floor, somewhere had trust, safety or privacy as part of the messaging. In fact, the sutras, and we’ll talk about them as we go through the session, the principles announced by the Indian government, are largely around trust and safety.

And so what gives? What’s the dichotomy here? At one moment we are saying it’s a deregulatory mode, we step back. Well, at the same time, we are actively embracing and discussing trust and safety, risk management, protecting consumers, citizens, human beings as they engage with AI. So do we care or not? Are we actually in a deregulatory moment, or have we just gotten quiet about the need for guardrails and trust and safety in these systems? I would say for business, risk exists regardless of whether there’s a law in place or not, and so businesses have an imperative to respond. I’m going to tell a very, very quick story, and that is that in 1891, when electricity was first being brought into the White House in the United States, then President Benjamin Harrison and his wife, Caroline, were actually terrified of flipping the light switch.

And so they hired the electrician from the Edison Company, a man named Ike Hoover, who went on to become the chief usher of the White House. They hired him to flip the light switch. I think the message of this story is that we won’t use it if we don’t trust it. And so as AI is being pulled through the walls of our world, as it’s creating light and switches and tools for us to use, I think we need to ensure that we’re comfortable flipping those switches. And that is the topic of our panel today. So let’s jump in. And our first question is going to be about just the moment that we find ourselves in.

And I’m going to start with Alex. Why are trust and safety important to innovation? And maybe speak to this dichotomy that I’ve highlighted. Why is it in this moment that we can’t talk about regulation, but everywhere it seems we’re talking about trust and safety?

Alexandra Reeve Givens

Yeah, first of all, thank you for convening us, and it’s a pleasure to be here. I think you really hit the nail on the head in your introduction, which is when we think about the long-term success and sustainability of AI, and that is business sustainability for the companies, as well as societal sustainability for all of us. The secret is not just acceleration or the biggest, fastest, most capable model. The real story is one of adoption, and that has been the overwhelming theme of the summit this year. And for people to adopt this technology, they need to trust it. And that’s trust in multiple different facets, right? Is the tool fit for purpose? Does it work in your language?

Is it appropriate for your culture? Will it protect your privacy? Is your data going to be secured? What is the quality of the information that is grounding that model and those outputs? And I think people are really waking up to this, and they’re demanding more. This is both as individual users and then, of course, for enterprise customers, too, who themselves are saying, we’re on the front lines thinking about how to integrate AI into our business operations. We’re the ones who will likely be sued if this goes wrong. So this is where trust really is the fuel of innovation because it is what’s going to be the economic driver of these tools being adopted. And the other thing that I would add is what we see is not only that trust is important for innovation in the abstract, but this is also where responsible, thoughtful regulation can be fuel for innovation as well.

Because the same way that we want to be able to drive cars without all of us being experts in how a motor works, product liability and good laws around the creation of these tools help outsource some of that work for us so that we don’t all have to be doing the individual labor of deciding whether we can trust. So many times people will create this false framing of regulation versus innovation, as opposed to thoughtful regulation being the fuel that actually allows us to sell, buy, and use these tools.

Trevor Hughes

Excellent. Fascinating. John, I’m going to jump to you, and Amanda, I will come right back. But I’m going to jump to you. The U.K. doesn’t have an AI law in place. It has lots of laws that will apply to AI. I think data protection and the GDPR in the U.K. is a great example of that. But talk to us a little bit about regulating in the absence of an AI law. What does that look like in the U.K.? And do you see organizations exhibiting behavior that demonstrates that they’re focused on the ideas that Alex suggested, that trust and safety matter regardless of the regulatory structure that sits over them?

John Edwards

Yeah, absolutely. Absolutely. Absolutely.

Trevor Hughes

There it is.

John Edwards

No, very much so. I mean, the data protection laws apply across the board wherever technology touches personal data. So we have a de facto regulatory regime under the UK GDPR. Coming back to your comment about trust, it’s so important, and there is a role for regulation actually in assisting businesses because businesses are trying to deliver that trust proposition to consumers. But by what metric? Right. And that’s, I think, where regulation can provide a common standard. So, you know, we require, it’s a regulatory tool, that you have to do data protection by design. You have to do data protection impact assessment. You know, we expect privacy by design. We expect risk assessment. So all of these things are regulatory requirements, but they are also tools that help intermediate between businesses and the consumers to demonstrate that there is a basis for trust.

And an organization like the ICO is there for both sides to see, well, there’s someone actually overseeing that. And that’s a role that we do discharge. To your point about the absence of prescriptive regulation in the UK on AI, we don’t see that particularly as a deficit. I mean, I think there’s a lot of policy work going on in areas where policymakers and regulators do need to step in. That’s ongoing, and I won’t comment on that. But, you know, there are ongoing issues about the distribution of proceeds from the use of creative materials and the like. That carries on. But… In the absence of an explicit rule, it’s incumbent on my office to deliver safety and confidence and metrics for industry and to deliver certainty over what can be seen as an uncertain law.

So we’ve gone out and said, well, here’s how we see the technology-neutral general principles of the GDPR apply when you train a model, for example. We see, for example, the EU AI Act in Article 10 talks about the need for fairness. Well, we’ve been able to articulate those obligations by way of guidance, linking it back already to the GDPR principles. So, you know, there’s a mapping. I don’t think at the moment for the available applications of artificial intelligence technologies that there is a lacuna. It’s there with the GDPR. And we are there to provide confidence and certainty about how you apply that, how you improve your products with it, and how by doing so you engender that trust that you described at the outset.

Trevor Hughes

Excellent. Okay, so Amanda, tell us, do you agree that there’s not a need for additional rails, traffic indicators in AI? Is John right that the existing regulatory structure is really providing enough guidance or is it the case that Microsoft is using internal principles, frameworks, standards that you might adopt to build programs and services that you think meet the expectations of trust and safety of the marketplace?

Amanda Craig

Thank you. From a Microsoft perspective, we are focused on implementing our responsible AI governance program and see opportunity for lots of different governance models that governments could pursue in terms of implementing existing regulation, developing additional regulation that complements that existing regulation. I think the through line for us, the bottom line, is very much what Alex started us off with, that we do very much see, we’ve seen through multiple generations of technology, we’re not going to have adoption, we’re not going to have use of this technology without trust. And we need to have governance programs at technology companies. We need to have governance efforts by governments that are ensuring that we have an evolving conversation about trust.

Because if I pull the thread on the analogy you started us with, like how do you flip on a light switch and that can be scary when you’ve not done it before, I think the other thing that is very challenging about this technology is that it is also very dynamic. It is evolving very quickly. And people might even be scared that, like, they won’t know where to find the light switch next week. And that brings a whole different set of challenges. And so that requires not just confidence in how you are able to sort of trust the technology today, but also that there’s trust in a governance process that will continue to iterate and evolve alongside the technology.

Trevor Hughes

Excellent. Denise, help us then here. I know Singapore has released guidelines, standards around AI. Tell us about the Singaporean experience in thinking about regulating trust and safety in AI.

Denise Wong

Thanks so much, Trevor. And thank you to the IAPP for putting this together and for having us. Maybe I’ll answer that question by linking some of the concepts that we’ve talked about. And that sort of underpins our philosophy. Trust and safety is the outcome that we want. You know, we want to create the necessary conditions for the society to thrive, for the public and the enterprises to use the technology with confidence. So AI for that public good. To do that, we need governance. We need a framework of thinking about how we can govern the technology, and we’ve been doing this for all sorts of technology. AI is but one. Regulations are a mechanism, a type of governance mechanism that you use when the necessary and correct conditions exist.

And so that map of that concept informs how we think about our governance approach. So on issues that are very clear, where there are clear harms, we have stepped in to regulate. An example of this is elections regulations that we put in place where we prohibited the use of AI deepfakes to represent candidates. It was time-limited. It was for the period of elections, but we stepped in and put a law in place for that. We also have laws for AI creating online harms, as well as AI in scam situations. So that is the part where we regulate for clear and present harms. For the rest of it, a lot of it we leave to sectoral regulations where there’s already a web of existing regulations, and on specific issues as well.

John and I and many of us are in the data protection field where, as John has said, there are already existing laws that can be tacked on, updated, reviewed in order to deal with this new technology that has come about. So where we have done AI governance frameworks and tools that you’ve mentioned is where we’ve seen a need to create some sort of horizontal principles and platforms to think about the sector-agnostic general issues on transparency, on what model governance for corporates could look like. We haven’t seen the need to regulate that horizontal layer just yet, but certainly a need to articulate some of these principles. And that also allows us to create more certainty for the market, to give them some direction that actually this can be a market-driven assurance system that has demand, has supply, and has what we’ll call proto-standards, early days of standards about what good looks like.

So that’s the work that we’ve been doing: trying to create and simplify, to seed an assurance ecosystem that sits, I would say, adjacent and complementary to regulations where they’re needed.

Trevor Hughes

Fantastic. Please, please.

Alexandra Reeve Givens

So just to comment on that, one area that I think is proving very important, and people are discovering this across jurisdictions, is even where existing laws apply, there is a problem where AI systems make it hard to know whether or not those laws are being broken. So this is where that transparency layer you were articulating really becomes important.

Trevor Hughes

Give us an example.

Alexandra Reeve Givens

Yeah, and I’m going to make it U.S.-centric just because it’s the one that’s top of mind, so forgive the bias here. In the U.S., we have equal employment laws. It is against the law to discriminate in the course of hiring. So in theory, a piece of software that perpetuates discrimination against particular candidates, for example by not considering the resumes of people over a certain age, is violating an existing law. So people will say we don’t need any further regulation; we’re done. The problem is that in a human-run system, where it was just a bad apple in the HR department, it has historically been easier to prove that case. Now, when it’s AI-powered software making that decision, it is really hard, as a worker who just put in a resume and never got an answer back, to know if something went wrong.

If you actually get up your courage and file a case, it is really hard to prove discrimination. And so without some type of disclosure regime that requires transparency about the system being used in these high-risk, high-impact scenarios, along with impact assessments to make sure that discrimination isn’t happening, you don’t actually get the remedy that people need under existing law. That’s where I think this horizontal piece can complement the sector-specific vertical laws in a light-touch way, but actually gives meaning to the laws on the books.

Trevor Hughes

So I think that’s a great example of the harm trigger that Denise described: we identify a clear harm, and that may be a place where additional regulatory structure might be helpful. I think we heard pretty significant consensus across our panel. Trust and safety is good; that’s a great consensus to achieve. And not complete consensus on the idea that additional regulation is needed yet, with the exception perhaps of a few scenarios in which we can identify high risk or harm. Let’s go to our audience for a second. Help us describe the relationship between innovation and regulation in AI. If you think it’s a great relationship, thumbs up. If you think it’s a bad relationship, thumbs down.

If you think it’s complicated, make it complicated. What do we think? Oh, I see a lot of “complicated.” What does our panel think? I think it’s a good relationship between innovation and appropriate regulation. Fascinating. We have a very strongly opinionated audience here. That’s great. Let’s talk about regulation again and dive in just a bit deeper. I think one of the things that’s tremendously challenging is prescriptive regulation: trying to understand harms that might occur before a technology is fully and broadly adopted in the marketplace. I’m a veteran of the privacy world going back to 1995, 1996. And in the late 1990s, we were talking extensively about cookies: how do we regulate cookies, and what are the privacy issues associated with them?

Guess what? We’re still talking about cookies often. And I know many of the privacy and data protection folks are nodding already, crying a little bit, because it is so, so painful to implement many of the cookie banners and cookie consent mechanisms that we have. And I’m not entirely sure, we might even get John to admit this, that those cookie banners are actually driving the outcomes we hoped for, that we identified the biggest and worst harm or concern and dedicated resources appropriately to it. Amanda, I’m going to jump right to you. Talk to us a little bit about identifying those harms. Alex gave us one: AI reviewing HR submissions, resumes, CVs, where the language in those CVs may actually create results that were not intended, that create bias, that in a human-driven system would be easier to find and in an AI-driven system are much, much harder to find.

That’s a great example. How do we identify those prescriptive harms, those harms that we’re not quite sure about yet, that may emerge? Do we do it through principles, through ethics, through what?

Amanda Craig

I think all of the above, to some extent. Part of why we start with principles in our governance program is that it’s helpful to orient toward what we care about as we then try to build a program that realizes those outcomes. We can also look at existing law that reflects where there are harms, like in the employment context, where people could be mistreated or treated unfairly in ways we know we care about. And there’s been a lot of effort in regulation to define high risk and high impact. At Microsoft, we have something called sensitive uses: scenarios in three categories. The first is where technology could have an impact on someone’s life opportunities or a consequential life impact, something like employment or education opportunities, for example, or how someone is treated under the law.

The second big category of harm we have defined is around the risk of psychological or physical harm. Think about vulnerable populations there; think about the use of AI in critical infrastructure. And the third category is the use of AI that impacts human rights. So we have our way of defining what is really high impact, and a lot of governments, again, have taken different routes. The other thing we’ve seen is the emergence of a conversation around technology itself that poses specific high risks, for example, highly capable models that have a whole other set of risks that are now being defined.

And that’s one thing I want to draw out here. I didn’t grow up in the privacy world; I grew up in the cybersecurity world. One of the things I think a lot about as we work on defining these harms and figuring out what to do about them, something we can learn from decades of work on cybersecurity, is the challenge of addressing risk across the supply chain. It’s a slightly different conversation in AI than it has been traditionally in security with software and cloud technologies, but there is a common principle or approach that I think we should look at closely: in the context of AI, we are oftentimes thinking about risk and harm where the technology is actually used, right?

And then what’s difficult is figuring out what we do across the whole supply chain to manage that risk, and having that be cohesive. In the cybersecurity context, we know what the risk or harm is; it’s much simpler, it’s security risk that we care about. But we have the same challenge in terms of how we manage that risk across the supply chain. And one of the lessons from decades of work in the cybersecurity context is the temptation to put emphasis on one part of the supply chain or another at any given moment, instead of really dealing with the hard governance challenge, which is that it’s everything at once.

So when we think about the complexity of defining harms in the AI space, that’s important work to do. And in the context of managing the risk of any of those harms being realized, we also need to think really hard about looking across the whole supply chain at once. Even though it’s hard from a governance perspective, that’s what will ultimately matter most for managing the risk.

Trevor Hughes

Fantastic. And I misspoke: it’s prospective, not prescriptive, regulation. But John and Denise, maybe talk to us a little bit about that. Let me frame it for you both, and Denise, we’ll have you start. Clearly, with data protection regulation, we have had the GDPR now for over seven years, and its effect on the global policy environment has been enormous. We now have over 120 countries with privacy laws in place, and many, many of them have genealogical lines that point back to the GDPR. And yet we haven’t seen that in AI yet. The EU AI Act has not taken off around the world.

We don’t see a Brussels effect happening on AI. Is it because of the challenge of identifying harm, the challenge of prospectively trying to identify what might…

Denise Wong

You always ask me the tough questions. First, the harms question, because I think that’s relevant to the regulation question you’re asking. The starting point must be that every country has a unique context, and it’s the job of each government to figure out what’s harmful to their society. There’s going to be a huge amount of overlap, but at the end of the day, what’s harmful in one context, what’s harmful in India, may not be the same as what’s harmful in the US. The cultural context matters. That said, I think there’s actually increasing consensus about what harms, or archetypes of harms, there are vis-à-vis AI.

And we see that, for example, in the International AI Safety Report, which is starting to anchor some of this taxonomy, these buckets and archetypes of harm; we also see that beginning to happen at Iceland, for example. Those conversations are happening. How does that link to prescriptive regulation or legislation? If the harms are still coalescing and forming, it’s quite difficult to be very prescriptive about how you deal with them, because that, by definition, is still changing, still coalescing, still quite nascent. That’s not to say we should step back. I think we just need a slightly more agile way of thinking about that broader concept of governance.

So in the social media context in Singapore, we did it via codes of practice. We have a broad umbrella law that creates the legislative frame within which these codes of practice apply, but the codes of practice can be updated more easily. Same thing, actually, with our data protection law, the PDPA, which is structured quite differently from the GDPR. The PDPA is actually not very prescriptive; it’s outcome-driven and fairly broad, and most of the guidance that the PDPC provides for compliance is done in advisory guidelines. So there are regulatory mechanisms you can use that are less prescriptive than primary legislation, and that gives you enough levers, tools in a toolkit, basically, to deal with the harms and the problems that society is facing.

Trevor Hughes

Excellent.

Alexandra Reeve Givens

To dispute you a little bit on the lack of a Brussels effect, and going back to Denise’s point: not only is there some harmonization happening around the scoping of the harms, which certainly is happening, but also on potential points of intervention. For example, one of the key elements of the EU AI Act is looking at high-risk scenarios and having risk mitigations in place. We have similar laws under consideration in multiple states in the United States, one on the books already in Colorado. They would never say it is a copycat; it came from its own origins. But it is lawmakers thinking about what an appropriately scaled intervention to that particular risk looks like.

You can look at the recent transparency laws passed in California and New York, very similar discussions to the Code of Practice for general-purpose AI models that came out under the EU AI Act. You can look at the EU AI Act’s provision for regulatory sandboxes and this notion that we want small and medium-sized enterprises, and others, to be able to innovate and get a little bit of forgiveness or wiggle room under the laws as they figure out how the regulations apply. A law like that just got passed in Utah. So there are these glimmers where we are seeing smart solutions to specific problems, and people learning from each other.

Trevor Hughes

I think in the absence of that umbrella AI standard, interaction with fellow regulators across disciplines and domains becomes really important. Or, I’ll ask you: does it become really important?

John Edwards

Yeah, it is. It’s hugely important that we coordinate. These are new challenges that we’re all facing. On the Grok issue, obviously, it’s under investigation, so I won’t be able to say too much about it. But we’re interested in how models are trained, what data they’re trained on, what output filters are included, what kind of safety mechanisms exist. I’m interested in what kind of ingestion of data there is when it’s used at that level. But there’s some complexity in that case as well, because you’ve got users using a tool that’s amplified by social media. I don’t know whether the same functionality is available in any other image generation tool that just hasn’t got the same media attention because it’s not amplified by a social media platform.

And very early on, I think I was back home in New Zealand, actually, on about the 5th of January, when I started to see this. I messaged back to the office and said: what are we doing? What’s Ofcom doing? How are we connecting with our international colleagues? That’s so important. So we’ve messaged into the GPA, and we’ve coordinated very closely with Ofcom. And we have to cope with the fact that regulation is a little bit fragmented. Ofcom is responsible for administering the Online Safety Act in the UK, legislation that seeks to regulate the kinds of harmful content that can be delivered to a child’s device, for example.

Right. I see this thing; is it regulated by online safety law? If so, it’s Ofcom. How did it get to me? Well, that depends on how the underlying data was processed, and that becomes an ICO, GDPR issue. So we need to be working very, very closely, and we are. Also, with the Grok issue, one of the very early things we did was to reach out to our colleagues in the GPA, the Global Privacy Assembly, and say: who else is looking at this? Let’s make sure that we’re not treading on each other’s toes, or at least that we’re sharing information, that we’ve got the same ideas, that we think the same way. And that can be tremendously powerful, whether or not you can point to a regulation that that app or that platform is clearly in breach of.

Describing a set of expectations about harm mitigation across a coordinated group of global regulators, I think, can be quite powerful. And the alternative for some of these platforms is not necessarily being investigated and fined by the ICO. It’s like what I noticed on the first day I was here, when I went to open TikTok and saw: this is not available in this country. So if the offering in a particular jurisdiction does not meet the standards and norms of that jurisdiction, these organizations need to understand that they can be switched off, that they are not actually all-powerful.

Trevor Hughes

I just have the image of the U.K. Information Commissioner doom-scrolling TikTok in my head now. Let’s do a quick round, and please do keep your answers short. Innovation is not limited to technology, is not limited to business practices; it’s also very powerful in the privacy-enhancing and safety-enhancing tools that we use inside organizations, and in regulatory structures. Denise mentioned regulatory sandboxes, or maybe it was Alex, but we’ve heard regulatory sandboxes mentioned. What is the one innovative idea in trust and safety that you think holds real promise? I’ll let you have one sentence to explain it, but this is a speed round. So we’ll start with Amanda, then work down, and come back to Alex.

Amanda Craig

One sentence. Okay. Is that my sentence? I think about provenance tools as an area of innovation. Again, this is calling upon my cybersecurity background, but something like agentic AI is an area where there’s a lot of interest, concern, and governance momentum. And one of the challenges is being able to look at something that is fundamentally not just one technology: it’s a bunch of very dynamic components, models, platform tools, services, applications, all working together. While that feels like a really new, hard challenge, we can actually draw upon what we know of software, which is also a set of dynamic components. One of the ways we’ve figured out how to govern that, or are working towards figuring it out, is with software bills of materials, something that really allows you to track those dynamic components.

And I think that’s something we can apply to agents.

Trevor Hughes

So it increases transparency. It tells you, you know, which algorithm or which system this might have come from. It helps with accountability broadly. Yeah. Excellent. John, what’s the most promising trust and safety innovation that we have?

John Edwards

Well, you challenged us with one word, so I’m going to go with agency. Agency. For me, it’s a word that responds to how much of our world is dominated by consent, which is, I won’t say broken, but under strain as a useful concept. Agency, I think, has the capacity to recognize that the objective is to restore and maintain an individual’s agency as they use any product. That covers consent, but it’s also actually making sure that provenance is delivered, for example: you can’t have agency if you don’t know the origin of the data that is delivering this agentic miracle to you. And it gives you tools at the other end. Consent is always conceived of as a front-end authorizing concept.

But agency says: okay, I’ve done that, now where’s my delete-everything button, or my I-don’t-want-to-do-this-anymore button? So I think if developers can think about how they deliver the best possible service in a way that restores and maintains the agency of the consumer, that will go a long way to addressing some of the problems we’re seeing.

Trevor Hughes

Fantastic. I had a law professor, years ago now, who described burden-shifting wrenches in the law. And I think consent is a burden-shifting wrench that moved much of the burden to the data subject, to the individual. Agency, it sounds to me, is an idea to move it back to those who might be accountable, and have them bear fiduciary or stewardship responsibilities for that person. Denise?

Denise Wong

I would pick privacy-enhancing technologies. It’s an interesting technological way to deal with at least one part of the problem: how do we secure the data, how do we make sure that personal information is well protected? And it’s advancing so quickly. Two years ago, we were looking at federated learning for the training of AI models, and no one could figure it out; I think it’s actually being done in production now. So, I’m a lawyer, so I can say this: sometimes the law cannot solve the problem, but maybe another technology can.

Trevor Hughes

Fantastic. Alex?

Alexandra Reeve Givens

Well-staffed, empowered, independent regulatory bodies that can help represent the public interest. And, because in some countries those are under attack right now, where that is not available: well-resourced, technically informed, independent civil society that can play that role in the interim.

Trevor Hughes

Fantastic. Yeah, the importance of having watchdogs, entities that are watching, observing, commenting, enforcing, really powerful. So there are four great innovations: provenance, agency, privacy-enhancing technologies, and well-funded regulators or civil society. Well done. That is a great start. Let’s do another audience poll. How many of you here in this audience are responsible for AI, or AI governance, AI ethics, AI safety, inside your organization? Hands up. It’s almost the whole room. Keep your hand up if you’re also responsible for something else in addition to AI. It’s most of the room again, almost a complete overlap: a significant percentage are responsible for more than one thing, and one of those things is AI.

I think that’s an example of the complexity we see inside organizations today. John described the coordination necessary between Ofcom and the ICO in the Grok investigation, which is ongoing, because there was not a single place where regulatory authority existed to address that concern. This is a really complex environment. The harms and issues span from children’s safety to intellectual property, from bias and algorithmic discrimination all the way through deepfakes and other things. Alex, how do we put that all into a pot and make it something meaningful?

Alexandra Reeve Givens

Well, what if you can’t put it all into a pot? A pot implies a common denominator in all of those things, but AI is a tool that touches everything. So I really do think you need a nuanced approach that looks at a particular risk and what the mitigations are for that risk, and goes from there. The privacy considerations when you are sharing your most intimate concerns and questions about the world with a chatbot are very different from the questions about deepfakes and fraud and impersonation; you need a different legal regime. I think one of the common elements that runs through is that transparency and rigorous approaches to risk mitigation really matter, and that can come either through regulation or through principles and best practices with meaning, standardization, and watchdogs reading those disclosures.

And the second is this burden on the user. When Trevor introduced me, we described my organization: we represent users’ rights around the world. I am all for user empowerment. And also, we cannot put the burden solely on users to navigate this moment. That is the major lesson of the cookie example you raised before: we didn’t misdiagnose the harm, we misdiagnosed the remedy, which put the burden on individual users when we don’t actually have market choice, nor the time or mental energy to read a whole bunch of disclosures and act alone. So we need solutions that acknowledge the harm, that are tailored, but that also take the burden off individual users: empowering users without burdening them or leaving them to essentially defend themselves unprotected.

We have to think about that.

Trevor Hughes

Okay. Sadly, we are at the end of our time, but we have one more pop question for all of you, and we’re going to let this be our close. We have gone through the AI Impact Summit, the AI Action Summit, the AI Safety Summit. Five years from now, what is the AI summit going to be called? What’s the word that’s going to be in the middle there? This is a one-word answer again. What’s it going to be? I know it’s a tough question. So, Denise, I’ll start with you, because you’re able to handle the toughest questions.

Denise Wong

Ah, the AI Trust Summit.

Trevor Hughes

Okay, John?

John Edwards

Nostalgia.

Trevor Hughes

Nostalgia.

Amanda Craig

Thriving.

Trevor Hughes

Thriving, AI Thriving Summit. Okay.

Alexandra Reeve Givens

I’m going to cheat: for the people, by the people. It’s more words.

Trevor Hughes

Some of the people. That’s a lot to get on a poster. Here’s what I know: there is incredibly hard work that needs to be done to bring trust and safety to this ridiculously powerful technology, which I think, as Sundar Pichai says, will be more profound than electricity. That hard work happens every single day inside the organizations that are implementing these tools, inside civil society, which is watching and guiding that behavior, and inside regulatory offices that are navigating to ensure that marketplaces around the world, that the digital economy, get this right. I feel better because people like this are doing that work every day, and I hope you’ll join me in thanking them.

Thank you very much. Thank you. Well done. You were fantastic, as expected.

Related Resources: knowledge base sources related to the discussion topics (18)
Factual Notes: claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Alexandra Reeve Givens is the CEO of the Center for Democracy and Technology.”

The knowledge base lists Alexandra Reeve Givens as the CEO of the Center for Democracy and Technology in multiple entries [S1] and [S3].

Additional Context (medium)

“Regulation can act as a guarantee of trust and an engine for economic growth in AI.”

Broader commentary in the knowledge base describes regulation as a force for economic growth and a guarantee of trust in AI contexts [S89] and [S90], providing nuance to the panelists’ framing of principle-based regulation as a trust-fuel.

Additional Context (medium)

“AI‑generated deep‑fakes used in elections are a clear present harm and Singapore’s PDPC has enacted specific regulations to address them.”

The knowledge base highlights deep-fakes as a significant challenge that requires government oversight and regulation [S97], and notes Singapore’s awareness of vulnerable groups and the need to bridge digital divides, which aligns with a targeted regulatory response [S94].

Additional Context (low)

“Singapore’s regulator relies on existing sectoral laws and rapidly updatable codes of practice as an outcome‑driven complement to the more prescriptive PDPA.”

While the knowledge base does not detail the exact regulatory mechanism, it mentions Singapore’s focus on protecting vulnerable groups and ensuring inclusive digital policies, suggesting an outcome-driven, flexible approach alongside existing legislation [S94].

External Sources (97)
S1
How Trust and Safety Drive Innovation and Sustainable Growth — -Alexandra Reeve Givens- CEO of the Center for Democracy and Technology, one of the leading advocacy organizations worki…
S2
Open Forum: A Primer on AI — Artificial Intelligence (AI) has been widely adopted across various sectors, including facial recognition, online shoppi…
S3
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — I just have the image of the U.K. Information Commissioner doom -scrolling TikTok in my head now. Let’s do a quick round…
S4
How Trust and Safety Drive Innovation and Sustainable Growth — – Alexandra Reeve Givens- Amanda Craig – Denise Wong- Amanda Craig
S5
How Trust and Safety Drive Innovation and Sustainable Growth — – Alexandra Reeve Givens- Trevor Hughes- Amanda Craig
S6
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — And welcome. And… And her background is also in this both biomedical field, science innovation field, but also has ext…
S7
How Trust and Safety Drive Innovation and Sustainable Growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S8
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — Fantastic. I had a lot of… Professor years ago now who described burden -shifting wrenches in the law. And I think con…
S9
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Denis Wong serves as both the Data Protection Deputy Commissioner and the Assistant Chief Executive of IMDA, highlightin…
S10
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — and then we’re going to dive right into my immediate left. I have Alex Reed -Gibbons, who is the CEO of the Center for D…
S11
How Trust and Safety Drive Innovation and Sustainable Growth — -John Edwards- Information Commissioner of the United Kingdom
S12
WS #362 Incorporating Human Rights in AI Risk Management — Criticism of lack of enforceability but potential value in encouraging company participation, challenges in articulating…
S13
Advancing digital inclusion and human-rights:ROAM-X approach | IGF 2023 — However, the assessment process has been impeded by insufficient data and other challenges. This has hindered the accura…
S14
NYC’s anti-bias law holds algorithms accountable in hiring decisions — New York City has enforced a new law called Local Law 144,which mandates that employers utilising algorithms for hiring,…
S15
EU’s AI Act faces tech giants’ resistance — As the EU finalises its groundbreaking AI Act, major technology firms arelobbyingfor lenient regulations to minimise the…
S16
Navigating AI regulation: US state lawmakers strive for innovation and accountability — Lawmakers in various US states are directing their focus towards AI,grappling with the intricacies of this rapidly evolv…
S17
Laying the foundations for AI governance — This perspective suggests that well-designed regulation could support innovation by providing clear guidelines and consi…
S18
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Harmonizing cross-border regulations and practices within the African continent presents challenges due to differing reg…
S19
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Thomas Schneider:Yes, thank you. And it is actually good that we live in a hybrid world, so I was able to follow the dis…
S20
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — This is kind of, the result is a little off. I’m going to give it some more feedback. I’m going to reassess the results….
S21
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Applying the right to information laws to these entities ensures that transparency is maintained and that they are held …
S22
For the record: AI, creativity, and the future of music — Copyright Protection and Legal Framework Legal and regulatory | Human rights Victoria Oakley argues that existing copy…
S23
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Matilda Road:Mathilda, over to you. Thank you, Florian. Good morning everyone. It’s great to see so many of you here and…
S24
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S25
Opening address of the co-chairs of the AI Governance Dialogue — Tomas Lamanauskas: Thank you, thank you very much Charlotte indeed, and thank you everyone coming here this morning to j…
S26
Keynote Adresses at India AI Impact Summit 2026 — “We find ourselves grappling with a global supply chain that is massively over -concentrated.”[87]. “We are forging a su…
S27
From summer disillusionment to autumn clarity: Ten lessons for AI — Additionally, the EU’s long-negotiated AI Act imposes strict rules on AI systems (e.g. high-risk systems must meet safet…
S28
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation,…
S29
Do we really need specialised AI regulation? — The pyramid reveals a clear pattern: most layers of AI are already regulated. Hardware is controlled, data is protected (…
S30
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S31
What is it about AI that we need to regulate? — What is it about AI that we need to regulate? The discussions across the Internet Governance Forum 2025 sessions revealed…
S32
AI Meets Cybersecurity Trust Governance & Global Security — Maria stresses that identifying AI‑related harms requires joint effort from governments, civil society, and industry, no…
S33
Secure Finance Risk-Based AI Policy for the Banking Sector — The panel explored how AI governance frameworks must account for India’s linguistic diversity, demographic heterogeneity…
S34
WS #98 Towards a global, risk-adaptive AI governance framework — Sector-specific and use case-specific governance may be needed rather than one-size-fits-all approaches
S35
WS #453 Leveraging Tech Science Diplomacy for Digital Cooperation — Muñoz emphasized that “science diplomacy doesn’t remain confined to policy papers. It creates concrete tools, infrastruc…
S36
Driving India’s AI Future: Growth, Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S37
Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights — Need for regulatory coherence and coordination. Relationship between different regulatory frameworks: The Council of Eur…
S38
Lightning Talk #107 Irish Regulator Builds a Safe and Trusted Online Environment — Both recognize the critical need for effective coordination between regulators across jurisdictions, though they acknowl…
S39
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Coordinated enforcement across jurisdictions is deemed crucial for effective regulation. The EU’s Digital Markets Act se…
S40
Policy Network on Artificial Intelligence | IGF 2023 — A context-specific and rule of law approach, advocated by Sarayu Natarajan, is key to effectively addressing this issue….
S41
Discussion Summary: US AI Governance Strategy Under the Trump Administration — Ball supports addressing AI-related problems through traditional legal mechanisms like courts and liability systems rath…
S42
Technology Regulation and AI Governance Panel Discussion — Regulate against those harms and figure out who’s gonna be responsible for it and legislate that way. I think is the rig…
S43
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Finally, the analysis suggests that laws in countries like Canada can have a significant influence on global regulations…
S44
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — The “Brussels effect” is mentioned as a factor that may have negative impacts in non-European contexts. Concerns are rai…
S45
Tech diplomacy could help solve global challenges — Protecting citizens with comprehensive and global regulation is a priority to promote the ethical, responsible, human-ce…
S46
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Regulation can foster innovation rather than constrain it when properly implemented. There’s unexpected consensus that r…
S47
Youth-Driven Tech: Empowering Next-Gen Innovators | IGF 2023 WS #417 — Regulation is seen as a means to foster innovation, rather than block it. The stance is that regulation can actually enc…
S48
WS #162 Overregulation: Balance Policy and Innovation in Technology — Flexible, principle-based approaches can foster innovation while protecting rights. Regulation is necessary but should n…
S49
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Building trust in digital systems and expanding participation in AI decision-making are essential for successful impleme…
S50
Closing remarks – Charting the path forward — Al Mesmar emphasizes the importance of unified policy approaches that can adapt to technological changes while maintaini…
S51
Closing the Governance Gaps: New Paradigms for a Safer DNS — Although regulation in the DNS industry is inevitable, it should aim to avoid fragmented jurisdictional approaches. If t…
S52
Informal Stakeholder Consultation Session — Digital transformation affects every sector, so coordinated policymaking helps ensure coherence and better outcomes for …
S53
WS #55 Future of Governance in Africa — Effective digital governance requires collaboration between government and industry stakeholders. This approach ensures …
S54
Setting the Rules: Global AI Standards for Growth and Governance — So consensus around the need to do it, consensus around the fact that it’s hard, but it’s important for consumers and bu…
S55
How Trust and Safety Drive Innovation and Sustainable Growth — “So this is where trust really is the fuel of innovation because it is what’s going to be the economic driver of these t…
S56
AI as critical infrastructure for continuity in public services — “Trust also can influence economic confidence and cross-border collaboration.” [54]. “Standards are a very important pil…
S57
How AI Drives Innovation and Economic Growth — And I’m seeing two patterns. One is about trust in technology, and the second part is about the reality of the policy wo…
S58
AI for Social Empowerment: Driving Change and Inclusion — She argues that immediate policy action is required across competition, tax, labour and social protection to mitigate AI…
S59
Open Forum #17 AI Regulation Insights From Parliaments — Balancing innovation incentives with regulatory protection. There’s a critical balance needed between regulation and inn…
S60
Israel’s Policy on Artificial Intelligence Regulation and Ethics — Empowering sector-specific regulators: The need for any regulation of the development and use of artificial intelligence …
S61
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Collaboration among different countries and stakeholders is seen as a key driver for advancing regulatory sandboxes and …
S62
Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress — Equality bodies cannot address algorithmic discrimination alone and need to work with data protection authorities, consu…
S63
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Coordinated enforcement across jurisdictions is deemed crucial for effective regulation. The EU’s Digital Markets Act se…
S64
Digital Embassies for Sovereign AI — This addresses the need for adaptive governance frameworks that can keep pace with rapid technological change
S65
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Alexander E. Brunner, Enzo Maria Le Fevre Cervini, Armando Geller. While disagreeing th…
S66
From principles to practice: Governing advanced AI in action — – Balancing rapid technological advancement with necessary governance frameworks across different regional approaches T…
S67
WS #97 Interoperability of AI Governance: Scope and Mechanism — Rapid technological advancement poses challenges for governance frameworks to keep pace
S68
How Trust and Safety Drive Innovation and Sustainable Growth — And I come from the IAPP. If you don’t know the IAPP, we are a global professional association, a not-for-profit but a…
S69
Keynote: Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S70
Do we really need specialised AI regulation? — The pyramid reveals a clear pattern: most layers of AI are already regulated. Hardware is controlled, data is protected (…
S71
Artificial Intelligence & Emerging Tech — The need for new mechanisms to safeguard data, in addition to consent, is becoming increasingly important. There is a gr…
S72
Main Session 2: The governance of artificial intelligence — Claybaugh contends that there are already legal frameworks in place that pre-date ChatGPT covering issues like copyright…
S73
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S74
Rethinking AI regulation: Are new laws really necessary? — Specialised AI regulation may not be necessary, as existing laws already cover many aspects of AI-related concerns. Jova…
S75
Practical Toolkits for AI Risk Mitigation for Businesses — In healthcare, risks involve threats to life, privacy, equality, and individual autonomy. Similarly, the retail sector a…
S76
AI Meets Cybersecurity Trust Governance & Global Security — Maria stresses that identifying AI‑related harms requires joint effort from governments, civil society, and industry, no…
S77
What is it about AI that we need to regulate? — What is it about AI that we need to regulate? The discussions across the Internet Governance Forum 2025 sessions revealed…
S78
WS #98 Towards a global, risk-adaptive AI governance framework — Sector-specific and use case-specific governance may be needed rather than one-size-fits-all approaches
S79
WS #453 Leveraging Tech Science Diplomacy for Digital Cooperation — Muñoz emphasized that “science diplomacy doesn’t remain confined to policy papers. It creates concrete tools, infrastruc…
S80
Secure Finance Risk-Based AI Policy for the Banking Sector — Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with pub…
S81
Interdisciplinary approaches — Online trust today faces several main challenges. The technical entities that run the global infrastructure need to pres…
S82
Laying the foundations for AI governance — How to balance the need for regulation with avoiding fragmentation across different jurisdictions
S83
Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights — Need for regulatory coherence and coordination. Relationship between different regulatory frameworks: The Council of Eur…
S84
Closing the Governance Gaps: New Paradigms for a Safer DNS — By showcasing their collective commitment to harm mitigation, the DNS sector can send a message to regulators about thei…
S85
Lightning Talk #107 Irish Regulator Builds a Safe and Trusted Online Environment — Both recognize the critical need for effective coordination between regulators across jurisdictions, though they acknowl…
S86
Data free flow with trust: a collaborative path to progress (ICC) — However, concerns about national security, privacy, and economic safety have arisen, leading to the implementation of re…
S87
Day 0 Event #220 Restoring Internet Credibility and Preserving Democracy — Nagai briefly mentioned what he called a “dead loop” in democratic governance, observing that disinformation campaigns c…
S88
Democratizing AI Building Trustworthy Systems for Everyone — I think open source is going to be in my mind a critical aspect of it. You’ll have to see how far open source movement t…
S89
Technology Rewiring Global Finance: A Panel Discussion Summary — Koffey emphasized that regulation must be a force for economic growth and innovation, breeding adoption and trust throug…
S90
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Regulation is not the enemy of innovation but a guarantee of trust, requiring the right balance between innovation and r…
S91
Main Session on Artificial Intelligence | IGF 2023 — Clara Neppel: Well, I think that these are certain things which can be addressed both on a voluntary level as well as at …
S92
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Additionally, a platform is used for companies to provide feedback and declare their compliance. Interestingly, the syst…
S93
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — Another noteworthy concern raised during the discussion was the dominance of larger tech companies and its impact on sma…
S94
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Additionally, there is apprehension about the potential negative impacts of technology, especially in terms of widening …
S95
Michigan to introduce legislation combatting deceptive uses of AI in political advertising — Michigan is set to introduce state-level policies aimed at combating deceptive uses of artificial intelligence (AI) and …
S96
ByteDance unveils AI that creates uncannily realistic deepfakes — ByteDance, the company behind TikTok, has introduced OmniHuman-1, an advanced AI system capable of generating highly reali…
S97
360° on AI Regulations — Deepfakes pose significant challenges as they can manipulate information and distort reality. Effective government overs…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Alexandra Reeve Givens
6 arguments · 190 words per minute · 1283 words · 403 seconds
Argument 1
Trust as a prerequisite for adoption and economic growth
EXPLANATION
Alexandra argues that long‑term business and societal sustainability of AI depends on users trusting the technology. Trust drives adoption, which in turn fuels economic growth and innovation.
EVIDENCE
She notes that adoption is the overwhelming theme of the summit and that people need trust in multiple facets (fit-for-purpose, language, culture, privacy, data security, and model quality) before they will use AI, and that this trust will become the economic driver of tool adoption [53-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust is identified as essential for technology adoption and a driver of innovation and sustainable growth in the Trust and Safety discussion [S1].
MAJOR DISCUSSION POINT
Trust prerequisite for adoption
AGREED WITH
Trevor Hughes, John Edwards, Denise Wong, Amanda Craig
Argument 2
Lack of transparency hampers enforcement of current anti‑discrimination laws
EXPLANATION
Alexandra points out that existing US equal‑employment laws are difficult to enforce when AI systems make hiring decisions, because the opacity of the algorithms prevents plaintiffs from proving discrimination.
EVIDENCE
She describes a scenario where AI-driven hiring software may ignore older candidates, making it hard for a job applicant to know a violation occurred or to prove it in court without a disclosure regime that requires transparency and impact assessments [154-162].
MAJOR DISCUSSION POINT
Transparency needed for anti‑discrimination enforcement
AGREED WITH
John Edwards, Amanda Craig, Denise Wong
Argument 3
Discrimination in AI‑driven hiring illustrates need for disclosure and impact assessments
EXPLANATION
Building on the previous point, Alexandra emphasizes that without mandatory disclosure of AI systems used in high‑risk hiring, affected individuals cannot obtain the remedy provided by existing law.
EVIDENCE
She explains that AI-powered hiring tools can hide discriminatory outcomes, and without a transparency regime and impact assessments, victims lack evidence to bring a case under US equal-employment statutes [154-162].
MAJOR DISCUSSION POINT
Disclosure required for AI hiring decisions
Argument 4
Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation
EXPLANATION
Alexandra notes that despite the lack of a global AI law, several jurisdictions are introducing focused legislation—such as the EU AI Act’s high‑risk provisions and state‑level transparency laws in the US—that can provide clear guardrails while still encouraging innovation.
EVIDENCE
She cites the EU AI Act’s high-risk framework, Colorado’s AI law, California and New York transparency statutes, and Utah’s regulatory sandbox provisions as examples of emerging, targeted regulation [262-270].
MAJOR DISCUSSION POINT
Targeted AI legislation supports innovation
AGREED WITH
Amanda Craig, John Edwards, Denise Wong
Argument 5
Well‑staffed, independent regulatory bodies and empowered civil society
EXPLANATION
Alexandra stresses that robust, well‑resourced, technically informed regulators and an active civil‑society watchdog are essential to represent the public interest and maintain trust in AI systems.
EVIDENCE
She explicitly calls for “well-staffed, empowered, independent regulatory bodies” and “well-resourced, technically informed, independent civil society” to play a role in safeguarding AI trust [355-356].
MAJOR DISCUSSION POINT
Importance of strong regulators and civil society
AGREED WITH
Trevor Hughes, John Edwards, Denise Wong, Amanda Craig
Argument 6
Emerging harmonisation through high‑risk provisions, sandboxes, and shared best practices
EXPLANATION
Alexandra observes that jurisdictions are beginning to align on high‑risk AI categories, regulatory sandboxes, and shared standards, creating early signs of global harmonisation despite the absence of a strong Brussels effect.
EVIDENCE
She references the EU AI Act’s high-risk approach, US state transparency laws, and Utah’s sandbox provision as examples of cross-jurisdictional learning and coordination [262-270].
MAJOR DISCUSSION POINT
Early global alignment on AI risk management
AGREED WITH
Amanda Craig, Denise Wong
DISAGREED WITH
Trevor Hughes, Denise Wong, John Edwards
Amanda Craig
6 arguments · 173 words per minute · 1045 words · 361 seconds
Argument 1
Trust requires evolving governance processes inside firms
EXPLANATION
Amanda argues that trust cannot be static; companies must maintain dynamic governance programs that evolve alongside rapidly changing AI technologies.
EVIDENCE
She describes Microsoft’s responsible AI governance program, the need for ongoing conversation about trust, and the challenge of a technology that changes quickly, likening it to a light switch that may be in a different place next week [112-120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Amanda’s view that trust depends on dynamic internal governance is reflected in the Trust and Safety dialogue that pairs her with Alexandra and in discussions on evolving AI governance [S1][S19].
MAJOR DISCUSSION POINT
Dynamic internal governance for trust
AGREED WITH
Denise Wong, Alexandra Reeve Givens
DISAGREED WITH
John Edwards, Denise Wong, Alexandra Reeve Givens
Argument 2
Alignment of internal AI principles with existing data‑protection statutes
EXPLANATION
Amanda notes that Microsoft’s internal responsible‑AI frameworks are designed to complement and map onto existing data‑protection regulations, ensuring compliance while fostering trust.
EVIDENCE
She states that Microsoft sees opportunities to implement existing regulation and develop additional regulation, indicating that internal principles are aligned with current data-protection statutes [112-115].
MAJOR DISCUSSION POINT
Internal AI principles map to data‑protection law
Argument 3
Combination of internal responsible‑AI frameworks and external regulation is essential
EXPLANATION
Amanda emphasizes that both corporate governance programs and governmental regulation are needed to build and sustain trust in AI, with each reinforcing the other.
EVIDENCE
She explains that Microsoft focuses on responsible AI governance while also seeing the need for government-led governance models, highlighting the complementary role of internal frameworks and external regulation [112-119].
MAJOR DISCUSSION POINT
Synergy of internal and external governance
Argument 4
Definition of high‑risk “sensitive use” categories and supply‑chain risk management
EXPLANATION
Amanda outlines Microsoft’s categorisation of “sensitive uses”—such as employment, education, and critical infrastructure—and stresses the importance of managing risk across the entire AI supply chain.
EVIDENCE
She lists three categories of sensitive use (life-opportunity impacts, psychological/physical harm, human-rights impacts) and discusses the challenge of addressing risk across the whole supply chain rather than focusing on a single component [199-203] and [209-220].
MAJOR DISCUSSION POINT
Sensitive‑use taxonomy and supply‑chain risk
AGREED WITH
Alexandra Reeve Givens, John Edwards, Denise Wong
DISAGREED WITH
Denise Wong, John Edwards, Alexandra Reeve Givens
Argument 5
Provenance tools for tracking dynamic AI components
EXPLANATION
Amanda proposes that provenance tools, such as software bills of materials that record the lineage of AI components, can increase transparency and accountability for complex, dynamic AI systems.
EVIDENCE
She describes “software bills of materials” that allow tracking of dynamic components across models, platforms, tools, and services, thereby providing provenance for agents [316-322].
MAJOR DISCUSSION POINT
Provenance for AI transparency
AGREED WITH
Alexandra Reeve Givens, John Edwards, Denise Wong
Argument 6
Learning from other jurisdictions’ approaches to AI risk
EXPLANATION
Amanda suggests that AI risk management can benefit from lessons learned in cybersecurity and from the regulatory practices of other countries, encouraging a broader, cross‑jurisdictional perspective.
EVIDENCE
She draws parallels between AI risk and decades of cybersecurity work, noting the need to look across the whole supply chain and to learn from other governments’ approaches [208-210].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-jurisdictional learning and harmonised sandbox approaches are advocated in multistakeholder AI standards initiatives [S23] and sandbox coordination studies [S9][S18].
MAJOR DISCUSSION POINT
Cross‑jurisdictional learning for AI risk
John Edwards
6 arguments · 143 words per minute · 1144 words · 477 seconds
Argument 1
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals
EXPLANATION
John explains that data‑protection requirements such as privacy‑by‑design, impact assessments, and ICO oversight give businesses concrete ways to demonstrate trustworthiness to consumers.
EVIDENCE
He lists GDPR-by-design obligations, data-protection impact assessments, privacy-by-design, and risk assessments as regulatory tools that act as trust signals, with the ICO providing oversight for both businesses and consumers [84-95].
MAJOR DISCUSSION POINT
GDPR tools as trust metrics
AGREED WITH
Alexandra Reeve Givens, Amanda Craig, Denise Wong
Argument 2
UK GDPR supplies a practical regulatory regime for AI
EXPLANATION
John states that, even without a specific AI law, the UK’s implementation of the GDPR creates a de‑facto regulatory framework that applies to AI systems handling personal data.
EVIDENCE
He notes that data-protection laws apply wherever technology touches personal data, describing the UK GDPR as a “de facto regulatory regime” for AI and highlighting obligations such as data-protection-by-design and impact assessments [86-87] and [90-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UK’s implementation of GDPR is described as a de-facto AI regulatory framework that applies to AI systems handling personal data [S21].
MAJOR DISCUSSION POINT
UK GDPR as AI regulator
Argument 3
Absence of a dedicated AI law is not a regulatory deficit; guidance fills the gap
EXPLANATION
John argues that the lack of a specific AI statute does not leave a void because the ICO issues guidance that maps existing GDPR principles onto AI use, providing certainty for industry.
EVIDENCE
He explains that the office issues technology-neutral principles, links EU AI Act obligations back to GDPR, and provides guidance to fill any perceived lacuna, ensuring confidence and certainty for AI developers [97-107].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance that translates existing GDPR principles to AI use is cited as filling the statutory gap in the AI-compatible data-protection session [S21].
MAJOR DISCUSSION POINT
Guidance compensates for missing AI law
Argument 4
Cross‑agency coordination (ICO, Ofcom, GPA) is vital for addressing multi‑jurisdictional risks
EXPLANATION
John highlights the importance of collaboration among UK regulators and international bodies to manage AI risks that span different regulatory domains and jurisdictions.
EVIDENCE
He recounts reaching out to Ofcom (Online Safety Act), the Global Privacy Assembly (GPA), and coordinating with international colleagues to share information and align expectations on AI safety investigations [286-300].
MAJOR DISCUSSION POINT
Inter‑agency cooperation on AI risk
AGREED WITH
Trevor Hughes, Denise Wong
DISAGREED WITH
Trevor Hughes, Alexandra Reeve Givens, Denise Wong
Argument 5
“Agency” concept to restore user control and post‑consent rights
EXPLANATION
John proposes that focusing on user agency—providing mechanisms to understand data provenance and to withdraw consent—can re‑balance power between providers and individuals.
EVIDENCE
He defines agency as maintaining individual control, linking it to provenance, and describing features such as a “delete everything” button that go beyond traditional consent models [328-339].
MAJOR DISCUSSION POINT
Agency as post‑consent empowerment
Argument 6
Global Privacy Assembly collaboration to avoid regulatory overlap
EXPLANATION
John notes that the GPA serves as a platform for privacy regulators worldwide to coordinate, preventing duplicated efforts and ensuring consistent approaches to AI‑related privacy challenges.
EVIDENCE
He describes early outreach to GPA colleagues during the Grok investigation to ensure shared information and avoid stepping on each other’s toes [286-300].
MAJOR DISCUSSION POINT
GPA as coordination mechanism
Denise Wong
6 arguments · 164 words per minute · 969 words · 353 seconds
Argument 1
Trust as the desired outcome of governance and policy frameworks
EXPLANATION
Denise frames trust and safety as the ultimate goal of AI governance, emphasizing that policies should create conditions where both the public and enterprises can use AI with confidence.
EVIDENCE
She states that “trust and safety is the outcome that we want” and that governance frameworks aim to create necessary conditions for society and enterprises to thrive with AI [128-132].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust as the ultimate governance outcome is a central theme in the Trust and Safety analysis that includes Denise’s contributions [S1].
MAJOR DISCUSSION POINT
Trust as governance outcome
AGREED WITH
Trevor Hughes, Alexandra Reeve Givens, John Edwards, Amanda Craig
Argument 2
Singapore’s PDPA is outcome‑driven and supported by advisory guidance
EXPLANATION
Denise explains that Singapore’s Personal Data Protection Act (PDPA) is deliberately broad and outcome‑focused, with the PDPC providing non‑prescriptive advisory guidelines to help organisations comply.
EVIDENCE
She notes that the PDPA is “outcome driven,” not prescriptive, and that most compliance guidance is delivered through advisory guidelines rather than hard law [254-258].
MAJOR DISCUSSION POINT
Outcome‑driven PDPA framework
Argument 3
Regulate only clear, high‑impact harms; rely on sectoral rules and standards otherwise
EXPLANATION
Denise argues that regulation should be reserved for situations with evident, serious harms (e.g., election deepfakes), while other issues can be managed through existing sector‑specific regulations and voluntary standards.
EVIDENCE
She cites concrete regulations on election deepfakes, online harms, and scams as examples of clear-harm regulation, and mentions leaving broader AI issues to sectoral rules and emerging horizontal principles [136-143] and [144-147].
MAJOR DISCUSSION POINT
Targeted regulation for clear harms
AGREED WITH
Alexandra Reeve Givens, Amanda Craig, John Edwards
Argument 4
International harm taxonomy (e.g., AI Safety Report) guides regulatory focus
EXPLANATION
Denise points to emerging global taxonomies, such as the International AI Safety Report, which categorize AI harms and help regulators prioritize interventions.
EVIDENCE
She references the International AI Safety Report as a source of emerging harm buckets and archetypes that are being adopted in places like Iceland [244-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The International AI Safety Report’s harm taxonomy is referenced as a tool for prioritising regulatory interventions in discussions of AI safety taxonomy [S24].
MAJOR DISCUSSION POINT
Harm taxonomy informs regulation
AGREED WITH
John Edwards, Trevor Hughes
DISAGREED WITH
Trevor Hughes, Alexandra Reeve Givens, John Edwards
Argument 5
Privacy‑enhancing technologies such as federated learning
EXPLANATION
Denise highlights that technical solutions like federated learning can address data‑privacy challenges that law alone may not solve, enabling AI model training without exposing raw personal data.
EVIDENCE
She notes that two years ago federated learning was theoretical, but it is now being deployed in production to protect personal information while training AI models [349-351].
MAJOR DISCUSSION POINT
Technical privacy solutions for AI
Argument 6
Use of codes of practice and agile governance to adapt to evolving harms
EXPLANATION
Denise describes Singapore’s approach of pairing umbrella legislation with easily updatable codes of practice, allowing the regulatory framework to stay current with rapidly changing AI risks.
EVIDENCE
She explains that Singapore employs codes of practice under a broad legislative frame, which can be updated more readily than primary legislation, and that guidance from the PDPC complements this agile approach [251-258].
MAJOR DISCUSSION POINT
Agile governance via codes of practice
AGREED WITH
Amanda Craig, Alexandra Reeve Givens
Trevor Hughes
4 arguments · 143 words per minute · 2428 words · 1015 seconds
Argument 1
Observation of a paradox between deregulation talk and pervasive trust messaging
EXPLANATION
Trevor notes the contradictory situation where the industry talks about stepping back from regulation while simultaneously emphasizing trust, safety, and risk‑management in AI.
EVIDENCE
He observes that “there clearly is a deregulatory mood in the air” yet every banner mentions trust, safety, or privacy, highlighting the dichotomy [28-38].
MAJOR DISCUSSION POINT
Deregulation vs trust paradox
AGREED WITH
John Edwards, Denise Wong
DISAGREED WITH
Alexandra Reeve Givens, Denise Wong, John Edwards
Argument 2
Cookie‑consent experience illustrates how privacy law shapes trust
EXPLANATION
Trevor uses the long‑standing struggle with cookie consent banners as an example of how privacy regulation can both create and undermine user trust, showing the complexity of prescriptive rules.
EVIDENCE
He recounts the history of cookie regulation, the ongoing pain of implementing consent banners, and suggests that these mechanisms may actually drive the desired outcomes despite their burdensome nature [181-188].
MAJOR DISCUSSION POINT
Cookie consent as trust mechanism
Argument 3
Questioning whether the current deregulatory mood truly reduces guardrails
EXPLANATION
Trevor asks whether the apparent deregulatory atmosphere actually means fewer safeguards, or merely a quieter discussion about needed guardrails in AI.
EVIDENCE
He poses the question, “Are we actually in a deregulatory moment, or have we just gotten quiet about the need for guardrails?” and later wonders if the deregulatory mood truly reduces safeguards [36-40] and [169-170].
MAJOR DISCUSSION POINT
Deregulation vs actual guardrails
Argument 4
Lack of a clear “Brussels effect” for AI highlights need for global alignment
EXPLANATION
Trevor points out that, unlike data‑protection law, the EU AI Act has not generated a worldwide “Brussels effect,” underscoring the necessity for coordinated international AI governance.
EVIDENCE
He states that the EU AI Act has not taken off globally and that we do not see a Brussels effect happening on AI, suggesting a gap in global alignment [232-235].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses note that the EU AI Act has not generated a worldwide Brussels effect, underscoring the need for coordinated international AI governance [S15] and for global standards cooperation [S23].
MAJOR DISCUSSION POINT
Absence of AI Brussels effect
Agreements
Agreement Points
Trust is essential for AI adoption and economic growth
Speakers: Trevor Hughes, Alexandra Reeve Givens, John Edwards, Denise Wong, Amanda Craig
Trust as a prerequisite for adoption and economic growth
Well‑staffed, independent regulatory bodies and empowered civil society
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals
Trust as the desired outcome of governance and policy frameworks
Trust requires evolving governance processes inside firms
All speakers emphasized that trust is a prerequisite for AI adoption and drives economic growth; regulators see trust as an outcome of governance, industry sees it as requiring dynamic internal programs, and civil society frames it as the ultimate goal of policy. [16-18][56-63][87-95][128-132][112-119]
POLICY CONTEXT (KNOWLEDGE BASE)
Trust is identified as a key driver of AI adoption and economic confidence, highlighted in multiple IGF and WEF discussions emphasizing that trust fuels innovation and cross-border collaboration [S55][S56][S57].
Transparency and provenance are needed to enforce existing laws and build trust
Speakers: Alexandra Reeve Givens, John Edwards, Amanda Craig, Denise Wong
Lack of transparency hampers enforcement of current anti‑discrimination laws
Agency concept to restore user control and post‑consent rights
Provenance tools for tracking dynamic AI components
Use of codes of practice and agile governance to adapt to evolving harms
Speakers agreed that transparency, provenance, and disclosure are critical for enforcing anti-discrimination and other existing laws and for establishing trust in AI systems. Alexandra highlighted opacity in hiring AI, John linked provenance to user agency, Amanda described provenance tools, and Denise advocated agile codes of practice to maintain transparency. [154-162][322-324][316-322][251-258]
POLICY CONTEXT (KNOWLEDGE BASE)
Transparency and provenance are linked to standards that enable enforcement of laws and build trust, as noted in discussions on the role of standards and the need for clear definitions in AI regulation [S55][S56][S44].
Targeted, risk‑based regulation can support innovation rather than stifle it
Speakers: Alexandra Reeve Givens, Amanda Craig, John Edwards, Denise Wong
Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation
Definition of high‑risk “sensitive use” categories and supply‑chain risk management
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals
Regulate only clear, high‑impact harms; rely on sectoral rules and standards otherwise
All participants noted that focused, risk-based regulatory approaches-such as high-risk categories, sector-specific rules, or targeted statutes-provide necessary guardrails while still encouraging innovation. Alexandra cited emerging AI laws, Amanda described sensitive-use taxonomy, John pointed to GDPR tools, and Denise argued for regulation only where clear harms exist. [262-270][199-203][84-95][136-143]
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-based, principle-based regulation is argued to foster innovation, with examples such as regulatory sandboxes and EU AI Act risk assessments encouraging growth while mitigating risks [S46][S47][S48][S49].
Cross‑jurisdictional coordination among regulators and stakeholders is vital
Speakers: John Edwards, Trevor Hughes, Denise Wong
Cross‑agency coordination (ICO, Ofcom, GPA) is vital for addressing multi‑jurisdictional risks
Observation of a paradox between deregulation talk and pervasive trust messaging
International harm taxonomy (e.g., AI Safety Report) guides regulatory focus
Speakers concurred that international and inter-agency cooperation is essential to manage AI risks that cross borders. John described coordination with Ofcom and the GPA, Trevor highlighted the need for coordination amid deregulation trends, and Denise referenced global harm taxonomies that aid alignment. [286-300][272-279][244-247]
POLICY CONTEXT (KNOWLEDGE BASE)
Coordinated policymaking across international, regional, and subnational levels and multi-stakeholder engagement are repeatedly emphasized as essential for coherent AI governance [S52][S53][S61].
Dynamic, evolving governance frameworks are needed to keep pace with AI change
Speakers: Amanda Craig, Denise Wong, Alexandra Reeve Givens
Trust requires evolving governance processes inside firms
Use of codes of practice and agile governance to adapt to evolving harms
Emerging harmonisation through high‑risk provisions, sandboxes, and shared best practices
All agreed that AI governance must be adaptable. Amanda stressed the need for evolving internal programs, Denise promoted agile codes of practice, and Alexandra noted emerging harmonisation efforts that require flexible approaches. [112-119][251-258][262-270]
POLICY CONTEXT (KNOWLEDGE BASE)
Recent panels and workshops call for adaptive, agile governance that evolves with rapid AI advances, stressing flexible frameworks and continuous updates [S50][S64][S65][S66][S67].
Similar Viewpoints
Both regulators and industry see provenance and agency as key mechanisms to give users control and build trust in AI systems. [322-324][328-339]
Speakers: John Edwards, Amanda Craig
Agency concept to restore user control and post‑consent rights
Provenance tools for tracking dynamic AI components
Both civil‑society and regulator perspectives recognise emerging global harm taxonomies and targeted statutes as a basis for coordinated, innovation‑friendly regulation. [262-270][244-247]
Speakers: Denise Wong, Alexandra Reeve Givens
Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation
International harm taxonomy (e.g., AI Safety Report) guides regulatory focus
Unexpected Consensus
Regulators endorsing provenance tools as a trust‑building innovation
Speakers: John Edwards, Amanda Craig
Agency concept to restore user control and post‑consent rights
Provenance tools for tracking dynamic AI components
It is notable that a data-protection regulator (John) and a corporate AI leader (Amanda) both highlighted provenance and agency as promising innovations for trust, despite their different institutional roles. [322-324][328-339]
Regulator, industry, and civil society all advocating agile, code‑of‑practice governance
Speakers: Denise Wong, Amanda Craig, Alexandra Reeve Givens
Use of codes of practice and agile governance to adapt to evolving harms
Definition of high‑risk “sensitive use” categories and supply‑chain risk management
Emerging harmonisation through high‑risk provisions, sandboxes, and shared best practices
While regulators often favour formal legislation, the panel showed unexpected alignment among a regulator (Denise), industry (Amanda), and civil society (Alexandra) on the need for flexible, code-based approaches to keep pace with AI evolution. [251-258][112-119][262-270]
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder approaches and agile, code-of-practice models are advocated by civil society and industry to ensure practical, inclusive AI governance [S52][S50][S40].
Overall Assessment

The panel displayed strong consensus that trust is foundational for AI adoption, that transparency and provenance are essential for enforcing existing laws, and that targeted, risk‑based regulation—combined with agile governance—can support innovation. Participants also agreed on the necessity of cross‑jurisdictional coordination and dynamic governance models.

High consensus across regulators, industry, and civil‑society on the core principles of trust, transparency, and coordinated, risk‑based regulation, suggesting a shared roadmap for future AI governance that balances innovation with safeguards.

Differences
Different Viewpoints
Extent of regulatory intervention needed for AI
Speakers: John Edwards, Amanda Craig, Denise Wong, Alexandra Reeve Givens, Trevor Hughes
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals
Trust requires evolving governance processes inside firms
Regulate only clear, high‑impact harms; rely on sectoral rules and agile codes for the rest
Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation
Observation of a paradox between deregulation talk and pervasive trust messaging
John argues that the UK GDPR already supplies a sufficient de-facto regime for AI, so no new AI-specific law is needed [84-87][90-95][97-107]. Amanda stresses that both internal responsible-AI programs and external regulation are needed to sustain trust [112-119]. Denise contends that regulation should be limited to clear high-impact harms and that sector-specific rules and codes of practice are preferable for other issues [136-143][144-147][251-258]. Alexandra points to emerging targeted AI legislation (EU AI Act, US state laws) as beneficial for innovation and guardrails [262-270]. Trevor highlights the contradictory deregulatory mood versus the ubiquity of trust messaging [28-38][232-235].
POLICY CONTEXT (KNOWLEDGE BASE)
Views diverge between proponents of traditional legal mechanisms and limited pre-emptive regulation versus advocates for targeted regulatory measures to address AI harms [S41][S42].
Preferred mechanism to achieve trust and safety
Speakers: Amanda Craig, John Edwards, Denise Wong, Alexandra Reeve Givens
Trust requires evolving governance processes inside firms
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals
Trust as the desired outcome of governance and policy frameworks
Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation
Amanda argues that dynamic internal governance programs are essential for trust, complementing external regulation [112-119]. John emphasizes that regulatory requirements such as privacy-by-design and ICO oversight give concrete trust signals to consumers [84-95]. Denise frames trust as the ultimate outcome of governance, achieved through outcome-driven laws and advisory guidance rather than prescriptive rules [128-132][251-258]. Alexandra adds that thoughtful, targeted regulation (high-risk provisions, sandboxes) can also fuel innovation and trust [262-270]. All agree trust is vital but differ on whether internal corporate measures, regulatory mandates, or a mix is the primary driver.
POLICY CONTEXT (KNOWLEDGE BASE)
Mechanisms such as regulatory sandboxes, risk assessments, and principle-based codes are discussed as ways to build trust and ensure safety in AI systems [S46][S47][S61][S48].
Approach to identifying and addressing AI harms
Speakers: Denise Wong, Amanda Craig, John Edwards, Alexandra Reeve Givens
International harm taxonomy (e.g., AI Safety Report) guides regulatory focus
Definition of high‑risk “sensitive use” categories and supply‑chain risk management
Cross‑agency coordination (ICO, Ofcom, GPA) is vital for addressing multi‑jurisdictional risks
Emerging harmonisation through high‑risk provisions, sandboxes, and shared best practices
Denise cites the International AI Safety Report as a developing taxonomy to prioritize harms [244-245]. Amanda outlines Microsoft’s taxonomy of sensitive uses and stresses managing risk across the entire AI supply chain [199-203][209-220]. John highlights the need for coordination among regulators (ICO, Ofcom, GPA) to handle cross-jurisdictional risks [286-300]. Alexandra notes early global alignment on high-risk categories, sandboxes, and shared standards, though she acknowledges the process is nascent [262-270]. The speakers differ on whether a top-down taxonomy, corporate risk categories, inter-agency coordination, or emerging harmonisation should lead the effort.
POLICY CONTEXT (KNOWLEDGE BASE)
Approaches range from targeting specific harms and assigning responsibility to broader policy actions across sectors to mitigate AI-driven disruptions [S42][S58][S59].
Extent of global harmonisation and the ‘Brussels effect’ for AI
Speakers: Trevor Hughes, Alexandra Reeve Givens, Denise Wong, John Edwards
Observation of a paradox between deregulation talk and pervasive trust messaging
Emerging harmonisation through high‑risk provisions, sandboxes, and shared best practices
International harm taxonomy (e.g., AI Safety Report) guides regulatory focus
Cross‑agency coordination (ICO, Ofcom, GPA) is vital for addressing multi‑jurisdictional risks
Trevor observes that unlike data-protection law, the EU AI Act has not produced a Brussels effect, suggesting a lack of global alignment [232-235]. Alexandra counters that there are early signs of alignment via high-risk frameworks, sandboxes, and cross-jurisdictional learning [262-270]. Denise points to an emerging international harm taxonomy that is beginning to be adopted globally [244-245]. John stresses the practical need for regulator coordination across borders to manage AI risks [274-277][286-300]. The disagreement centers on how far global harmonisation has progressed.
POLICY CONTEXT (KNOWLEDGE BASE)
The ‘Brussels effect’, where EU regulations influence global standards, is highlighted as a factor shaping AI governance worldwide, raising questions about the degree of harmonisation needed [S43][S44][S45].
Unexpected Differences
Sufficiency of existing data‑protection law versus need for new agile regulatory tools
Speakers: John Edwards, Denise Wong
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals
Use of codes of practice and agile governance to adapt to evolving harms
John asserts that the UK GDPR and ICO guidance fully cover AI risks, leaving no regulatory deficit [97-107]. Denise, however, contends that because AI harms are still coalescing, the law must be complemented by flexible codes of practice and advisory guidance to stay current [251-258]. This contrast between confidence in existing law and the call for agile supplementary tools was not anticipated given their shared regulatory focus.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates focus on whether current data-protection frameworks are adequate or whether new, agile tools are required, with concerns about premature legislation and sector-specific regulatory needs [S44][S41][S60].
Overall Assessment

The panel shows strong consensus that trust and safety are critical for AI adoption, but there is notable disagreement on the scope and form of regulation needed. While one regulator (John) emphasizes existing data‑protection frameworks as sufficient, industry (Amanda), a fellow regulator (Denise), and civil society (Alexandra) call for additional targeted legislation, agile codes, and internal governance mechanisms. Disagreements also appear around the degree of global harmonisation and the best approach to identifying AI harms.

Moderate to high disagreement on regulatory strategy and global coordination, which could impede unified policy development but also reflects a healthy multi‑stakeholder debate that may lead to more nuanced, hybrid governance models.

Partial Agreements
All speakers concur that trust and safety are essential for AI adoption and societal benefit, but they diverge on the primary means to achieve it: Amanda stresses internal corporate governance, John highlights regulatory compliance mechanisms, Denise focuses on outcome‑driven policy and advisory guidance, while Alexandra advocates for targeted legislative interventions [112-119][84-95][128-132][262-270].
Speakers: Amanda Craig, John Edwards, Denise Wong, Alexandra Reeve Givens
Trust requires evolving governance processes inside firms
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals
Trust as the desired outcome of governance and policy frameworks
Emerging AI‑specific statutes (EU AI Act, US state bills) show targeted regulation can aid innovation
Both agree that regulation should be proportionate and focused on clear harms, but John believes existing GDPR tools already cover AI sufficiently, whereas Denise argues that additional agile codes of practice are needed to address emerging AI risks beyond what current law captures [84-95][251-258].
Speakers: John Edwards, Denise Wong
Regulatory tools (e.g., GDPR‑by‑design) provide measurable trust signals
Regulate only clear, high‑impact harms; rely on sectoral rules and agile codes for the rest
Takeaways
Key takeaways
Trust and safety are seen as essential drivers of AI adoption and economic growth; without trust, users will not ‘flip the switch’ on AI technologies.
Existing data‑protection regimes (e.g., UK GDPR, Singapore PDPA) already provide a de‑facto regulatory layer for AI, offering tools such as privacy‑by‑design, DPIAs, and outcome‑driven guidance.
There is a perceived paradox between a deregulatory climate and the pervasive emphasis on trust and safety in industry and policy messaging.
Regulators, industry, and civil society agree that thoughtful, principle‑based regulation can fuel innovation rather than stifle it.
High‑risk or clearly harmful AI applications (e.g., deepfakes in elections, discriminatory hiring tools) merit targeted regulatory action, while broader AI use can be governed through sectoral rules and internal standards.
Identifying AI harms requires a mix of principles, existing law, sector‑specific risk taxonomies, and supply‑chain‑wide risk management.
Innovative mechanisms such as provenance tools, the “agency” concept, privacy‑enhancing technologies (e.g., federated learning), and well‑resourced independent watchdogs are viewed as promising ways to strengthen trust.
International coordination (e.g., ICO‑Ofcom collaboration, Global Privacy Assembly, codes of practice) is critical to avoid fragmented oversight and to share emerging best practices.
Resolutions and action items
Regulators (e.g., ICO, PDPC) will continue issuing guidance that maps existing data‑protection principles to AI use cases.
Industry (Microsoft) will advance responsible‑AI governance programs, focusing on provenance tools and dynamic component tracking.
Stakeholders will pursue agile, outcome‑driven regulatory mechanisms such as codes of practice and sandboxes to address evolving harms.
Cross‑agency coordination mechanisms will be maintained and expanded (e.g., ICO‑Ofcom, GPA collaboration).
Unresolved issues
The extent to which new, AI‑specific legislation is needed beyond existing data‑protection frameworks.
How to prospectively define and prioritize emerging AI harms in a way that is globally consistent.
Achieving a true “Brussels effect” for AI governance and harmonising standards across jurisdictions.
Specific implementation details for high‑risk AI sandboxes and how they will balance innovation with oversight.
How to allocate responsibility for AI risk across the entire supply chain without over‑burdening individual users.
Suggested compromises
Regulate only clear, high‑impact harms while relying on sectoral rules and internal responsible‑AI standards for the broader AI landscape.
Use existing data‑protection laws as the baseline regulatory layer and supplement them with agile codes of practice or guidance for AI‑specific issues.
Combine internal governance tools (e.g., provenance, agency mechanisms) with external oversight to provide measurable trust signals without heavy prescriptive legislation.
Thought Provoking Comments
In 1891, when electricity was first being brought into the White House, President Benjamin Harrison and his wife were terrified of flipping the light switch. They hired an electrician just to turn it on. The lesson: we won’t use technology if we don’t trust it.
Uses a vivid historical analogy to illustrate that trust is a prerequisite for adoption of any new technology, framing the entire panel around the central theme of trust as an engine for growth.
Set the tone for the discussion, prompting each panelist to address trust from their perspective and leading directly to the first question about why trust and safety matter for innovation.
Speaker: Trevor Hughes
Regulation isn’t a brake on innovation; thoughtful, well‑designed regulation can actually be fuel for innovation because it outsources the trust‑building work from individual users to a common standard.
Challenges the common narrative that regulation stifles progress and reframes it as a catalyst, introducing a nuanced view that bridges civil‑society concerns with business interests.
Shifted the conversation from a binary ‘regulation vs. innovation’ debate to a more collaborative framing, prompting John and others to discuss how existing rules already serve that purpose.
Speaker: Alexandra Reeve Givens
The UK doesn’t need a separate AI law because the UK GDPR already provides a de‑facto regulatory regime for AI. We map GDPR principles—privacy by design, DPIAs, fairness—to AI use cases, giving businesses certainty.
Highlights a pragmatic approach: leveraging existing data‑protection law to cover AI, thereby questioning the necessity of new, AI‑specific legislation.
Reinforced the idea that existing frameworks can fill gaps, leading Amanda and Denise to discuss whether additional rails are needed or if sector‑specific guidance suffices.
Speaker: John Edwards
We regulate only where harms are clear (e.g., election deep‑fakes, online scams). For the rest we rely on sectoral regulations and horizontal principles—proto‑standards and assurance ecosystems—that sit adjacent to law.
Introduces a layered governance model that distinguishes between clear‑cut harms requiring law and broader, evolving issues handled by standards and market‑driven assurance, adding complexity to the regulatory discussion.
Prompted Alexandra to note the transparency problem in existing laws, and led the group to explore the need for a “horizontal” layer of accountability beyond sector‑specific rules.
Speaker: Denise Wong
In the U.S., existing equal‑employment laws prohibit discrimination, but AI‑driven hiring tools make it practically impossible for a candidate to prove bias without a disclosure regime. Transparency and impact assessments are needed to give those laws meaning.
Provides a concrete, jurisdiction‑specific example where existing law is insufficient without AI‑specific transparency, illustrating the gap between legal theory and practical enforcement.
Deepened the analysis of why new governance mechanisms (e.g., disclosure requirements) are essential, influencing Denise’s point about agile codes of practice and prompting further discussion on enforcement challenges.
Speaker: Alexandra Reeve Givens
From cybersecurity we’ve learned how to manage risk across the entire supply chain. AI risk isn’t just at the point of use; we need a holistic, supply‑chain‑wide governance approach.
Brings cross‑domain expertise to the AI debate, suggesting that lessons from a mature field (cybersecurity) can inform AI risk management, thereby expanding the conversation beyond AI‑specific silos.
Shifted the dialogue toward systemic risk management, encouraging other panelists to think about broader, coordinated regulatory and industry responses rather than isolated measures.
Speaker: Amanda Craig
Because harms are still being coalesced, prescriptive legislation is premature. Instead we should use agile tools like outcome‑driven umbrella legislation combined with quickly updatable codes of practice.
Advocates for a flexible, iterative regulatory approach, directly addressing the difficulty of forecasting AI harms and offering a practical alternative to rigid statutes.
Reinforced the earlier theme of layered governance, and led to a consensus that while some high‑risk scenarios merit direct regulation, most AI governance will evolve through standards and best‑practice frameworks.
Speaker: Denise Wong
The EU AI Act’s high‑risk provisions, regulatory sandboxes, and transparency laws are already being echoed in U.S. states like Colorado, New York, and Utah—showing a nascent ‘Brussels effect’ for AI.
Counters the claim that there is no global harmonisation, pointing out concrete examples of cross‑jurisdictional learning and diffusion of regulatory ideas.
Broadened the perspective from a purely national view to a global one, encouraging the panel to acknowledge emerging international convergence and influencing John’s remarks on coordination among regulators.
Speaker: Alexandra Reeve Givens
Agency—not just consent—is the innovation we need: give users the ability to understand provenance, withdraw consent, and control their data after the fact.
Proposes a shift from the traditional consent model to a more dynamic, user‑centric notion of agency, adding a fresh conceptual tool for building trust.
Inspired a brief exchange on burden‑shifting, with Trevor linking agency to fiduciary responsibilities, and set the stage for the rapid “innovation round” where each panelist highlighted a promising idea.
Speaker: John Edwards
Provenance tools—software‑built material that tracks dynamic AI components—can bring transparency to agentic AI systems.
Introduces a concrete technical innovation that could operationalise the abstract concepts of trust and accountability discussed throughout the panel.
Provided a tangible example for the speed‑round, linking back to earlier calls for transparency and influencing John’s and Denise’s selections of agency and privacy‑enhancing technologies respectively.
Speaker: Amanda Craig
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that reframed the trust‑and‑safety debate from a binary regulation‑vs‑innovation stance to a nuanced, layered governance model. Trevor’s opening story anchored the theme of trust, while John’s and Alexandra’s insights about leveraging existing law and the EU AI Act’s influence opened space for pragmatic solutions. Denise’s distinction between clear‑harm regulation and horizontal standards, coupled with Amanda’s cross‑domain supply‑chain perspective, added depth and highlighted the need for agile, coordinated approaches. Concrete examples—such as the U.S. employment discrimination case and provenance tools—grounded the abstract concepts, leading the panel to converge on four promising innovations (provenance, agency, privacy‑enhancing tech, and well‑resourced regulators). Collectively, these comments shifted the tone from skepticism about regulation to a collaborative view that sees thoughtful governance as essential infrastructure for AI innovation.

Follow-up Questions
How can we develop effective transparency and disclosure regimes for AI systems in high‑risk contexts (e.g., hiring) to enable enforcement of existing anti‑discrimination laws?
Without transparency, existing laws such as equal‑employment regulations cannot be applied to AI‑driven decisions, leaving victims without remedy.
Speaker: Alexandra Reeve Givens
What mechanisms can regulators use to prospectively identify and classify AI‑related harms in a culturally specific way, given the difficulty of a one‑size‑fits‑all approach?
Prospective identification of harms is essential for crafting agile, context‑sensitive regulation that avoids over‑ or under‑regulation across diverse societies.
Speaker: Denise Wong
How can international regulator coordination (e.g., ICO, Ofcom, GPA) be structured to address cross‑jurisdictional AI issues such as the Grok incident?
Fragmented oversight hampers effective enforcement; a clear coordination framework would enable consistent responses to AI‑driven harms that cross borders.
Speaker: John Edwards
What is the effectiveness of regulatory sandboxes and codes of practice as less‑prescriptive tools for AI governance, and how can they be evaluated?
Sandboxes and codes aim to provide flexibility while protecting users, but their impact is unclear; systematic evaluation would inform whether they achieve desired outcomes.
Speaker: Denise Wong
How can provenance tools and software‑built materials be standardized to provide traceability for agentic AI systems?
Provenance enhances transparency and accountability for complex, dynamic AI components, helping regulators and users understand system origins.
Speaker: Amanda Craig
How can the concept of “agency” be operationalized in AI products to shift responsibility back to providers rather than burdening users?
Embedding agency (e.g., clear opt‑out, data‑deletion mechanisms) restores user control and reduces reliance on consent as the sole protection mechanism.
Speaker: John Edwards
What is the current state of adoption and practical impact of privacy‑enhancing technologies such as federated learning in production AI systems?
Understanding real‑world deployment of PETs informs whether they can fill gaps that law cannot, guiding both policy and industry investment.
Speaker: Denise Wong
Why has the EU AI Act not generated a “Brussels effect” similar to GDPR, and what factors influence global diffusion of AI regulatory models?
Identifying barriers to international regulatory convergence helps policymakers design frameworks that are more likely to be adopted worldwide.
Speaker: Alexandra Reeve Givens
How can independent, well‑staffed regulatory bodies be protected and resourced to effectively represent public interest in AI governance?
Robust, independent regulators are critical for trustworthy oversight; without adequate resources they cannot fulfill their mandate.
Speaker: Alexandra Reeve Givens
Are current consent mechanisms (e.g., cookie banners) adequate for AI‑driven data processing, or do we need new user‑centric remedies?
AI introduces opaque processing that may render traditional consent ineffective, necessitating new models of user protection.
Speaker: Trevor Hughes (implied) and Alexandra Reeve Givens
How can a “horizontal” layer of AI transparency principles be designed to complement sector‑specific regulations without creating regulatory duplication?
A unified transparency framework can provide consistent expectations across sectors while allowing tailored vertical rules where needed.
Speaker: Alexandra Reeve Givens, Denise Wong
What lessons can be learned from the Grok investigation about the need for multi‑agency collaboration and the gaps in existing regulatory frameworks?
The Grok case highlights practical coordination challenges and regulatory blind spots that must be addressed for future AI incidents.
Speaker: John Edwards
How can risk across the entire AI supply chain be managed cohesively, drawing on cybersecurity supply‑chain risk management practices?
AI supply‑chain risks are distributed; a holistic approach similar to cybersecurity is needed to prevent fragmented mitigation.
Speaker: Amanda Craig
What criteria should define “high‑risk” AI uses for targeted regulation versus sector‑specific self‑regulation?
Clear, evidence‑based thresholds ensure that regulatory effort focuses on the most harmful applications while allowing innovation elsewhere.
Speaker: Denise Wong, Amanda Craig

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.