How Trust and Safety Drive Innovation and Sustainable Growth
20 Feb 2026 14:00h - 15:00h
Summary
The panel, moderated by IAPP’s Trevor Hughes, brought together a civil-society leader (Alexandra Reeve Givens) [2], a tech-industry representative (Amanda Craig, Microsoft) [4], and two regulators (UK Information Commissioner John Edwards [7] and Singapore PDPC Deputy Commissioner Denise Wong [8]) to examine whether trust can act as an engine for AI growth [16-18]. Hughes noted a paradox: while a “deregulatory mood” is evident [28-30], every banner on the summit floor highlighted trust, safety or privacy [32-34], prompting the question of whether the market is truly stepping back from guardrails [36-38].
Alexandra Reeve Givens argued that adoption, and thus innovation, depends on multiple dimensions of trust, from cultural fit to data security [56-63], and that thoughtful regulation can supply the “fuel” for that trust [67-70]. John Edwards explained that, in the UK, existing data-protection law (the UK GDPR) already provides a de facto regulatory regime that sets common standards such as privacy-by-design and impact assessments, thereby helping businesses demonstrate trust [84-92][94-99][102-108]. Denise Wong described Singapore’s approach of regulating clear harms (e.g., AI-generated deepfakes in elections) while leaving broader issues to sectoral rules and adaptable codes of practice [136-141][144-148][252-258].
A consensus emerged that trust and safety are essential, though additional regulation is only clearly needed for high-risk scenarios [164-169]. When asked for promising innovations, the panel highlighted provenance tools to increase transparency [316-322], the concept of agency to restore user control [328-334], privacy-enhancing technologies such as federated learning [346-351], and well-funded, independent watchdogs to represent the public interest [355-356]. The discussion concluded that ongoing coordination among regulators, industry and civil society is crucial to embed trust and safety into AI’s expanding role in society [412-419][423-425].
Keypoints
Major discussion points
– Trust and safety are seen as the engine of AI adoption and innovation, even amid a “deregulatory” climate.
Trevor notes the paradox of a deregulatory mood while trust-and-safety messaging dominates the summit ([28-39]). Alex expands that trust is the economic driver that makes AI tools usable and that thoughtful regulation can actually fuel innovation ([56-68]). John reinforces that regulatory standards (e.g., data-protection-by-design) provide a common metric for trust ([88-95]).
– Existing data-protection regimes are being repurposed for AI, but there is debate over the need for new, AI-specific rules.
The UK relies on the UK GDPR as a de facto AI framework and issues guidance that maps AI practices to GDPR principles ([84-107]). Singapore’s approach blends sector-specific regulation for clear harms with broader “codes of practice” that sit alongside the PDPA, showing a preference for flexible, non-prescriptive tools ([136-158][252-258]). Alex points out that even where laws exist, AI’s opacity makes enforcement difficult, highlighting the need for a transparency layer ([150-162]).
– Identifying and managing AI harms requires a mix of high-risk taxonomies, sector-agnostic principles, and supply-chain-wide governance.
Denise describes emerging global harm taxonomies (e.g., the International AI Safety Report) and the difficulty of prospectively defining harms, advocating for agile mechanisms like codes of practice ([236-247][250-257]). Amanda outlines Microsoft’s “sensitive-use” categories (impacts on life opportunities, psychological or physical harm, and human-rights impacts) and stresses the challenge of governing risk across the entire AI supply chain ([195-204][209-219]).
– Promising innovations to strengthen trust and safety were highlighted, ranging from technical tools to institutional capacity.
Amanda cites provenance tools that track dynamic AI components as a way to increase transparency and accountability ([316-322]). John emphasizes “agency”: giving users control beyond consent, such as delete or opt-out mechanisms ([328-334]). Denise points to privacy-enhancing technologies like federated learning that can protect data when law falls short ([346-351]). Alex stresses the importance of well-staffed, independent regulatory bodies and civil-society watchdogs ([355-356]).
– Cross-jurisdictional coordination among regulators is essential to avoid fragmented oversight.
John describes active collaboration with Ofcom, the Global Privacy Assembly, and other regulators to share expectations and avoid duplicated effort ([274-301]). Trevor underscores that regulator interaction becomes critical in the absence of a unified AI standard ([272-273]).
Overall purpose / goal of the discussion
The panel, convened by the IAPP, aimed to explore why trust and safety are crucial for AI-driven growth, assess the current patchwork of regulatory and industry governance, identify gaps where new safeguards may be needed, and surface practical ideas, both technical and institutional, that can help align innovation with responsible, trustworthy AI deployment.
Overall tone and its evolution
The conversation begins analytically and slightly skeptical, highlighting a “deregulatory” mood versus pervasive trust messaging ([28-39]). It quickly shifts to a collaborative, solution-focused tone as panelists share examples of existing frameworks, emerging harm taxonomies, and innovative governance tools. By the later “speed-round” and closing remarks, the tone becomes upbeat and even humorous (e.g., playful audience polls, “cheat” comments), while maintaining a constructive optimism about building trustworthy AI ecosystems.
Speakers
– Alexandra Reeve Givens – CEO of the Center for Democracy and Technology; expertise in civil rights, civil liberties, AI governance, trust and safety [S1][S2]
– Amanda Craig – General Manager for Responsible AI Policy at Microsoft; expertise in responsible AI governance and policy
– Trevor Hughes – Moderator representing the International Association of Privacy Professionals (IAPP); expertise in privacy, data protection, and AI trust & safety
– Denise Wong – Deputy Commissioner of the Personal Data Protection Commission (PDPC) in Singapore and Assistant Chief Executive of IMDA; expertise in data protection, privacy-enhancing technologies, and AI regulatory frameworks [S9]
– John Edwards – Information Commissioner of the United Kingdom (ICO); expertise in data protection law, privacy regulation, and AI oversight [S11]
Additional speakers:
– None identified beyond the listed speakers.
The session was opened by IAPP’s Trevor Hughes, who introduced the four panelists: civil-society leader Alexandra Reeve Givens, CEO of the Center for Democracy and Technology; Microsoft’s General Manager for Responsible AI Policy, Amanda Craig; the UK Information Commissioner, John Edwards; and the Deputy Commissioner of Singapore’s PDPC, Denise Wong. He noted that the discussion would explore whether “trust can act as an engine for AI growth” [1-12][16-18]. Hughes highlighted a paradox: while a “deregulatory mood” seems to be prevailing in policy circles, virtually every banner on the summit floor referenced trust, safety or privacy, raising the question of whether the market is truly stepping back from guardrails [28-38].
Alexandra Reeve Givens argued that the long-term success of AI depends on adoption, which in turn requires trust across multiple dimensions – fit-for-purpose, linguistic and cultural suitability, privacy protection, data security and the quality of the underlying information [56-63]. She framed trust as the economic driver that will fuel innovation and contended that thoughtful, principle-based regulation can supply the “fuel” for that trust by outsourcing the burden of assurance from individual users to a common standard [67-70].
John Edwards explained that the United Kingdom does not have a dedicated AI statute because the UK GDPR already provides a de facto regulatory regime for AI. He described how data-protection-by-design, data-protection impact assessments and other GDPR-derived obligations give businesses a measurable way to demonstrate trust, and how the ICO issues guidance that maps AI practices onto existing GDPR principles, thereby filling any perceived regulatory lacuna [84-108].
Denise Wong outlined Singapore’s layered approach. For harms that are clear and present, such as AI-generated deepfakes used in elections or AI-facilitated scams, Singapore has enacted specific regulations [136-141]. For the remainder of AI applications, the regulator relies on existing sectoral laws and on “codes of practice” that can be updated quickly, positioning these tools as an “outcome-driven” complement to the more prescriptive PDPA [144-148][252-258].
Alexandra Reeve Givens illustrated a concrete transparency problem: U.S. equal-employment-opportunity statutes prohibit discrimination, yet AI-driven hiring tools can hide bias, making it extremely difficult for a candidate to prove a violation without a disclosure regime that forces providers to reveal model details and conduct impact assessments [150-162]. She therefore stressed that a horizontal transparency layer is essential to give existing anti-discrimination laws practical effect [150-162].
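To make that concrete, here is a minimal, hypothetical sketch (not from the panel) of the kind of check a disclosure-and-impact-assessment regime could make routine: the “four-fifths rule” that U.S. enforcement agencies have long used as a screening heuristic for adverse impact in hiring. All names and numbers below are invented for illustration.

```python
# Hypothetical illustration of the U.S. "four-fifths rule" heuristic:
# a group may be adversely impacted if its selection rate falls below
# 80% of the most-selected group's rate. Data and names are invented.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, applicants); returns group -> rate."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below threshold * the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

# A disclosure regime would let candidates, auditors or regulators run
# exactly this kind of check on a vendor's reported screening numbers.
reported = {"under_40": (50, 100), "over_40": (20, 100)}
print(adverse_impact_flags(reported))  # {'under_40': False, 'over_40': True}
```

The point of the sketch is Givens’s argument in miniature: the arithmetic is trivial once the numbers are disclosed; without a disclosure regime, the candidate never sees the numbers at all.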
Amanda Craig described Microsoft’s internal governance model, which categorises “sensitive uses” into three high-impact buckets – impacts on life opportunities (e.g., employment, education), psychological or physical harm (especially for vulnerable groups), and human-rights implications [199-204]. She argued that AI risk must be managed across the entire supply chain, drawing on lessons from cybersecurity where risk is addressed holistically rather than focusing on a single component [209-219].
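Craig’s later “provenance tools” answer points to software bills of materials (SBOMs) as the cybersecurity precedent for supply-chain-wide governance. The sketch below is a loose, hypothetical adaptation of that idea to an AI system, not a description of Microsoft’s actual tooling; the system and component names are invented.

```python
import hashlib
import json

def component_record(name, version, artifact_bytes):
    """Describe one supply-chain component with a verifiable content hash."""
    return {
        "name": name,
        "version": version,
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
    }

# A toy "AI bill of materials": every dynamic component of the deployed
# system is listed with a content hash, so a downstream deployer or
# auditor can verify exactly which model, plugins and data manifests
# they are running.
ai_bom = {
    "system": "resume-screening-assistant",  # invented example system
    "components": [
        component_record("base-model", "1.2.0", b"<model weights>"),
        component_record("ranking-plugin", "0.4.1", b"<plugin code>"),
        component_record("training-data-manifest", "2026-01", b"<dataset index>"),
    ],
}
print(json.dumps(ai_bom, indent=2))
```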
Across the panel there was strong consensus that trust and safety are prerequisites for AI adoption and that transparency, provenance and robust governance are the means to achieve them. All speakers emphasized that trust is a central outcome of both policy and corporate action [56-63][84-92][112-119][128-132]; that provenance tools can make dynamic AI components traceable [316-322]; that the concept of “agency” can restore user control beyond mere consent [328-334]; that privacy-enhancing technologies such as federated learning can protect data where law cannot [346-351]; and that well-staffed, independent watchdogs are essential to represent the public interest [355-356].
Nevertheless, the panel diverged on how far regulation should go. Edwards maintained that the GDPR already supplies sufficient safeguards and that new AI-specific legislation is not currently needed [84-107]. Craig countered that internal responsible-AI programmes must be complemented by external regulation to sustain trust [112-119]. Wong advocated a risk-based, “clear-harm-first” approach, limiting formal statutes to obvious threats while using agile codes of practice for the rest [136-147][251-258]. Givens pointed to emerging AI-specific statutes in the EU and several U.S. states as examples of targeted regulation that can actually promote innovation [262-270]. Hughes highlighted the tension between the deregulatory narrative and the ubiquity of trust-focused messaging [28-38].
When asked to name a promising innovation, Craig highlighted “provenance tools”, drawing on software bills of materials as a model for tracking the provenance of dynamic AI components, thereby increasing transparency and accountability [316-322]. Edwards responded with the word “agency”, suggesting that restoring users’ ability to withdraw consent, delete data and control outcomes is a more powerful safeguard than traditional consent mechanisms [328-334]. Wong selected privacy-enhancing technologies, noting that federated learning is already moving from research into production and can secure personal data where legislation falls short [346-351]. Givens concluded with a call for “well-staffed, empowered, independent regulatory bodies and technically informed civil-society organisations” to act as public-interest watchdogs [355-356].
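For readers unfamiliar with the technique Wong names, a minimal federated-averaging sketch appears below. It assumes a toy linear-regression task with invented client data; the point is only that clients exchange model updates while raw records stay local.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training pass; the raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(global_weights, client_datasets):
    """The server aggregates only model updates, never the underlying records."""
    updates = [local_update(global_weights, X, y) for X, y in client_datasets]
    return np.mean(updates, axis=0)

# Toy setup: three clients with private data, one shared global model.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(10):  # ten communication rounds
    weights = federated_average(weights, clients)
```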
Cross-jurisdictional coordination was repeatedly stressed as essential. Edwards described active collaboration between the ICO, Ofcom and the Global Privacy Assembly to share expectations on AI safety, especially in the ongoing Grok investigation, and warned that fragmented oversight can leave gaps [274-301]. Hughes echoed this, noting that in the absence of a unified AI standard, regulator interaction becomes critical [272-279]. Wong added that an emerging international harm taxonomy, anchored in efforts such as the International AI Safety Report, is beginning to provide a common language for regulators worldwide [244-247].
The audience poll on the relationship between innovation and regulation produced a majority thumbs-up, indicating a generally positive view of the innovation-regulation relationship [181-188]. The discussion turned to the difficulty of prospective regulation, anticipating harms before a technology is broadly adopted, illustrated by the long-standing debate over cookie consent mechanisms, which still burden users despite decades of experience [181-188][190-197]. Craig linked this to AI by arguing that identifying harms requires a blend of high-risk taxonomies, sector-agnostic principles and supply-chain-wide governance [195-204][209-219].
In closing, Hughes asked the panel to imagine the name of the AI summit five years hence, prompting playful answers such as “AI Trust Summit”, “Nostalgia”, “Thriving” and “For the people, by the people” [392-399][403-408]. He then reflected that the hard work of embedding trust and safety into AI, which he likened to the historic challenge of bringing electricity into the White House, is being carried out daily by organisations, regulators and civil-society actors, and urged the audience to thank those who are doing this work [412-419][421-425].
Key take-aways
– Trust and safety are essential drivers of AI adoption; without trust users will not “flip the switch” [45-46][56-63][84-92][112-119][128-132].
– Existing data-protection regimes (UK GDPR, Singapore PDPA) already provide a baseline regulatory layer for AI, offering tools such as privacy-by-design and impact assessments [84-108][144-148].
– A paradox exists between a deregulatory climate and the pervasive emphasis on trust in industry and policy messaging [28-38].
– Thoughtful, principle-based regulation can act as a catalyst for innovation rather than a brake [67-70].
– High-risk or clearly harmful AI applications (e.g., election deepfakes, discriminatory hiring tools) merit targeted regulation; broader AI use can be governed through sectoral rules and internal standards [136-141][199-204].
– Identifying AI harms requires a mix of existing law, emerging harm taxonomies, corporate risk taxonomies and supply-chain-wide governance [195-204][244-247].
– Promising innovations include provenance tools, the agency concept, privacy-enhancing technologies and well-resourced independent watchdogs [316-322][328-334][346-351][355-356].
– International coordination (ICO-Ofcom-GPA collaboration, global harm taxonomies) is critical to avoid fragmented oversight [274-301][272-279][244-247].
Unresolved issues and suggested compromises
– The extent to which new AI-specific legislation is needed beyond existing data-protection frameworks remains contested.
– Prospective definition and prioritisation of emerging AI harms must balance cultural specificity with global consistency.
– Achieving a true “Brussels effect” for AI governance is still a work in progress.
– A hybrid approach is proposed: regulate clear, high-impact harms; use sectoral rules and agile codes of practice for the rest; and supplement with internal responsible-AI programmes and technical tools such as provenance and PETs [136-147][251-258][262-270][112-119].
Follow-up questions raised
1. How can effective transparency and disclosure regimes be built for high-risk AI contexts such as hiring to enable enforcement of anti-discrimination laws? [150-162]
2. What mechanisms allow regulators to prospectively identify and classify AI-related harms in culturally specific ways? [236-247]
3. How should international regulator coordination be structured to address cross-jurisdictional AI incidents like the Grok case? [274-301]
4. What is the effectiveness of regulatory sandboxes and codes of practice as less-prescriptive tools, and how can they be evaluated? [251-258]
5. How can provenance tools be standardised to provide traceability for agentic AI systems? [316-322]
6. How can the concept of “agency” be operationalised to shift responsibility back to providers rather than burdening users? [328-334]
7. What is the current state of adoption and impact of privacy-enhancing technologies such as federated learning? [346-351]
8. Why has the EU AI Act not generated a Brussels effect comparable to the GDPR, and what factors influence global diffusion of AI regulatory models? [232-235][262-270]
9. How can independent regulatory bodies be protected and resourced to effectively represent the public interest in AI governance? [355-356]
10. Are current consent mechanisms adequate for AI-driven data processing, or are new user-centric remedies required? [181-188][210-218]
These points capture the breadth of the discussion, the areas of consensus and contention, and the concrete ideas proposed for advancing trustworthy AI governance.
Transcript
and then we’re going to dive right into my immediate left. I have Alexandra Reeve Givens, who is the CEO of the Center for Democracy and Technology, one of the leading advocacy organizations in the world, working on civil rights and civil liberties all around the world. She’s based in D.C. To her immediate left is Amanda Craig. Amanda is the General Manager for Responsible AI Policy at Microsoft. To Amanda’s left, we have John Edwards. John Edwards is known to many. He is the Information Commissioner of the United Kingdom. And to John’s left, we have Denise Wong, who is the Deputy Commissioner of the PDPC in Singapore, the Personal Data Protection Commission. Welcome to our panelists. So we have two regulators, an industry representative and a civil society representative.
And I come from the IAPP. If you don’t know the IAPP, we are a global professional association, a not-for-profit; we also do policy, and we’re neutral. We’re not a company, an advocacy or a lobbying body; we bring together the people who do the work. Many of them are in the room right now, people who do the very hard work of data protection and AI governance all around the world. All right, let’s jump in. The title of the session reflects trust as an engine for growth. Let’s think about that just for a minute. Just a few short years ago, I think it was two and a half, maybe three years ago, this event started in Bletchley Park in England.
And in that iteration of the event, it was named the AI Safety Summit. Right around that time, the EU AI Act was being negotiated. It soon passed after that. But a lot has changed in that two or three years. This event is the AI Impact Summit. The event last year in Paris was the AI Action Summit. More recently, we have seen the not yet fully implemented EU AI Act become subject to an omnibus package where some of the expectations of that original act are being dialed back a little bit. And we’ve seen broad critique of regulatory structures, trust and safety structures that might inhibit growth and innovation in AI. There clearly is a deregulatory mood in the air.
In fact, I think it’s notable that there has not been much discussion of law or regulatory initiatives that might create guardrails to help guide the adoption of AI. So clearly, we’re in an odd moment, and an odd moment for this panel. But as I walked around the campus of this event, this enormous campus, I noted something that was, I think, quite significant. Just about every second banner or poster, just about every large printed word on the show floor, somewhere had trust, safety or privacy as part of the messaging. In fact, the sutras, and we’ll talk about them as we go through the session, the principles announced by the Indian government, are largely around trust and safety.
And so what gives? What’s the dichotomy here? At one moment we are saying it’s a deregulatory mood, we step back. Well, at the same time, we are actively embracing and discussing trust and safety, risk management, protecting consumers, citizens, human beings as they engage with AI. So do we care or not? Are we actually in a deregulatory moment, or have we just gotten quiet about the need for guardrails and trust and safety in these systems? I would say for business, risk exists regardless of whether there’s a law in place or not, and so businesses have an imperative to respond. I’m going to tell a very, very quick story, and that is that in 1891, when electricity was first being brought into the White House in the United States, then President Benjamin Harrison and his wife, Caroline, were actually terrified of flipping the light switch.
And so they hired the electrician from the Edison Company, a man named Ike Hoover, who went on to become the chief usher of the White House. They hired him to flip the light switch. I think the message of this story is that we won’t use it if we don’t trust it. And so as AI is being pulled through the walls of our world, as it’s creating light and switches and tools for us to use, I think we need to ensure that we’re comfortable flipping those switches. And that is the topic of our panel today. So let’s jump in. And our first question is going to be about just the moment that we find ourselves in.
And I’m going to start with Alex. Why are trust and safety important to innovation? And maybe speak to this dichotomy that I’ve highlighted. Why is it in this moment that we can’t talk about regulation, but everywhere it seems we’re talking about trust and safety?
Yeah, first of all, thank you for convening us, and it’s a pleasure to be here. I think you really hit the nail on the head in your introduction, which is when we think about the long-term success and sustainability of AI, that is business sustainability for the companies, as well as societal sustainability for all of us. The secret is not just acceleration, the biggest, fastest, most capable model. The real story is one of adoption, and that has been the overwhelming theme of the summit this year. And for people to adopt this technology, they need to trust it. And that’s trust in multiple different facets, right? Is the tool fit for purpose? Does it work in your language?
Is it appropriate for your culture? Will it protect your privacy? Is your data going to be secured? What is the quality of the information that is grounding that model and those outputs? And I think people are really waking up to this, and they’re demanding more. This is both as individual users and then, of course, for enterprise customers, too, who themselves are saying, we’re on the front lines thinking about how to integrate AI into our business operations. We’re the ones who will likely be sued if this goes wrong. So this is where trust really is the fuel of innovation because it is what’s going to be the economic driver of these tools being adopted. And the other thing that I would add is what we see is not only that trust is important for innovation in the abstract, but this is also where responsible, thoughtful regulation can be fuel for innovation as well.
Because the same way that we want to be able to drive cars without all of us being experts in how a motor works, product liability and good laws around the creation of these tools help outsource some of that work for us, so that we don’t all have to be doing the individual labor of deciding whether we can trust. So many times people will create this false framing of regulation versus innovation, as opposed to thoughtful regulation being the fuel that actually allows us to sell, buy, and use these tools.
Excellent. Fascinating. John, I’m going to jump to you, and Amanda, I will come right back. But I’m going to jump to you. The U.K. doesn’t have an AI law in place. It has lots of laws that will apply to AI. I think data protection and the GDPR in the U.K. is a great example of that. But talk to us a little bit about regulating in the absence of an AI law. What does that look like in the U.K.? And do you see organizations exhibiting behavior that demonstrates that they’re focused on the ideas that Alex suggested, that trust and safety matter regardless of the regulatory structure that sits over them?
Yeah, absolutely. Absolutely. Absolutely.
There it is.
No, very much so. I mean, the data protection laws apply across the board wherever technology touches personal data. So we have a de facto regulatory regime under the UK GDPR. Coming back to your comment about trust, it’s so important, and there is a role for regulation actually in assisting businesses, because businesses are trying to deliver that trust proposition to consumers. But by what metric? Right. And that’s, I think, where regulation can provide a common standard. So, you know, we require, it’s a regulatory tool, that you have to do data protection by design. You have to do data protection impact assessment. We expect privacy by design. We expect risk assessment. So all of these things are regulatory requirements, but they are also tools that help intermediate between businesses and the consumers to demonstrate that there is a basis for trust.
And an organization like the ICO is there for both sides to see, well, there’s someone actually overseeing that. And that’s a role that we do discharge. To your point about the absence of prescriptive regulation in the UK on AI, we don’t see that particularly as a deficit. I mean, I think there’s a lot of policy work going on in areas where policymakers and regulators do need to step in. That’s ongoing, and I won’t comment on that. But, you know, there are ongoing issues about the distribution of proceeds from the use of creative materials and the like. That carries on. But in the absence of an explicit rule, it’s incumbent on my office to deliver safety and confidence and metrics for industry and to deliver certainty over what can be seen as an uncertain law.
So we’ve gone out and said, well, here’s how we see the technology-neutral general principles of the GDPR apply when you train a model, for example. We see, for example, the EU AI Act in Article 10 talks about the need for fairness. Well, we’ve been able to articulate those obligations by way of guidance, linking it back already to the GDPR principles. So, you know, there’s a mapping. I don’t think at the moment, for the available applications of artificial intelligence technologies, that there is a lacuna. It’s there with the GDPR. And we are there to provide confidence and certainty about how you apply that, how you improve your products with it, and how by doing so you engender that trust that you described at the outset.
Excellent. Okay, so Amanda, tell us, do you agree that there’s not a need for additional rails, traffic indicators in AI? Is John right that the existing regulatory structure is really providing enough guidance or is it the case that Microsoft is using internal principles, frameworks, standards that you might adopt to build programs and services that you think meet the expectations of trust and safety of the marketplace?
Thank you. From a Microsoft perspective, we are focused on implementing our responsible AI governance program and see opportunity for lots of different governance models that governments could pursue in terms of implementing existing regulation, developing additional regulation that complements that existing regulation. I think the through line for us, the bottom line, is very much what Alex started us off with, that we do very much see, we’ve seen through multiple generations of technology, we’re not going to have adoption, we’re not going to have use of this technology without trust. And we need to have governance programs at technology companies. We need to have governance efforts by governments that are ensuring that we have an evolving conversation about trust.
Because if I pull the thread on the analogy you started us with, like how do you flip on a light switch and that can be scary when you’ve not done it before, I think the other thing that is very challenging, and true about this technology, is that it is also very dynamic. It is evolving very quickly. And people might even be scared that, like, they won’t know where to find the light switch next week. And that brings a whole different set of challenges. And so that requires not just confidence in how you are able to sort of trust the technology today, but also that there’s trust in a governance process that will continue to iterate and evolve alongside the technology.
Excellent. Denise, help us then here. I know Singapore has released guidelines, standards around AI. Tell us about the Singaporean experience in thinking about regulating trust and safety in AI.
Thanks so much, Trevor. And thank you to the IAPP for putting this together and for having us. Maybe I’ll answer that question by linking some of the concepts that we’ve talked about, and that sort of underpins our philosophy. Trust and safety is the outcome that we want. You know, we want to create the necessary conditions for the society to thrive, for the public and the enterprises to use the technology with confidence. So AI for that public good. To do that, we need governance. We need a framework of thinking about how we can govern the technology, and we’ve been doing this for all sorts of technology. AI is but one. Regulations are a mechanism, a type of governance mechanism that you use when the necessary and correct conditions exist.
And so that map of concepts informs how we think about our governance approach. So on issues that are very clear, where there are clear harms, we have stepped in to regulate. An example of this is the elections regulations that we put in place, where we prohibited the use of AI deepfakes to represent candidates. It was time-limited. It was for the period of elections, but we stepped in and put a law in place for that. We also have laws for AI creating online harms, as well as AI in scam situations. So that is the part where we regulate for clear and present harms. For the rest of it, a lot of it we leave to sectoral regulations where there’s already a web of existing regulations, and on specific issues as well.
John and I and many of us are in the data protection field where, as John has said, there are already existing laws that can be tacked on, updated, reviewed in order to deal with this new technology that has come about. So where we have done AI governance frameworks and tools that you’ve mentioned is where we’ve seen a need to create some sort of horizontal principles and platforms to think about the sector-agnostic general issues on transparency, on what model governance for corporates could look like. We haven’t seen the need to regulate that horizontal layer just yet, but certainly a need to articulate some of these principles. And that also allows us to create more certainty for the market, to give them some direction that actually this can be a market-driven assurance system that has demand, has supply, and has what we’ll call proto-standards, early days of standards about what good looks like.
So that’s the work that we’ve been doing, trying to create and simplify. We have seeded an assurance ecosystem that sits, I would say, adjacent and complementary to regulations where they’re needed.
Fantastic. Please, please.
So just to comment on that, one area that I think is proving very important, and people are discovering this across jurisdictions, is even where existing laws apply, there is a problem where AI systems make it hard to know whether or not those laws are being broken. So this is where that transparency layer you were articulating really becomes important.
Give us an example.
Yeah, and I’m going to make it U.S.-centric just because it’s the one that’s top of mind, so forgive the bias here. So in the U.S., we have equal employment laws. It is against the law to discriminate in the course of hiring. So in theory, a piece of software that perpetuates discrimination against particular candidates, for example, not considering the resumes of people over a certain age, is violating an existing law. So people will say we don’t need any further regulation. We’re done. The problem is, in a human-run system where it was just a bad apple in the HR department, it’s been historically easier to prove that case. Now, when it’s AI-powered software making that decision, it is really hard as a worker who’s just put in a resume and never got an answer back to know if something was going wrong.
If you actually get up your courage and file a case, it is really hard to prove your case if there is discrimination. And so without some type of disclosure regime that requires transparency in these high risk scenarios, high impact scenarios, to have transparency and disclosure about the system that is being used, impact assessments to make sure that discrimination isn’t happening, you actually don’t get the remedy that people really need under existing law. And so that’s where I think this horizontal piece can complement the sector specific vertical laws in a light touch way, but actually gives meaning to the laws on the books.
So I think that’s a great example of the harm trigger that Denise described, that we identify a clear harm and that may be a place where additional regulatory structure might be helpful. I think we heard pretty significant consensus across our panel. Trust and safety is good. That’s good that we’re there. That’s a great consensus to achieve. And not complete consensus on the idea that additional regulation is needed yet. With the exception perhaps of a few scenarios in which we can identify high risk or harm. Let’s go to our audience for a second. Help us describe the relationship between innovation and regulation in AI. If you think it’s a great relationship, thumbs up. If you think it’s a bad relationship, thumbs down.
If you think it’s complicated, make it complicated. What do we think? Oh, I see a lot of content. What does our panel think? I think it’s a good relationship between innovation and appropriate regulation. Fascinating. We have a very strongly opinionated audience here. That’s great. Let’s talk about regulation again and dive in just a bit deeper. I think one of the things that’s tremendously challenging is prescriptive regulation, trying to understand harms that might occur before technology is fully adopted broadly in the marketplace. I’m a veteran of the privacy world going back to 1995, 1996. And in the late 1990s, we were talking extensively about cookies and how do we regulate cookies and the privacy issues associated with cookies.
Guess what? We’re still talking about cookies often. And I know for many of the privacy and data protection folks, they’re nodding already. They’re crying a little bit because it’s so, so painful to implement many of the cookie banners and cookie consent mechanisms that we have. And I’m not entirely sure, we might get John to admit this even, that, you know, those cookie banners are actually driving the outcomes that we hope for. We identified the biggest and worst harm or concern and dedicated resources appropriately to that. Amanda, I’m going to jump right to you. Talk to us a little bit about identifying those harms. Alex gave us one, which is perhaps AI reviewing HR submissions, resumes, CVs, and language in those CVs may actually create results that were not intended, that create bias, that, you know, in a human-driven system would be easier to find, in an AI-driven system just much, much harder to find.
That’s a great example. How do we identify those prescriptive harms, those harms that we’re not quite sure about yet, that may emerge? Do we do it through principles, through ethics, through what?
I think all of the above to some extent. Part of why we start with principles in our governance program is I think it’s helpful to orient towards what do we care about, right, as we then try to build a program that realizes those outcomes. I think we also can look at existing law that reflects where there are harms, like in the employment context, where people could be mistreated or treated unfairly that we know we care about. And there’s been a lot of effort and regulation to define high risk, high impact. At Microsoft, we have something called the sensitive uses sort of scenarios where, you know, we have three categories where technology could have like an impact on someone’s life opportunity or consequential life impact of something like employment or education opportunities, for example, or how someone’s treated under the law otherwise, all sort of fit in that context.
We have the second big category of harm that we have defined as around sort of the risk for psychological or physical harm. So think about vulnerable populations there. Think about the use of AI in critical infrastructure. And then the third category is the use of AI that impacts human rights. So, you know, we have our way of defining what is really high impact. You know, a lot of governments, again, have taken different routes. I think the other thing that we’ve seen is the kind of emergence of a conversation around sort of technology itself that poses specific high risks. For example, highly capable models that have a whole other set of risks that are the risks that are being defined.
And that’s one thing that I just want to draw out as we think about this and drawing upon what I feel like, you know, and I didn’t grow up in the privacy world, I grew up in the cybersecurity world. And one of the things that I think a lot about as we work on, you know, defining these harms and figuring out what to do about them, that we can learn from the kind of… decades of work on cybersecurity is the challenge of thinking about how to address risk across the supply chain. And I think it’s a slightly different conversation in AI than it has been traditionally in security with software and cloud technologies. But there is like a common principle or approach that I think we should really look at closely, which is, you know, we are oftentimes in the context of AI thinking about risk and harm where the technology is actually used, right?
And then what’s difficult is figuring out what do we do across the whole supply chain to manage that risk and have that be cohesive. And one of the things that in the cybersecurity context, we know what the risk or harm, it’s much simpler. It’s security risk, that we care about. But we have the same challenge in terms of like, how do we manage that risk across the supply chain? And one of the challenges over decades of work in the cybersecurity context is… Instead of wanting to… put emphasis on one part of the supply chain or the other at any given moment instead of, like, really dealing with the really hard governance challenge that it is everything at once.
And so I think when we, you know, think about the complexity of defining harms in the AI space, that’s important work to do. And also, in the context of managing risk for any of those harms being realized, we also need to think really hard about looking across the whole supply chain at once. Even though it’s hard from a governance perspective, that’s going to be most important for managing the risk ultimately.
Fantastic. And I misspoke. It’s prospective, not prescriptive regulation. But John and Denise, maybe talk to us a little bit about that. And let me frame it for you both. And Denise, we’ll have you start. Clearly, with data protection regulation, we have had the GDPR now for over seven years, and its effect on the global policy environment has been enormous. We now have over 120 countries that have privacy laws in place. Many, many, many of them have genealogical lines that point back to the GDPR. And yet we haven’t seen that in AI yet. The EU AI Act has not taken off around the world.
We don’t see a Brussels effect happening on AI. Is it because the challenge of identifying harm, the challenge of prospectively trying to identify what might
You always ask me the tough questions. I think, first of all, the harms question, because I think that’s relevant to the regulation question that you’re asking. I think the starting point must be that every country has a unique context, and it’s the job of the government to figure out what’s harmful to their society. I think there’s going to be a huge amount of overlap, but at the end of the day, what’s harmful in one context, what’s harmful in India, may not be the same as what’s harmful in the US. And the cultural context matters. That said, I think there’s actually increasing consensus, I feel, about what harms or archetypes of harms there are vis-a-vis AI.
And we see that, for example, the International AI Safety Report is starting to anchor some of this taxonomy and sort of buckets and archetypes of harm, and we also see that beginning to happen at Iceland, for example. Those conversations are happening. How does that link to prescriptive regulation or legislation? I think that if the harms are still being coalesced and formed, it’s quite difficult to be very prescriptive about how you deal with those harms, because that, by definition, is sort of changing and still coalescing. It’s still quite nascent. That’s not to say… we should step back. I think we just probably need a slightly more agile way of thinking about that broader concept of governance.
So in the social media context in Singapore, we did it via codes of practice. So we have a broad sort of umbrella legislation that creates a legislative frame for which these codes of practice apply. But the codes of practice can be updated more easily. Same thing, actually, with our data protection law, the PDPA, which is structured quite differently from the GDPR. Our PDPA is actually very not prescriptive. It’s outcome driven. It’s fairly broad. But most of the guidance that PDPC provides, and these are for compliance, is done in advisory guidelines. So I think there are regulatory mechanisms you can use that are less prescriptive than primary legislation. And that gives you enough levers. It’s tools in a toolkit, basically, to be able to deal with the harms and with the problems that the society is facing.
Excellent.
To dispute you a little bit on the lack of a Brussels effect, I will say, I mean, going actually back to Denise’s point, so not only is there some harmonization happening around the scoping of the harms, I think that certainly is happening, but also on potential points of intervention. So, for example, one of the key elements of the EU AI Act is looking at high-risk scenarios and having risk mitigations in place. We have similar laws under consideration in multiple states in the United States, one on the books already in Colorado. They would never say it is a copycat. It came from its own origins. But it is lawmakers thinking what is an appropriate right-scale intervention to that particular risk.
You can look at the recent transparency laws that were passed in California and New York, very similar discussions to the Code of Practice for General-Purpose AI models that came out under the EU AI Act. You can look at the EU AI Act’s provision for regulatory sandboxes and this notion that we want small and medium-sized enterprises, and others, to be able to innovate and get a little bit of forgiveness or wiggle room under the laws as they figure out how the regulations apply. That law just got passed in Utah. So there are these glimmers where we are seeing smart solutions to specific problems and people learning from each other.
I think in the absence of that umbrella AI standard, that interaction with fellow regulators across disciplines and domains becomes really important. Or I will ask you, does it become really important?
Yeah, it is. It’s hugely important that we coordinate. You know, these are new challenges that we’re all facing. On the Grok issue, obviously, it’s under investigation, so I won’t be able to say too much about it. But, you know, we’re interested in how models are trained, what data they’re trained on, what output filters are included, what kind of safety mechanisms. I’m interested in what kind of ingestion there is of data when it’s used at that level. But there’s some complexity in that case as well because, you know, you’ve got users using a tool that’s amplified. It’s amplified by social media. I don’t know whether the same functionality is available in any other image generation tool that just hasn’t got the same attention because it’s not amplified by a social media platform.
But, you know, very early on, I think I was back home in New Zealand, actually, on about the 5th of January, and started to see this. And I messaged back to the office and said, what are we doing? What’s Ofcom doing? How are we connecting to our international colleagues? And that’s so important. And so we’ve, you know, we’ve messaged into the GPA. We’ve coordinated very closely with Ofcom. And, you know, we have to cope with the fact that regulation is a little bit fragmented. So Ofcom is responsible for administering the Online Safety Act in the UK. Now, that is legislation that seeks to regulate the kinds of harmful content that can be delivered to a child’s device, for example.
Right. I see this thing. Is that regulated by online safety? If so, it’s Ofcom. How did that get to me? Well, that depends on how the underlying data was processed. That becomes an ICO, you know, GDPR issue. So we need to be working very, very closely, and we are. But also with the Grok issue, one of the very early things we did was to reach out to our colleagues in the GPA, the Global Privacy Assembly, and say, who else is looking at this? Let’s make sure that we’re not sort of treading on each other’s toes, or at least that we’re sharing information, that we’ve got the same ideas, that we think the same way. And that can be tremendously powerful, whether or not you can point to a regulation that that app or that platform is clearly in breach of.
To describe a set of expectations about harm mitigation across a coordinated group of global regulators, I think, can be quite powerful. And, you know, the alternative for some of these platforms is not necessarily being investigated and fined by the ICO. It’s like what I noticed the first day that I was here, when I went to flip TikTok on and saw this is not available in this country. So if the offering in a particular jurisdiction does not meet the standards and norms of that jurisdiction, these organizations need to understand that they can be switched off, that they are not actually all powerful.
I just have the image of the U.K. Information Commissioner doom-scrolling TikTok in my head now. Let’s do a quick round, and please do keep your answers short, but innovation is not limited to technology, is not limited to business practices. It’s also very powerful in the privacy-enhancing, safety-enhancing tools that we use inside organizations. It’s in regulatory structures. Denise has mentioned regulatory sandboxes, or maybe it was Alex, but we’ve heard regulatory sandboxes mentioned. What is the one innovative idea in trust and safety that you think holds real promise? And I’ll let you do one sentence to explain it, but this is a speed round. So we’ll start with Amanda and then work down and come back to Alex.
One sentence. Okay. Is that my sentence? I think about provenance tools as an area of innovation. Again, this is calling upon my cybersecurity background, but I think, you know, something like agentic AI is an area where there’s a lot of interest, concern and governance momentum. And one of the challenges is being able to look at something that is fundamentally not just, like, one technology. It’s a bunch of very dynamic components, models, platform tools, services, applications all working together. And while that feels like a really new, hard challenge, we actually can draw upon what we know of software to actually be a set of dynamic components as well. And one of the ways that we’ve figured out how to govern that, or are working towards figuring out how to govern it, is with software bills of materials, something that really allows you to have the ability to track those dynamic components.
And I think that’s something we can apply to agents.
So it increases transparency. It tells you, you know, which algorithm or which system this might have come from. It helps with accountability broadly. Yeah. Excellent. John, what’s the most promising trust and safety innovation that we have?
Well, you challenged us with one word. So I’m going to go with agency. Agency. And I think it’s, for me, it’s a word that, you know, so much of our world is dominated by consent, which is, I won’t say broken, but it’s under strain as a useful concept. Agency, I think, has capacity to recognize that the objective is to restore and maintain an individual’s agency as they use any product. And that’s more than consent. It’s actually making sure that provenance is delivered, for example. You can’t have agency if you don’t know the origin of the data that is delivering this agentic miracle to you. It gives you tools at the other end. And consent is always conceived of as a front-end authorizing concept.
But agency says, okay, I’ve done that now. Where’s my delete everything button? Or my I don’t want to do this anymore button. So I think if developers can be thinking about how they deliver the best possible service in a way that restores and maintains the agency of the consumer, I think that will go a long way to addressing some of the problems that we’re seeing. And I think
Fantastic. I had a professor, years ago now, who described burden-shifting wrenches in the law. And I think consent is a burden-shifting wrench that moved much of the burden to the data subject, to the individual. Agency, it sounds to me, is an idea to move it back to those who might be accountable and have them have fiduciary or stewardship responsibilities for that person. Denise?
I would pick privacy-enhancing technology. I think it’s an interesting technological way to deal with at least one part of the problem, which is how do we secure the data, how do we make sure that the personal information is well protected. And it’s advancing so quickly. So two years ago, we were looking at federated learning for training of AI models, and no one could figure it out. I think it’s actually being done in production now. So there is, I’m a lawyer, so I can say this, sometimes the law cannot solve the problem. But actually, maybe another technology can.
Fantastic. Alex?
Well-staffed, empowered, independent regulatory bodies that can help represent the public interest. Wow. And because in some countries those are under attack right now, where that is not available, well-resourced, technically informed, independent civil society that can play that role in the interim.
Fantastic. Yeah, the importance of having watchdogs, entities that are watching and observing, commenting, enforcing, really powerful. So there are four great innovations: provenance, agency, privacy-enhancing technologies, and well-funded regulators or civil society. Well done. I think that is a great start. Let’s do another audience poll. How many of you here in this audience are responsible for AI or AI governance, AI ethics, AI safety inside your organization? Hands up. It’s almost the whole room. Keep your hand up if you’re also responsible for something else in addition to AI. It’s more. I think it is a pretty complete overlap, almost a complete overlap. There’s at least a significant percentage that were responsible for more than one thing, and one of those things was AI.
I think that’s an example of the complexity that we see inside organizations today. John described the coordination necessary between Ofcom and the ICO in the Grok investigation, which is ongoing, because there was not a single place where regulatory authority existed to address that concern. This is a really complex environment. The harms and issues span from children’s safety to intellectual property, from bias and algorithmic discrimination all the way through deepfakes and other things. Alex, how do we put that all into a pot and make it something meaningful?
Well, what if you can’t put it all into a pot? A pot implies a common denominator in all of those things, but AI is a tool that touches everything. So I really do think you actually need a nuanced approach that looks at a particular risk, what the mitigations are for that risk, and then goes from there. The privacy considerations when you are sharing your most intimate concerns and questions about the world with a chatbot are very different than these questions about deepfakes and fraud and impersonation. You just need to have a different legal regime. I think some of the common elements that run through, one is that transparency and rigorous approaches to risk mitigation really matter, and that can either be through regulation or through principles and best practices with meaning and standardization and watchdogs reading those disclosures.
And the second is this burden on the user. So when Trevor introduced me, we described my organization. We represent users’ rights around the world. I am all for user empowerment. And also, we cannot put the burden solely on users to navigate this moment. Indeed. And that is the major lesson of the cookie example you were mentioning before. We didn’t misdiagnose the harm. We misdiagnosed the remedy, which was the burden on individual users when we don’t actually have market choice, nor the time or mental energy to just read a whole bunch of disclosures and act alone. And so solutions that acknowledge the harm are tailored, but also take that burden off individual users. So you’re empowering users, but not burdening them or leaving them to essentially defend themselves unprotected.
We have to think about that.
Okay. Sadly, we are at the end of our time, but we have one more pop question for all of you, and we’re going to let this be our close. We have gone through the AI Impact Summit, the AI Action Summit, the AI Safety Summit. Five years from now, what is the AI summit going to be called? What’s the word that’s going to be in the middle there? So this is a one-word answer again. What’s it going to be? I know it’s a tough question. So, Denise, I’ll start with you because you’re able to handle the toughest questions. Ah, the AI Trust Summit. Okay, John?
Nostalgia.
Nostalgia.
Thriving.
Thriving, AI Thriving Summit. Okay.
I’m going to cheat. For the people, by the people. It’s more words. They’re so strange.
Some of the people. It’s hilarious. A lot to get on a poster. Here’s what I know. I know that there is incredibly hard work that needs to be done to bring trust and safety to this ridiculously powerful technology that, as Sundar Pichai says, will be more profound than electricity. That hard work happens every single day inside organizations that are implementing these tools, inside civil society organizations that are watching and guiding that behavior, inside regulatory offices that are navigating to ensure that marketplaces around the world, that the digital economy, get this right. I feel better because people like this are doing that work every day, and I hope you’ll join me in thanking them.
Thank you very much. Thank you so very much. Thank you. Well done. You were fantastic, as expected.
And I come from the IAPP. If you don’t know the IAPP, we are a global professional association, a not -for -profit but also policy, and we’re neutral. We’re not just a company. an advocacy or a lobbyi…
“Regulation can act as a guarantee of trust and an engine for economic growth in AI.”
Broader commentary in the knowledge base describes regulation as a force for economic growth and a guarantee of trust in AI contexts [S89] and [S90], adding nuance to the panelists’ framing of principle-based regulation as fuel for trust.
“AI‑generated deep‑fakes used in elections are a clear present harm and Singapore’s PDPC has enacted specific regulations to address them.”
The knowledge base highlights deep-fakes as a significant challenge that requires government oversight and regulation [S97], and notes Singapore’s awareness of vulnerable groups and the need to bridge digital divides, which aligns with a targeted regulatory response [S94].
“Singapore’s regulator relies on existing sectoral laws and rapidly updatable codes of practice as an outcome‑driven complement to the more prescriptive PDPA.”
While the knowledge base does not detail the exact regulatory mechanism, it mentions Singapore’s focus on protecting vulnerable groups and ensuring inclusive digital policies, suggesting an outcome-driven, flexible approach alongside existing legislation [S94].
The panel displayed strong consensus that trust is foundational for AI adoption, that transparency and provenance are essential for enforcing existing laws, and that targeted, risk‑based regulation—combined with agile governance—can support innovation. Participants also agreed on the necessity of cross‑jurisdictional coordination and dynamic governance models.
High consensus across regulators, industry, and civil society on the core principles of trust, transparency, and coordinated, risk‑based regulation, suggesting a shared roadmap for future AI governance that balances innovation with safeguards.
The panel showed strong consensus that trust and safety are critical for AI adoption, but notable differences emerged on the scope and form of regulation needed. The UK regulator (John) emphasized existing data‑protection frameworks as a largely sufficient foundation; Singapore’s regulator (Denise) favored targeted rules for clear harms alongside agile codes of practice; industry (Amanda) pointed to internal governance mechanisms across the AI supply chain; and civil society (Alexandra) called for a stronger transparency layer to make existing law enforceable. Differences also appeared around the degree of global harmonisation and the best approach to identifying AI harms.
Moderate to high disagreement on regulatory strategy and global coordination, which could impede unified policy development but also reflects a healthy multi‑stakeholder debate that may lead to more nuanced, hybrid governance models.
The discussion was driven forward by a series of pivotal remarks that reframed the trust‑and‑safety debate from a binary regulation‑vs‑innovation stance to a nuanced, layered governance model. Trevor’s opening story anchored the theme of trust, while Alexandra’s and John’s insights about leveraging existing law and the EU AI Act’s influence opened space for pragmatic solutions. Denise’s distinction between clear‑harm regulation and horizontal standards, coupled with Amanda’s cross‑domain supply‑chain perspective, added depth and highlighted the need for agile, coordinated approaches. Concrete examples—such as the U.S. employment discrimination case and provenance tools—grounded the abstract concepts, leading the panel to converge on four promising innovations (provenance, agency, privacy‑enhancing tech, and well‑resourced regulators). Collectively, these comments shifted the tone from skepticism about regulation to a collaborative view that sees thoughtful governance as essential infrastructure for AI innovation.
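To make the first of those four innovations concrete, here is a minimal illustrative sketch, written for this summary rather than taken from the session, of the basic mechanic behind provenance tools: a publisher binds a cryptographic tag to a piece of content so that any later alteration is detectable. The key, function names, and the use of an HMAC over a SHA-256 digest are illustrative assumptions; real provenance standards such as C2PA use certificate-based signatures and far richer manifests.

import hashlib
import hmac

# Hypothetical publisher secret for this sketch only; real provenance
# systems use certificate-based digital signatures, not shared secrets.
PUBLISHER_KEY = b"demo-secret"

def sign_content(content: bytes) -> str:
    # Publisher side: hash the content, then tag the hash with the key.
    digest = hashlib.sha256(content).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    # Consumer side: recompute the tag and compare in constant time.
    return hmac.compare_digest(sign_content(content), tag)

original = b"example AI-generated image bytes"
tag = sign_content(original)
print(verify_content(original, tag))    # True: provenance intact
print(verify_content(b"tampered", tag)) # False: content was altered

In practice the tag would travel with the content as part of a signed manifest, which is what lets watchdogs and platforms check provenance claims mechanically rather than taking disclosures on trust.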
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.