How Trust and Safety Drive Innovation and Sustainable Growth
20 Feb 2026 14:00h - 15:00h
Session at a glance
Summary
This panel discussion at the AI Impact Summit explored the relationship between trust, safety, and innovation in artificial intelligence development and regulation. The conversation featured Trevor Hughes from the IAPP moderating a discussion with Alexandra Reeve Givens (CEO of the Center for Democracy and Technology), Amanda Craig (General Manager for Responsible AI Policy at Microsoft), John Edwards (UK Information Commissioner), and Denise Wong (Deputy Commissioner of Singapore's Personal Data Protection Commission).
The central question was whether the current moment represents a deregulatory period for AI or simply a shift in how trust and safety are discussed. All panelists agreed that trust is essential for AI adoption, with Givens emphasizing that people won't use technology they don't trust, making trust the fuel for innovation rather than an obstacle. Edwards explained how the UK regulates AI through existing data protection laws such as the UK GDPR, arguing that current frameworks provide sufficient oversight when properly applied. Craig highlighted Microsoft's approach of internal governance programs and principles, while noting that the dynamic nature of AI technology requires evolving governance structures.
Wong described Singapore’s nuanced approach, using targeted regulations for clear harms like election deepfakes while relying on sectoral regulations and guidance frameworks for broader AI governance. The discussion revealed consensus that existing laws often apply to AI but may need transparency mechanisms to be effectively enforced, particularly in cases like employment discrimination where AI makes violations harder to detect.
The panelists identified promising innovations including provenance tools, user agency concepts, privacy-enhancing technologies, and well-resourced regulatory bodies. The conversation concluded with predictions for future AI summits, ranging from “trust” to “thriving,” reflecting optimism about achieving responsible AI development through collaborative governance approaches.
Keypoints
Major Discussion Points:
– Trust as a prerequisite for AI adoption and innovation: The panel emphasized that trust and safety are essential drivers of AI adoption, not barriers to innovation. Without user trust in AI systems’ reliability, privacy protection, and fairness, widespread adoption cannot occur, making trust a fundamental economic driver for the technology’s success.
– Regulatory approaches in the absence of specific AI laws: The discussion explored how existing regulatory frameworks (like GDPR and data protection laws) can effectively govern AI applications, with different jurisdictions taking varied approaches – from the UK’s principles-based system to Singapore’s targeted regulations for specific harms like election deepfakes.
– The challenge of identifying and addressing AI harms proactively: Panelists discussed the difficulty of creating prescriptive regulations for emerging AI risks, using examples like algorithmic bias in hiring processes where discrimination may be harder to detect and prove in AI-driven systems compared to human-driven processes.
– Cross-jurisdictional coordination and regulatory innovation: The conversation highlighted the importance of international cooperation among regulators (exemplified by the Grok investigation) and innovative regulatory tools like privacy-enhancing technologies, regulatory sandboxes, and software bill of materials for AI systems.
– The complexity of AI governance across multiple domains: The panel addressed how AI governance spans numerous areas (privacy, safety, intellectual property, bias) requiring coordination between different regulatory bodies and expertise, reflecting the multifaceted nature of AI’s impact on society.
Overall Purpose:
The discussion aimed to explore the relationship between trust, safety, and innovation in AI development, examining how regulatory frameworks can support rather than hinder AI advancement while protecting users and society from potential harms.
Overall Tone:
The discussion maintained a collaborative and constructive tone throughout, with panelists generally agreeing on core principles while offering nuanced perspectives based on their different roles (regulator, industry, civil society). The tone was professional yet accessible, with moments of levity (such as the moderator’s comment about the UK Information Commissioner “doom-scrolling TikTok”). The conversation remained optimistic about finding balanced approaches to AI governance, emphasizing cooperation and innovation in regulatory approaches rather than adversarial relationships between regulation and innovation.
Speakers
Speakers from the provided list:
– Trevor Hughes – Moderator from the IAPP (International Association of Privacy Professionals), a global professional association that is not-for-profit and policy neutral, bringing together data protection and AI governance professionals
– Alexandra Reeve Givens – CEO of the Center for Democracy and Technology, one of the leading advocacy organizations working on civil rights and civil liberties globally, based in Washington D.C.
– John Edwards – Information Commissioner of the United Kingdom
– Amanda Craig – General Manager for Responsible AI Policy at Microsoft
– Denise Wong – Deputy Commissioner of the PDPC (Personal Data Protection Commission) in Singapore
Additional speakers:
– Sundar Pichai – Only mentioned in passing by Trevor Hughes in his closing remarks, not an active participant in the discussion
Full session report
This panel discussion at the AI Impact Summit brought together leading voices from regulation, industry, and civil society to examine the critical relationship between trust, safety, and innovation in artificial intelligence development. The conversation, moderated by Trevor Hughes from the International Association of Privacy Professionals (IAPP), featured Alexandra Reeve Givens (CEO of the Center for Democracy and Technology), Amanda Craig (General Manager for Responsible AI Policy at Microsoft), John Edwards (UK Information Commissioner), and Denise Wong (Deputy Commissioner of Singapore's Personal Data Protection Commission).
Setting the Stage: Historical Perspective and Current Tensions
Hughes opened the discussion with a compelling historical parallel, recounting how President Benjamin Harrison in 1891 was afraid to touch the newly installed electric light switches in the White House, leaving lights burning all night rather than risk electrocution. This story served as a powerful metaphor for our current relationship with AI technology—highlighting how transformative technologies often require us to navigate between innovation and caution.
The moderator then presented a striking paradox in today’s AI landscape. While there appears to be a deregulatory mood—evidenced by the evolution from the “AI Safety Summit” at Bletchley Park to the “AI Action Summit” and now the “AI Impact Summit,” along with expectations that the EU AI Act might be “dialed back” as part of an omnibus package—the conference floor was dominated by messaging around trust, safety, and privacy. This tension raised fundamental questions about whether the industry genuinely prioritizes guardrails or has simply become more circumspect about discussing them publicly.
Audience polling revealed the complexity of attendees’ views on the regulation-innovation relationship, with mixed responses about whether regulation helps or hinders AI innovation. Notably, most participants indicated they were responsible for AI governance in addition to other duties, reflecting the cross-cutting nature of AI governance challenges.
Reframing the Trust-Innovation Relationship
Reeve Givens provided a crucial reframing that shaped the entire discussion, arguing that trust is not merely important for innovation but is actually “the fuel of innovation.” She emphasized that AI success lies not in creating the “biggest, fastest, most capable model” but in achieving widespread adoption. For people to adopt AI technology, they need trust across multiple dimensions: functionality, cultural appropriateness, privacy protection, data security, and information quality.
This trust imperative extends beyond individual users to enterprise customers who find themselves on the front lines of AI integration and potential liability. Perhaps most significantly, Reeve Givens challenged the traditional framing of regulation versus innovation, arguing instead that “responsible, thoughtful regulation can be fuel for innovation as well.” Using the analogy of driving cars without being motor experts, she explained how product liability laws and appropriate regulations help “outsource some of that work for us so that we don’t all have to be doing the individual labour of deciding whether we can trust.”
Diverse Regulatory Approaches Across Jurisdictions
The panel revealed fascinating differences in how various jurisdictions approach AI governance, reflecting Hughes’ reference to different countries developing their own “sutras” or guiding principles.
Edwards explained the UK’s approach, which relies on existing regulatory frameworks rather than AI-specific legislation. Under UK data protection law, his office has regulatory authority over AI systems that process personal data. He argued that this technology-neutral approach provides adequate coverage, with regulators offering guidance on how general principles apply to specific AI applications. Edwards emphasized that regulation provides “common standards” that help businesses demonstrate trustworthiness to consumers, with requirements like data protection by design and privacy impact assessments serving as both compliance obligations and trust-building tools.
Wong described Singapore’s more nuanced approach, using different governance mechanisms for different types of challenges. For clear and present harms, Singapore has implemented targeted regulations—such as prohibiting AI deepfakes in election contexts and addressing AI-enabled scams and online harms. For broader, more complex issues, they rely on existing sectoral regulations and horizontal governance frameworks that provide principles and guidance without prescriptive rules. Wong also mentioned Singapore’s use of regulatory sandboxes to allow controlled experimentation with new technologies.
This approach reflects Singapore’s philosophy that “trust and safety is the outcome that we want” and that “regulations are a mechanism, a type of governance mechanism that you use when the necessary and correct conditions exist.” Wong emphasized that each country must determine what constitutes harm in their unique cultural and social context.
Craig, representing the industry perspective, highlighted Microsoft’s focus on implementing responsible AI governance programs while supporting various governmental regulatory models. Drawing on her background in cybersecurity, she stressed the dynamic nature of AI technology and the need for governance processes that can “continue to iterate and evolve alongside the technology.”
The Challenge of AI Harms and Enforcement
A significant portion of the discussion focused on the complex challenge of identifying AI harms, particularly in contexts where existing laws apply but AI systems make violations harder to detect and prove. Reeve Givens provided a compelling example from employment discrimination: while it’s illegal to discriminate in hiring, AI-powered software that perpetuates age discrimination is much harder to identify and prosecute than human-driven discrimination. “When it’s AI powered software making that decision, it is really hard as a worker who’s just put in a resume and never got an answer back to know if something was going wrong.”
This insight led to her argument that transparency and disclosure requirements are essential to make existing anti-discrimination laws enforceable in AI contexts. She also referenced Colorado’s law and other state-level initiatives as examples of jurisdictions beginning to address these transparency gaps.
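To make concrete what such an impact assessment might compute, here is a minimal sketch of the "four-fifths rule" screen long used in US employment-selection analysis. The rule itself is standard EEOC practice; the function names and applicant numbers below are illustrative only, not anything presented in the session:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratios(rates: dict) -> dict:
    """Ratio of each group's selection rate to the highest group's rate.

    Under the EEOC four-fifths rule, a ratio below 0.8 is a common
    (rebuttable) indicator of adverse impact.
    """
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical applicant-flow numbers for an AI-screened hiring funnel.
rates = {
    "under_40": selection_rate(selected=120, applicants=400),  # 0.30
    "over_40": selection_rate(selected=45, applicants=300),    # 0.15
}
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} -> {flag}")
```

The point of a disclosure regime sits upstream of this arithmetic: without transparency about which system screened the resumes, neither workers nor regulators have the applicant-flow numbers needed to run even this simple check.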
Craig outlined Microsoft’s approach to categorizing high-risk scenarios, identifying three main categories: impacts on life opportunities (employment, education, legal treatment), risks of psychological or physical harm (particularly for vulnerable populations and critical infrastructure), and impacts on human rights. She emphasized the need to manage risk across the entire AI supply chain rather than focusing solely on end-use applications.
Real-World Regulatory Coordination: The Grok Investigation
Edwards provided a fascinating real-world example of regulatory coordination through an ongoing investigation involving AI-generated images appearing on social media platforms. He recounted how, while in New Zealand, he saw concerning AI-generated content and immediately initiated coordination with Ofcom (which regulates online safety) and reached out to international colleagues through the Global Privacy Assembly.
This case illustrated both the necessity and complexity of multi-agency, multi-jurisdictional coordination in AI governance. The investigation highlighted how AI issues often span multiple regulatory domains—in this case, data protection (ICO’s remit) and harmful content delivery to children (Ofcom’s responsibility under the Online Safety Act). Edwards noted that while regulation may be “a little bit fragmented,” coordinated responses from multiple regulators can be “quite powerful” in setting expectations for harm mitigation.
Edwards also mentioned how some platforms like TikTok are unavailable in certain jurisdictions, illustrating the complex landscape of technology availability and regulatory responses.
Innovative Solutions: A Speed Round of Ideas
In a rapid-fire Q&A format, Hughes asked panelists to identify the most promising innovations in AI trust and safety:
Craig highlighted provenance tools and software bill of materials concepts, drawing from cybersecurity experience to address the challenge of tracking dynamic AI system components. This approach recognizes that agentic AI systems comprise multiple dynamic components—models, platforms, tools, services, and applications—all working together.
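The panel did not specify a schema, but to illustrate the analogy, a hypothetical "AI bill of materials" for an agentic system might look like the sketch below. The field names are invented for illustration; no such standard exists yet:

```python
# Purely illustrative "AI bill of materials" for an agentic system,
# loosely modeled on SBOM practice from cybersecurity. The schema is
# hypothetical, not an existing standard.
ai_bom = {
    "application": "travel-booking-agent",
    "version": "2.1.0",
    "components": [
        {"type": "model", "name": "example-llm", "version": "2026-01",
         "provenance": "signed"},
        {"type": "tool", "name": "flight-search-api", "version": "3.4",
         "data_access": ["itineraries", "payment"]},
        {"type": "platform", "name": "agent-runtime", "version": "1.8",
         "provenance": "signed"},
    ],
}

def unattested_components(bom: dict) -> list:
    """List components whose provenance has not been attested."""
    return [c["name"] for c in bom["components"]
            if c.get("provenance") != "signed"]

print(unattested_components(ai_bom))  # ['flight-search-api']
```

As with a software bill of materials, the value lies less in the file itself than in being able to answer, for any deployed agent, which models, tools, and services it is composed of right now.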
Edwards introduced the concept of “agency” as a superior alternative to consent-based models. He argued that consent is “under strain as a useful concept” and that agency-focused approaches should aim to “maintain and to restore and maintain an individual’s agency.” This includes not just front-end authorization but also ongoing control mechanisms like comprehensive deletion capabilities and other tools that give users meaningful control over their AI interactions.
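As a rough sketch of that design distinction, the hypothetical interface below contrasts one-shot consent with controls that stay available for the life of the relationship. Every name here is invented to illustrate the concept, not drawn from the session:

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Toy model of a user's data held by an AI service."""
    user_id: str
    records: dict = field(default_factory=dict)  # key -> {"value": ..., "source": ...}
    authorized: bool = False

    def grant_consent(self) -> None:
        """Front-end authorization: the classic consent model ends here."""
        self.authorized = True

    def provenance(self, key: str) -> str:
        """Agency: the user can always ask where a piece of data came from."""
        return self.records[key]["source"]

    def delete_everything(self) -> None:
        """Agency: the 'delete everything button', usable after the fact."""
        self.records.clear()
        self.authorized = False
```

The design point is that `provenance` and `delete_everything` remain callable long after `grant_consent`, which is what separates ongoing agency from one-time authorization.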
Wong advocated for privacy-enhancing technologies as technological solutions to governance challenges, noting rapid advances in areas like federated learning, which has moved from theoretical concepts to production use for AI model training. As she put it, “Sometimes the law cannot solve the problem. But actually, maybe another technology can.”
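Federated learning makes Wong's point concrete. In the minimal federated-averaging sketch below, a toy illustration rather than any production system she referenced, each party updates a shared model on its own data and only model parameters ever leave the premises:

```python
def local_step(weights, data, lr=0.1):
    """One gradient-descent step for a linear model y = w0 + w1*x on local data."""
    w0, w1 = weights
    n = len(data)
    g0 = sum((w0 + w1 * x - y) for x, y in data) / n
    g1 = sum((w0 + w1 * x - y) * x for x, y in data) / n
    return [w0 - lr * g0, w1 - lr * g1]

def fed_avg(client_weights, client_sizes):
    """Average client models, weighted by how much data each client holds."""
    total = sum(client_sizes)
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0]))]

# Two hypothetical clients whose raw records never leave their premises.
clients = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(3.0, 6.2), (4.0, 8.1), (5.0, 9.8)],
]
global_w = [0.0, 0.0]
for _ in range(200):  # communication rounds
    updates = [local_step(global_w, data) for data in clients]
    global_w = fed_avg(updates, [len(d) for d in clients])
print(global_w)  # approaches the pooled fit, roughly w0 ≈ 0.14, w1 ≈ 1.96
```

The coordinator ends up with roughly the model it would have learned on the pooled data, without ever seeing a single raw record: exactly the kind of problem that law alone struggles to solve.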
Reeve Givens emphasized the importance of “well-staffed, empowered, independent regulatory bodies that can help represent the public interest,” and where those aren’t available, “well-resourced, technically informed, independent civil society that can play that role in the interim.”
Avoiding the Burden-Shifting Trap
A crucial theme throughout the discussion was the question of where responsibility should lie in AI governance. Hughes referenced the concept of “burden-shifting wrenches” in law, noting how consent mechanisms have historically shifted much of the burden to data subjects. The panel grappled with how to avoid repeating the mistakes of cookie consent banners.
Reeve Givens argued that cookie banners represented a misdiagnosis of the remedy rather than the harm: “We cannot put the burden solely on users to navigate this moment. We didn’t misdiagnose the harm. We misdiagnosed the remedy, which was the burden on individual users when we don’t actually have market choice, nor the time or mental energy to just read a whole bunch of disclosures and act alone.”
This insight reinforced the emerging consensus that effective AI governance requires systemic solutions that empower users without overwhelming them with individual responsibility for complex technical decisions.
Looking to the Future
The discussion concluded with panelists predicting what AI summits might be called in five years’ time. Their responses reflected both optimism and realism:
– Wong: “Trust”
– Edwards: “Nostalgia”
– Craig: “Thriving”
– Reeve Givens: “For the people, by the people”
These predictions captured both hope for achieving responsible AI development and recognition of the substantial work required to get there.
Hughes concluded by acknowledging the “incredibly hard work that needs to be done to bring trust and safety to this ridiculously powerful technology” that may be “more profound than electricity.” He emphasized that this work happens daily inside organizations implementing AI tools, within civil society organizations providing oversight, and in regulatory offices ensuring that digital economies get AI governance right.
Key Takeaways
The discussion revealed several important insights for AI governance. First, there was remarkable consensus that trust is fundamental to AI adoption and that appropriate governance can fuel rather than hinder innovation. Second, existing regulatory frameworks, particularly data protection laws, provide a solid foundation for AI governance when properly applied and interpreted.
However, the panel also identified critical gaps, particularly around transparency mechanisms needed to make existing laws enforceable in AI contexts. The dynamic nature of AI technology requires adaptive governance approaches that can evolve alongside technological development.
Perhaps most importantly, the conversation demonstrated that while stakeholders may disagree on specific regulatory mechanisms, there is broad consensus on fundamental principles: the importance of trust, the value of international coordination, the need for transparency in high-risk scenarios, and the imperative to avoid placing unreasonable burdens on individual users. This alignment suggests a mature understanding across different sectors that AI governance requires nuanced, collaborative approaches rather than adversarial relationships between regulation and innovation.
The panel’s diverse perspectives—from civil society advocacy to industry implementation to regulatory enforcement—illustrated that effective AI governance will require continued collaboration across all these domains, with each playing a crucial role in ensuring that AI development serves the public interest while enabling beneficial innovation.
Session transcript
and then we're going to dive right into my immediate left. I have Alexandra Reeve Givens, who is the CEO of the Center for Democracy and Technology, one of the leading advocacy organizations in the world, working on civil rights, civil liberties all around the world. She's based in D.C. To her immediate left is Amanda Craig. Amanda is the General Manager for Responsible AI Policy at Microsoft. To Amanda's left, we have John Edwards. John Edwards is known to many. He is the Information Commissioner of the United Kingdom. And to John's left, we have Denise Wong, who is the Deputy Commissioner of the PDPC in Singapore, the Personal Data Protection Commission. Welcome to our panelists. So we have two regulators, an industry representative and a civil society representative.
And I come from the IAPP. If you don't know the IAPP, we are a global professional association, a not-for-profit, and we're policy neutral. We're not a company, an advocacy or a lobbying body; we bring together the people who do the work. Many of them are in the room right now who do the very hard work of data protection and AI governance all around the world. All right, let's jump in. The title of the session reflects trust as an engine for growth. Let's think about that just for a minute. Just a few short years ago, I think it was two and a half, maybe three years ago, this event started in Bletchley Park in England.
And in that iteration of the event, it was named the AI Safety Summit. Right around that time, the EU AI Act was being negotiated. It soon passed after that. But a lot has changed in that two or three years. This event is the AI Impact Summit. The event last year in Paris was the AI Action Summit. More recently, we have seen the not yet fully implemented EU AI Act become subject to an omnibus package where some of the expectations of that original act are being dialed back a little bit. And we've seen broad critique of regulatory structures, trust and safety structures that might inhibit growth and innovation in AI. There clearly is a deregulatory mood in the air.
In fact, I think it's notable that there has not been much discussion of law or regulatory initiatives that might create guardrails to help guide the adoption of AI. So clearly, we're in an odd moment, and an odd moment for this panel. But as I walked around the campus of this event, this enormous campus, I noted something that was, I think, quite significant. Just about every second banner or poster, just about every large printout, every printed word on the show floor, somewhere had trust, safety or privacy as part of the messaging. In fact, the sutras, and we'll talk about them as we go through the session, the principles announced by the Indian government, are largely around trust and safety.
And so what gives? What’s the dichotomy here? At one moment we are saying it’s a deregulatory mode, we step back. Well, at the same time, we are actively embracing and discussing trust and safety, risk management, protecting consumers, citizens, human beings as they engage with AI. So do we care or not? Are we actually in a deregulatory moment, or have we just gotten quiet about the need for guardrails and trust and safety in these systems? I would say for business, risk exists regardless of whether there’s a law in place or not, and so businesses have an imperative to respond. I’m going to tell a very, very quick story, and that is that in 1891, when electricity was first being brought into the White House in the United States, then President Benjamin Harrison and his wife, Caroline, were actually terrified of flipping the light switch.
And so they hired the electrician from the Edison Company, a man named Ike Hoover, who went on to become the chief usher of the White House. They hired him to flip the light switch. I think the message of this story is that we won’t use it if we don’t trust it. And so as AI is being pulled through the walls of our world, as it’s creating light and switches and tools for us to use, I think we need to ensure that we’re comfortable flipping those switches. And that is the topic of our panel today. So let’s jump in. And our first question is going to be about just the moment that we find ourselves in.
And I’m going to start with Alex. why are trust and safety important to innovation? And maybe speak to this dichotomy that I’ve highlighted. Why is it in this moment that we can’t talk about regulation, but everywhere it seems we’re talking about trust and safety?
Yeah, first of all, thank you for convening us, and it's a pleasure to be here. I think you really hit the nail on the head in your introduction, which is when we think about the long-term success and sustainability of AI, and that is business sustainability for the companies, as well as societal sustainability for all of us. The secret is not just acceleration, the biggest, fastest, most capable model. The real story is one of adoption, and that has been the overwhelming theme of the summit this year. And for people to adopt this technology, they need to trust it. And that's trust in multiple different facets, right? Is the tool fit for purpose? Does it work in your language?
Is it appropriate for your culture? Will it protect your privacy? Is your data going to be secured? What is the quality of the information that is grounding that model and those outputs? And I think people are really waking up to this, and they’re demanding more. This is both as individual users and then, of course, for enterprise customers, too, who themselves are saying, we’re on the front lines thinking about how to integrate AI into our business operations. We’re the ones who will likely be sued if this goes wrong. So this is where trust really is the fuel of innovation because it is what’s going to be the economic driver of these tools being adopted. And the other thing that I would add is what we see is not only that trust is important for innovation in the abstract, but this is also where responsible, thoughtful regulation can be fuel for innovation as well.
Because the same way that we want to be able to drive cars without all of us being experts in how a motor works, product liability and good laws around the creation of these tools help outsource some of that work for us so that we don't all have to be doing the individual labor of deciding whether we can trust. So many times people will create this false framing of regulation versus innovation, as opposed to thoughtful regulation being the fuel that actually allows us to sell, buy, and use these tools.
Excellent. Fascinating. John, I'm going to jump to you, and Amanda, I will come right back. But I'm going to jump to you. The U.K. doesn't have an AI law in place. It has lots of laws that will apply to AI. I think data protection and the UK GDPR is a great example of that. But talk to us a little bit about regulating in the absence of an AI law. What does that look like in the U.K.? And do you see organizations exhibiting behavior that demonstrates that they're focused on the ideas that Alex suggested, that trust and safety matter regardless of the regulatory structure that sits over them?
Yeah, absolutely. Absolutely. Absolutely.
There it is.
No, very much so. I mean, the data protection laws apply across the board wherever technology touches personal data. So we have a de facto regulatory regime under the UK GDPR. Coming back to your comment about trust, it's so important, and there is a role for regulation actually in assisting businesses because businesses are trying to deliver that trust proposition to consumers. But by what metric? Right. And that's, I think, where regulation can provide a common standard. So, you know, we require, it's a regulatory tool, that you have to do data protection by design. You have to do data protection impact assessments. You know, we expect privacy by design. We expect risk assessment. So all of these things are regulatory requirements, but they are also tools that help intermediate between businesses and the consumers to demonstrate that there is a basis for trust.
And an organization like the ICO is there for both sides to see, well, there's someone actually overseeing that. And that's a role that we do discharge. To your point about the absence of prescriptive regulation in the UK on AI, we don't see that particularly as a deficit. I mean, I think there's a lot of policy work going on in areas where policymakers and regulators do need to step in. That's ongoing, and I won't comment on that. But, you know, there are ongoing issues about the distribution of proceeds from the use of creative materials and the like. That carries on. But in the absence of an explicit rule, it's incumbent on my office to deliver safety and confidence and metrics for industry and to deliver certainty over what can be seen as an uncertain law.
So we've gone out and said, well, here's how we see the technology-neutral general principles of the GDPR apply when you train a model, for example. We see, for example, the EU AI Act in Article 10 talks about the need for fairness. Well, we've been able to articulate those obligations by way of guidance, linking it back already to the GDPR principles. So, you know, there's a mapping. I don't think at the moment for the available applications of artificial intelligence technologies that there is a lacuna. It's there with the GDPR. And we are there to provide confidence and certainty about how you apply that, how you improve your products with it, and how by doing so you engender that trust that you described at the outset.
Excellent. Okay, so Amanda, tell us, do you agree that there’s not a need for additional rails, traffic indicators in AI? Is John right that the existing regulatory structure is really providing enough guidance or is it the case that Microsoft is using internal principles, frameworks, standards that you might adopt to build programs and services that you think meet the expectations of trust and safety of the marketplace?
Thank you. From a Microsoft perspective, we are focused on implementing our responsible AI governance program and see opportunity for lots of different governance models that governments could pursue in terms of implementing existing regulation, developing additional regulation that complements that existing regulation. I think the through line for us, the bottom line, is very much what Alex started us off with, that we do very much see, we’ve seen through multiple generations of technology, we’re not going to have adoption, we’re not going to have use of this technology without trust. And we need to have governance programs at technology companies. We need to have governance efforts by governments that are ensuring that we have an evolving conversation about trust.
Because if I pull the thread on the analogy you started us with, like how do you flip on a light switch and that can be scary when you've not done it before, I think the other thing that is very challenging about this technology is that it is also very dynamic. It is evolving very quickly. And people might even be scared that, like, they won't know where to find the light switch next week. And that brings a whole different set of challenges. And so that requires not just confidence in how you are able to sort of trust the technology today, but also that there's trust in a governance process that will continue to iterate and evolve alongside the technology.
Excellent. Denise, help us then here. I know Singapore has released guidelines, standards around AI. Tell us about the Singaporean experience in thinking about regulating trust and safety in AI.
Thanks so much, Trevor. And thank you to the IAPP for putting this together and for having us. Maybe I'll answer that question by linking some of the concepts that we've talked about. And that sort of underpins our philosophy. Trust and safety is the outcome that we want. You know, we want to create the necessary conditions for the society to thrive, for the public and the enterprises to use the technology with confidence. So AI for that public good. To do that, we need governance. We need a framework of thinking about how we can govern the technology, and we've been doing this for all sorts of technology. AI is but one. Regulations are a mechanism, a type of governance mechanism that you use when the necessary and correct conditions exist.
And so that map of that concept informs how we think about our governance approach. So on issues that are very clear, where there are clear harms, we have stepped in to regulate. An example of this is elections regulations that we put in place where we prohibited the use of AI deepfakes to represent candidates. It was time -limited. It was for the period of elections, but we stepped in and put a law in place for that. We also have laws for AI creating online harms, as well as AI in scam situations. So that is the part where we regulate for clear and present harms. For the rest of it, a lot of it we leave to sectoral regulations where there’s already a web of existing regulations, and on specific issues as well.
John and I and many of us are in the data protection field where, as John has said, there are already existing laws that can be tacked on, updated, reviewed in order to deal with this new technology that has come about. So where we have done AI governance frameworks and tools that you've mentioned is where we've seen a need to create some sort of horizontal principles and platforms to think about the sector-agnostic general issues on transparency, on what model governance for corporates could look like. We haven't seen the need to regulate that horizontal layer just yet, but certainly a need to articulate some of these principles. And that also allows us to create more certainty for the market, to give them some direction that actually this can be a market-driven assurance system that has demand, has supply, and has what we'll call proto-standards, early days of standards about what good looks like.
So that's the work that we've been doing and trying to create and, to simplify, we have seeded an assurance ecosystem that sits, I would say, adjacent and complementary to regulations where they're needed.
Fantastic. Please, please.
So just to comment on that, one area that I think is proving very important, and people are discovering this across jurisdictions, is even where existing laws apply, there is a problem where AI systems make it hard to know whether or not those laws are being broken. So this is where that transparency layer you were articulating really becomes important.
Give us an example.
Yeah, and I'm going to make it U.S.-centric just because it's the one that's top of mind, so forgive the bias here. So in the U.S., we have equal employment laws. It is against the law to discriminate in the course of hiring. So in theory, a piece of software that perpetuates discrimination against particular candidates, for example, not considering the resumes of people over a certain age, is violating an existing law. So people will say we don't need any further regulation. We're done. The problem is, you can tell in a human-run system where it was just a bad apple in the HR department, and it's been historically easier to prove that case. Now, when it's AI powered software making that decision, it is really hard as a worker who's just put in a resume and never got an answer back to know if something was going wrong.
If you actually get up your courage and file a case, it is really hard to prove your case if there is discrimination. And so without some type of disclosure regime that requires transparency in these high risk scenarios, high impact scenarios, to have transparency and disclosure about the system that is being used, impact assessments to make sure that discrimination isn’t happening, you actually don’t get the remedy that people really need under existing law. And so that’s where I think this horizontal piece can complement the sector specific vertical laws in a light touch way, but actually gives meaning to the laws on the books.
So I think that’s a great example of the harm trigger that Denise described, that we identify a clear harm and that may be a place where additional regulatory structure might be helpful. I think we heard pretty significant consensus across our panel. Trust and safety is good. That’s good that we’re there. That’s a great consensus to achieve. And not complete consensus on the idea that additional regulation is needed yet. With the exception perhaps of a few scenarios in which we can identify high risk or harm. Let’s go to our audience for a second. Help us describe the relationship between innovation and regulation in AI. If you think it’s a great relationship, thumbs up. If you think it’s a bad relationship, thumbs down.
If you think it's complicated, make it complicated. What do we think? Oh, I see a lot of "complicated." What does our panel think? I think it's a good relationship between innovation and appropriate regulation. Fascinating. We have a very strongly opinionated audience here. That's great. Let's talk about regulation again and dive in just a bit deeper. I think one of the things that's tremendously challenging is prescriptive regulation, trying to understand harms that might occur before technology is fully adopted broadly in the marketplace. I'm a veteran of the privacy world going back to 1995, 1996. And in the late 1990s, we were talking extensively about cookies and how do we regulate cookies and the privacy issues associated with cookies.
Guess what? We're still talking about cookies often. And I know for many of the privacy and data protection folks, they're nodding already. They're crying a little bit because it's so, so painful to implement many of the cookie banners and cookie consent mechanisms that we have. And I'm not entirely sure, we might get John to admit this even, that, you know, those cookie banners are actually driving the outcomes that we hope for. We identified the biggest and worst harm or concern and dedicated resources appropriately to that. Amanda, I'm going to jump right to you. Talk to us a little bit about identifying those harms. Alex gave us one, which is perhaps AI reviewing HR submissions, resumes, CVs, and language in those CVs may actually create results that were not intended, that create bias, that, you know, in a human-driven system would be easier to find, in an AI-driven system just much, much harder to find.
That’s a great example. How do we identify those prescriptive harms, those harms that we’re not quite sure about yet, that may emerge? Do we do it through principles, through ethics, through what?
I think all of the above to some extent. Part of why we start with principles in our governance program is I think it’s helpful to orient towards what do we care about, right, as we then try to build a program that realizes those outcomes. I think we also can look at existing law that reflects where there are harms, like in the employment context, where people could be mistreated or treated unfairly that we know we care about. And there’s been a lot of effort and regulation to define high risk, high impact. At Microsoft, we have something called the sensitive uses sort of scenarios where, you know, we have three categories where technology could have like an impact on someone’s life opportunity or consequential life impact of something like employment or education opportunities, for example, or how someone’s treated under the law otherwise, all sort of fit in that context.
We have the second big category of harm that we have defined as around sort of the risk for psychological or physical harm. So think about vulnerable populations there. Think about the use of AI in critical infrastructure. And then the third category is the use of AI that impacts human rights. So, you know, we have our way of defining what is really high impact. You know, a lot of governments, again, have taken different routes. I think the other thing that we’ve seen is the kind of emergence of a conversation around sort of technology itself that poses specific high risks. For example, highly capable models that have a whole other set of risks that are the risks that are being defined.
And that’s one thing that I just want to draw out as we think about this and drawing upon what I feel like, you know, and I didn’t grow up in the privacy world, I grew up in the cybersecurity world. And one of the things that I think a lot about as we work on, you know, defining these harms and figuring out what to do about them, that we can learn from the kind of… decades of work on cybersecurity is the challenge of thinking about how to address risk across the supply chain. And I think it’s a slightly different conversation in AI than it has been traditionally in security with software and cloud technologies. But there is like a common principle or approach that I think we should really look at closely, which is, you know, we are oftentimes in the context of AI thinking about risk and harm where the technology is actually used, right?
And then what’s difficult is figuring out what do we do across the whole supply chain to manage that risk and have that be cohesive. And one of the things that in the cybersecurity context, we know what the risk or harm, it’s much simpler. It’s security risk, that we care about. But we have the same challenge in terms of like, how do we manage that risk across the supply chain? And one of the challenges over decades of work in the cybersecurity context is… Instead of wanting to… put emphasis on one part of the supply chain or the other at any given moment instead of, like, really dealing with the really hard governance challenge that it is everything at once.
And so I think when we, you know, think about the complexity of defining harms in the AI space, that’s important work to do. And also, in the context of managing risk for any of those harms being realized, we also need to think really hard about looking across the whole supply chain at once. Even though it’s hard from a governance perspective, that’s going to be most important for managing the risk ultimately.
Fantastic. And I misspoke. It's prospective, not prescriptive regulation. But John and Denise, maybe talk to us a little bit about that. And let me frame it for you both. And Denise, we'll have you start. Clearly, with data protection regulation, we have had the GDPR now for over seven years, and the effect of that on the global policy environment has been enormous. We now have over 120 countries that have privacy laws in place. Many, many, many of them have genealogical lines that point back to the GDPR. And yet we haven't seen that in AI yet. The EU AI Act has not taken off around the world.
We don’t see a Brussels effect happening on AI. Is it because the challenge of identifying harm, the challenge of prospectively trying to identify what might
You always ask me the tough questions. I think, first of all, the harms question, because I think that's relevant to the regulation question that you're asking. I think the starting point must be that every country has a unique context, and it's the job of the government to figure out what's harmful to their society. I think there's going to be a huge amount of overlap, but at the end of the day, what's harmful in one context, what's harmful in India, may not be the same as what's harmful in the US. And the cultural context matters. That said, I think there's actually increasing consensus, I feel, about what harms or archetypes of harms there are vis-a-vis AI.
And we see that, for example, the International AI Safety Report is starting to anchor some of this taxonomy and sort of buckets and archetypes of harm, and we also see that beginning to happen at Iceland, for example. Those conversations are happening. How does that link to prescriptive regulation or legislation? I think that if the harms are still being coalesced and formed, it’s quite difficult to be very prescriptive about how you deal with those harms, because that, by definition, is sort of changing and still coalescing. It’s still quite nascent. That’s not to say… we should step back. I think we just probably need a slightly more agile way of thinking about that broader concept of governance.
So in the social media context in Singapore, we did it via codes of practice. So we have a broad sort of umbrella legislation that creates a legislative frame for which these codes of practice apply. But the codes of practice can be updated more easily. Same thing, actually, with our data protection law, the PDPA, which is structured quite differently from the GDPR. Our PDPA is actually very not prescriptive. It’s outcome driven. It’s fairly broad. But most of the guidance that PDPC provides, and these are for compliance, is done in advisory guidelines. So I think there are regulatory mechanisms you can use that are less prescriptive than primary legislation. And that gives you enough levers. It’s tools in a toolkit, basically, to be able to deal with the harms and with the problems that the society is facing.
Excellent.
To dispute you a little bit on the lack of a Brussels effect, I will say, I mean, going actually back to Denise's point, so not only is there some harmonization happening around the scoping of the harms, I think that certainly is happening, but also on potential points of intervention. So, for example, one of the key elements of the EU AI Act is looking at high-risk scenarios and having the right mitigations in place. We have similar laws under consideration in multiple states in the United States, one on the books already in Colorado. They would never say it is a copycat. It came from its own origins. But it is lawmakers thinking what is an appropriate right-scale intervention to that particular risk.
You can look at the recent transparency laws that were passed in California and New York, very similar discussions to the Code of Practice for General Purpose AI models that came out under the EU AI Act. You can look at the EU AI Act's provision for regulatory sandboxes and this notion that we want small and medium-sized enterprises, and others, to be able to innovate and get a little bit of forgiveness or wiggle room under the laws as they figure out how the regulations apply. That law just got passed in Utah. So there are these glimmers where we are seeing smart solutions to specific problems and people learning from each other.
I think in the absence of that umbrella AI standard, that interaction with fellow regulators across disciplines and domains becomes really important. Or I will ask you, does it become really important?
Yeah, it is. It's hugely important that we coordinate. You know, these are new challenges that we're all facing. On the Grok issue, obviously, it's under investigation, so I won't be able to say too much about it. But, you know, we're interested in what, you know, how models are trained, what data they're trained on, what output filters are included, what kind of safety mechanisms. I'm interested in what kind of ingestion there is of data when it's used at that level. But there's some complexity in that case as well because, you know, you've got users using a tool that's amplified. It's amplified by social media. I don't know whether the same functionality is available in any other image generation tool that just hasn't got the same reach because it's not amplified by a social media platform.
And but, you know, very early on, I think I was back home in New Zealand, actually, on about the 5th of January and started to see this. And I messaged back to the office and said, what are we doing? What’s Ofcom doing? How are we connecting to our international colleagues? And that’s so important. And so we’ve, you know, we’ve messaged into GPA. We’ve coordinated very closely with Ofcom. And, you know, we have to cope with the fact that regulation is a little bit fragmented. So Ofcom is responsible for administering the Online Safety Act in the UK. Now, that is legislation that seeks to regulate the kinds of harmful content that can be delivered to a child’s device, for example.
Right. I see this thing. Is that regulated by online safety? If so, it's Ofcom. How did that get to me? Well, that depends on how the underlying data was processed. That becomes an ICO, you know, GDPR issue. So we need to be working very, very closely, and we are. But also with the Grok issue, one of the very early things we did was to reach out to our colleagues in the GPA, the Global Privacy Assembly, and say, who else is looking at this? Let's make sure that we're not sort of treading on each other's toes, or at least that we're sharing information, that we've got the same ideas, that we think the same way. And that can be tremendously powerful, whether or not you can point to a regulation that that app or that platform is clearly in breach of.
To describe a set of expectations about harm mitigation across a coordinated group of global regulators, I think, can be quite powerful. And, you know, just to see how, you know, the alternative for some of these platforms is not necessarily being investigated and fined by the ICO. So it's like what I noticed the first day that I was here when I went to flip TikTok on and saw this is not available in this country. So if the offering in a particular jurisdiction does not meet the standards and norms of that jurisdiction, these organizations need to understand that they can be switched off, that they are not actually all powerful.
I just have the image of the U.K. Information Commissioner doom-scrolling TikTok in my head now. Let's do a quick round, and please do keep your answers short, but innovation is not limited to technology, is not limited to business practices. It's also very powerful in the privacy-enhancing, safety-enhancing tools that we use inside organizations. It's in regulatory structures. Denise has mentioned regulatory sandboxes, or maybe it was Alex, but we've heard regulatory sandboxes mentioned. What is the one innovative idea in trust and safety that you think holds real promise? And I'll let you do one sentence to explain it, but this is a speed round. So we'll start with Amanda and then work down and come back to Alex.
One sentence. Okay. Is that my sentence? I think about provenance tools as an area of innovation. Again, this is calling upon my cybersecurity background, but I think, you know, something like agentic AI is an area where there's a lot of interest, concern, and governance momentum. And one of the challenges is being able to look at something that is fundamentally not just, like, one technology. It's a bunch of very dynamic components, models, platform tools, services, applications all working together. And while that feels like a really new, hard challenge, we actually can draw upon what we know of software to actually be a set of dynamic components as well. And one of the ways that we've figured out how to govern that, or are working towards figuring out how to govern it, is with software bills of materials, something that really allows you to have the ability to track those dynamic components.
And I think that’s something we can apply to agents.
So it increases transparency. It tells you, you know, which algorithm or which system this might have come from. It helps with accountability broadly. Yeah. Excellent. John, what’s the most promising trust and safety innovation that we have?
Well, you challenged us with one word. So I'm going to go with agency. Agency. And I think it's, for me, it's a word that, you know, so much of our world is dominated by consent, which is, I won't say broken, but it's under strain as a useful concept. Agency, I think, has capacity to recognize that the objective is to maintain, and to restore and maintain, an individual's agency as they use any product. And that's more than consent. It's actually making sure that provenance is delivered, for example. You can't have agency if you don't know the origin of the data that is delivering this agentic miracle to you. It gives you tools at the other end. And consent is always conceived of as a front-end authorizing concept.
But agency says, okay, I’ve done that now. Where’s my delete everything button? Or my I don’t want to do this anymore button. So I think if developers can be thinking about how they deliver the best possible service in a way that restores and maintains the agency of the consumer, I think that will go a long way to addressing some of the problems that we’re seeing. And I think
Fantastic. I had a law professor years ago now who described burden-shifting wrenches in the law. And I think consent is a burden-shifting wrench that moved much of the burden to the data subject, to the individual. Agency, it sounds to me, is an idea to move it back to those who might be accountable and have them have fiduciary or stewardship responsibilities for that person. Denise?
I would pick privacy-enhancing technology. I think it's an interesting technological way to deal with at least one part of the problem, which is how do we secure the data, how do we make sure that the personal information is well protected. And it's advancing so quickly. So two years ago, we were looking at federated learning for training of AI models, and no one could figure it out. I think it's actually being done in production now. So there is, I'm a lawyer, so I can say this. Sometimes the law cannot solve the problem. But actually, maybe another technology can.
Fantastic. Alex?
Well-staffed, empowered, independent regulatory bodies that can help represent the public interest. Wow. And because in some countries those are under attack right now, where that is not available, well-resourced, technically informed, independent civil society that can play that role in the interim.
Fantastic. Yeah, the importance of having watchdogs, yeah, entities that are watching and observing, commenting, enforcing, really powerful. So there are four great innovations: provenance, agency, privacy-enhancing technologies, and well-funded regulators or civil society. Well done. I think that is a great start. Let's do another audience poll. How many of you here in this audience are responsible for AI or AI governance, AI ethics, AI safety inside your organization? Hands up. It's almost the whole room. Keep your hand up if you're also responsible for something else in addition to AI, or if it's just AI. It's more. I think it is a pretty complete overlap, almost a complete overlap. There's at least a significant percentage that were responsible for more than one thing, and one of those things was AI.
I think that’s an example of the complexity that we see inside organizations today. John described the coordination necessary between Ofcom and the ICO in the Grok investigation, which is ongoing, because there was not a single place where regulatory authority existed to address that concern. This is a really complex environment. The number of harms or issues span from children’s safety to intellectual property. From bias and algorithmic discrimination all the way through deepfakes and other things. Alex, how do we… How do we put that all into a pot and make it something meaningful?
Well, what if you can’t put it all into a pot? A pot is a common denominator in all of those things, but AI is a tool that touches everything. So I really do think you actually need a nuanced approach that looks at a particular risk, what those mitigations are for that risk, and then goes from there. The privacy considerations when you are sharing your most intimate concerns and questions about the world with a chat bot is very different than these questions about deep fakes and fraud and impersonation. It’s just you need to have a different legal regime. I think some of the common elements that run through, one is that transparency and rigorous approaches to risk mitigation really matter, and that can either be through regulation or through principles and best practices with meaning and standardization and watchdogs reading those disclosures.
And the second is this burden of the user. So when Trevor introduced me, we described my organization. We represent users' rights around the world. I am all for user empowerment. And also, we cannot put the burden solely on users to navigate this moment. Indeed. And that is the major lesson of the cookie example you were saying before. We didn't misdiagnose the harm. We misdiagnosed the remedy, which was the burden on individual users when we don't actually have market choice, nor the time or mental energy to just read a whole bunch of disclosures and act alone. And so solutions that acknowledge the harm are tailored, but also take that burden off individual users. So you're empowering users, but not burdening them or leaving them to essentially defend themselves unprotected.
We have to think about that.
Okay. Sadly, we are at the end of our time, but we have one more pop question for all of you, and we're going to let this be our close. We have gone through the AI Impact Summit, the AI Action Summit, the AI Safety Summit. Five years from now, what is the AI summit going to be called? What's the word that's going to be in the middle there? So this is a one-word answer again. What's it going to be? I know it's a tough question. So, Denise, I'll start with you because you're able to handle the toughest questions. Ah, the AI Trust Summit. Okay, John?
Nostalgia.
Nostalgia.
Thriving.
Thriving, AI Thriving Summit. Okay.
I’m going to cheat. For the people, by the people. That’s more words.
Some of the people. It’s hilarious. That’s a lot to get on a poster. Here’s what I know. I know that there is incredibly hard work that needs to be done to bring trust and safety to this ridiculously powerful technology that I think, as Sundar Pichai says, will be more profound than electricity. That hard work happens every single day inside the organizations that are implementing these tools, inside civil society that is watching and guiding that behavior, and inside regulatory offices that are navigating to ensure that marketplaces around the world, that the digital economy, get this right. I feel better because people like this are doing that work every day, and I hope you’ll join me in thanking them.
Thank you very much. Thank you all so very much. Well done. You were fantastic, as expected.
Trevor Hughes
Speech speed
143 words per minute
Speech length
2428 words
Speech time
1015 seconds
Trust as engine for AI innovation
Explanation
Trevor emphasizes that trust and safety are central to driving AI growth, noting that risk exists whether or not specific laws are in place, so businesses must proactively build trust. He also points out the lack of a strong Brussels effect for AI, highlighting the need for flexible mechanisms like sandboxes.
Evidence
“The title of the session reflects trust as an engine for growth.” [1] “I would say for business, risk exists regardless of whether there’s a law in place or not, and so businesses have an imperative to respond.” [16] “We don’t see a Brussels effect happening on AI.” [48] “And we have seen regulatory sandboxes mentioned.” [158]
Major discussion point
Trust and safety as the engine for AI innovation
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The enabling environment for digital development
Need for flexible governance mechanisms
Explanation
Trevor argues that because AI regulation is fragmented and lacks a unifying Brussels effect, regulators should adopt flexible tools such as codes of practice and regulatory sandboxes to keep pace with innovation.
Evidence
“We don’t see a Brussels effect happening on AI.” [48] “And we have seen regulatory sandboxes mentioned.” [158] “The importance of having watchdogs, yeah, entities that are watching and observing, commenting, enforcing, really powerful.” [139]
Major discussion point
Coordination and agile global governance
Topics
Artificial intelligence | The enabling environment for digital development
Cross‑regulator coordination is essential
Explanation
Trevor highlights the necessity of coordination between regulators such as the ICO and Ofcom to address AI issues that span multiple jurisdictions and domains.
Evidence
“John described the coordination necessary between Ofcom and the ICO in the Grok investigation, which is ongoing, because there was not a single place where regulatory authority existed to address that concern.” [144]
Major discussion point
Coordination and agile global governance
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Alexandra Reeve Givens
Speech speed
190 words per minute
Speech length
1283 words
Speech time
403 seconds
Trust fuels adoption and economic growth
Explanation
Alexandra states that trust is the fuel for AI innovation, because without trust users and enterprises will not adopt AI technologies.
Evidence
“So this is where trust really is the fuel of innovation because it is what’s going to be the economic driver of these tools being adopted.” [2] “And for people to adopt this technology, they need to trust it.” [5]
Major discussion point
Trust and safety as the engine for AI innovation
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Existing laws are hard to enforce against opaque AI – need for horizontal transparency
Explanation
She points out that while existing laws may apply, AI systems make it hard to know if they are being broken, calling for a horizontal transparency layer to enable enforcement and remedies.
Evidence
“one area that I think is proving very important, and people are discovering this across jurisdictions, is even where existing laws apply, there is a problem where AI systems make it hard to know whether or not those laws are being broken.” [23] “And so without some type of disclosure regime that requires transparency in these high risk scenarios, high impact scenarios, to have transparency and disclosure about the system that is being used, impact assessments to make sure that discrimination isn’t happening, you actually don’t get the remedy that people really need under existing law.” [66] “So this is where that transparency layer you were articulating really becomes important.” [69]
Major discussion point
Sufficiency of existing regulatory regimes versus need for new AI‑specific rules
Topics
Artificial intelligence | Data governance | The enabling environment for digital development
AI‑driven hiring can hide discrimination
Explanation
Alexandra warns that AI tools used in recruitment can embed bias that is harder to detect than in human‑driven processes, underscoring the need for impact assessments.
Evidence
“It is against the law to discriminate in the course of hiring.” [82] “Alex gave us one, which is perhaps AI reviewing HR submissions, resumes, CVs, and language in those CVs may actually create results that were not intended, that create bias, that, you know, in a human‑driven system would be easier to find, in an AI‑driven system just much, much harder to find.” [83]
Major discussion point
Identifying and mitigating AI harms
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Well‑staffed independent regulators are needed
Explanation
She argues that robust, independent regulatory bodies are essential to represent the public interest and ensure AI safety.
Evidence
“Well‑staffed, empowered, independent regulatory bodies that can help represent the public interest.” [105]
Major discussion point
Innovative approaches to trust and safety
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Transparency laws and user empowerment
Explanation
Alexandra notes recent transparency legislation in US states and stresses that solutions should empower users without over‑burdening them.
Evidence
“You can look at the recent transparency laws that were passed in California and New York, very similar discussions to the Code of Practice for General Purpose AI models that came out under the EU practice.” [89] “I am all for user empowerment.” [122] “So you’re empowering users, but not burdening them or leaving them to essentially defend themselves unprotected.” [128]
Major discussion point
Innovative approaches to trust and safety
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
John Edwards
Speech speed
143 words per minute
Speech length
1144 words
Speech time
477 seconds
UK GDPR as de‑facto AI regulation
Explanation
John contends that the UK’s implementation of the GDPR already provides a functional regulatory framework for AI, making separate AI legislation unnecessary.
Evidence
“So we have a de facto regulatory regime under the UK GDPR.” [43] “Judge, to your point about the absence of prescriptive regulation in the UK on AI, we don’t see that particularly as a deficit.” [45] “With the GDPR.” [46] “I think data protection and the GDPR Act in the U.K. is a great example of that.” [47]
Major discussion point
Sufficiency of existing regulatory regimes versus need for new AI‑specific rules
Topics
Artificial intelligence | Data governance | The enabling environment for digital development
Mapping GDPR principles to AI offers common standards
Explanation
He explains that GDPR’s general principles can be translated into AI‑specific guidance, providing a common baseline for trust and safety across jurisdictions.
Evidence
“We’ve gone out and said, well, here’s how we see the technology-neutral general principles of the GDPR apply when you train a model, for example.” [109] “And that’s, I think, where regulation can provide a common standard.” [110] “We have been able to articulate those obligations by way of guidance, linking it back already to the GDPR principles.” [108]
Major discussion point
Identifying and mitigating AI harms
Topics
Artificial intelligence | Data governance
Agency, privacy‑by‑design and consent as trust mechanisms
Explanation
John highlights that preserving user agency, embedding privacy by design, and clear consent are essential tools for building trust in AI products.
Evidence
“Agency, I think, has capacity to recognize that the objective is to restore and maintain an individual’s agency as it uses any product.” [121] “Privacy by design.” [130] “And that’s consent.” [125]
Major discussion point
Innovative approaches to trust and safety
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Coordination between ICO and Ofcom is crucial
Explanation
John stresses that close collaboration between the UK’s data protection authority (ICO) and communications regulator (Ofcom) is vital for effective AI oversight.
Evidence
“And an organization like the ICO is there for both sides to see, well, there’s someone actually overseeing that.” [145] “It’s hugely important that we coordinate.” [146] “We’ve coordinated very closely with Ofcom.” [147]
Major discussion point
Coordination and agile global governance
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Amanda Craig
Speech speed
173 words per minute
Speech length
1045 words
Speech time
361 seconds
Governance must evolve with AI dynamics
Explanation
Amanda stresses that trust requires not only confidence in current technology but also an evolving governance process that iterates alongside AI developments.
Evidence
“We need to have governance efforts by governments that are ensuring that we have an evolving conversation about trust.” [8] “And so that requires not just confidence in how you are able to sort of trust the technology today, but also that there’s trust in a governance process that will continue to iterate and evolve alongside the technology.” [39]
Major discussion point
Trust and safety as the engine for AI innovation
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Internal responsible‑AI programs complement existing law
Explanation
She notes that Microsoft’s responsible AI governance program illustrates how companies can build internal frameworks that work with, and sometimes extend, existing regulations.
Evidence
“From a Microsoft perspective, we are focused on implementing our responsible AI governance program and see opportunity for lots of different governance models that governments could pursue in terms of implementing existing regulation, developing additional regulation that complements that existing regulation.” [32]
Major discussion point
Sufficiency of existing regulatory regimes versus need for new AI‑specific rules
Topics
Artificial intelligence | The enabling environment for digital development
Define high‑risk “sensitive uses” and manage supply‑chain risk
Explanation
Amanda outlines Microsoft’s categorisation of sensitive uses (employment, education, etc.) and stresses the challenge of managing those risks across the entire AI supply chain.
Evidence
“At Microsoft, we have something called the sensitive uses sort of scenarios where, you know, we have three categories where technology could have like an impact on someone’s life opportunity or consequential life impact of something like employment or education opportunities, for example, or how someone’s treated under the law otherwise, all sort of fit in that context.” [93] “And then what’s difficult is figuring out what do we do across the whole supply chain to manage that risk and have that be cohesive.” [94] “And one of the things that I think a lot about as we work on, you know, defining these harms and figuring out what to do about them, that we can learn from the kind of… decades of work on cybersecurity is the challenge of thinking about how to address risk across the supply chain.” [95] “And also, in the context of managing risk for any of those harms being realized, we also need to think really hard about looking across the whole supply chain at once.” [96]
Major discussion point
Identifying and mitigating AI harms
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Provenance tools and software bill of materials as innovation
Explanation
She highlights provenance tooling and software bill of materials (SBOM) approaches as technical innovations that enable tracking of dynamic AI components, improving transparency and safety.
Evidence
“And I think about provenance tools as an area of innovation.” [112] “I think we also can look at existing law that reflects where there are harms, like in the employment context, where people could be mistreated or treated unfairly that we know we care about.” [62] “And one of the ways we’ve figured out how to govern that, or are working towards figuring out how to govern it, is with software bill of materials, something that really allows you to have the ability to track those dynamic components.” [111] “It’s a bunch of very dynamic components, models, platform tools, services, applications all working together.” [114]
Major discussion point
Innovative approaches to trust and safety
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Denise Wong
Speech speed
164 words per minute
Speech length
969 words
Speech time
353 seconds
Singapore’s targeted and sectoral AI regulation
Explanation
Denise explains that Singapore regulates clear AI harms directly while leaving other areas to existing sector‑specific frameworks, using codes of practice for flexibility.
Evidence
“So on issues that are very clear, where there are clear harms, we have stepped in to regulate.” [53] “For the rest of it, a lot of it we leave to sectoral regulations where there’s already a web of existing regulations, and on specific issues as well.” [56] “In the social media context in Singapore, we did it via codes of practice.” [57]
Major discussion point
Sufficiency of existing regulatory regimes versus need for new AI‑specific rules
Topics
Artificial intelligence | The enabling environment for digital development
Clear‑harm triggers justify regulation (e.g., deepfakes in elections)
Explanation
She points out that when AI creates unmistakable harms such as election deepfakes, targeted regulation is appropriate.
Evidence
“An example of this is elections regulations that we put in place where we prohibited the use of AI deepfakes to represent candidates.” [102] “So that is the part where we regulate for clear and present harms.” [53]
Major discussion point
Identifying and mitigating AI harms
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Privacy‑enhancing technologies protect data
Explanation
Denise advocates for privacy‑enhancing tools such as federated learning to safeguard personal information in AI systems.
Evidence
“I would pick privacy‑enhancing technology.” [127] “I think it’s an interesting technological way to deal with at least one part of the problem, which is how do we secure the data, how do we make sure that the personal information is well protected.” [135]
Major discussion point
Innovative approaches to trust and safety
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Data governance
Emerging global consensus on AI harm taxonomy
Explanation
Denise notes that international efforts like the AI Safety Report are beginning to standardise harm categories, supporting coordinated global governance.
Evidence
“And we see that, for example, the International AI Safety Report is starting to anchor some of this taxonomy and sort of buckets and archetypes of harm, and we also see that beginning to happen at Iceland, for example.” [149] “I think there’s actually increasing consensus, I feel, about what harms or archetypes of harms there are vis‑a‑vis AI.” [150]
Major discussion point
Coordination and agile global governance
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Trust and safety are outcomes needed for confidence
Explanation
Denise frames trust and safety as the desired result that enables both the public and enterprises to use AI with confidence.
Evidence
“Trust and safety is the outcome that we want.” [4] “to thrive for the public and the enterprises to use the technology with confidence.” [7]
Major discussion point
Trust and safety as the engine for AI innovation
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Agreements
Agreement points
Trust is fundamental for AI adoption and technology use
Speakers
– Alexandra Reeve Givens
– Trevor Hughes
– Amanda Craig
Arguments
Trust is essential for AI adoption and economic success – people won’t use technology they don’t trust
Trust is essential for technology adoption – people won’t use technology they don’t trust
Dynamic nature of AI technology requires evolving governance processes that can adapt alongside technological development
Summary
All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won’t use AI technology, making it essential for both individual adoption and business success. This trust must encompass multiple dimensions including functionality, cultural appropriateness, privacy protection, and data security.
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Existing regulatory frameworks can address many AI governance needs
Speakers
– John Edwards
– Denise Wong
– Amanda Craig
Arguments
Existing regulatory frameworks like GDPR provide adequate foundation for AI governance without need for prescriptive AI-specific laws
Singapore uses targeted regulation for clear harms while relying on sectoral regulations and governance frameworks for broader issues
Microsoft focuses on internal responsible AI governance programs while supporting various government regulatory models
Summary
There was consensus that existing regulatory frameworks, particularly data protection laws like GDPR, provide a solid foundation for AI governance. Rather than requiring entirely new AI-specific legislation, current laws can be applied and supplemented with targeted regulations for specific clear harms.
Topics
Artificial intelligence | The enabling environment for digital development | Data governance
Regulatory coordination and international cooperation are essential
Speakers
– John Edwards
– Alexandra Reeve Givens
Arguments
Regulatory coordination between agencies and internationally is crucial for addressing complex AI issues
Well-staffed, empowered, independent regulatory bodies that can help represent the public interest
Summary
Both speakers emphasized the critical importance of coordination between regulatory agencies and international cooperation. The complexity of AI issues often spans multiple regulatory domains, requiring coordinated responses rather than siloed approaches.
Topics
Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs
User empowerment should not place sole burden on individuals
Speakers
– Alexandra Reeve Givens
– John Edwards
Arguments
Solutions must empower users without burdening them with sole responsibility for navigating AI risks
Agency-focused approaches that maintain individual control are superior to consent-based models
Summary
Both speakers agreed that while user empowerment is important, the burden of managing AI risks cannot be placed solely on individual users. They advocated for approaches that provide meaningful control without requiring users to navigate complex technical decisions alone.
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Similar viewpoints
Both speakers identified similar categories of AI harms and emphasized the need for transparency mechanisms to address discrimination and bias in high-risk scenarios like employment, particularly where existing laws may be difficult to enforce in AI systems.
Speakers
– Alexandra Reeve Givens
– Amanda Craig
Arguments
Transparency requirements are needed to make existing anti-discrimination laws enforceable in AI systems
High-risk scenarios include impacts on life opportunities, psychological/physical harm, and human rights violations
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers recognized that while there may be common categories of AI harms, the specific definition and prioritization of these harms must be tailored to individual country contexts and cultural considerations.
Speakers
– Denise Wong
– Amanda Craig
Arguments
Each country must determine what constitutes harm in their unique cultural and social context
High-risk scenarios include impacts on life opportunities, psychological/physical harm, and human rights violations
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both regulators favored flexible, adaptive regulatory approaches over rigid prescriptive legislation, emphasizing the value of guidance, codes of practice, and existing frameworks that can evolve with technology.
Speakers
– Denise Wong
– John Edwards
Arguments
Regulatory mechanisms like codes of practice and advisory guidelines provide more agility than prescriptive legislation
Existing regulatory frameworks like GDPR provide adequate foundation for AI governance without need for prescriptive AI-specific laws
Topics
Artificial intelligence | The enabling environment for digital development
Unexpected consensus
Limited need for new AI-specific legislation
Speakers
– John Edwards
– Denise Wong
– Amanda Craig
Arguments
Existing regulatory frameworks like GDPR provide adequate foundation for AI governance without need for prescriptive AI-specific laws
Singapore uses targeted regulation for clear harms while relying on sectoral regulations and governance frameworks for broader issues
Microsoft focuses on internal responsible AI governance programs while supporting various government regulatory models
Explanation
Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was unexpected consensus that comprehensive new AI legislation may not be necessary. Instead, they favored targeted interventions for specific harms while relying on existing regulatory frameworks and adaptive governance mechanisms.
Topics
Artificial intelligence | The enabling environment for digital development
Innovation through technological solutions rather than just regulatory ones
Speakers
– Denise Wong
– Amanda Craig
Arguments
Privacy-enhancing technologies offer technological solutions to data protection challenges
Provenance tools and software bill of materials can help track dynamic AI system components
Explanation
Both the regulator and industry representative unexpectedly converged on technological solutions as key innovations for AI trust and safety, suggesting that technology itself can solve some governance challenges that law cannot address.
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Data governance
Overall assessment
Summary
The panel demonstrated remarkable consensus on fundamental principles: trust as essential for AI adoption, the adequacy of existing regulatory frameworks with targeted additions, the importance of regulatory coordination, and the need to avoid placing sole responsibility on users. There was also unexpected agreement between regulators and industry on preferring adaptive governance over prescriptive legislation.
Consensus level
High level of consensus with significant implications for AI governance approaches. The agreement suggests a mature understanding across stakeholders that AI governance requires nuanced, collaborative approaches rather than heavy-handed regulation. This consensus could facilitate more effective policy development and implementation, as it indicates alignment between regulatory authorities and industry on core principles and approaches.
Differences
Different viewpoints
Need for additional AI-specific regulation beyond existing frameworks
Speakers
– John Edwards
– Alexandra Reeve Givens
Arguments
Existing regulatory frameworks like GDPR provide adequate foundation for AI governance without need for prescriptive AI-specific laws
Transparency requirements are needed to make existing anti-discrimination laws enforceable in AI systems
Summary
Edwards argues that existing frameworks like GDPR are sufficient for AI governance, while Givens contends that additional transparency and disclosure requirements are needed to make existing laws enforceable in AI contexts
Topics
Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society
Approach to regulatory burden and user responsibility
Speakers
– John Edwards
– Alexandra Reeve Givens
Arguments
Agency-focused approaches that maintain individual control are superior to consent-based models
Solutions must empower users without burdening them with sole responsibility for navigating AI risks
Summary
Edwards advocates for agency-focused approaches that give users more control tools, while Givens emphasizes that the burden cannot be placed solely on users and criticizes approaches that leave users to defend themselves
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Unexpected differences
Role of user empowerment versus protection in AI governance
Speakers
– John Edwards
– Alexandra Reeve Givens
Arguments
Agency-focused approaches that maintain individual control are superior to consent-based models
Solutions must empower users without burdening them with sole responsibility for navigating AI risks
Explanation
This disagreement is unexpected because both speakers advocate for user rights, but Edwards emphasizes giving users more control tools while Givens warns against placing too much burden on individual users. This represents a fundamental tension in digital rights approaches between empowerment and protection
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Overall assessment
Summary
The main areas of disagreement center on the adequacy of existing regulatory frameworks versus need for new AI-specific requirements, and the balance between user empowerment and protection from regulatory burden
Disagreement level
The level of disagreement is moderate but significant for policy implications. While all speakers agree on the fundamental importance of trust and safety, their different approaches to achieving these goals – through existing frameworks, new transparency requirements, or flexible governance mechanisms – could lead to very different regulatory outcomes and user experiences
Partial agreements
All speakers agree that trust and safety are essential for AI adoption and success, but they disagree on the specific regulatory mechanisms needed – some favor existing frameworks with guidance, others want new transparency requirements, and others prefer flexible governance approaches
Speakers
– Alexandra Reeve Givens
– John Edwards
– Amanda Craig
– Denise Wong
Arguments
Trust is essential for AI adoption and economic success – people won’t use technology they don’t trust
Existing regulatory frameworks like GDPR provide adequate foundation for AI governance without need for prescriptive AI-specific laws
Dynamic nature of AI technology requires evolving governance processes that can adapt alongside technological development
Trust and safety are outcomes that require appropriate governance mechanisms, with regulation used when clear harms exist
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The enabling environment for digital development
Both speakers agree on the need to identify and address AI harms, but Craig focuses on universal categories of high-risk scenarios while Wong emphasizes that harm definitions must be culturally and contextually specific to each country
Speakers
– Amanda Craig
– Denise Wong
Arguments
High-risk scenarios include impacts on life opportunities, psychological/physical harm, and human rights violations
Each country must determine what constitutes harm in their unique cultural and social context
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Takeaways
Key takeaways
Trust is fundamental to AI adoption and innovation – without trust, people and organizations won’t use AI technology regardless of its capabilities
There is strong consensus that trust and safety are essential, but disagreement on whether additional AI-specific regulation is needed beyond existing frameworks
Existing regulatory frameworks (like GDPR) can provide adequate governance for AI when properly applied and interpreted through guidance
AI’s dynamic and rapidly evolving nature requires agile governance approaches that can adapt alongside technological development
Transparency mechanisms are crucial because AI systems make it difficult to detect violations of existing laws (e.g., employment discrimination)
Risk management must consider the entire AI supply chain, not just end-use applications
Each jurisdiction must determine what constitutes harm in their unique cultural and social context
International coordination between regulators is essential for addressing complex, cross-border AI issues
Four key innovations show promise: provenance tools, agency-focused approaches, privacy-enhancing technologies, and well-resourced independent oversight bodies
Solutions must empower users without placing the entire burden of risk assessment on individuals
Resolutions and action items
None identified – this was a panel discussion focused on sharing perspectives rather than making decisions or assigning tasks
Unresolved issues
Whether additional AI-specific regulation is necessary or if existing frameworks are sufficient
How to effectively identify and address prospective harms before they fully manifest
How to balance innovation with appropriate safeguards in a rapidly evolving technological landscape
How to achieve effective coordination between multiple regulatory agencies within and across jurisdictions
How to manage AI risks across complex supply chains involving multiple stakeholders
The ongoing Grok investigation mentioned by John Edwards remains unresolved
How to create standardized approaches to AI governance while respecting different cultural and jurisdictional contexts
Suggested compromises
Use targeted regulation only for clear and present harms while relying on existing sectoral regulations and governance frameworks for broader issues
Employ more agile regulatory mechanisms like codes of practice and advisory guidelines rather than prescriptive primary legislation
Focus on high-risk, high-impact scenarios for regulatory intervention while allowing market-driven solutions for lower-risk applications
Combine principles-based approaches with specific transparency requirements to balance flexibility with accountability
Shift from consent-based models to agency-focused approaches that maintain individual control without overwhelming users with decisions
Thought provoking comments
The real story is one of adoption, and that has been the overwhelming theme of the summit this year. And for people to adopt this technology, they need to trust it… this is where responsible, thoughtful regulation can be fuel for innovation as well. Because the same way that we want to be able to drive cars without all of us being experts in how a motor works, product liability and good laws around the creation of these tools help make sure they outsource some of that work for us so that we don’t all have to be doing the individual labor of deciding whether we can trust.
Speaker
Alexandra Reeve Givens
Reason
This comment reframes the entire regulation vs. innovation debate by positioning regulation not as a barrier but as an enabler of trust and adoption. The car analogy is particularly powerful in illustrating how regulation can reduce individual burden while enabling mass adoption.
Impact
This fundamentally shifted the panel’s framing from viewing regulation as potentially inhibiting innovation to seeing it as essential infrastructure for innovation. It set the tone for the rest of the discussion where panelists largely agreed on the value of appropriate governance structures.
Even where existing laws apply, there is a problem where AI systems make it hard to know whether or not those laws are being broken… without some type of disclosure regime that requires transparency in these high risk scenarios, high impact scenarios, to have transparency and disclosure about the system that is being used, impact assessments to make sure that discrimination isn’t happening, you actually don’t get the remedy that people really need under existing law.
Speaker
Alexandra Reeve Givens
Reason
This insight identifies a critical gap in current regulatory approaches – that AI creates an ‘enforcement invisibility’ problem where existing laws become practically unenforceable due to lack of transparency. This moves beyond theoretical discussions to practical implementation challenges.
Impact
This comment prompted deeper discussion about the need for transparency mechanisms and influenced the conversation toward more nuanced approaches to AI governance that complement existing laws rather than replacing them entirely.
I think the starting point must be that every country has a unique context… what’s harmful in one context that’s harmful in India may not be the same as what’s harmful in the US. And the cultural context matters… if the harms are still being coalesced and formed, it’s quite difficult to be very prescriptive about how you deal with those harms, because that, by definition, is sort of changing and still coalescing.
Speaker
Denise Wong
Reason
This comment introduces crucial nuance about the impossibility of one-size-fits-all AI regulation and explains why we haven’t seen a ‘Brussels effect’ with AI regulation like we did with privacy. It acknowledges both cultural specificity and the evolving nature of AI harms.
Impact
This shifted the discussion from questioning why there isn’t universal AI regulation to understanding why flexible, adaptive governance mechanisms might be more appropriate. It led to exploration of alternative regulatory tools like codes of practice and advisory guidelines.
Agency… so much of our world is dominated by consent, which is, I won’t say broken, but it’s under strain as a useful concept. Agency, I think, has capacity to recognize that the objective is to restore and maintain an individual’s agency as it uses any product… consent is always conceived of as a front-end authorizing concept. But agency says, okay, I’ve done that now. Where’s my delete everything button?
Speaker
John Edwards
Reason
This comment challenges a fundamental assumption in privacy law and proposes a paradigm shift from consent-based to agency-based frameworks. It recognizes the limitations of current privacy mechanisms and offers a more holistic approach to user empowerment.
Impact
This introduced a new conceptual framework that moved the discussion beyond traditional privacy concepts. Trevor’s immediate response about ‘burden-shifting wrenches’ showed how this comment sparked deeper thinking about where responsibility should lie in AI systems.
We cannot put the burden solely on users to navigate this moment… We didn’t misdiagnose the harm. We misdiagnosed the remedy, which was the burden on individual users when we don’t actually have market choice, nor the time or mental energy to just read a whole bunch of disclosures and act alone.
Speaker
Alexandra Reeve Givens
Reason
This comment provides a crucial critique of current privacy approaches (like cookie banners) and explains why individual empowerment alone is insufficient. It distinguishes between empowering users and burdening them, offering a more sophisticated understanding of user agency.
Impact
This comment tied together multiple threads from the discussion – the cookie banner example, the agency concept, and the broader question of where responsibility should lie. It reinforced the panel’s emerging consensus that systemic solutions are needed rather than individual-focused ones.
Overall assessment
These key comments fundamentally reframed the discussion from a traditional ‘regulation vs. innovation’ debate to a more nuanced exploration of how governance can enable trust and adoption. Alexandra Reeve Givens’ early intervention about regulation as ‘fuel for innovation’ set a collaborative tone that allowed for deeper exploration of complex issues. The discussion evolved from questioning whether regulation is needed to examining what forms of governance are most appropriate for AI’s unique challenges. Denise Wong’s insights about cultural context and evolving harms helped explain the complexity of AI governance, while John Edwards’ concept of ‘agency’ offered a new paradigm for thinking about user empowerment. The final comment about not burdening users tied these themes together, creating a sophisticated understanding that effective AI governance requires systemic solutions that empower without overwhelming individuals. Overall, these comments elevated the discussion from surface-level policy debates to deeper questions about the nature of trust, responsibility, and governance in the AI era.
Follow-up questions
How do we identify prescriptive harms in AI systems that may emerge before technology is fully adopted?
Speaker
Trevor Hughes
Explanation
This addresses the challenge of prospective regulation and identifying potential AI harms before they manifest widely in society, which is crucial for effective governance.
How can we prove discrimination cases when AI-powered software is making hiring decisions?
Speaker
Alexandra Reeve Givens
Explanation
This highlights the difficulty of demonstrating bias in AI systems compared to human-driven processes, requiring new approaches to transparency and accountability.
What disclosure regimes and transparency requirements are needed for high-risk AI scenarios?
Speaker
Alexandra Reeve Givens
Explanation
This is essential for ensuring existing anti-discrimination laws can be effectively enforced when AI systems are involved in decision-making.
How do we manage AI risk across the entire supply chain cohesively?
Speaker
Amanda Craig
Explanation
This addresses the complex challenge of coordinating risk management when AI systems involve multiple components, platforms, and stakeholders.
Why hasn’t the EU AI Act created a ‘Brussels effect’ like the GDPR did globally?
Speaker
Trevor Hughes
Explanation
Understanding why AI regulation hasn’t spread globally like privacy regulation did could inform future regulatory strategies and international coordination.
How can regulatory coordination be improved when AI issues span multiple regulatory domains?
Speaker
John Edwards
Explanation
The Grok investigation example shows the need for better coordination between different regulators (ICO, Ofcom) when AI issues cross jurisdictional boundaries.
How can we develop more agile governance mechanisms that can adapt to rapidly evolving AI technology?
Speaker
Denise Wong
Explanation
This addresses the challenge of creating regulatory frameworks that can keep pace with the dynamic nature of AI development.
How do we implement software bill of materials concepts for agentic AI systems?
Speaker
Amanda Craig
Explanation
This explores how provenance tools from cybersecurity could be adapted to track the dynamic components of AI agent systems.
How can we shift from consent-based models to agency-based models in AI governance?
Speaker
John Edwards
Explanation
This examines alternatives to current consent mechanisms that may be more effective for maintaining individual control over AI interactions.
How do we avoid putting the burden solely on users while still empowering them in AI systems?
Speaker
Alexandra Reeve Givens
Explanation
This addresses the need to balance user empowerment with systemic protections, learning from the limitations of cookie consent models.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.