Day 0 Event #59 How to Develop Trustworthy Products and Policies

23 Jun 2025 09:00h - 10:00h


Session at a glance

Summary

This discussion was a workshop session at IGF 2025 titled “How to Develop Trustworthy Products and Policies,” nicknamed “Project Manager for a Day” within Google. The session was moderated by Jim Prendergast and featured Google speakers Will Carter (an AI policy expert) and Nadja Blagojevic (a knowledge and information trust manager), who aimed to give participants insight into the role of product managers at Google and the challenges they face when launching products.


Nadja began by explaining that product managers identify problems to solve, develop vision and strategy, create roadmaps, and coordinate with teams including user experience (UX) designers and engineers. She emphasized the importance of iterative design and validation at different fidelity levels, noting that small changes in language and design can significantly impact product adoption. The speakers presented two case studies: AI Overviews, which uses generative AI to provide comprehensive responses to complex search queries with high-quality sources, and About This Image, a tool that helps users understand the context and credibility of images online, including detection of AI-generated content through SynthID watermarking.


Following the presentations, participants broke into groups to brainstorm product ideas focusing on information quality, news credibility, and privacy. The in-person groups developed concepts for flagging AI-generated or false news content in search results, while the online group, led by Hassan Al-Mahmid from Kuwait’s telecommunications authority, proposed an AI-powered system to automate domain name registration verification using document recognition and validation. All groups emphasized the need for collaboration between engineering, UX, legal teams, and subject matter experts, while considering cultural competency and building user trust. The session highlighted the complex considerations involved in product development, particularly around information quality and trustworthiness in the digital age.


Key points

## Major Discussion Points:


– **Product Management at Google**: Overview of how product managers identify problems, develop vision and strategy, create roadmaps, and coordinate with UX designers and engineers to deliver features that solve user needs


– **AI-powered Features and Trust**: Case studies of Google’s AI Overviews and “About This Image” feature, demonstrating how the company approaches building trustworthy AI products with quality controls, source verification, and transparency tools


– **Information Quality and News Credibility**: Multiple breakout groups focused on developing features to help users identify reliable news sources, detect AI-generated content, and provide context about information credibility through visual indicators and fact-checking partnerships


– **Domain Registration Automation**: Presentation of a real-world case study from Kuwait’s domain authority (.kw) exploring how AI tools could streamline government processes for validating commercial entity documentation and domain name registration


– **Cross-sector Collaboration Needs**: Discussion of how addressing online trust and information quality requires partnerships between private companies, government agencies, fact-checking organizations, and civil society groups


## Overall Purpose:


The discussion was designed as an interactive workshop called “Project Manager for a Day” to give participants hands-on experience with product management challenges at Google, specifically focusing on how to develop trustworthy products and policies while balancing various stakeholder needs and technical constraints.


## Overall Tone:


The tone was educational and collaborative throughout, beginning formally with structured presentations but becoming increasingly interactive and engaged during the breakout sessions. Participants showed genuine enthusiasm for tackling real-world problems, and the facilitators maintained an encouraging, supportive atmosphere while acknowledging the complexity of the challenges being discussed. The session ended on a positive note with appreciation for the collaborative dialogue between different sectors.


Speakers

– **Will Carter** – AI policy expert with extensive experience in shaping government policies and regulations on AI; currently works on leading AI policy in the knowledge and information team at Google, where he leads engagement on AI policy and regulatory standards with senior policy makers around the world; previously worked at the Center for Strategic and International Studies focusing on international technology policy issues


– **Jim Prendergast** – Works with the Galway Strategy Group; serves as moderator for the session


– **Nadja Blagojevic** – Knowledge and information trust manager at Google with over 15 years of experience in the tech industry; expert in online safety and digital literacy; based in London; has held various leadership positions at Google including leading work across Europe on family safety and content responsibility


– **Hassan Al-Mahmid** – From Kuwait, works at the Communication and Information Technology Regulatory Authority (CITRA); in charge of the .kw domain space; responsible for domain name registrations and policy making for Kuwait’s country code top-level domain


– **Audience** – Multiple audience members participated in discussions and breakout sessions


**Additional speakers:**


– **Nidhi** – Joining from India; academic doing PhD work that lies between tech and public policy in various areas of ethics


– **Abdar** – From India; works as an internet governance intern at National Internet Exchange of India, working between tech and policy


– **Oliver** – Appears to be event staff managing time and logistics (mentioned as giving time signals from the back of the room)


Full session report

# Workshop Report: “How to Develop Trustworthy Products and Policies”


## Executive Summary


This report summarizes the “Project Manager for a Day” workshop session held during IGF 2025, titled “How to Develop Trustworthy Products and Policies.” The one-hour interactive session (09:00–10:00 on day zero) was designed as an educational experience led by Google representatives to give participants hands-on insight into product management challenges, particularly focusing on developing trustworthy products and policies in the digital age.


The workshop engaged both in-person and online participants in collaborative problem-solving exercises, resulting in three concrete product proposals addressing news credibility, government process automation, and information quality. The session successfully demonstrated the complexities of product development while providing practical experience in collaborative problem-solving.


## Session Structure and Participants


### Facilitators and Speakers


The session was moderated by **Jim Prendergast** from the Galway Strategy Group. The primary speakers were **Nadja Blagojevic**, Google’s Knowledge and Information Trust Manager based in London (joining remotely), and **Will Carter**, an AI policy expert from Google.


Key participants included **Hassan Al-Mahmid** from Kuwait’s Communication and Information Technology Regulatory Authority (CITRA), **Nidhi**, a PhD researcher from India working on tech and public policy ethics, and **Abdar**, an internet governance intern at the National Internet Exchange of India.


### Workshop Format


The session followed a structured approach:


1. Introductions and product management fundamentals


2. Case studies of Google’s AI-powered features


3. Collaborative breakout sessions (15-20 minutes)


4. Final presentations (2-3 minutes each)


Technical challenges with remote participation were noted, with some audio difficulties for online participants.


## Product Management Fundamentals


Nadja Blagojevic explained that product managers at Google are responsible for identifying problems to solve, developing vision and strategy, creating roadmaps, and coordinating with cross-functional teams. She emphasized the collaborative nature of product development, noting that product managers work closely with UX designers and engineers throughout the development process.


The iterative design process was highlighted as crucial, with products validated at different fidelity levels throughout development. Blagojevic noted that seemingly minor changes in language and design can significantly impact product adoption.


She distinguished between obvious improvements and less obvious innovations that solve problems users don’t realize they have, using Google Street View as an example of addressing a latent need for location visualization.


## Case Studies: Google’s AI Features


### AI Overviews


Nadja presented AI Overviews as an example of how Google approaches trustworthy AI implementation. The feature uses generative AI to provide comprehensive responses to complex search queries, and overviews appear only when they add value beyond regular search results. It is designed to show only information supported by high-quality results and includes safeguards against hallucination.
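The quality gate described above can be sketched in a few lines. The following is a toy illustration only, not Google’s implementation; the `SearchResult` type, the `quality_score` signal, and the keyword-overlap notion of “support” are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    quality_score: float  # hypothetical 0-1 ranking signal
    text: str

def supported(claim: str, source_text: str) -> bool:
    """Toy support check: enough of the claim's keywords appear in the source."""
    words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
    text = {w.lower().strip(".,") for w in source_text.split()}
    return len(words & text) >= max(1, len(words) // 2)

def should_show_overview(claims, results, quality_floor=0.8):
    """Show an overview only if every claim is backed by at least one
    result above the quality floor; otherwise fall back to plain search."""
    good = [r for r in results if r.quality_score >= quality_floor]
    return bool(good) and all(any(supported(c, r.text) for r in good) for c in claims)
```

The point of the sketch is the fallback behavior: when no high-quality corroboration exists, nothing is shown rather than risking an unsupported answer.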


### About This Image


Will Carter presented “About This Image,” a tool designed to help users understand the context and credibility of images online, including detection of AI-generated content. The tool provides contextual information about image sources and authenticity.


Central to this tool is SynthID, Google’s digital watermarking technology that embeds detectable markers in AI-generated images. These watermarks remain identifiable even after alterations such as cropping or resizing. Carter noted that all images created with Google’s consumer AI tools are marked with SynthID.
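To build intuition for why a pixel-level watermark can survive cropping, here is a heavily simplified sketch: a periodic pseudorandom pattern is added to the pixels and later recovered by correlation. This is not SynthID’s actual method (which is a learned and far more robust technique); in particular, this toy assumes a grayscale image and a known crop offset:

```python
import numpy as np

TILE = 8  # the toy pattern repeats every TILE pixels

def make_pattern(key: int) -> np.ndarray:
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=(TILE, TILE))

def embed(image: np.ndarray, key: int, strength: float = 6.0) -> np.ndarray:
    """Add a tiled +/-1 pattern (scaled by `strength`) to a grayscale image."""
    h, w = image.shape
    tiled = np.tile(make_pattern(key), (h // TILE + 1, w // TILE + 1))[:h, :w]
    return image + strength * tiled

def detect(image: np.ndarray, key: int, y0: int = 0, x0: int = 0) -> float:
    """Correlation score; a clearly positive value means the mark is present.
    (y0, x0) is the crop offset into the original image, assumed known here."""
    h, w = image.shape
    reps = ((y0 + h) // TILE + 1, (x0 + w) // TILE + 1)
    ref = np.tile(make_pattern(key), reps)[y0:y0 + h, x0:x0 + w]
    centered = image - image.mean()  # remove brightness bias before correlating
    return float(np.mean(centered * ref))
```

Because the pattern repeats, a crop shifts its phase but not its structure, so the detector still correlates strongly with it; a real watermark must also survive resizing, recoloring, and unknown offsets, which this sketch does not attempt.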


## Breakout Session Outcomes


### In-Person Groups: News Credibility Solutions


The physical room was divided into two groups that focused on news credibility and information quality challenges. Their proposals included:


1. **Visual credibility indicators**: Adding flags to Google search results to indicate whether news articles are false or AI-generated


2. **News classification system**: Rating content on a spectrum from neutral to sensationalist to help users make informed decisions


The groups recognized that implementing such systems would require collaboration with cultural competency experts and appropriate legal frameworks to understand news sources across different contexts.
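One way to picture the proposed indicators is as a small label attached to each search result, combining hypothetical signals: an AI-generation check, a partner fact-check verdict, and a tone rating on the neutral-to-sensationalist spectrum. Every field and flag name below is invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Tone(Enum):
    NEUTRAL = "neutral"
    OPINION = "opinion"
    SENSATIONALIST = "sensationalist"

@dataclass
class CredibilityLabel:
    ai_generated: bool                 # e.g. from a watermark or metadata check
    fact_check_verdict: Optional[str]  # verdict from a partner fact-checker, if any
    tone: Tone                         # position on the neutral-to-sensationalist spectrum

def render_flags(label: CredibilityLabel) -> list:
    """Turn one article's label into the visual flags shown next to a result."""
    flags = []
    if label.ai_generated:
        flags.append("AI-generated")
    if label.fact_check_verdict == "false":
        flags.append("Disputed by fact-checkers")
    if label.tone is Tone.SENSATIONALIST:
        flags.append("Sensationalist tone")
    return flags
```

Keeping the signals separate from their rendering mirrors the groups’ transparency framing: the system surfaces context rather than making a single allow/block decision.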


### Online Group: Government Process Automation


Hassan Al-Mahmid led the online group in developing a proposal for improving Kuwait’s .kw domain registration process through AI automation. Currently, the process requires manual document verification and takes 48 hours to complete. The proposed solution would use AI image recognition to validate trade licenses and match domain names to business names, potentially reducing processing time to minutes.


The system would also suggest alternative domain names when conflicts arise and could integrate with other government entities to streamline verification processes. Al-Mahmid acknowledged that implementation would require consultation with legal departments regarding confidential data handling and determining acceptable documentation standards.
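The checks described above (a valid trade license, an eligible local representative, a name match, and alternative suggestions on conflict) could be sketched as follows, assuming a document-recognition stage has already extracted structured fields from the uploaded license. The field names, eligibility rules, and suggestion scheme are illustrative assumptions, not CITRA’s actual process:

```python
from dataclasses import dataclass
from datetime import date
import re

@dataclass
class TradeLicense:  # fields as extracted by a (hypothetical) document-recognition step
    business_name: str
    expires: date
    representative_status: str  # "citizen" or "work_permit"

def normalize(name: str) -> str:
    """Lowercase and strip punctuation/spaces for a tolerant name match."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def validate_request(domain: str, lic: TradeLicense, taken: set, today: date):
    """Return (approved, reasons, suggestions) for a .com.kw registration request."""
    label = domain.split(".")[0]
    reasons, suggestions = [], []
    if lic.expires < today:
        reasons.append("trade license expired")
    if lic.representative_status not in ("citizen", "work_permit"):
        reasons.append("no eligible representative in Kuwait")
    if normalize(label) not in normalize(lic.business_name):
        reasons.append("domain label does not match business name")
    if domain in taken:
        reasons.append("domain already registered")
        suggestions = [alt for s in ("-kw", "-co")
                       if (alt := f"{label}{s}.com.kw") not in taken]
    return (not reasons, reasons, suggestions)
```

Collecting all failure reasons, rather than stopping at the first, lets the portal give applicants a complete correction list in one pass instead of another 48-hour round trip.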


The project timeline was estimated at six months, though government integration requirements might extend this timeframe.


## Key Themes and Approaches


### User Empowerment Through Transparency


Participants agreed that providing context to users represents an effective approach to information quality, rather than making unilateral content decisions. This philosophy emphasizes user empowerment through transparency, allowing individuals to make informed decisions based on comprehensive information about sources and credibility indicators.


### AI as Enhancement Tool


There was consensus on the role of AI as a tool for verification and enhancement rather than replacement of human judgment. AI was positioned as augmenting human decision-making capabilities rather than supplanting human oversight entirely.


### Multi-Stakeholder Collaboration


All speakers recognized that addressing information quality challenges requires collaboration between the public sector, private sector, academia, and civil society.


## Practical Outcomes


### Concrete Proposals


The session generated three specific product proposals:


1. **News Article Credibility System**: Visual indicators and classification systems for search results to inform users about news article reliability


2. **AI-Powered Domain Registration**: Automated system for validating commercial entity documentation in government processes


3. **Contextual Information Tools**: Systems that provide users with background information to make informed decisions about content credibility


### Commitments


Hassan Al-Mahmid agreed to present Kuwait’s domain registration AI automation project as a detailed case study. Will Carter committed to remaining available throughout IGF week for follow-up questions and discussions.


## Challenges Identified


The discussion highlighted several ongoing challenges:


– **Cultural competency**: Developing information quality systems that work across different political and cultural environments


– **Implementation complexity**: Balancing innovation with regulatory compliance, particularly in government contexts


– **Success measurement**: Establishing metrics for evaluating information quality initiatives


– **Automation oversight**: Determining appropriate balance between automated systems and human oversight


## Conclusion


The workshop successfully demonstrated the complexity of developing trustworthy products and policies while providing participants with practical experience in collaborative problem-solving. The session revealed common ground around user empowerment through transparency, multi-stakeholder collaboration, and AI as a verification enhancement tool.


The three concrete proposals developed during the workshop provide starting points for addressing information quality challenges, while the collaborative approach modeled during the session offers a framework for future multi-stakeholder engagement in digital governance challenges.


Session transcript

Jim Prendergast: Patience, as we kick off the IGF 2025, it’s always a challenge with day zero, 9 a.m., for everybody to find the room, find their way around the venue, get through security, and as you see, get rid of some of the tech gremlins that we have sometimes. My name’s Jim Prendergast. I’m with the Galway Strategy Group. I’m gonna sort of moderate this session for you. Officially, it’s titled How to Develop Trustworthy Products and Policies. But the folks at Google sort of have an internal nickname for it. It’s called Project Manager for a Day. So what we essentially wanna do is give you an overview of what it’s like to be a product manager at Google. How do you balance all the different challenges when it comes to launching a product into the marketplace? All the different factors that these folks have to take into consideration before you actually see a product and some of the different feedback cycles that it goes through and some of the challenges that, frankly, you face on a day-to-day basis. What I’m gonna do is I’m gonna introduce our two speakers. We have one speaker here in person and then one speaker online. And then they’re gonna give a quick overview, some case studies to sort of show you what they deal with on a regular basis. And they’ll discuss some of the different considerations that do go into the product development. And then next what we’ll do is we’re gonna do essentially two breakout groups. One will be the in-person participation, folks here in the room. Will’s gonna work with you through some tabletop exercises for about 20, 25 minutes. And then Nadja’s gonna, fingers crossed, work with the online participants to accomplish the same. From a technical standpoint, I think the easiest way to not hear the people talking to each other online is for all of us to take our headsets off. That seems to be the shortest way to solve that tech issue with the online and the offline participants
during remote participation, which of course is an important aspect of the IGF. So let me get going here and do some introductions. First, we have Will Carter. Will’s an AI policy expert with extensive experience shaping government policies and regulations on AI, working with product teams to develop and deploy AI responsibly in real world applications. Currently works on leading AI policy in the knowledge and information team at Google, where he leads engagement on AI policy and regulatory standards with senior policy makers around the world. He’s advised senior leadership and C-suite executives on AI policy strategy and implementation, and developed and implemented AI policies and governance across the company. Prior to joining Google, he was with the Center for Strategic and International Studies, where he focused his research on international technology policy issues, including emerging technologies and artificial intelligence. So if you’ve got a question about AI, this is your guy. Joining us remotely is Nadja Blagojevic. She’s based in London. She is a knowledge and information trust manager at Google with over 15 years of experience in the tech industry. She’s an expert in online safety and digital literacy, and she’s held various leadership positions at Google, including leading work across Europe on family safety and content responsibility. So with that, what I’m gonna do is throw it over to Nadia to kick us off with the case studies to help set the stage for us. Nadia?


Nadja Blagojevic: Great, thanks very much, Jim, and thank you very much, everyone, for being with us here this morning. So without further ado, we will jump right in. I’m very excited to be talking with you all about what product managers do at a company like Google, and as with most jobs, there’s no one right way to do it. If you ask a hundred people, you’ll probably get a hundred different answers, but there are some common elements that we will talk about today. So you can think of a product manager as the person who’s responsible for figuring out at its core what the problem is that needs to be solved. Sometimes it’s very easy to identify what a problem is. For example, once word processors were built, it was fairly obvious that a spell checker would be an improvement. But some things can be less obvious. For example, with Google Street View, when we first launched, it wasn’t clear the degree to which seeing a location before a drive or a trip or a contemplated move could be valuable. This feature was a less obvious addition to an online map, and it solved a problem that most people didn’t even realize that they had. So the PM focuses on identifying that problem and then building out a vision, a strategy, and a roadmap. The vision should really be informed by the problem that you’re trying to solve. It should be a stable, long-term, high-level overview of what that problem is and really how you’re going to tackle it. The strategy helps you navigate and leverage the technology and the ecosystem factors that will be playing out over the lifetime of your product. Your strategy should be relatively stable, and your roadmap is really thinking about how you sequence what you’re going to do to build your specific feature and move towards your vision. Your roadmap usually changes pretty frequently. In consumer tech, if you build a roadmap and it’s accurate for a year, you’re very lucky.
PMs partner really closely to coordinate teams and deliver the right features, right data, users, sales, marketing at all the right times in the product development lifecycle. And we really try to make sure that we are also the ultimate champions of our products, both inside the company and externally. And the goal is really to… to make sure that we’re building something of value so that our broader teams and stakeholders can evangelize what we build as well. As product managers, we work really closely with our colleagues in user experience, which is sometimes abbreviated as UX, to iteratively design and validate what we’re building at progressively higher levels of fidelity. It’s very expensive to change something that’s fully developed, but it’s very inexpensive to put a wireframe or a rough sketch of a product in front of someone that we want to use the product and ask questions like, would you use this? What will you use it for? What doesn’t make sense? What’s missing? It can be really amazing, but these small changes in language and wording and also insights can lead to huge impacts in adoption. And lastly, but certainly not least, our engineer counterparts. Engineers build and maintain products. They make them work reliably and quickly for users. And both UX and Eng are included when we do our roadmapping and strategy setting. We build better plans of roadmaps when we have all three functions working together from the get-go to sort of build out that roadmap and set the strategy and vision. So as Jim mentioned, we’ll go through a couple of quick case studies to give you a sense of how we approach product development, walking through a couple of features that we’ve developed here at Google. So talking now about AI overviews. Not yet. Could I just interrupt real quick? To the guys in the back, can we display the slides in the Zoom and on the screen? Is that possible? There we go. Great, thanks very much. And if we could just advance to the next slide, please. 
We’ll just go right into our AI overviews case study. Great. So building on our years of innovation and leadership and search, AI overviews are part of Google’s approach to provide helpful responses to queries from people around the world. They use generative AI to provide key information about a topic or a question. And they were really designed to show up on queries where they can add additional benefit beyond what people might already be getting from search, where we have high confidence in the overall quality of the responses. So for example, if you look on the query to the right of the screen, you can see that AI overviews let you ask more complex questions. This query is asking for help on how to stand out on a first time apartment application. And you can see you get a really nuanced answer. You get corroborating links here and additional resources to dive in and learn more. And you get that kind of information and extra help in a very digestible way. You can see here the user experience elements and the design with the bullet points, for example, or the placement of the links in this response. And on the next slide, talking a little bit about that sort of bar of high quality. For AI overviews, we’ve designed it to only show information that’s supported by high quality results from across the web, meaning that generally AI overviews don’t hallucinate in the ways that other LLM experiences might. We think this is especially, this is important kind of across the board, but also especially important for queries that might be particularly sensitive for a given reason. And for these kinds of queries, whether they’re about something maybe health-related or finance-related or seeking certain types of advice, we have an even higher quality bar for showing information from reliable sources. 
We also have built into the product that for these queries, AI overviews will inform people when it’s important to seek out expert advice or to verify the information that’s being presented. And then finally here, we also have a set of links and a display panel here on the right-hand side with more additional resources for relevant web pages right within the text of the AI overviews. And we’ve seen really positive results showing these links to supporting pages directly within AI overviews is driving higher traffic to publisher sites. And because of AI overviews, we’re seeing that people are asking longer questions, diving more deeply into complex subjects, and uncovering new perspectives, which means more opportunities for people to discover content from publishers, from businesses, and from creators. I’ll hand over now to Will to talk about About This Image.


Will Carter: Thanks, Nadia, and thank you all for coming today. I’m going to talk a little bit about another feature that we launched in 2023 called About This Image. Google Search has built-in tools that really are designed to help users find high-quality information, but also to make sense of the information that they’re interacting with online. And About This Image and SynthID are designed to help users understand the context and the credibility of images they’re interacting with online, including understanding if those images have been generated by Google’s AI tools. So with Google Image Search results, you can click on the three dots above the image, and that will show you the image’s history, which includes other sites that accurately describe the original context and origin of the image. And it allows you to really understand the evidence and perspectives across a variety of sources related to the image. And finally it allows you to see the image’s metadata. So increasingly, publishers, content creators, and others are adding metadata, tags that provide additional information and context about an image that can provide a variety of information including whether or not it’s been generated, enhanced, or manipulated by AI. Which is increasingly important to understand as powerful image generation and image alteration engines are widely available. So one of the key ways that we do this is using a tool called SynthID. Which is a tool for watermarking and identifying AI generated content. Basically what this does is it embeds a digital watermark directly into the pixels of an image generated by Google’s AI image generation tools.
That’s important because even when the image has been altered, for example by cropping it or screenshotting it, or resizing or recoloring or flipping the image, those watermarks can still be detected, making it more robust to adversarial behavior. And all images made with Google’s consumer AI tools are marked with SynthID. And that means that if you encounter an image through Google search, that is generated by a Google AI tool, you will be able to see that in About This Image. So this last GIF here shows how we’ve recently integrated About This Image into one of our other products, Circle to Search. So Circle to Search allows you to select something on the screen and access additional information about it. In this case, you can circle an image and get About This Image information to get context about images that you interact with online, which can be a really powerful way, again, to really understand that context and make sure that the image that you’re interacting with is being used in the way that was intended with appropriate context and accurately. So I’ll pass back to Jim for our activity.


Jim Prendergast: Yeah, sure. So thanks, Will. So, you know, sort of just give you a high level of all the different things that product managers have to consider working with their teams, the privacy rights, some of the metadata you talked about with the image. So what we’re gonna do now, I realize it’s early, hopefully you’ve all had your coffee and are ready to be a little interactive, is we’re gonna break out into two, maybe three breakout groups. I’d figure two in the physical room and one in the online room, just based upon how many folks we have. And what we’re gonna do is we’re gonna ask you to think a little bit for about 15 minutes or so, come up with some ideas. There’ll be some instructions on the next slide that Will’s gonna walk you through. And then what we’ll do is we’ll come back and share some ideas and thoughts for the final 15 minutes or so. So Will, why don’t you show them what they’re working with?


Will Carter: All right. So basically we’re going to have you break out into groups and nominate one PM. That’s going to be the person who’s kind of leading and presenting on behalf of your group. You pick an area of focus and we have a couple of options for you, but you’re welcome to pick something else. if you prefer, but info quality, news and privacy are some of the areas that we are actively working on every day. So the idea is, come up with an idea. Come up with a feature that you think we could add to Google search to address one of these issues. Or make up your own product. Then you’ll pitch your ideas to your VPs, that’s us, and argue for resources based on what you need in order to make this real. What you think the return on investment that you could generate from this product. And that doesn’t necessarily just mean how do you make money from it, but also how do you add value for the user, address a specific problem that our users are encountering in the way that they engage with our products. And don’t forget about the various things that you’re going to need to make this a reality. So that’s that UXR and support that Nadia was talking about earlier. But also, what is your go to market strategy? What are your success metrics? What is a realistic timeline or roadmap? You’ll have about 15 or 20 minutes to do this activity and we’ll be, Nadia and I will be engaging with your groups to help you work through this exercise. So good luck and maybe, what do you think? We can, yep. Okay, maybe we can divide right about here. So in the red, right there. You to this side, everyone else to that side. We can have our two groups in the room.


Jim Prendergast: All right, and Will’s gonna come down and prime the creative engine for everybody. And then Nadia’s got the online folks as well. So we’ll come back in 15 minutes and share experiences. And I know there was a question that we had in the chat room and we’ll answer that when we come back from the breakout as well. Thanks.


Nadja Blagojevic: Great, and so for everyone online, could you please try coming off mute and saying good morning?


Hassan Al-Mahmid: Thank you. Hello and good morning, everyone. Hello. Basically, we are in Norway right now, but we arrived early in the morning, we couldn’t attend the session.


Nadja Blagojevic: Ah, I see.


Hassan Al-Mahmid: And then we attend afternoon sessions in person. We’re from Kuwait, we’re from the Communication and Information Technology Regulatory Authority, CITRA. My name is Hassan Al-Mahmid, and I’m in charge of the ccTLD .kw.


Nadja Blagojevic: Wonderful, it’s wonderful to have you with us. Are others able to come off of mute?


Audience: Hi Nadia, can you hear me? Yes, I can hear you. Hi, this is Nidhi, and I’m joining in from India, so hello. I am an academic, and I’m doing my PhD, which lies somewhere between tech and public policy and various areas of ethics, so I’m very happy to be here. Good to see you.


Nadja Blagojevic: Wonderful, great to see you as well. All right, wonderful, it’s good to know that everyone’s able to come off mute. At this point, I’d like to ask everyone to please unmute yourself, because for the next few minutes we’ll be having a group discussion. Which I will not be leading, that will fall to you all. So as Will and Jim mentioned, for this next session in the breakout, we will be, rather you will be, brainstorming an idea as product managers. And it can be related to Google search, it can be related to another Google product, or just any technology idea that you think solves a problem. Can everyone please come off mute?


Audience: Yeah, please just confirm if you can hear me. Yes. Yeah, I’m Abdar. I’m from India. So I’m working as an internet governance intern at National Internet Exchange of India. So I work somewhere in between tech and policy.


Nadja Blagojevic: Wonderful. Yeah. And I’ll pose the question to the group. When you think about a product that you would like to build or a problem that you would like to solve, what springs to mind? And this is open to the entire group, please.


Hassan Al-Mahmid: Well, I do really have a lot of real case scenarios and some projects underway right now. I can share some information with you, and maybe, if you guys are interested, you can help us develop the appropriate policies, or we can get insights from you for the upcoming products in the .kw domain space. If you're interested, I can pitch the idea for you guys and move with it. Otherwise, I'm really open to working with the other team members on other ideas. Either way, it will benefit us all in thinking about how to build policies and what aspects we need to consider when making strong and cohesive policies.


Nadja Blagojevic: Great. Other thoughts from the group?


Audience: I think if I heard Hassan correctly, he has an idea and probably would like to share that with us, and we can sort of stitch that together. Is that correct?


Hassan Al-Mahmid: Yes, that’s correct. I do have some ideas from our day job that I can share with you. For example, since we are in charge of the .kw domain space, we are thinking of implementing AI tools to make the registration process for domain names in Kuwait faster and easier. With the benefit of AI, we could process a domain request almost immediately, without waiting for someone to look over the documents and make all the choices. Just to give you a brief overview of how the domain space works in Kuwait: we have two zones to register in. For example, if you would like to register name.com.kw, the extension .com.kw represents a commercial entity in Kuwait. So there is a set of requirements for that entity to register, such as having a valid trade license in Kuwait and having a representative in Kuwait, someone who is either a Kuwaiti citizen or someone with a work permit in Kuwait. These kinds of documents are currently uploaded manually through the portal, and then they have to be checked by a person to validate all the information and make sure that the domain registration request is valid. But we are thinking of implementing AI tools and some sort of integration between the government entities to make the process seamless, so we can have the domain up and running within minutes instead of, for example, the 48 hours it takes right now.


Nadja Blagojevic: Great. And when you think about building out this AI tool, what kind of resources do you think you would need to be able to develop it? And this is sort of a question for the group.


Hassan Al-Mahmid: I can give them a hint, basically. The process is going to be somewhat similar: the client who would like to register a domain name will, at the moment, need to upload their trade license. Once this is uploaded, we can use an image recognition tool to validate the document and make sure it's not a fraudulent document.
One of the regulations and policies we have in Kuwait is that when a domain name is registered for a commercial entity, it has to match the name of the entity on the commercial trade license. So that image recognition or text recognition tool can match the requested domain name with the name on the trade license. And if it finds a conflict, it shouldn't reject the request; it should pop up some suggestions for the client to pick names from. That's one example.


Nadja Blagojevic: And what kinds of internal partnerships would you need? Which departments, whether that's UXR, engineering, or legal, do you think you would need to work with to have the tool do what you've just described?


Hassan Al-Mahmid: Well, at our department it's basically a one-man show. We set the policies, and we have control of the technical aspects of the whole registration process. But we do seek some help from the legal department, that's for sure, because we have to set some sort of guidelines for uploading these documents. We need to check with the legal department what kinds of documents we should accept and how to handle this sensitive information: where it is going to be stored, whether it's confidential data, whether it can be shared, what level of confidentiality applies to these uploaded documents, how they should be handled, and whether we can share them with third parties or not.


Nadja Blagojevic: Yes, great. I mean, data privacy and data security seem like they'd be very essential for the product development process. When you think about timeline, do you have an estimated time frame for how long something like this might take to develop?


Hassan Al-Mahmid: The beauty of these sorts of tools is that there are a lot of off-the-shelf solutions ready to be picked up and integrated. So we are expecting around six months, to be honest; that's the time frame to have it done on the technical side. But since we are working with governmental entities here and may need some governmental integration, you know how it is with government; sometimes the time might extend to more than six months. Six months is the more optimistic estimate.


Nadja Blagojevic: I like that very much. We always encourage optimism. Even though government work has a reputation for taking a lot of time, we always push for more efficiency and faster timelines.


Nadja Blagojevic: All right. I think this is great. I think maybe we have a hand raised.


Audience: Yeah, so I had an opinion on that. Sure, go ahead. Yeah, so basically what Hassan is saying, if I'm understanding it right, is that there needs to be capacity building: making the public servants familiar with this and integrating this AI into their framework. Is that right, what I'm understanding?


Hassan Al-Mahmid: Yeah, that’s correct. Yes.


Audience: You'll have to train the public servants on how to use these tools. Basically, there needs to be capacity building.


Hassan Al-Mahmid: Yeah, there has to be some sort of training on how to use these tools. Yeah, that’s absolutely correct.


Nadja Blagojevic: Hey, everybody. Does anyone on the call have ideas about what we should ask our vice presidents for in terms of resources to develop this kind of capacity building?


Audience: We should tell them to be patient. I agree with that. The process takes time, and you'll have to be patient. Hassan, if you are looking into some global case studies, you can look into Argentina. They also have a similar program to this.


Hassan Al-Mahmid: Thank you for the insight. We have a couple of success stories in the region, mostly in the United Arab Emirates. They have implemented some AI tools, and I believe Qatar also has that sort of tool. We are in talks with them at the moment to benefit from their experience. Since we are GCC countries in the Middle East, the Gulf countries, we share almost the same policies and the same structure for domain names. So it's much easier to get experience from these countries, which are more advanced, and they have been very helpful. But definitely we are looking into Argentina, and we have also looked into Australia. They have really great content for domain names, very beneficial.


Nadja Blagojevic: I think we’ll be rejoining the group in about two minutes and so when we go back into the main group, Hassan, would you like to present as the product manager?


Hassan Al-Mahmid: Yeah, definitely, but I would also love to.


Audience: Hassan is our representative.


Nadja Blagojevic: Any final thoughts from anyone else on the call or questions or points that we think should be made as Hassan pitches this idea?


Audience: You should communicate what you're doing to the public. Since it's the public sector, you'll have to communicate with them, even the failures as well, you know, to build trust.


Nadja Blagojevic: All right, Akhtar, do you have suggestions of how to do that?


Audience: No, as you’re doing, you can just give out small press briefings and something like that, even on your website.


Hassan Al-Mahmid: Yeah, definitely. We usually have press releases and briefs whenever we enable new features in the .kw namespace. For example, last year in September, when we announced the roadmap for registering second-level domain names, meaning yourname.kw directly, without .com or .org, we released the roadmap for how you're going to register these domain names and the phases in which they're going to be released. Basically, yeah, we do regular press releases whenever we have new features, and this is one of the best ways to communicate with the public, aside from social media.


Audience: Because they’re the ultimate users, so you’ll also need their interaction and their feedback. So if there’s no interaction, we’ll not get proper feedback.


Hassan Al-Mahmid: Yeah, and one thing that came to my mind: we are in the process of releasing a dispute resolution policy for domain names in Kuwait, a national dispute resolution policy. When we released that policy, we sought public consultation. We had the brief on the website, and we gave participants around 60 days to participate and give their input on the policy and what had to be changed or improved. And we received really good feedback from the public.


Audience: That’s really nice to hear. And 60 days is a good time frame.


Hassan Al-Mahmid: Yes, and this is the approach we're taking at CITRA Kuwait. CITRA is basically the TRA, the regulatory authority for information and communication. So right now, whenever we release a new policy, we push it out for public consultation to get feedback. Then we analyze the feedback and improve, and then we release the final version.


Audience: Good to hear.


Nadja Blagojevic: Great, so it sounds like we will be rejoining the main group in just a second. And so Hassan will be our representative presenting the product idea. And we’ll also hear from the other two groups that have been workshopping their product ideas in person at IGF.


Audience: Hassan, make us proud.


Jim Prendergast: I hate to break up the creative process, especially at this hour when it's really going. But we do need to come back, because they are going to throw us out at 10 o'clock, as I promised all of you.


Audience: We’re only like 10 minutes away from the forum, by the way.


Hassan Al-Mahmid: Well, it’s now raining. And then after this session, yeah, we will join you guys on the floor, inshallah.


Jim Prendergast: Okay. Hello, everybody.


Nadja Blagojevic: Great chance to meet you all in person.


Jim Prendergast: Can you all hear us in the online world?


Audience: I am not from India, so I’m not lucky.


Jim Prendergast: Okay. Nadia, can you hear us from where you are? Yes. Okay, great. Well, I was listening to all three groups, and I was impressed that the creative juices got flowing at this hour in particular, with all the jet lag and everything else. So congratulations to everybody who took part. Will, do you want to share some insights? Actually, let me ask you the question that came in before the break while the other groups get organized and prepare to read out to us. The question was: how do you scrape high-quality content, and what are the parameters of what you call high quality? And while Will is answering that, each group's spokesperson, get ready to give us a two- to three-minute readout from your deliberations. Thanks.


Will Carter: I wish there was a simple answer to this question. This is something that we struggle with every day, and it remains an area of significant innovation and investment for Google. There are a few approaches that we are taking currently, and like I said, they continue to evolve all the time as we try to figure out how to do this better and better. One way is to work with fact-checking organizations around the world that can validate information for us and do additional research; those partnerships are really key. Another way is to identify news sources that consistently provide high-quality information, that are independent, and that are generally reliable and validated by fact-checkers. But really, at the end of the day, I think the most important thing that we do is provide context to our users, as much as we can, about where the information they're interacting with came from. That means providing additional links, providing counterarguments, providing access to metadata and additional information, because there is no one…


Jim Prendergast: We'll go to this group first, and then we'll go to the online group, and then the group to the right. So did you nominate a spokesperson? OK, great. There should be a mobile microphone, right? I put it on the table. There you go.


Audience: Can you hear me? OK, great. You said two minutes? OK. So in our group, we discussed a feature that would be added to Google search results that include news articles. The goal of the feature is to give users information about the validity of the news article: some kind of flag or visual signal to show them whether they're looking at something trustworthy. We specifically talked about identifying news that is known to be false or known to be generated by AI. If we are able to determine that, we would add a flag to show users that they are looking at something AI-generated. They could still view it; it would just be a kind of visual cue. We discussed some of the ways to generate this information, using fact-checking organizations that are credible and based in the country or location where they're reviewing information. We talked a bit about some of the resources needed to do this. Of course, you need an engineering and UX team, but we also talked about cultural competency: having a group or some type of experts who know the news sources and the cultural dialogue in different contexts, and also the legal framework. And on the ROI of this feature, we talked about why a company like Google should incorporate it. The ROI would be increasing trust in the product and giving users insight into the information they're looking at, which is something they're seeking and would be a unique value that brings them to Google Search as opposed to other search engines. Generally, increasing trust in the product and making users better able to rely on the information they're getting would encourage usage. As for the expected roadmap, we didn't really get that far, but this is the idea we came up with.


Audience: No, that's it.


Jim Prendergast: Great. You covered a lot of territory in a short period of time, especially with a cold start, so appreciate that. I'm not sure who was nominated to represent the online participants, but we will unmute you if you try and talk. Or Nadia, do you recall who was your spokesperson?


Nadja Blagojevic: Yes, that would be Hassan. Hassan, are you able to come off mute?


Hassan Al-Mahmid: Hello and good morning, everyone.


Jim Prendergast: Good morning.


Hassan Al-Mahmid: My name is Hassan Al-Mahmoud. I'm from CITRA Kuwait, which is basically the TRA for the country. I represent the .kw domain space in Kuwait; I'm in charge of domain name registrations and policy making. With my colleagues in the online session, we discussed a feature that would be added for .kw domain name registrations. In the current process, we have two zones: a restricted zone for registration and an unrestricted zone. By restricted, we mean third-level domain names such as yourname.com.kw, which represent a commercial entity. So in order, for example, to register a domain under .com.kw, you have to fulfill some requirements: you have to be an official commercial entity in Kuwait with a valid trade license, and the domain has to be registered by someone who is actually based in Kuwait, either a Kuwaiti citizen or someone with a work permit. The process right now is semi-manual, we would say, because whoever needs to register a domain name has to upload documents like their trade license and civil ID, for example. These are checked manually by one of the employees of the .kw domain space, and then we can grant that domain name registration. But we are looking into solutions that might make the process much faster and easier. We are thinking of implementing AI tools to do this sort of scrubbing and checking, because one condition, if you'd like to register a .com.kw domain name, is that the domain you select has to match your trade license or your trademark license. So instead of doing that checking manually, we can have some sort of scrubbing that will check the name on the license or the trademark and then process the request almost immediately.
And in case, for example, someone registering under .com.kw selects a name that doesn't match the trademark or the license, the AI tool would give them suggestions for appropriate domain names that can be registered.


Jim Prendergast: Great. Thanks, Hassan. We are short on time; I'm getting the clock-ticking-down sign from Oliver in the back. So, real briefly, to our folks in the room on the right.


Audience: Yeah, I'll be very brief, seeing as we're building on the product that was mentioned earlier, but focused on public news classification. To what Will was saying about creating an informed audience: right now, when you go on Google Search and a news article comes up, you have three dots that provide you some context about the news outlet. This feature isn't currently in the news aggregator tab when you go to Google News. So we'd like to build on that to have a classification where, based on a little spectrum from neutral content to sensationalist content, we would give users the information they need to make an informed decision on what they think is credible and trustworthy, which is really hard to define, internally and externally. Again, building on the other team's idea, we would work with UX and engineering, but also leverage subject matter expertise at Google, especially the Google News Initiative team and Google News itself, to ensure they're helping us build a framework that can then be taken to product. In terms of ROI, of course we want to drive user engagement: by providing additional context and other links within the Google ecosystem, users are able to stay on the platform and keep engaging with the content Google provides. But also, at the end of the day, it's about providing more context and building information quality online, subject to users' own understanding of what quality looks like in different political contexts. So yeah, I think we're all interested in news credibility.


Jim Prendergast: Yeah, no, that is definitely a common theme. And this being the beginning of the IGF, I’m sure that’s a theme that will carry on for the next several days. Well, I’m impressed. I mean, some really good ideas, some really good thoughts.


Will Carter: Definitely.


Jim Prendergast: Do you want to react? And maybe between you and Nadia, close us up in the next 90 seconds or so?


Will Carter: Sure, I'll keep it brief and then kick it over to Nadia. I think there's a reason these issues are top of mind. These are things that I think we're all struggling with on a day-to-day basis, whether it's companies like Google that are trying to solve these problems or users on the web who are trying to understand all this information that's inundating us every day, how to make sense of it, and how to understand what is and isn't credible. You have come up with some really great ideas. And I think this gives you a sense of how, when you think of a problem that you interact with every day, you actually start to translate that into a product vision, identify your needs, and turn it into something that can actually work and solve that problem day to day. This is what we do at Google. This is exactly what our workday looks like. So I'm really excited to have you all participate in this process. Nadia?


Nadja Blagojevic: Yes, I fully agree with Will. It is wonderful to be with you and hear everyone's ideas. These are all topics that we care very deeply about internally at Google, and we're very grateful for the opportunity to be here and be in dialogue with you all: to hear your points of view, to learn from you, and to share what we're doing, not only in terms of how we think about product development and design and how we've approached some of these issues within our own suite of products, but also to be in exchange when it comes to our philosophies. Ultimately, these topics will need robust collaboration between the public sector, the private sector, academia, and civil society. So thank you very much for being with us right from the very beginning of day zero, and we very much hope you enjoy the rest of your IGF.


Jim Prendergast: Great. Thanks, Nadia. And speaking of collaboration, I’m getting the hook from Oliver in the back of the room. So thanks, everybody, for participating both online and in person. Joel will be here for the rest of the week. So if you have any questions, track him down. That’s how these IGFs work if you’ve never been. So thanks, everybody, and have a great meeting. Bye-bye.



N

Nadja Blagojevic

Speech speed

150 words per minute

Speech length

1781 words

Speech time

709 seconds

Product managers identify problems to solve, build vision/strategy/roadmap, and coordinate teams to deliver features

Explanation

Product managers are responsible for figuring out what problems need to be solved, which can range from obvious improvements like spell checkers to less obvious features like Google Street View. They focus on building a stable long-term vision, strategy to navigate technology factors, and roadmaps that sequence feature development.


Evidence

Examples provided include spell checker as an obvious improvement to word processors, and Google Street View as a less obvious feature that solved problems people didn’t realize they had


Major discussion point

Product Management at Google


Topics

Digital business models


Product managers work closely with UX teams to iteratively design and validate products at different fidelity levels

Explanation

Product managers collaborate with user experience teams to design and validate products progressively, starting with wireframes and rough sketches before full development. This approach is cost-effective since it’s expensive to change fully developed products but inexpensive to test early concepts with users.


Evidence

Mentioned that small changes in language, wording, and insights from early testing can lead to huge impacts in adoption


Major discussion point

Product Management at Google


Topics

Digital business models


Agreed with

– Jim Prendergast
– Audience

Agreed on

Product development requires cross-functional collaboration and user-centered design


Product managers collaborate with engineers who build and maintain products, with all three functions working together from the beginning

Explanation

Engineers are responsible for building and maintaining products to work reliably and quickly for users. Both UX and engineering teams are included in roadmapping and strategy setting from the start, as better plans emerge when all three functions collaborate from the beginning.


Major discussion point

Product Management at Google


Topics

Digital business models


AI overviews use generative AI to provide key information and show up on queries where they add benefit beyond regular search results

Explanation

AI overviews are part of Google’s approach to provide helpful responses using generative AI, designed to appear on queries where they can add additional benefit beyond standard search results. They allow users to ask more complex questions and receive nuanced answers with corroborating links.


Evidence

Example provided of a query asking ‘how to stand out on a first time apartment application’ which receives a nuanced answer with bullet points, links, and additional resources


Major discussion point

AI-Powered Search Features and Quality


Topics

Digital business models | Interdisciplinary approaches


AI overviews are designed to only show information supported by high-quality results and don’t hallucinate like other LLM experiences

Explanation

AI overviews have a high quality bar and only display information supported by high-quality web results, which prevents hallucination issues common in other large language model experiences. For sensitive queries about health, finance, or advice, there’s an even higher quality standard and the system informs users when expert advice should be sought.


Evidence

Mentioned that AI overviews inform people when it’s important to seek expert advice or verify information, and show links to supporting pages that drive higher traffic to publisher sites


Major discussion point

AI-Powered Search Features and Quality


Topics

Content policy | Consumer protection


Building information quality requires robust collaboration between public sector, private sector, academia, and civil society

Explanation

Addressing information quality challenges cannot be solved by any single entity alone but requires collaborative efforts across different sectors. This multi-stakeholder approach is essential for developing effective solutions to information credibility issues.


Major discussion point

News Credibility and Information Quality Solutions


Topics

Content policy | Interdisciplinary approaches


Agreed with

– Will Carter
– Audience

Agreed on

Information quality requires collaborative approaches and providing context to users


J

Jim Prendergast

Speech speed

181 words per minute

Speech length

1209 words

Speech time

399 seconds

Product development involves balancing multiple challenges and considerations before launching products into the marketplace

Explanation

Product managers at Google must balance numerous different challenges and factors when launching products, including privacy rights, metadata considerations, and various feedback cycles. The session aims to show participants what it’s like to be a product manager dealing with these day-to-day challenges.


Evidence

Mentioned privacy rights and metadata considerations as examples of factors that must be balanced


Major discussion point

Product Management at Google


Topics

Digital business models | Privacy and data protection


Agreed with

– Nadja Blagojevic
– Audience

Agreed on

Product development requires cross-functional collaboration and user-centered design


W

Will Carter

Speech speed

167 words per minute

Speech length

1148 words

Speech time

412 seconds

There is no simple answer to identifying high-quality content – it requires partnerships with fact-checking organizations and identifying reliable news sources

Explanation

Identifying high-quality content is a complex challenge that Google struggles with daily and continues to invest in solving. The approach involves working with fact-checking organizations worldwide for validation and identifying news sources that consistently provide reliable, independent information.


Evidence

Mentioned partnerships with fact-checking organizations and identifying consistently reliable and independent news sources validated by fact-checkers


Major discussion point

AI-Powered Search Features and Quality


Topics

Content policy | Freedom of the press


Disagreed with

– Audience

Disagreed on

Approach to defining and identifying high-quality content


The most important approach is providing context to users about where information came from through additional links and metadata

Explanation

Rather than trying to be the sole arbiter of information quality, Google focuses on giving users as much context as possible about information sources. This includes providing additional links, counter arguments, and access to metadata so users can make informed decisions.


Evidence

Mentioned providing additional links, counter arguments, and access to metadata as ways to give users context


Major discussion point

AI-Powered Search Features and Quality


Topics

Content policy | Freedom of expression


Agreed with

– Nadja Blagojevic
– Audience

Agreed on

Information quality requires collaborative approaches and providing context to users


Disagreed with

– Audience

Disagreed on

Approach to defining and identifying high-quality content


About This Image helps users understand context and credibility of images online, including if they were generated by AI tools

Explanation

About This Image is a feature launched in 2023 that helps users understand the context and credibility of images they encounter online. Users can click on three dots above an image to see its history, other sites that describe its original context, and metadata that may indicate if it was AI-generated.


Evidence

Feature shows image history, sites describing original context and origin, and metadata tags that can indicate if images were generated, enhanced, or manipulated by AI


Major discussion point

Image Verification and AI-Generated Content Detection


Topics

Content policy | Digital identities


SynthID embeds digital watermarks in AI-generated images that remain detectable even after alterations like cropping or resizing

Explanation

SynthID is a watermarking tool that embeds digital watermarks directly into the pixels of images generated by Google’s AI tools. These watermarks are robust and can still be detected even when images are altered through cropping, screenshotting, resizing, recoloring, or flipping.


Evidence

Watermarks remain detectable after cropping, screenshotting, resizing, recoloring, or flipping, making them robust against adversarial behavior


Major discussion point

Image Verification and AI-Generated Content Detection


Topics

Content policy | Digital identities | Intellectual property rights


Agreed with

– Hassan Al-Mahmid

Agreed on

AI tools can significantly improve efficiency in content verification and processing


All images made with Google’s consumer AI tools are marked with SynthID for identification in search results

Explanation

Google has implemented a comprehensive approach where every image generated by their consumer AI tools receives a SynthID watermark. This means users can identify AI-generated images from Google tools when they encounter them through Google search using the About This Image feature.


Evidence

Integration with Circle to Search feature allows users to circle an image and get About This Image information for context


Major discussion point

Image Verification and AI-Generated Content Detection


Topics

Content policy | Digital identities | Consumer protection


H

Hassan Al-Mahmid

Speech speed

136 words per minute

Speech length

1716 words

Speech time

753 seconds

Current .kw domain registration requires manual document verification which takes 48 hours, but AI tools could process requests immediately

Explanation

The current domain registration process in Kuwait requires manual verification of documents like trade licenses and civil IDs, taking up to 48 hours for approval. By implementing AI tools and integrating with government entities, the process could be completed within minutes instead of the current lengthy timeframe.


Evidence

Current process requires manual checking of uploaded documents by employees, while proposed AI integration could make domains ‘up and running within minutes instead of 48 hours’


Major discussion point

Domain Registration Process Improvement


Topics

Capacity development | Digital access | Alternative dispute resolution


AI image recognition could validate trade licenses and match domain names to business names, suggesting alternatives when conflicts arise

Explanation

The proposed AI system would use image and text recognition to validate uploaded trade licenses and ensure domain names match the business names on official documents. When conflicts are found, instead of rejecting requests, the system would provide suggested alternative domain names that comply with regulations.


Evidence

Example given of validating that requested domain name matches the name on trade license, and providing suggestions when conflicts are found rather than outright rejection


Major discussion point

Domain Registration Process Improvement


Topics

Digital business models | Alternative dispute resolution | Intellectual property rights


Agreed with

– Will Carter

Agreed on

AI tools can significantly improve efficiency in content verification and processing
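The matching-and-suggestion step Al-Mahmid describes can be sketched as follows. This is an illustrative toy, not the proposed system: the names, similarity threshold, and suggestion rules are all invented, and a real implementation would also need OCR of the uploaded trade license and integration with government registries.

```python
# Hypothetical sketch: compare a requested domain label to the business name
# on the trade license, and suggest alternatives rather than rejecting.
import re
from difflib import SequenceMatcher

def normalize(name):
    """Lowercase and keep only letters and digits for comparison."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def matches_license(requested_label, licensed_name, threshold=0.8):
    """Does the requested domain label resemble the name on the trade license?"""
    ratio = SequenceMatcher(None, normalize(requested_label),
                            normalize(licensed_name)).ratio()
    return ratio >= threshold

def suggest_alternatives(licensed_name, taken):
    """Offer compliant alternatives instead of an outright rejection."""
    base = normalize(licensed_name)
    candidates = [base, base + "kw", base + "co"]
    return [c for c in candidates if c not in taken]

print(matches_license("gulf-trading", "Gulf Trading Co."))
print(suggest_alternatives("Gulf Trading Co.", {"gulftradingco"}))
```

The design choice mirrors the point made in the session: when a conflict is found, the system proposes compliant names rather than simply failing the request.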


Implementation would require legal department consultation for handling confidential data and determining acceptable documents

Explanation

The AI tool implementation requires collaboration with legal departments to establish guidelines for document handling, determine acceptable document types, and address data privacy concerns. Legal consultation is essential for determining confidentiality levels and whether documents can be shared with third parties.


Evidence

Need to check with legal department about what documents to accept, how to handle sensitive/confidential data, and whether information can be shared with third parties


Major discussion point

Domain Registration Process Improvement


Topics

Privacy and data protection | Data governance | Legal and regulatory


An optimistic project timeline is six months, but it may extend due to government integration requirements

Explanation

While the technical implementation using off-the-shelf AI solutions could be completed in six months, the involvement of governmental entities and required integrations may extend the timeline significantly. The six-month estimate represents an optimistic scenario for the technical aspects alone.


Evidence

Mentioned that ‘there are a lot of out-of-shelf solutions ready to be picked up and integrated’ but ‘since we are working with governmental entities… the time might extend to more than six months’


Major discussion point

Domain Registration Process Improvement


Topics

Capacity development | Digital business models


A

Audience

Speech speed

155 words per minute

Speech length

984 words

Speech time

379 seconds

Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated

Explanation

The proposed feature would provide users with visual signals or flags in Google search results to indicate the validity of news articles, specifically identifying content known to be false or generated by AI. Users could still view the content but would receive visual cues about its nature and credibility.


Evidence

Feature would use fact-checking organizations that are credible and based on country/location for validation


Major discussion point

News Credibility and Information Quality Solutions


Topics

Content policy | Freedom of the press | Consumer protection


Agreed with

– Nadja Blagojevic
– Will Carter

Agreed on

Information quality requires collaborative approaches and providing context to users


Disagreed with

– Will Carter

Disagreed on

Approach to defining and identifying high-quality content
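The flagging proposal can be sketched as a simple lookup layer over search results. Everything here is hypothetical for illustration: the fact-check database, verdict labels, and flag wording are invented, and a real feature would draw on credible, location-aware fact-checking organizations as the group described.

```python
# Hypothetical sketch: attach a visual flag to known-false or AI-generated
# results while still letting the user view the content.
from typing import Optional

FACT_CHECK_DB = {
    "example.com/fake-story": "false",
    "example.com/ai-article": "ai-generated",
}

def flag_for(url: str) -> Optional[str]:
    """Return a display flag for the URL, or None if nothing is known."""
    verdict = FACT_CHECK_DB.get(url)
    if verdict == "false":
        return "[FLAG] reported false by fact-checkers"
    if verdict == "ai-generated":
        return "[FLAG] identified as AI-generated"
    return None  # unflagged: result shown as usual

def annotate_results(urls):
    """Pair each result with its flag so the UI can render both together."""
    return [(url, flag_for(url)) for url in urls]

for url, flag in annotate_results(["example.com/fake-story", "example.com/news"]):
    print(url, "->", flag or "no flag")
```

Note that unflagged content is passed through untouched, matching the session's emphasis on informing users rather than blocking access.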


Solution would require cultural competency experts and legal frameworks to understand news sources in different contexts

Explanation

Implementing news credibility features requires more than just technical resources – it needs cultural competency experts who understand news sources and cultural dialogue in different contexts, as well as appropriate legal frameworks. This recognizes that news credibility varies across different cultural and legal environments.


Evidence

Mentioned need for ‘cultural competency and having a group or some type of experts on knowing news sources and what kind of the cultural dialogue is in different contexts’


Major discussion point

News Credibility and Information Quality Solutions


Topics

Content policy | Cultural diversity | Legal and regulatory


Agreed with

– Nadja Blagojevic
– Jim Prendergast

Agreed on

Product development requires cross-functional collaboration and user-centered design


Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions

Explanation

The proposed system would classify news content on a spectrum ranging from neutral to sensationalist, building on existing Google features that provide context about news outlets. This classification would help users make informed decisions about content credibility while acknowledging that trust is difficult to define both internally and externally.


Evidence

Would build on existing three-dot feature in Google Search that provides context about news outlets, extending it to Google News aggregator tab


Major discussion point

News Credibility and Information Quality Solutions


Topics

Content policy | Freedom of the press | Consumer protection


Disagreed with

– Will Carter

Disagreed on

Approach to defining and identifying high-quality content
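The neutral-to-sensationalist spectrum can be illustrated with a toy keyword heuristic. The cue words and thresholds below are invented for demonstration; a deployable classifier would need trained models plus the cultural and legal expertise the group identified.

```python
# Toy illustration of the proposed spectrum: count sensationalist cues and
# map the score onto a label shown to users (all cues are hypothetical).
SENSATIONAL_CUES = ["shocking", "you won't believe", "destroys", "slams", "!!"]

def sensationalism_score(headline):
    """Fraction of a 3-cue cap found in the headline, in [0, 1]."""
    text = headline.lower()
    hits = sum(cue in text for cue in SENSATIONAL_CUES)
    return min(1.0, hits / 3)

def label(headline):
    """Map the score onto the neutral-to-sensationalist spectrum."""
    score = sensationalism_score(headline)
    if score == 0:
        return "neutral"
    return "sensationalist" if score >= 2 / 3 else "mixed"

print(label("Central bank holds interest rates steady"))
print(label("SHOCKING footage DESTROYS the official story!!"))
```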


Agreements

Agreement points

Information quality requires collaborative approaches and providing context to users

Speakers

– Nadja Blagojevic
– Will Carter
– Audience

Arguments

Building information quality requires robust collaboration between public sector, private sector, academia, and civil society


The most important approach is providing context to users about where information came from through additional links and metadata


Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated


Summary

All speakers agreed that addressing information quality challenges requires multi-stakeholder collaboration and providing users with contextual information rather than making unilateral content decisions. This includes partnerships with fact-checking organizations and giving users tools to make informed decisions.


Topics

Content policy | Interdisciplinary approaches | Freedom of expression


AI tools can significantly improve efficiency in content verification and processing

Speakers

– Will Carter
– Hassan Al-Mahmid

Arguments

SynthID embeds digital watermarks in AI-generated images that remain detectable even after alterations like cropping or resizing


AI image recognition could validate trade licenses and match domain names to business names, suggesting alternatives when conflicts arise


Summary

Both speakers demonstrated how AI tools can automate and improve verification processes – Carter with image authenticity verification through SynthID, and Al-Mahmid with document verification for domain registration. Both emphasized AI’s ability to process and validate content more efficiently than manual methods.


Topics

Digital business models | Content policy | Digital identities


Product development requires cross-functional collaboration and user-centered design

Speakers

– Nadja Blagojevic
– Jim Prendergast
– Audience

Arguments

Product managers work closely with UX teams to iteratively design and validate products at different fidelity levels


Product development involves balancing multiple challenges and considerations before launching products into the marketplace


Solution would require cultural competency experts and legal frameworks to understand news sources in different contexts


Summary

All speakers recognized that successful product development requires collaboration across multiple disciplines including UX, engineering, legal, and cultural expertise. They emphasized the importance of iterative design, user validation, and considering diverse stakeholder needs.


Topics

Digital business models | Cultural diversity | Legal and regulatory


Similar viewpoints

Both emphasized the importance of providing users with contextual information and classification systems to help them evaluate content credibility, whether for images or news articles. They shared the philosophy of empowering users with information rather than making decisions for them.

Speakers

– Will Carter
– Audience

Arguments

About This Image helps users understand context and credibility of images online, including if they were generated by AI tools


Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions


Topics

Content policy | Consumer protection | Freedom of expression


Both recognized that technical solutions must be accompanied by appropriate legal frameworks and expertise. They understood that implementing AI-powered systems requires careful consideration of legal, cultural, and regulatory contexts.

Speakers

– Hassan Al-Mahmid
– Audience

Arguments

Implementation would require legal department consultation for handling confidential data and determining acceptable documents


Solution would require cultural competency experts and legal frameworks to understand news sources in different contexts


Topics

Legal and regulatory | Privacy and data protection | Cultural diversity


Unexpected consensus

Transparency and user empowerment over content control

Speakers

– Will Carter
– Audience
– Nadja Blagojevic

Arguments

The most important approach is providing context to users about where information came from through additional links and metadata


Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated


Building information quality requires robust collaboration between public sector, private sector, academia, and civil society


Explanation

It was unexpected that both Google representatives and audience members converged on the philosophy of transparency and user empowerment rather than platform-controlled content moderation. Instead of advocating for removing or blocking questionable content, all parties favored providing users with tools and context to make their own informed decisions.


Topics

Content policy | Freedom of expression | Consumer protection


AI as a tool for verification rather than replacement of human judgment

Speakers

– Will Carter
– Hassan Al-Mahmid
– Audience

Arguments

All images made with Google’s consumer AI tools are marked with SynthID for identification in search results


AI image recognition could validate trade licenses and match domain names to business names, suggesting alternatives when conflicts arise


Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions


Explanation

There was unexpected consensus that AI should augment rather than replace human decision-making. All speakers viewed AI as a tool for providing information and suggestions rather than making final determinations about content validity or user choices.


Topics

Digital business models | Content policy | Consumer protection


Overall assessment

Summary

The discussion revealed strong consensus around user empowerment through transparency, multi-stakeholder collaboration for information quality, and AI as a verification tool rather than decision-maker. Speakers agreed on the importance of cross-functional product development and providing contextual information to users.


Consensus level

High level of consensus with significant implications for content policy and platform governance. The agreement suggests a shift toward transparency-based approaches rather than top-down content control, emphasizing user agency and collaborative solutions to information quality challenges.


Differences

Different viewpoints

Approach to defining and identifying high-quality content

Speakers

– Will Carter
– Audience

Arguments

There is no simple answer to identifying high-quality content – it requires partnerships with fact-checking organizations and identifying reliable news sources


The most important approach is providing context to users about where information came from through additional links and metadata


Proposed feature would add visual flags to Google search results to indicate if news articles are false or AI-generated


Proposed news classification system would rate content on a spectrum from neutral to sensationalist to help users make informed decisions


Summary

Will Carter emphasized providing context and partnerships with fact-checkers rather than making definitive quality judgments, while audience members proposed more direct classification systems with visual flags and spectrum-based ratings to guide users.


Topics

Content policy | Freedom of the press | Consumer protection


Unexpected differences

Overall assessment

Summary

The main area of disagreement centered on content quality assessment: whether to provide context for user decision-making or to implement direct classification systems.


Disagreement level

Low to moderate disagreement, with significant implications for content policy approaches. The disagreement reflects a fundamental tension between platform neutrality and active content curation, which bears on how information quality challenges should be addressed in search and news platforms.


Partial agreements



Takeaways

Key takeaways

Product management at Google involves identifying problems, building vision/strategy/roadmap, and coordinating cross-functional teams including UX and engineering from the beginning


High-quality content identification has no simple solution and requires partnerships with fact-checking organizations, identifying reliable sources, and most importantly providing context to users through metadata and additional links


AI-powered features like AI overviews and About This Image are designed to help users understand information credibility and context, with built-in safeguards against hallucination


SynthID watermarking technology allows detection of AI-generated images even after alterations, with all Google AI-generated images being marked


Government domain registration processes can be significantly improved through AI automation, reducing processing time from 48 hours to minutes


News credibility solutions require cultural competency, legal frameworks, and classification systems to help users make informed decisions about information quality


Building trustworthy information systems requires robust collaboration between public sector, private sector, academia, and civil society


Resolutions and action items

Hassan Al-Mahmid will present Kuwait's .kw domain registration AI automation project as a case study, with an optimistic six-month timeline for implementation


Participants developed three concrete product proposals: news article credibility flags, AI-powered domain registration automation, and news classification spectrum system


Will Carter committed to being available throughout the IGF week for follow-up questions and discussions


Unresolved issues

No definitive solution provided for identifying high-quality content – remains an ongoing challenge requiring continuous innovation


Cultural competency and legal framework requirements for news credibility systems were identified but not fully addressed


Timeline uncertainties for government integration projects due to bureaucratic processes


How to balance automated AI decision-making with human oversight in sensitive areas like domain registration and news credibility


Specific metrics for measuring success of information quality initiatives were not established


Suggested compromises

Providing context and metadata to users rather than making definitive quality judgments about information


Using visual flags and classification systems that inform users rather than censoring content


Implementing AI automation while maintaining human oversight for sensitive decisions


Seeking public consultation periods (like Kuwait’s 60-day feedback process) when implementing new policies


Leveraging existing partnerships with fact-checking organizations rather than building internal validation systems from scratch


Thought provoking comments

There is no one right way to do it, if you ask a hundred people, you’ll probably get a hundred different answers, but there are some common elements… Sometimes it’s very easy to identify what a problem is. For example, once word processors were built, it was fairly obvious that a spell checker would be an improvement. But some things can be less obvious. For example, with Google Street View, when we first launched, it wasn’t clear to the degree to which seeing a location before a drive or a trip or contemplating a move could be… This feature was a less obvious addition to an online map, and it solved a problem that most people didn’t even realize that they had.

Speaker

Nadja Blagojevic


Reason

This comment is insightful because it introduces the fundamental challenge of product management – identifying problems that users don’t even know they have. It demonstrates the difference between obvious improvements and innovative solutions that create new value propositions.


Impact

This comment set the conceptual foundation for the entire discussion by establishing that product management involves both solving known problems and discovering latent needs. It primed participants to think beyond obvious solutions in their breakout exercises.


I wish there was a simple answer to this question. This is something that we struggle with every day and that remains an area of significant innovation and investment for Google… but really at the end of the day, I think the most important thing that we do is provide context to our users as much as we can about where the information that they’re interacting with came from.

Speaker

Will Carter


Reason

This comment is thought-provoking because it acknowledges the complexity and ongoing challenges in content quality assessment, while pivoting to transparency as a practical solution. It shows intellectual honesty about limitations while offering a constructive approach.


Impact

This response validated the difficulty of the problem participants were grappling with and shifted the focus from perfect solutions to transparency-based approaches. It influenced all three breakout groups to incorporate context and transparency elements in their proposed solutions.


You should communicate what you’re doing to the public because since it’s a public sector, you’ll have to communicate with them, even the failures as well. So, you know, to build trust… Because they’re the ultimate users, so you’ll also need their interaction and their feedback. So if there’s no interaction, we’ll not get proper feedback.

Speaker

Audience member (Akhtar)


Reason

This comment is insightful because it introduces the critical dimension of public accountability and transparency in government technology projects. It emphasizes that trust-building requires communicating both successes and failures, which is often overlooked in product development discussions.


Impact

This comment elevated the discussion from technical implementation to governance and public trust considerations. It prompted Hassan to elaborate on Kuwait’s public consultation processes and demonstrated how different sectors (public vs. private) have different stakeholder accountability requirements.


We are thinking of implementing AI tools to help us make the registration process for domain names in Kuwait, the faster and easy process… So these kinds of documentations are being like right now, manually uploaded throughout the portal. And then it has to be checked by a person to validate all the information… But we are thinking of implementing right now AI tools and some sort of integration between the government entities.

Speaker

Hassan Al-Mahmid


Reason

This comment is thought-provoking because it presents a real-world case study of AI implementation in government services, highlighting the practical challenges of balancing automation with regulatory compliance and fraud prevention.


Impact

This concrete example grounded the theoretical discussion in practical reality and shifted the online breakout group’s focus to a specific, implementable solution. It demonstrated how product management principles apply across different sectors and regulatory environments.


There’s a reason that these issues are top of mind. These are things that I think we’re all struggling with on a day-to-day basis, whether it’s companies like Google that are trying to solve these problems or users on the web that are trying to understand all this information that’s inundating us every day and how to make sense of it.

Speaker

Will Carter


Reason

This comment is insightful because it acknowledges the universal nature of information quality challenges, creating common ground between tech companies and users. It validates that these aren’t just corporate problems but societal challenges affecting everyone.


Impact

This comment provided validation for the participants’ concerns and created a sense of shared purpose. It reinforced that the breakout exercise wasn’t just theoretical but addressed real problems that affect all stakeholders in the information ecosystem.


Overall assessment

These key comments shaped the discussion by establishing a framework that moved from theoretical product management concepts to practical, real-world applications with societal implications. Nadja’s opening comment about solving unknown problems set an innovative mindset, while Will’s honest acknowledgment of ongoing challenges with content quality created space for nuanced solutions rather than perfect answers. The audience contributions, particularly around public accountability and the Kuwait domain registration case study, grounded the discussion in practical governance considerations and demonstrated how product management principles apply across sectors. The convergence on information credibility and transparency across all breakout groups shows how these foundational comments successfully oriented participants toward addressing fundamental trust and quality challenges in digital products. The discussion evolved from a product management tutorial into a collaborative exploration of how technology can serve public trust and information integrity.


Follow-up questions

How do you scrape high quality content and what are the parameters of what you call high quality?

Speaker

Audience member (via chat)


Explanation

This is a fundamental question about Google’s content quality assessment methods that was asked but only partially answered, indicating need for more detailed exploration of quality parameters and scraping methodologies


What kind of resources would be needed to develop AI tools for document validation and domain registration processes?

Speaker

Nadja Blagojevic


Explanation

This question was posed to help Hassan think through the practical requirements for implementing AI in government processes, but requires further detailed analysis of technical, legal, and human resources


What kinds of internal partnerships and departments would be needed for AI tool development in government settings?

Speaker

Nadja Blagojevic


Explanation

This explores the organizational structure and collaboration requirements for implementing AI in public sector, which needs more comprehensive mapping of stakeholder involvement


How to effectively communicate AI implementation progress and failures to the public in government projects?

Speaker

Audience member (Akhtar)


Explanation

This addresses the critical need for transparency and trust-building in public sector AI implementations, requiring development of communication strategies and frameworks


What are effective methods for public consultation on new technology policies?

Speaker

Hassan Al-Mahmid (implicitly through discussion of 60-day consultation periods)


Explanation

While Hassan shared their approach, this raises broader questions about best practices for engaging public input on technology policy development across different contexts


How to define and implement cultural competency in news credibility assessment across different contexts?

Speaker

First breakout group


Explanation

The group identified the need for cultural expertise in determining news credibility, but this requires deeper research into how cultural context affects information assessment


How to create effective classification systems for news content (neutral vs sensationalist) across different political contexts?

Speaker

Third breakout group


Explanation

This group proposed a news classification system but acknowledged the challenge of defining quality across different political contexts, requiring further research into objective classification methodologies


What are the best practices for capacity building and training public servants on AI tools?

Speaker

Audience member discussing Hassan’s project


Explanation

This was identified as a critical need for Hassan’s project but requires systematic research into effective training methodologies for government AI adoption


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.