Foundations of AI & Cloud Policy for Parliamentarians and Public Officials: an AI Sprinters workshop
25 Jun 2025 09:00h - 11:00h
Session at a glance
Summary
This discussion was a comprehensive AI training session led by Aleksi Paavola, who has ten years of experience in AI and works with governments and companies to implement AI solutions. The session focused on helping participants understand the capabilities of large language models (LLMs) and generative AI through both theoretical explanations and hands-on demonstrations. Paavola began by explaining the fundamental relationship between data, cloud computing, and AI, emphasizing that data serves as the crucial building block for training AI models, while cloud infrastructure provides the necessary computational power and security.
The presentation distinguished between predictive AI, which analyzes historical data to forecast future trends, and generative AI, which creates new content such as text, images, and documents. Paavola demonstrated practical government applications including Armenia’s tax administration assistant, Estonia’s bureaucrat chatbot, and various document automation tools. He provided hands-on training with Google’s tools, particularly Gemini for general AI interactions and Notebook LM for document analysis and research, highlighting features like audio overviews that can generate podcasts from uploaded documents.
The session addressed common AI misconceptions, explaining that AI systems are statistical models without emotions or consciousness, and discussed the importance of prompt engineering for effective AI interaction. Paavola emphasized the complementary nature of human and AI capabilities, noting that humans excel in emotional intelligence, creativity, and ethical reasoning, while AI excels in speed, efficiency, and data processing. The discussion also covered challenges including connectivity issues, data localization policies, potential misuse, and the digital divide.
During the Q&A session, participants raised important concerns about AI transparency, bias, and the political implications of big tech companies controlling AI systems, particularly regarding data privacy and potential military applications. Google representative Olga Reis addressed these concerns by explaining the company’s efforts to ensure diverse datasets, provide free access to AI tools, and implement safety measures like SynthID for content labeling. The session concluded with an emphasis on the transformative potential of AI in education, healthcare, and culture, while acknowledging the need for responsible implementation and continued dialogue about ethical AI governance.
Keypoints
## Major Discussion Points:
– **Fundamentals of AI, Data, and Cloud Computing**: Explanation of how data serves as the building block for AI training, cloud computing provides accessible infrastructure, and AI acts as the engine that processes information to solve problems typically requiring human intelligence.
– **Practical AI Applications for Government**: Demonstration of specific use cases including routine task automation (like Armenia’s tax administration assistant), data analysis, personal assistants (like Estonia’s Bureaucrat Chat), predictive analytics, policy formulation, and public engagement tools.
– **Hands-on Experience with AI Tools**: Interactive sessions with Google’s Notebook LM and Gemini, including demonstrations of document summarization, prompt writing techniques, and the four-step approach to effective prompting (persona, task, context, format).
– **AI Implementation Challenges and Ethical Concerns**: Discussion of obstacles including poor connectivity, strict data localization policies, potential misuse, economic divides, and the “data gap” affecting developing countries’ ability to leverage AI effectively.
– **Transparency, Bias, and Political Concerns**: Audience questions about AI transparency in policy recommendations, historical bias in AI systems, data privacy guarantees, and concerns about big tech companies’ role in geopolitical conflicts and content moderation.
## Overall Purpose:
The discussion aimed to provide government officials and policymakers with practical knowledge about AI capabilities, hands-on experience with AI tools, and guidance on ethical implementation strategies. The session focused on demystifying AI technology while addressing real-world concerns about adoption, governance, and responsible use in government contexts.
## Overall Tone:
The discussion maintained an educational and encouraging tone throughout, with the presenter emphasizing AI’s positive potential while acknowledging legitimate concerns. The tone became more serious and politically charged during the Q&A session when participants raised questions about bias, transparency, and geopolitical issues. Despite addressing sensitive topics, the overall atmosphere remained constructive and focused on practical solutions and responsible AI adoption.
Speakers
– **Aleksi Paavola**: AI expert and presenter, started his journey in AI around ten years ago, works with the platform of Influential Peer Rector for Infrastructure, mission is to help companies and governments take advantage of AI and understand LLMs and generative AI capabilities
– **Google representative**: Public policy representative for emerging markets at Google (specifically identified as Olga Reis during the session)
– **Audience**: Various parliamentarians and government officials attending the session, including:
– A politician working on AI policy and regulations
– A parliamentarian from Egypt who introduced the first draft bill on AI governance to the Egyptian parliament
Additional speakers:
– **Olga Reis**: Google public policy representative for emerging markets, based in Dubai, covers the GCC region
Full session report
# Comprehensive AI Training Session for Government Officials: Bridging Technology and Policy
## Executive Summary
This hands-on AI training session, led by AI expert Aleksi Paavola with participation from Google’s public policy representative Olga Reis, provided government officials and parliamentarians with practical knowledge about artificial intelligence capabilities and governance challenges. The session combined foundational AI concepts with extensive demonstrations of Google’s AI tools, particularly Gemini and Notebook LM, while addressing critical concerns about transparency, bias, data sovereignty, and the geopolitical implications of AI adoption.
The discussion evolved from basic AI concepts to sophisticated policy considerations, with participants raising pointed questions about AI governance. The session demonstrated a mature approach to AI discourse, acknowledging both opportunities and limitations while fostering constructive dialogue between technology providers and policymakers through practical, hands-on learning.
## Session Structure and Participants
### Key Speakers
**Aleksi Paavola** served as the primary presenter, bringing ten years of AI experience in helping governments and companies implement AI solutions. He focused specifically on assisting organizations in understanding and leveraging large language models (LLMs) and generative AI capabilities.
**Olga Reis**, Google’s public policy representative for emerging markets based in Dubai, provided industry perspective and addressed specific concerns about Google’s AI tools and policies. Her coverage of the GCC region brought relevant regional context to the discussion.
**The audience** comprised various parliamentarians and government officials, including a politician actively working on AI policy and regulations, and notably, a parliamentarian from Egypt who had introduced the first draft bill on AI governance to the Egyptian parliament.
## Foundational Concepts: The Three Pillars of Modern AI
Paavola established the fundamental relationship between three critical components that enable modern AI implementation:
**Data** serves as the crucial building block for training AI models. He emphasized that “AI systems are only as good as the data they are trained on,” noting that quality and representativeness directly impact system performance.
**Cloud computing** provides accessible infrastructure that democratizes access to powerful computational resources. Paavola highlighted how cloud technology enables small governments and individuals to access supercomputer capabilities similar to those available to large corporations, effectively leveling the playing field for AI adoption.
**AI** itself processes information to solve problems typically requiring human intelligence. Paavola distinguished between **predictive AI**, which analyzes historical data to forecast future trends, and **generative AI**, which creates new content such as text, images, and documents.
## Hands-On Training with AI Tools
### Practical Demonstrations
The session included extensive practical training with Google’s AI tools. Paavola demonstrated **Gemini 2.5 Pro** for general AI interactions and **Notebook LM** for document analysis and research. A particularly impressive demonstration involved Notebook LM’s ability to generate audio overviews that create podcast-style discussions from uploaded documents.
Using a document about “AI Sprinters” as an example, Paavola showed how Notebook LM allows users to hover over references to see sources, significantly reducing the hallucination problem that plagued earlier AI systems. He noted that hallucinations are “much, much less common” in modern systems than in earlier versions.
### The Four-Step Prompting Framework
Participants learned Paavola’s systematic approach to effective AI prompting:
1. **Persona** – Define the role the AI should assume
2. **Task** – Specify what needs to be accomplished
3. **Context** – Provide relevant background information
4. **Format** – Determine how the output should be structured
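The four steps above can be sketched as a simple prompt template. This is a hypothetical helper for illustration only, not part of any Google SDK or tool shown in the session:

```python
def build_prompt(persona: str, task: str, context: str, fmt: str) -> str:
    """Assemble a prompt following the persona/task/context/format steps."""
    return (
        f"You are {persona}.\n"         # 1. Persona: the role the AI assumes
        f"Your task: {task}\n"          # 2. Task: what needs to be accomplished
        f"Context: {context}\n"         # 3. Context: relevant background
        f"Format the answer as: {fmt}"  # 4. Format: how to structure the output
    )

# Hypothetical example: drafting a citizen-facing summary of a draft bill
prompt = build_prompt(
    persona="a policy analyst advising a parliamentary committee",
    task="summarize the key provisions of the attached draft AI bill",
    context="the audience is parliamentarians with no technical background",
    fmt="five plain-language bullet points",
)
print(prompt)
```

The assembled text can then be pasted into Gemini or a similar chat interface; the value of the framework is that each of the four parts is stated explicitly rather than left for the model to guess.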
### Personal Efficiency Examples
Paavola shared concrete examples of AI’s practical impact, noting that his email writing time decreased from 2 hours to 15 minutes daily through AI assistance. He encouraged participants to try “learning a new topic with Gemini as your personal teacher,” emphasizing the tool’s educational potential.
## Government Applications and Success Stories
### Real-World Implementations
Paavola demonstrated several successful government AI implementations:
**Armenia’s tax administration assistant** automates routine tasks such as form filling and email writing, freeing up human capacity for more complex work.
**Estonia’s Bureaucrat Chat** serves as a personal assistant for government-related queries, providing 24/7 availability and consistent responses to common questions.
Additional applications included document automation for policy drafting, data analysis for evidence-based decision-making, and predictive analytics for resource planning. Paavola mentioned that AI’s contribution to the digital economy is estimated at $7.4 billion by 2033.
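To make the predictive-analytics idea concrete, the simplest form of the forecasting described above is a least-squares trend fit over historical figures. This is a minimal sketch with made-up numbers, not any system mentioned in the session:

```python
def fit_trend(ys):
    """Fit y = a + b*x by ordinary least squares over x = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical monthly case volumes at a government service desk
history = [120, 135, 150, 160, 178]
a, b = fit_trend(history)
forecast = a + b * len(history)  # extrapolate one month ahead
print(round(forecast))           # → 191
```

Real deployments use far richer models, but the principle is the same: learn a pattern from past data, then extrapolate it forward to support resource planning.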
## Human-AI Collaboration Framework
Rather than positioning AI as a replacement for human workers, Paavola emphasized complementary capabilities. **Humans excel** in emotional intelligence, creativity, ethical reasoning, and complex decision-making. **AI excels** in speed, efficiency, scalability, and consistency.
Paavola argued that “AI will not replace human jobs but will free up human capacity to focus on more creative and interesting work,” reframing AI adoption as human enhancement rather than job displacement.
## Critical Governance Challenges
### Transparency and Accountability Concerns
An audience member emphasized the need for AI systems to be “more transparent in how they reach conclusions, especially for policy recommendations.” This concern reflects fundamental requirements for democratic accountability in government decision-making processes.
The discussion revealed tension between public sector needs for full algorithmic transparency and private sector concerns about protecting proprietary technology.
### Bias and Historical Discrimination
A particularly pointed question asked whether “Google AI technologies mirror historical patterns of racial segregation.” This addressed one of the most critical ethical concerns in AI development – the perpetuation of historical biases through algorithmic systems.
Reis explained Google’s efforts to ensure representative datasets and implement bias detection measures, but acknowledged this remains an ongoing challenge requiring continuous attention.
### Geopolitical and Security Concerns
The Egyptian parliamentarian raised questions about data sovereignty and potential military applications, asking “How can the big data and the AI not used in wars? And how can we guarantee the bias?… what guarantees the privacy of the servers?”
These concerns reflect broader anxieties about the concentration of AI capabilities in US-based technology companies and implications for national sovereignty.
## Industry Response and Accessibility Efforts
### Google’s Mitigation Strategies
Reis outlined several measures Google has implemented to address ethical concerns:
– Technology to label AI-generated content, addressing concerns about deepfakes and misinformation
– Representative dataset initiatives to ensure AI systems reflect diverse global populations
– Free access programs making AI tools available to NGOs in 65 countries, with plans to expand to 100 additional countries
### Language and Cultural Adaptation
Google’s efforts to support multiple languages and dialects, including Arabic dialects, demonstrate recognition of the need for culturally and linguistically appropriate AI systems.
## Implementation Challenges
### Technical and Infrastructure Barriers
The discussion identified significant challenges to AI adoption in government contexts:
– **Poor connectivity** remains a fundamental barrier, particularly in developing countries
– **Strict data localization policies** can prevent access to cloud-based AI capabilities
– **Economic divides** pose risks, with Paavola noting: “If everyone can get access to these AI tools, we are going to have a really amazing future… But on the other hand, if only some of the people get access to these models, then we are going to see this huge widening of the economic [divide]”
### The Data Gap Challenge
Developing countries face particular challenges in leveraging AI effectively due to limited availability of relevant local data and insufficient infrastructure for data collection, storage, and processing.
## Key Outcomes and Future Directions
### Practical Next Steps
Participants were encouraged to continue experimenting with AI tools, particularly Google’s Notebook LM and Gemini platforms. Paavola invited continued questions and engagement beyond the session, emphasizing the importance of hands-on learning.
The session emphasized building technical literacy among decision-makers as critical for effective AI governance and policy development.
### Unresolved Governance Questions
Several critical issues remain unaddressed:
– **Data sovereignty concerns** about server hosting and privacy guarantees
– **Comprehensive bias elimination** strategies requiring further development
– **Global equity** in AI access and the persistent digital divide
– **International cooperation mechanisms** for AI governance
## Conclusion
This comprehensive training session successfully bridged technical capabilities with policy requirements, providing government officials with both practical AI skills and critical awareness of governance challenges. The hands-on approach, combined with frank discussions of limitations and concerns, demonstrated a mature and responsible approach to AI adoption.
The constructive engagement between technology providers and government officials, despite significant areas of disagreement, established a foundation for continued dialogue. The session’s emphasis on practical experimentation, combined with critical evaluation of ethical and governance challenges, provides a model for responsible AI adoption that balances transformative potential with legitimate concerns about transparency, accountability, and equity.
The unresolved tensions around these issues highlight the ongoing nature of AI governance challenges and the need for sustained attention as AI capabilities continue advancing. Success will depend on maintaining the balance between embracing technological capabilities and ensuring democratic accountability demonstrated in this session.
Session transcript
Aleksi Paavola: All right. My name is Aleksi Paavola, and I started my journey in AI around ten years ago and with the platform of Influential Peer Rector for Infrastructure, and my mission has always been to allow companies and governments to take advantage of AI, and help them understand what are the capabilities of LLMs and generative AI, and this is something we’re going to deep dive into today. And I think this is going to be really helpful for everyone, because we’re going to take a really hands-on approach, so we’re going to take a look at the different tools we have available, and then I’m also going to let you guys get hands-on with a few of the tools. So let’s get started. So what we’re really going to do today, there’s a few chapters we’re going to go through. First, we’re going to take a look at the basics of AI, data, and the cloud, because as you all know, the media is so full of this AI hype, and it’s really difficult to keep up, and sometimes we can be quite overwhelmed with all the information. So we look at the cloud and AI, and how these are related. After that, we go more hands-on. We’re going to look at some examples of how we can use these tools, what’s the difference between predictive and generative AI, and we’re really trying to get a good understanding of how we can take advantage of AI. Then we’re going to go even more hands-on, and we’re going to look at some of the tools such as Google Gemini and Notebook LM, and then we’re going to try them out and see what they are capable of. And finally, we’re going to discuss some of the challenges and strategies related to AI. 
And we’re going to have some time in the end so that we can discuss AI, principles, and regulation, and then we also have a Q&A, so whatever you have in mind in terms of AI and generative AI, that’s a great place to ask questions. So that’s roughly the timetable for today. So first we go through these phases, and then after that, we are going to have this discussion and then this Q&A session. But now let’s dive into today’s topic, and let’s really look at how the AI, the cloud, and data are related. Let’s first define some terminology. So what is data? Data is basically just digital information. But why is it so important when it comes to AI and the cloud? The reason is that the data is what we use to train the AI models. So without the data, we cannot train the models. So data is crucially important in order for us to build these AI systems. All right. How about the cloud? Or actually, before we move into the cloud, let’s take a few examples of different types of data. So I think this is actually a really nice way to look at the data and the different types we have. And this is, especially from the government perspective, something that is really important. So first of all, we have personal data, which is something like names and addresses and ID numbers and so on. Then another class of data, a subset of personal data, is the sensitive data. So especially when it comes to governments, it’s really important to think about these things, because this sensitive health data, for example, is something we really want to secure. And this is something that these cloud providers can help us with. Then we have the non-personal data, so data that we cannot connect back to any individual. And here, one noteworthy thing is that you really want to make sure that the link between the persons and the data is broken, so that it’s not possible to link the data back to individuals. 
Something to really keep in mind when designing these kinds of systems that take advantage of, for example, non-personal data. And then, of course, we have the public data, so data that is freely available. And I think the public data is something that holds massive opportunities, because if we give people and companies access to a lot of high-quality public data, the companies will take advantage of that and build amazing applications and AI systems on top of the data. All right. Now, I was eagerly jumping to the cloud. So let’s do it now. What’s the role of the cloud, and what is the cloud? So we can think of the cloud as just a huge amount of remote computers. And the nice thing about the cloud is that with the cloud, we can take advantage of the most capable computers in the world. So it’s quite amazing to think that I as an individual have similar capabilities compared to huge corporations, because of the cloud and access to supercomputers. So the cloud is this network of remote computers, and we can use it to process the data. All right. Then we come to AI. So what is AI, how should we think about AI, and how are these things related? So first of all, the definition I like is that AI is something that is mimicking human problem solving. So with AI, we can solve problems that we usually think would require human kind of intelligence. So I think that’s quite a nice definition and way to think about AI. OK, so how are all these related? So we can think about the data as the necessary building block. We need the data in order to train the AI. But then, in order to train the AI, we also need some place to host the data. So this is where the cloud comes in. The cloud is like the infrastructure layer we can use to hold the data securely. And then we have the engine, the AI. We can think of the AI as an engine on top of the cloud. So these are all really crucial elements. 
And I think these are the main building blocks of AI, and maybe something we often don’t think about enough. So I think it’s really good to remind ourselves that when we look at the applications of AI, like chatbots, we often forget the building blocks. So it’s really nice to remind ourselves that in order for us to have these amazing chat applications, for example, we need the building blocks to be in place. How cloud is enabling AI is that with the cloud, as I mentioned, the amazing thing is that it enables me as an individual, or some small government in some distant country, to have amazing capabilities, similar capabilities to the world’s largest corporations. And one of the nice features of the cloud is that it is very efficient, and it scales. So let’s say we want to take advantage of some supercomputer; we can use that for one day. But the next day, we don’t have to use it, and there will be no costs associated with that. So I think that’s the amazing thing in terms of cloud computing, that we can really pay only for what we need. And another great benefit is that, let’s say we build systems related to medical care, for example, or we have some kind of systems that are really sensitive in terms of the data they have. If we were to build our own servers, we would need a lot of expertise in terms of security, for example. But with the cloud, we get that security out of the box. So also a really, really nice feature. So accessibility, efficiency, you only pay for what you need. So I think the cloud is something we should be maybe more enthusiastic about. Everyone is talking about AI, but I think there’s still a lot we can accomplish with the cloud. All right, let’s now switch gears a bit, and let’s talk about AI. So as I mentioned, the whole media thing around AI, I think it has caused a lot of false beliefs. So let’s do some demystifying. 
OK, myth number one: AI will replace human jobs. What I think the reality is, is that what AI is really doing is freeing up the capacity of humans to do more interesting stuff. So when AI is automating the boring stuff, we can focus on more creative problem solving. So I think that’s quite wonderful, and I think the future of human work is looking really bright. OK, when we look at the movies related to AI, I think we often get this feeling that, OK, these AI systems, they are human-like. And in the movies, they think and feel like humans. But the reality is that AI systems are just statistical models. They don’t have any feelings, and they don’t have any emotions. And often, AI systems are not very capable of detecting our emotions. So that’s still something we need to do ourselves. The third one is that AI systems are always right. And that’s totally not true. AI systems can make mistakes. And now that we’ve looked into the data, oftentimes, the mistakes of AI actually come back to the data. So our AI models are only as good as our data. So if we have some data that is not representing what we are trying to achieve, or there are some biases in the data, they will end up in our AI systems. So our systems are only as good as the data. And then, when it comes to generative AI, we also have the hallucinations. We are going to talk about hallucinations more later on. But what they mean is that AI can sometimes produce plausible but incorrect outcomes. So you ask an AI chat something, and it sounds about right. But actually, when you dig deeper, you figure out, OK, this is not correct at all. I think what we should do here, and what I really encourage you all to do, is to discuss AI principles. And this is what we are seeing happening now across the globe. For example, in the European Union and in a lot of private sector companies, there’s a lot of discussion going on in terms of AI principles. And I think that is great. 
And I have an example here. So these are the AI principles of Google. Let’s take a quick look at those: bold innovation, responsible development and deployment, and collaborative progress. I have used Google’s AI tools daily for a few years, and I really think Google has done a nice job at applying those principles. So I think that’s something that is also really important, that we not only write the principles down, but we really live up to those principles. All right, let’s switch gears and move on to the second phase of this presentation: streamlining your work with AI. So what kind of things could we automate with AI? Here’s a few examples. So how AI and cloud could help you in your role is with routine task automation, such as filling forms, writing emails, and data analysis. Especially the latest reasoning models from AI labs, they are amazing at doing data analysis. Then we can use these as personal assistants. We can do predictive analytics with more traditional machine learning. We can do policy formulation and evaluation. And then we can also use AI for public engagement. And we have a few examples of those. So let’s take a closer look. Cool. All right, so automation of routine tasks is something the current generative AI models are really, really good at. An example of this is Armenia’s tax administration assistant. So what they’ve done is that they use AI to fill the forms. So a repetitive, boring human job is automated with generative AI. And then, of course, you can write emails. You can create documents, do research. There’s a lot of different kinds of routine tasks you could automate. Then we have the data analysis and insights. This is something that has been possible for quite some time. So with machine learning, we’ve been able to do data analysis and gather insights from different kinds of transactional data for four decades. 
But the recent advancements have made it more accessible. So now it’s quite feasible for any small government or even a small company to take advantage of the data analysis capabilities of these models. An example of this is that these models are actually really good at analyzing images, for example, X-ray scan images. AI is actually, in most cases, as good as humans, and sometimes even better than humans, at doing this kind of image analysis. So amazing progress there. And also, the nice feature of this is that nowadays, you don’t necessarily have to train your own models; you can just use the cloud and pre-trained models. So these are quite accessible for everyone. All right, then we have the personal assistant, something I think a lot of you are quite familiar with. So you can chat with AI and ask questions. And a really nice example of that is Estonia’s Bureaucrat Chat, which is basically a Q&A chat platform for everything government-related. Whether you need a new ID or you need to file some government-related information, you can ask the chat, OK, how do I do that? And the chat, it’s always available. So a nice example of a personal assistant. Then we have the predictive analytics. So now there’s a lot of buzz and interest around generative AI. But it’s good to remember that we also have this predictive analytics side. And there’s a lot we can do with machine learning and predictive analytics, such as planning crops better. So we’ve got some nice video from India where they are optimizing crop planning with AI. Then we have the policy formulation and evaluation. This is more related to recent developments. So now with these generative AI tools, what we can do is create different scenarios. So if we were to implement this kind of policy X, give me three different scenarios: what could happen? How could this affect things? So quite amazing, we can use AI as this kind of strategy discussion and brainstorming partner. 
And then we have an example of enhanced public engagement. So what we can do with AI is analyze, for example, the news, and we can analyze the media related to, for example, some policy. And then instead of just guessing, OK, how people might feel about this, or instead of doing some questionnaire that would require a lot of labor, we can use AI to more easily detect, OK, this is how people are feeling about this new policy. All right, now that we’ve gone through the examples, let’s do this little comparison of humans and AI. So I want you to think about this: OK, what could be our strengths, and what kind of strengths could the AI have? And I think maybe you can already guess most of what I’m about to say. For humans, emotional intelligence, this is a huge one. So machines, as we now all know, don’t have emotions. They can mimic them, and we can sometimes think that they understand human emotions, but really, they can’t. So this is something that, at least currently, is only a skill of humans. Then another huge one, creativity and innovation. So the models, even though they are extremely capable, and we can use them to brainstorm ideas, I still strongly believe that the best ideas and the real creativity come from us, humans. So don’t rely too much on AI when it comes to innovation. Then we have the ethical and moral reasoning, something you really don’t want to give to these statistical models, but want to keep for yourselves. Then also, adaptability and flexibility. So even though the gen-AI systems are very capable, they are not as flexible as humans. Complex decision making is another area where you still want to rely a lot on human intelligence. Interpersonal interactions, strategic planning, ethical governance, these are also the kinds of areas where you really want to put a lot of emphasis on humans. 
And you want to rely on humans to make these kinds of decisions, something you don't want to outsource to an LLM. All right, now that we have taken a look at where humans shine, let's take a look at the greatest strengths of AI. The first huge one: speed and efficiency. It's amazing how fast these models are. A great example of this, something I highly recommend you try out, is Google's Deep Research, which is a new kind of tool you can do research with. What's amazing about it is that if you were to do that same amount of research yourself, I would say it would often take you a few days, at least a full day of work. But with Google Deep Research, you can go through more than 100 sources in as short a time as 10 to 15 minutes, and it will write you a research report sometimes as long as 50 pages. So speed and efficiency is something where AI really, really shines. Another example of this is that AI is really good at going through huge amounts of data. For AI, it really doesn't matter whether you have a gigabyte or a terabyte of data. Scalability is a really nice feature of AI, as mentioned: whether you have huge amounts of data or just a bit of data, you can usually get the outcome in quite similar times. Then we have consistency and precision. A really nice feature of especially those AI systems using more traditional machine learning is that they are very predictable, and there's a lot of consistency: if you run it multiple times, you get the same answer. Then we have availability, as mentioned in the Estonia's Bureaucrat Chat example. Instead of a few people in some office taking calls and responding to messages, the bureaucrat chat is always open, with 24-7 availability. AI is also very good at detecting patterns in data, and at automating routine tasks, as we've seen. Then we have the predictive analytics capabilities.
And now, since the rise of generative AI and LLMs, we also have these natural language capabilities that are quite amazing. All right. We have now gone through the fundamentals: we looked at how data, cloud, and AI are related, at some examples of how different governments are taking advantage of AI, and at the greatest strengths of AI systems and of us humans. I think now is a great time to move on and get really hands-on with AI. In this section, we are going to look in more detail at the differences between predictive AI and generative AI. Then I will give you some time to try out Google Gemini and also another tool from Google called NotebookLM. And we are also going to discuss what a good prompt is and what are efficient ways of using these tools. So let's dive in. Okay, what is predictive AI and how is it different from generative AI? Predictive AI is the more traditional machine learning, something I started with 10 years ago. And I think it's really important to keep the capabilities of predictive AI in mind, because there's now so much hype around generative AI that I see a lot of companies not taking full advantage of AI, because they are neglecting the predictive AI part. So what is predictive AI? It's when we have transactional, numerical data, and with predictive AI we can predict what will happen next. One classical example of this is weather prediction. We have a lot of data on how the weather was in the past, and based on that we can predict how the weather is going to be tomorrow. A really difficult task, but we've made a lot of progress with it in recent years. But it's important to keep in mind that this has nothing to do with generative AI. It's a different form of AI.
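To make the distinction concrete, here is a minimal sketch of what a predictive model does under the hood: fit a pattern from past numeric data, then extrapolate. The temperature figures are made up for illustration, and real forecasting systems are of course far more sophisticated than a one-variable linear fit.

```python
# Toy sketch of predictive AI: fit a simple linear model on past
# temperatures and predict tomorrow's value. Illustrative only;
# real weather models use far richer data and methods.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical historical data: (today's temp, next day's temp)
today = [18.0, 20.0, 21.0, 19.0, 22.0]
tomorrow = [19.0, 20.5, 21.5, 19.5, 23.0]

a, b = fit_line(today, tomorrow)
prediction = a * 21.5 + b  # forecast for a day that reached 21.5 degrees
print(round(prediction, 1))  # 22.2
```

The point is only that the model learns from historical, numeric records and produces a forecast, not new content.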
How and when you should use predictive AI versus generative AI is something really important to think about when you start applying AI in your processes. Okay, so predictive AI is all about predicting what the future will be when we have transactional data. How about generative AI? Generative AI, as the name implies, is about creating something new. So, generating. With generative AI we are generating content: it could be text, and maybe the most familiar example is these chatbots that are highly capable of generating text. But it can also be images, code, PDF files, transactional data files; basically anything you can imagine, you can generate with these generative AI models. One term you've heard often is LLMs, large language models. How are they related to generative AI, and what are they? LLMs are specific AI systems built on top of huge neural networks that are trained basically on the data of the whole open internet, and they are really capable of predicting the next word. So when you go to basically any of these chat AI applications, what you are interacting with is one of these LLMs. That is most often the technology behind generative AI. Just to recap the differences once again. With predictive AI, we try to anticipate future trends: what will happen? We want to derive data-driven insights. If you have a lot of transactional, numeric data, no matter what kind, what you want to use is predictive analytics. Examples: policy forecasting, resource allocation, infrastructure planning, things like that. How about generative AI? You apply it when you want to create new content. Its emphasis is on creativity and innovation.
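The phrase "predicting the next word" can be illustrated with a toy bigram model. This is only a sketch of the core idea: real LLMs use large neural networks over subword tokens, not simple word counts, but the task of scoring likely continuations is the same.

```python
# Toy illustration of next-word prediction: count word bigrams in a
# tiny corpus and pick the most likely continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (appears twice after 'the')
```

An LLM does essentially this at enormous scale, with learned statistics instead of raw counts, which is why its answers are fluent but not guaranteed to be true.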
So you can create emails, fill out forms, generate images, and so on. All right, since there is so much focus and hype around generative AI, this presentation wouldn't be complete if we didn't dive somewhat deeper into gen AI. So let's do that now and look at some of the use cases of gen AI more closely. With gen AI you can really generate anything, as we've already discussed, and now we're going to dive deeper into some examples. One of these is the new search. It's quite remarkable how generative AI and LLMs are changing the way we gather information. I would say it's not only about speed, but more about the nuance and the details. I think the key benefit of this generative AI search is that now, instead of the ten blue links, you get a brief overview of what you have asked about. In addition, we have the links, so if we want to dig deeper, we can, and often should, really take a look at the sources and verify for ourselves that this is exactly what we need. But yeah, this is really fascinating, and I think it has made a huge impact on the way we can find and gather information. Okay, another example is email writing. In Gmail, this is, I think, especially a lifesaver for us non-native English speakers. Before gen AI, probably two hours of my working day went into writing emails, because I was always somewhat worried whether the tone was appropriate enough or my language suitable for that kind of discussion. Now, my time spent writing emails has gone from about two hours to maybe 15 minutes. So if you don't do this yet, I really recommend you try it out. It's a huge, huge lifesaver. So how does it work?
The way I use it is that I just quickly type out what I mean, and then I let the AI be my English assistant, so that it makes my emails more formal and makes sure they are grammatically correct. AI can also help you write documents and slides. Just a quick question about the email: how can we embed it inside of Gmail? Or is it another tool? Great question. We are going to have a separate Q&A session at the end, but I will repeat it and take it now. So the question was: how can we take advantage of this tool, and is it a separate tool or is it already in Gmail? At least in my Gmail, it's already there. I think the amazing people from Google will address later today whether it's globally available or still in beta. Maybe we have a clearer answer here. Yeah, just very quickly. We're rolling this out from region to region, and I think, Your Excellency, you are from Egypt. What we are also doing, specifically for Arabic, which is a language of many dialects, is making sure that our systems speak different dialects. I think right now we are covering about 20 dialects in Arabic, and we actually see that Arabic is one of the languages most actively used in our systems, which can already perform those kinds of tasks in about 45 languages. So just a short note, but it should be embedded in your Gmail already. If not yet, this is coming. Thank you so much. All right. Yeah, we can take advantage of AI in writing documents and slides. This is something you can do inside your Google Docs and Slides, or you can use the Gemini chat, which is what I currently prefer. But that's something we're gonna try soon. Then something I'm really excited about is AI's ability to help us generate Google Sheets or Excel formulas.
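As a concrete illustration of that workflow, the request and suggested formula below are hypothetical examples of the kind of exchange you might have, not output from any specific model; the short Python check just confirms the formula's logic matches what was asked.

```python
# You describe a calculation in plain language, and the AI returns a
# spreadsheet formula. Both lines below are hypothetical examples.
request = "Sum the amounts in column B where column A says 'Approved'"
suggested_formula = '=SUMIF(A:A, "Approved", B:B)'

# The same logic in Python, to sanity-check what the formula computes:
rows = [("Approved", 100), ("Rejected", 40), ("Approved", 60)]
total = sum(amount for status, amount in rows if status == "Approved")
print(total)  # 160
```

The habit of verifying a generated formula against a few rows you can check by hand is a good one, whatever tool produced it.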
It's simply amazing how good these tools are. When I'm giving these kinds of training workshops on writing Sheets or Excel formulas, I find it difficult to come up with examples that the participants cannot immediately solve with AI. So if you have maybe shied away from Google Sheets or Excel because numbers are really not your thing, now is a great time to re-evaluate and try it out. It's amazing how you can just describe the function you need and AI will take care of it. All right, and then we have NotebookLM, something we are also gonna try out today. This is an amazing tool for deeper work, and when you need to upload a lot of your own sources, it's very helpful. Inside NotebookLM there's also a really nice feature called audio overviews, which is something I really encourage you to try out. It's quite amazing what you can do with it: you can upload your own document or research paper, and NotebookLM will generate a podcast from that document. The quality of these podcasts is totally amazing. Okay, then something we always have to think about, and something I get a lot of questions about, is privacy and confidentiality. The nice thing is that Google has really taken that into account: for example, when it comes to NotebookLM, Google is not using your data to train the models, and your uploads are not reviewed by any human. That's something I really, really value when using it. You can also connect your Google Workspace to NotebookLM, something I also really encourage you to try out. What it enables is that, after you have made this connection, you can basically chat with all the documents you have in your Google Workspace. Especially if you have a lot of data, it's amazing how much efficiency this feature can bring. All right, it's time to test out NotebookLM.
You can scan the link or go directly to notebooklm.google.com, and there you will find NotebookLM. If you are really eager, or already very familiar with it, you can start now. But what I'm gonna do is switch gears and show you a quick demo of how NotebookLM works. All right, so this is notebooklm.google.com. This is how it looks for me; I'm already signed in, and here I have a few notebooks already. What we're gonna do here is I have this document called AI Sprinters. It's, in document form, what we are going through today. I'm gonna select Create New here, and we're gonna upload this AI Sprinters document to NotebookLM. It's gonna take a while, and while we wait... Okay, I think we're done. So now we have the document uploaded, and what we're gonna do next is ask a question. I'm not typing it because that would take too long, so I copy-pasted the question: how can governments use the report to ethically implement AI in their departments? Now, if you ask this same question to any AI chatbot, it's gonna respond based on maybe a search, or based on the knowledge in the parameters of the model. But the great thing about NotebookLM is that it's gonna answer based on the document we uploaded, and we will see how that looks in a minute. We also get references to the document, so we can really dive deep into the sources. This is a really nice feature: when we hover over the sources, we can see that this part of the answer is actually coming from this part of the source document. But I will give you now a few minutes to try it out. You can use it on your phone or, if you have a computer, on your computer. There's also an app you can download from the iOS or Android store.
But yeah, please, please try it. Yeah, it's working. One of the special features of this product is that it can also understand documents written in different languages at the same time. For example, if you're representing a country where there is more than one language spoken, or more than one language in which you receive requests from your constituency, you can upload documents in different languages directly into one notebook, and then ask questions, or ask it to summarize these documents, in the language that you use. The system will be able to understand these different languages and give you output in the language that you prefer. Something I wanted to highlight, knowing that we do have lots of people from countries where more than one language is in use. Okay, let's do one more minute. And here, just to show you the podcast I mentioned: on the right side of my screen, you can see this deep dive conversation. If I were to click Generate, it would generate a podcast from the uploaded report. That's something I really think you should try; you will be quite amazed. Okay, can you come again? Yes, great question. Is NotebookLM able to summarize documents? And the answer is yes: you can upload a document and then, instead of asking a specific question, you can ask, please provide me a short summary of the attached document. All right. Can we please switch back to slides? Thank you so much. Let's continue. After the session, you can continue playing with NotebookLM. All right. Now, when we're talking about generative AI, I think it's really important to think about how we interact with it. The responses we get are only going to be as good as the prompts, the instructions, we write.
What I've seen in trainings over the last year or two, working with a lot of people and these generative AI systems, is a huge difference from person to person in what they can get out of these systems. And the difference is that this is really a skill, a skill you need to master. When you start interacting with these generative AI systems, you will not get everything out of them immediately. It takes a lot of practice, but with practice you can become very good and efficient with them. And I think this is probably one of the most important skills you can have at this time. So we're going to briefly look at that. Prompts: the instructions we give to generative AI models. Let's discuss what a good prompt is. One good way to approach it is to split the prompt into four parts: in our prompt, we give a persona, a task, context, and the format. For example: you are a public engagement officer, or you are an experienced financial director or marketing director, or whatever you need. So first you give the persona. Then you give the task: write an email, create a summary, and so on. Then you give some context. So: you are a marketing director of a large company; write a summary of the attached PDF document, that's the context. And the last thing, you give the output format: I want the output to be a really nice business email, or I want bullet points, or you want Python code. It can be anything. So when you are writing prompts, always think about it that way: you give the persona, the task, the context, and then the output format you're looking for. Then some more prompt writing tips. Use natural language.
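The persona, task, context, and format structure can be sketched as a small template. The helper function and the example values here are purely illustrative, not part of any Google tool; the point is only the four-part shape of a well-formed prompt.

```python
# Assemble a prompt from the four recommended parts:
# persona, task, context, and output format.

def build_prompt(persona, task, context, output_format):
    """Return a prompt string following the four-part structure."""
    return (
        f"You are {persona}. "
        f"{task} "
        f"Context: {context} "
        f"Format the output as {output_format}."
    )

prompt = build_prompt(
    persona="a public engagement officer",
    task="Write a summary of the attached PDF document.",
    context="The document describes a new transport policy.",
    output_format="five bullet points",
)
print(prompt)
```

In practice you would type such a prompt directly into the chat; the template is just a mental checklist for making sure none of the four parts is missing.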
I've seen that maybe some of you are really good at doing traditional Google searches, and this can be somewhat difficult for you, because you're used to interacting with Google that way, figuring out the right key terms. But when we are using generative AI, we want to use natural language. Think about it like talking to your friend or colleague. Then also be specific: what is it you are trying to do? If your prompts are really short, you're probably not being specific enough. And then iterate. No matter how good you are with these systems, it's often the case that you need to try a few times and iterate. Avoid complexity: think about how you would give a task to a colleague, for example. And make it a conversation. Remember that you don't have to get it exactly right on the first try; you can make corrections, just as you would correct a colleague who responds in a way you are not looking for. And remember this, probably the thing I see most often where people are not taking full advantage of Gemini: they are not providing their own documents. So try to provide your own documents, and the quality of the answers will be completely different, a lot better. All right, now we are going to try Gemini. Head over to gemini.google.com or scan the QR code. And can we switch the screen? Thank you so much. Here you can see my Gemini account. If you have a paid subscription, you can select the model; we are going to use 2.5 Pro. This is the model I really like to use, a really capable model for different kinds of tasks. And here is our example question, which we are going to just paste into the chat: I would like to write a report on the best use of AI in medical supply logistics in Kenya. Please write an outline for this report.
Let's hit Enter and let Gemini work its magic. It's going to provide us a nice outline for this report on AI in medical supply logistics in Kenya. Actually, because I have the advanced subscription, it's now suggesting we do a deep research. That will take about 10 minutes, so I'm not going to run it here, but it's something I really encourage you all to try. What you could do now is just go to gemini.google.com, start a new chat, ask anything, and get a feel for how Gemini works. And remember the four suggestions about how to write good prompts; maybe that's something you can apply when you try it out. So just head over to gemini.google.com and take it for a spin. I'm going to give you a few minutes to try it out. Thank you. All right, can we please have a switch? Thank you. This is something you can also continue after the presentation, but let's move forward. A few words about hallucinations. This is probably a term you've come across somewhere; it's something often discussed in the media when it comes to gen AI. What it basically means is that the generative AI gives you plausible-sounding answers which are not correct. It's really good to keep in mind that this can happen. The models have come a long way: a few years ago, the issue of hallucination was much, much bigger, and now it's actually quite rare with these modern systems. But it's still something to keep in mind, and there are ways to mitigate it. Giving the model your own files, your own context, is probably one of the best ways to mitigate hallucinations. And then another thing is when the model uses search.
That's also a way to ground the answers, so that instead of answering based on the knowledge in the parameters of the model, the AI answers based on what it found on Google. But yeah, something to keep in mind. Hopefully, we'll get rid of these someday. And maybe just to give some perspective: I don't know about you, but when I interact with my colleagues, they are not always right either. So this is something that happens with humans as well. It has happened to me multiple times that a colleague gives me a plausible-sounding answer which is actually not true. But nevertheless, remember to check for accuracy. When you are using generative AI models, always check, as a human, that the output is what you are looking for. You can ask the AI to provide sources, you can use multiple tools; there are a lot of ways to verify the content. Also, ensure your safety: always use human insight, follow the data rules of your organization, and be aware that sometimes these AIs can produce harmful outputs. We're going to skip this, as we just tried out Gemini. But something you can do after this presentation is to try summarizing documents, either with Gemini or with NotebookLM. Both of those tools are highly capable of summarizing documents. Let's switch gears to the last section and talk briefly about challenges and strategies when it comes to AI. We have a question coming from the audience. Yes. I want to know, what's the main difference for you? If you want to summarize something, is one better than the other? Or, I don't know. Yeah, we have a question about which tool I would recommend: should you use Gemini for summarizing, or should you use NotebookLM? I would say you can use either one. I think both are very capable of doing summarization, so it doesn't really matter.
But is there something, for example, that one is better at doing than the other? Sorry, yeah, I can repeat the question. The question was: when should I use NotebookLM, and when should I go to Gemini? That's actually a great question. I would say that when you want to provide a lot of your own sources, NotebookLM is the right tool. Let's say you have lots of government documents you have to work with; you can upload those documents to NotebookLM. For that kind of deeper research, NotebookLM is the right tool. But when you quickly want to create or brainstorm, you have some idea related to, for example, the medical logistics in Kenya case, you want to brainstorm, create an outline, create something quickly, and you don't have tens of PDFs as material, then I would prefer Gemini. But the capabilities are quite similar. Great, great question. Yeah, but let's talk briefly about the challenges and strategies. We're going to take a look at access to models, misuse, the data gap, and also some infrastructure-related challenges. Before we dive into challenges, let's briefly discuss the potential. I think the consensus on the potential of AI looks something like this: it is estimated that the contribution of AI to the digital economy could be $7.4 billion by 2033. A huge opportunity. But what are some of the roadblocks ahead? Let's take a look. First of all, poor connectivity. In order for us to use AI and its capabilities, we need good connectivity; it's a must in order to take full advantage. And I think there are still a lot of places where the connectivity is not good enough to really take advantage of AI. Then we have strict data localization policies.
I always have one suitcase packed and ready to go in case the EU makes its policies so strict that I cannot access the best cloud models. So let's hope I can live in Finland in the future as well. But this is something I'm actually quite worried about: that there are places with such strict data policies that people are unable to access the latest and greatest technology. Then we have misuse. It's unfortunate that this general-purpose technology is also capable of producing a lot of harmful information at huge scale, and this is true for text, images, and video. So it's good to keep in mind that when you are interacting with someone online, maybe you are not interacting with a human. And if you see some video footage somewhere, you cannot be fully sure that it's something that actually happened. Then we have the potential widening economic divide. We already see that the potential and leverage AI can give is enormous, and I see two potential pathways forward. One is that if everyone can get access to these AI tools, we are going to have a really amazing future, because we can easily multiply the productivity of one person. But on the other hand, if only some people get access to these models, then we are going to see a huge widening of the economic divide. That's something I'm somewhat worried about. Then we have the data gap. What it means is that, as we learned in the beginning of this session, data is the building block of AI. Without the data, without the infrastructure, we cannot train AI models. Therefore, if we have logistical difficulties, infrastructure challenges, limited financial resources, political instability, or a lack of standardized methods for collecting data, we cannot take full advantage of AI. OK, what are some of the roadblocks to collecting and managing data efficiently? When we talk about collecting data, we need to have the infrastructure in place.
And we also need to be ethical about the collection of data. Then we have the processing of the data, where we also need infrastructure and a lot of skilled employees: technical expertise, training, education. OK, how can we then build strong data infrastructure? What are the building blocks? First of all, we need to establish a whole-of-government commitment to really take advantage of and use the data. And we want to make data publicly available. I've seen amazing things happen when governments make data publicly available and let entrepreneurs build on it and use it to train, for example, AI models. It's also important to facilitate data flows between different organizations to avoid silos. And probably the most critical thing is to improve the data infrastructure so that everyone can have access to high-speed internet and to these extremely capable models that are becoming better and better. OK, so the important things: regulatory oversight, continuous monitoring, stakeholder engagement. Before we dive into discussions and Q&A, let's quickly take a look at a few areas where I think AI is going to have a huge impact. Culture is one. It's amazing what we can do with AI in terms of analyzing culture-related data. Another great recent development is the language capabilities of the models. We have a lot of very small languages, and now, with AI, we can really understand and access these small languages and all the richness of their cultures. So I think AI is going to have a great, great positive impact on culture. Then a huge one, education. It's amazing that, in terms of technology, we are at the point where everyone can have their own teacher. What I really recommend you try is to learn a new topic, for example, with Gemini. Use Gemini as your personal teacher, and you will be quite amazed at how efficiently you can learn with AI.
This is really something I'm hoping for: that everyone in this world could have their own AI teacher. OK, health care, this is another huge one. A few of my friends work as doctors, and they use generative AI models in their daily work. They are really impressed by the capabilities, and they've told me numerous times how great it is to have this kind of sparring partner when it comes to diagnosis, for example. So yeah, really amazing things happening in health care, and I think AI can enable every one of us to have a doctor available at all times. AI is also going to bring us a lot of new jobs. There are a lot of new kinds of roles emerging, so we are going to need people who are capable of working with AI and prompting AI. And there are a lot of positive estimates about how AI will affect job creation. I wanted to bring this up because in the media we see a lot of negative news related to jobs, which is unfortunate, because I think there's also this positive side: we know that there are jobs AI will probably replace, but there are also a lot of jobs that AI will create. So I really wanted to end on this positive note, that AI will bring us a lot of good things. All right, let's discuss. What I would like to do next is have you share your thoughts: what is happening in your country, how might cultural or regulatory factors influence AI adoption in your country, or what ethical or practical concerns should governments consider when implementing AI tools? Feel free to share your comments, or if you have any questions, I'm more than happy to answer them. There's actually, yeah, we have a mic coming. Yeah, that's excellent. Yes.
Audience: Thank you so much. Congratulations. This has been a great learning experience for me. As a politician, it's difficult to come up with some sort of policy or regulations in this area without having a solid basic understanding of the technology, so this session is very important for people like myself. But I have some concerns. You mentioned that one of the main pillars of Google is responsible AI development and deployment. Would this include an effort to make your machines more transparent? For instance, how they reach their conclusions, especially when it comes to policy suggestions, so that we know the suggestions and recommendations are solid and based on some sort of logic. So I think that's one. The other, more practical question would be: from the range of AI applications and other Google resources, would these be available in one single gadget? For instance, if I buy a Google Chromebook, would it be possible for me to have access to these and other facilities as well, such as Google for small businesses, enterprises, and others? That kind of discussion is really important for people from my background, from developing countries. We would love to access all of these facilities, but it's kind of scattered around, so centralized access would also help people enjoy these technologies. To end, I would like to congratulate you one more time. This has been an amazing session. Thank you.
Aleksi Paavola: You want to take that?
Google representative: Yeah. Hello, everyone. Just maybe a brief introduction, because not all of you were in the room at the very beginning; I understand it was an early morning session. My name is Olga Reis, and I cover public policy for emerging markets at Google. So let me cover your question about how people can access Google technologies and what we as a company do, especially given that users in different countries have different purchasing power. First of all, many of our products, such as Gemini, are available for free. You just need access to the internet. And with the latest updates, we are constantly, practically on a quarterly basis, updating our underlying models. The capacity and capability of the free version of Gemini are really great. In my personal capacity, I actually don’t use paid Gemini. Of course, I do have access to it as a Google employee. But, for example, when I do things for myself personally, I don’t use my work account, and for me the free version is just enough. For example, I am learning a new language, Turkish, and I use Gemini to help me check my homework and understand why I made mistakes. And it’s just enough; the free version already offers a lot. We as a company have always been really focused on giving access to information, and Gemini is one of the tools through which we give access to information. We also have special programs and offerings, for example for NGOs. We understand that very often civil society and NGOs are low-resourced actors in the field. So with the Google for Nonprofits program, we give access to our most powerful AI technologies for free to NGOs that are officially registered. And we actually just announced two weeks ago that we’re expanding this program to 100 more countries. It was available in 65 countries, and now 100 more countries will be supported in the next quarter of this year. 
So that’s on, let’s say, the commercial side of things. Maybe I will quickly comment on the transparency of our models, and then give it back to Aleksi. This is something we understand: there are huge expectations from companies like Google, and from our peers, to make our models more transparent. One of the ways we do that is by publishing so-called model scorecards, where we, in brief terms, discuss what the models are capable of and how they were trained. This is also part of our commitment as a company that signed up for the G7 Hiroshima process and as a member of the Frontier Model Forum, kind of international self-regulatory forums, where we disclose as much as we can while protecting our commercial interests, because of course a model is a commercial asset: what we are doing, what these models are capable of, how we tested them, including red teaming. So that’s something that we are doing, and it’s a process, because we and our industry peers are constantly updating our approach to making ourselves more transparent. And definitely, if we have this conversation in one year, probably the picture will be slightly different.
Aleksi Paavola: Yeah, I think they were next. Before the question, just a quick reminder. I would really, really, really appreciate if you could scan this QR code and leave some feedback. That would be awesome, if you could do that. But yeah, let’s continue with questions.
Audience: Thank you so much. Do Google’s AI technologies mirror historical patterns of racial segregation? Thank you.
Aleksi Paavola: Okay, let me ask. Yeah, do you want to comment on that? No, I have a question. Okay, let me address that first. So I think that’s a great question. Unfortunately, I don’t consider myself enough of an expert in that area to respond to it.
Google representative: If I may comment quickly on this point. As discussed earlier in our session, the quality of the model and the output you can get from it really depend on the data set. What we as a company are doing is ensuring that our data sets are comprehensive and represent different views, including this historical content. We really invest a lot of resources, both human and financial, to acquire the best data and to make sure that such cultural, political, and historical contexts are taken into account in the data sets on which we train our models. That is fundamental, I would say, in terms of how we should approach this very, very important point. And thanks for raising it.
Audience: Yeah, thank you very much for the very insightful presentation. I’m a parliamentarian who introduced to the Egyptian parliament the first draft bill on AI governance, and it is very much about what is ethical when it comes to the usage of big data. When it comes to Google, actually, and politically, I have a very real concern about how big data could be used in wartime. The situation in Gaza lately made this very clear, when it comes to you terminating employees who were against the Nimbus project, regarding the servers which serve the IDF. So a big question here is about the big techs, which are mostly USA-based: how can this not be politicized? How can big data and AI not be used in wars? And how can we guarantee against bias? One of the main things which I think the governments of many African countries think twice about is hosting the servers in their home countries, given the question of what guarantees the privacy of those servers. So I’m talking about the other side of the coin, which, for any parliamentarian or any politician, is quite considerable. So, at Google, where you have definitely discussed this thoroughly: how can you as big tech companies guarantee that bias is minimal, at least, or that AI is not used for war, not used for political purposes, not used for fake propaganda, not altering elections in one country or another? These are very important questions. 
This is the other side of the coin of AI usage, which I’m very much concerned with. Thank you.
Google representative: Let me take this question as a Google representative. The points that you raised are very important. One of the ways we address such concerns is, again, ensuring that our data sets are representative and take in different points of view. I myself am based out of Dubai, so I have lived for four years now in the GCC region, the Gulf region. And I can share, in terms of the discussion, what we did to mitigate and navigate this very, very challenging situation that we are all facing in the Middle East. We have many employees, hundreds of employees, representing both sides, including colleagues of Arab culture and Palestinian descent, working side by side, and making sure that we support our users in Palestine and in all the areas affected by what is going on in the Middle East. This is something that we take very seriously. We extend our support to local NGOs supporting communities, and this goes for both sides. But it is definitely very challenging and stressful. I saw it myself, first-hand, as someone who is based in our headquarters for the Middle East, and the company takes this very, very seriously and will continue doing so. Hopefully, peace will come to this region, and not only the Middle East. One of the things that I wanted to mention, because this is where AI interplays with content: this is definitely something that we address very seriously. There are two ways we ensure that AI systems are not misused and not used to produce fake content. First of all, we were one of the first companies to introduce what we call SynthID, which is basically virtual labelling of content that was produced with AI. So you can actually, as a user, check and see whether something was produced with the use of AI. That’s one thing, on the technical side. 
We will continue working to ensure that we address such concerns with the technology. And secondly, we are really using AI at scale to detect and remove content that is harmful. I can share with you some data on how we do that, but this is something that we are ramping up internally and take very seriously. And I think with this, we need to finish the session; the organizers just reminded me. But I will stay here, colleagues, and will be happy to answer your questions. Do we have any more time or not, please? No.
Aleksi Paavola: Yeah, unfortunately, we’re out of time. Thank you so much for everyone attending. Go try out Google Notebook LM and Gemini, and I will be here. If you have any questions, you can come to talk to me. Thank you so much.
Aleksi Paavola
Speech speed
100 words per minute
Speech length
8669 words
Speech time
5175 seconds
Data is the crucial building block for training AI models, without which AI systems cannot be built
Explanation
Paavola emphasizes that data is fundamentally necessary for AI development, as it is used to train AI models. Without adequate data, it is impossible to build effective AI systems.
Evidence
He explains that data is digital information that serves as the foundation for training AI models, making it crucially important for building AI systems.
Major discussion point
AI Fundamentals and Infrastructure
Topics
Development | Infrastructure | Legal and regulatory
Cloud computing provides accessible infrastructure that enables small governments and individuals to access supercomputer capabilities similar to large corporations
Explanation
Paavola argues that cloud computing democratizes access to powerful computing resources. Through cloud services, individuals and small organizations can leverage the same high-performance computing capabilities that were previously only available to large corporations.
Evidence
He mentions that cloud allows access to the most capable computers in the world and that individuals can have similar capabilities to huge corporations because of cloud access to supercomputers. Cloud services are efficient, scalable, and you only pay for what you need.
Major discussion point
AI Fundamentals and Infrastructure
Topics
Development | Infrastructure | Economic
Agreed with
– Google representative
Agreed on
AI accessibility and democratization
AI should be defined as technology that mimics human problem-solving capabilities
Explanation
Paavola provides a clear definition of AI as technology that can solve problems typically requiring human intelligence. This definition helps demystify AI by focusing on its problem-solving function rather than more complex technical aspects.
Evidence
He states that AI is something that mimics human problem solving and can solve problems that usually require human intelligence.
Major discussion point
AI Fundamentals and Infrastructure
Topics
Development | Sociocultural
AI will not replace human jobs but will free up human capacity to focus on more creative and interesting work
Explanation
Paavola counters the common fear that AI will eliminate jobs by arguing that AI will instead automate boring, repetitive tasks. This automation will allow humans to redirect their efforts toward more creative and engaging work.
Evidence
He explains that when AI automates boring stuff, humans can focus on more creative problem solving, and believes the future of human work is looking really bright.
Major discussion point
AI Myths and Realities
Topics
Economic | Development | Sociocultural
AI systems are statistical models without emotions or feelings, unlike their portrayal in movies
Explanation
Paavola clarifies that AI systems are fundamentally different from their Hollywood portrayals. Real AI systems are statistical models that lack emotions, feelings, or human-like consciousness.
Evidence
He contrasts movie portrayals where AI systems think and feel like humans with the reality that AI systems are just statistical models without feelings or emotions.
Major discussion point
AI Myths and Realities
Topics
Sociocultural | Development
AI systems can make mistakes and are only as good as the data they are trained on
Explanation
Paavola emphasizes that AI systems are fallible and their quality directly depends on the quality of their training data. Poor or biased data will result in poor or biased AI systems, and AI can also produce plausible but incorrect outputs through hallucinations.
Evidence
He explains that if data has biases or doesn’t represent what we’re trying to achieve, these problems will end up in AI systems. He also mentions hallucinations where AI can produce plausible but incorrect outcomes.
Major discussion point
AI Myths and Realities
Topics
Legal and regulatory | Human rights | Development
Agreed with
– Google representative
Agreed on
Importance of data quality and representation in AI systems
Humans excel in emotional intelligence, creativity, ethical reasoning, and complex decision-making
Explanation
Paavola identifies key areas where humans maintain significant advantages over AI systems. These include understanding and managing emotions, generating truly creative ideas, making ethical judgments, and handling complex decision-making scenarios.
Evidence
He lists specific human strengths including emotional intelligence, creativity and innovation, ethical and moral reasoning, adaptability and flexibility, complex decision making, interpersonal interactions, strategic planning, and ethical governance.
Major discussion point
Human vs AI Capabilities
Topics
Human rights | Sociocultural | Development
AI excels in speed, efficiency, scalability, consistency, and 24/7 availability
Explanation
Paavola outlines the key strengths of AI systems, particularly their ability to process information rapidly, maintain consistent performance, and operate continuously without breaks. These capabilities make AI particularly valuable for certain types of tasks.
Evidence
He provides examples like Google’s Deep Research tool that can process over 100 resources in 10-15 minutes and write 50-page reports, compared to humans who would need at least a full day. He also mentions AI’s ability to handle gigabytes or terabytes of data equally well and provide 24/7 availability like Estonia’s Bureaucrat Chat.
Major discussion point
Human vs AI Capabilities
Topics
Development | Economic | Infrastructure
AI can automate routine tasks like form filling and email writing, as demonstrated by Armenia’s tax administration assistant
Explanation
Paavola argues that AI is particularly effective at automating repetitive, mundane tasks that consume significant human time and effort. This automation can free up human workers to focus on more valuable activities.
Evidence
He cites Armenia’s tax administration assistant that uses AI to fill forms, automating repetitive, boring human jobs. He also mentions his personal experience with email writing, reducing time spent from two hours to 15 minutes daily.
Major discussion point
AI Applications for Governments
Topics
Development | Economic | Legal and regulatory
AI can serve as personal assistants for government services, like Estonia’s Bureaucrat Chat for government-related queries
Explanation
Paavola demonstrates how AI can improve government service delivery by providing citizens with 24/7 access to information and assistance. This reduces the burden on human staff while improving service accessibility.
Evidence
He describes Estonia’s Bureaucrat Chat as a Q&A platform for government-related questions, whether citizens need a new ID or need to file government information, providing always-available assistance.
Major discussion point
AI Applications for Governments
Topics
Development | Legal and regulatory | Infrastructure
AI can enhance public engagement by analyzing media sentiment about policies
Explanation
Paavola suggests that AI can help governments better understand public opinion by analyzing media coverage and sentiment around policies. This provides a more efficient alternative to traditional surveys or manual analysis.
Evidence
He explains that instead of guessing how people feel about policies or conducting labor-intensive questionnaires, AI can analyze news and media to detect public sentiment about new policies.
Major discussion point
AI Applications for Governments
Topics
Development | Sociocultural | Legal and regulatory
Notebook LM allows users to upload documents and ask questions based on their content, with Google not using the data for model training
Explanation
Paavola highlights a specific AI tool that enables users to interact with their own documents while maintaining privacy. The tool can generate insights, summaries, and even podcasts from uploaded materials without compromising user data.
Evidence
He demonstrates Notebook LM’s capability to answer questions based on uploaded documents, mentions it can generate podcasts from documents, and emphasizes that Google doesn’t use the data to train models and documents aren’t reviewed by humans.
Major discussion point
Generative AI Tools and Features
Topics
Human rights | Legal and regulatory | Development
Poor connectivity and strict data localization policies can prevent access to AI capabilities
Explanation
Paavola identifies infrastructure and regulatory barriers that can limit AI adoption. Poor internet connectivity makes it difficult to access cloud-based AI services, while overly restrictive data policies can block access to advanced AI technologies.
Evidence
He mentions having a suitcase ready in case EU policies become so strict that he cannot access the best cloud models, and expresses worry about places with such strict data policies that people cannot access the latest technology.
Major discussion point
AI Challenges and Concerns
Topics
Infrastructure | Legal and regulatory | Development
AI could potentially widen economic divides if access is not equitable
Explanation
Paavola warns about the risk of AI creating greater inequality if access to AI tools is not democratized. He sees two potential futures: one where everyone benefits from AI productivity gains, and another where only some people have access, leading to increased economic disparity.
Evidence
He describes two potential pathways: if everyone gets access to AI tools, we’ll have an amazing future with multiplied productivity, but if only some people get access, we’ll see a huge widening of economic divide.
Major discussion point
AI Challenges and Concerns
Topics
Economic | Development | Human rights
Google representative
Speech speed
141 words per minute
Speech length
1188 words
Speech time
502 seconds
Google’s AI tools are being rolled out region by region and support multiple languages including 20 Arabic dialects
Explanation
The Google representative explains that AI tools are being gradually deployed across different regions with attention to linguistic diversity. Special focus is given to supporting multiple dialects of languages like Arabic to ensure broader accessibility.
Evidence
The representative mentions they are rolling out from region to region and specifically for Arabic, they are covering about 20 dialects, with Arabic being one of the most actively used languages in their systems that can perform tasks in about 45 languages.
Major discussion point
Generative AI Tools and Features
Topics
Development | Sociocultural | Infrastructure
Many Google AI products like Gemini are available for free, with special programs for NGOs in 165 countries
Explanation
The representative emphasizes Google’s commitment to making AI accessible regardless of economic circumstances. They offer free versions of powerful AI tools and have expanded special programs for resource-constrained organizations like NGOs.
Evidence
The representative states that many products like Gemini are available for free, mentions the Google for Nonprofits program that gives officially registered NGOs free access to powerful AI technologies, and notes they just expanded this program to 100 more countries, now covering 165 countries total.
Major discussion point
Generative AI Tools and Features
Topics
Development | Economic | Human rights
Agreed with
– Aleksi Paavola
Agreed on
AI accessibility and democratization
Google publishes model scorecards and participates in international forums to increase transparency while protecting commercial interests
Explanation
The representative addresses transparency concerns by explaining Google’s efforts to provide information about their AI models through published scorecards and participation in international regulatory forums. They balance transparency with the need to protect proprietary technology.
Evidence
The representative mentions publishing model scorecards that discuss what models are capable of and how they were trained, participation in G7 Hiroshima process and frontline models forum, and disclosure of testing including red teaming while protecting commercial interests.
Major discussion point
Transparency and Bias Issues
Topics
Legal and regulatory | Human rights | Development
Agreed with
– Audience
Agreed on
Need for transparency in AI systems
Disagreed with
– Audience
Disagreed on
Transparency vs Commercial Protection in AI Systems
Google ensures representative datasets and uses AI to detect harmful content while implementing SynthID to label AI-generated content
Explanation
The representative explains Google’s multi-faceted approach to addressing bias and harmful content. They invest in comprehensive datasets, use AI systems to detect problematic content, and have developed technology to identify AI-generated material.
Evidence
The representative mentions investing resources to acquire comprehensive datasets representing different views and historical contexts, being one of the first companies to introduce SynthID for labeling AI-produced content, and using AI at scale to detect and remove harmful content.
Major discussion point
Political and Ethical Concerns
Topics
Human rights | Legal and regulatory | Sociocultural
Agreed with
– Aleksi Paavola
Agreed on
Importance of data quality and representation in AI systems
Disagreed with
– Audience
Disagreed on
Political Neutrality and Military Applications of AI
Audience
Speech speed
133 words per minute
Speech length
627 words
Speech time
281 seconds
There is a need for AI systems to be more transparent in how they reach conclusions, especially for policy recommendations
Explanation
An audience member, identifying as a politician, emphasizes the importance of understanding how AI systems arrive at their recommendations, particularly when these systems might influence policy decisions. This transparency is crucial for building trust and ensuring accountability in government use of AI.
Evidence
The audience member mentions being a politician who finds it difficult to create policy without solid understanding of the technology, and specifically asks about making machines more transparent in how they achieve conclusions, especially for policy suggestions.
Major discussion point
Transparency and Bias Issues
Topics
Legal and regulatory | Human rights | Development
Agreed with
– Google representative
Agreed on
Need for transparency in AI systems
Disagreed with
– Google representative
Disagreed on
Transparency vs Commercial Protection in AI Systems
Questions exist about whether AI technologies mirror historical patterns of discrimination
Explanation
An audience member raises concerns about whether AI systems perpetuate or reflect historical biases and discriminatory patterns. This question addresses fundamental issues about fairness and equity in AI development and deployment.
Evidence
The audience member directly asks whether Google AI technologies mirror historical patterns of racial segregation.
Major discussion point
Transparency and Bias Issues
Topics
Human rights | Sociocultural | Legal and regulatory
There are concerns about big tech companies’ political involvement and use of AI in warfare contexts
Explanation
An audience member, who is a parliamentarian working on AI governance, raises serious concerns about the political neutrality of big tech companies and their potential involvement in military applications. They specifically reference concerns about data privacy and the use of AI technologies in conflict situations.
Evidence
The audience member mentions introducing AI governance legislation, references the Nimbus project and Google’s servers serving the IDF, employee terminations related to Gaza protests, and asks about guarantees against political use and warfare applications.
Major discussion point
Political and Ethical Concerns
Topics
Human rights | Legal and regulatory | Cybersecurity
Disagreed with
– Google representative
Disagreed on
Political Neutrality and Military Applications of AI
Agreements
Agreement points
AI accessibility and democratization
Speakers
– Aleksi Paavola
– Google representative
Arguments
Cloud computing provides accessible infrastructure that enables small governments and individuals to access supercomputer capabilities similar to large corporations
Many Google AI products like Gemini are available for free, with special programs for NGOs in over 165 countries
Summary
Both speakers emphasize the importance of making AI technologies accessible to users regardless of their economic circumstances or organizational size. Paavola highlights how cloud computing democratizes access to powerful computing resources, while the Google representative reinforces this by explaining their free offerings and special programs for resource-constrained organizations.
Topics
Development | Economic | Infrastructure
Importance of data quality and representation in AI systems
Speakers
– Aleksi Paavola
– Google representative
Arguments
AI systems can make mistakes and are only as good as the data they are trained on
Google ensures representative datasets and uses AI to detect harmful content while implementing SynthID to label AI-generated content
Summary
Both speakers acknowledge that the quality and representativeness of training data is fundamental to AI system performance. Paavola emphasizes that biased or poor data leads to biased AI systems, while the Google representative explains their efforts to ensure comprehensive, representative datasets.
Topics
Human rights | Legal and regulatory | Development
Need for transparency in AI systems
Speakers
– Google representative
– Audience
Arguments
Google publishes model scorecards and participates in international forums to increase transparency while protecting commercial interests
There is a need for AI systems to be more transparent in how they reach conclusions, especially for policy recommendations
Summary
Both the Google representative and audience members recognize the critical importance of AI transparency, particularly for policy-making contexts. The Google representative outlines their transparency efforts through scorecards and international participation, while the audience member emphasizes the necessity of understanding AI decision-making processes for effective governance.
Topics
Legal and regulatory | Human rights | Development
Similar viewpoints
Both speakers present an optimistic view of AI’s impact on society and work. Paavola argues that AI will enhance rather than replace human capabilities, while the Google representative demonstrates commitment to ensuring broad access to AI benefits through free and subsidized programs.
Speakers
– Aleksi Paavola
– Google representative
Arguments
AI will not replace human jobs but will free up human capacity to focus on more creative and interesting work
Many Google AI products like Gemini are available for free, with special programs for NGOs in over 165 countries
Topics
Economic | Development | Sociocultural
Both speakers recognize the continued importance of human oversight and judgment in AI systems. Paavola identifies specific areas where humans maintain advantages, while the Google representative describes technical measures that still require human ethical judgment and oversight.
Speakers
– Aleksi Paavola
– Google representative
Arguments
Humans excel in emotional intelligence, creativity, ethical reasoning, and complex decision-making
Google ensures representative datasets and uses AI to detect harmful content while implementing SynthID to label AI-generated content
Topics
Human rights | Sociocultural | Development
Unexpected consensus
Acknowledgment of AI limitations and potential harms
Speakers
– Aleksi Paavola
– Google representative
Arguments
AI systems can make mistakes and are only as good as the data they are trained on
Google ensures representative datasets and uses AI to detect harmful content while implementing SynthID to label AI-generated content
Explanation
It is somewhat unexpected that both the AI educator and the Google representative openly acknowledge significant limitations and potential harms of AI systems. Rather than presenting only benefits, both speakers candidly discuss issues like hallucinations, bias, and the potential for misuse, showing a mature and responsible approach to AI discourse.
Topics
Human rights | Legal and regulatory | Development
Recognition of infrastructure and policy barriers to AI adoption
Speakers
– Aleksi Paavola
– Audience
Arguments
Poor connectivity and strict data localization policies can prevent access to AI capabilities
There are concerns about big tech companies’ political involvement and use of AI in warfare contexts
Explanation
There is unexpected alignment between the AI advocate and critical audience members on the existence of significant barriers to AI adoption. While coming from different perspectives, both acknowledge that infrastructure limitations and regulatory concerns can impede AI access and implementation.
Topics
Infrastructure | Legal and regulatory | Development
Overall assessment
Summary
The discussion shows strong consensus on fundamental AI principles including the importance of accessibility, data quality, transparency, and human oversight. All speakers agree that AI should be democratically accessible, that data quality is crucial for system performance, and that transparency is essential for trust and accountability.
Consensus level
High level of consensus on core principles with constructive engagement on challenges. The agreement spans technical, ethical, and policy dimensions, suggesting a mature understanding of AI’s potential and limitations. This consensus provides a solid foundation for responsible AI development and deployment, though implementation details and specific safeguards remain areas for continued dialogue.
Differences
Different viewpoints
Transparency vs Commercial Protection in AI Systems
Speakers
– Audience
– Google representative
Arguments
There is a need for AI systems to be more transparent in how they reach conclusions, especially for policy recommendations
Google publishes model scorecards and participates in international forums to increase transparency while protecting commercial interests
Summary
The audience member demands full transparency in AI decision-making processes for policy applications, while Google’s representative emphasizes their efforts to balance transparency with protecting proprietary commercial technology through limited disclosure methods.
Topics
Legal and regulatory | Human rights | Development
Political Neutrality and Military Applications of AI
Speakers
– Audience
– Google representative
Arguments
There are concerns about big tech companies’ political involvement and use of AI in warfare contexts
Google ensures representative datasets and uses AI to detect harmful content while implementing SynthID to label AI-generated content
Summary
The audience member directly challenges Google’s political neutrality and involvement in military projects, while Google’s representative focuses on technical measures to prevent misuse without directly addressing the political neutrality concerns.
Topics
Human rights | Legal and regulatory | Cybersecurity
Unexpected differences
Adequacy of Current Transparency Measures
Speakers
– Audience
– Google representative
Arguments
There is a need for AI systems to be more transparent in how they reach conclusions, especially for policy recommendations
Google publishes model scorecards and participates in international forums to increase transparency while protecting commercial interests
Explanation
This disagreement is unexpected because it reveals a fundamental gap between what policymakers need (full algorithmic transparency for policy decisions) and what tech companies are willing to provide (limited transparency that protects commercial interests). The Google representative’s response suggests they view their current transparency measures as sufficient, while the audience member clearly finds them inadequate for policy-making purposes.
Topics
Legal and regulatory | Human rights | Development
Overall assessment
Summary
The main disagreements center around transparency requirements, political neutrality of tech companies, and adequacy of current bias mitigation measures. While there is general agreement on the importance of addressing AI bias and harmful content, there are significant differences in expectations and approaches.
Disagreement level
Moderate to high disagreement with significant implications. The disagreements reveal fundamental tensions between public sector needs for accountability and transparency versus private sector concerns about commercial protection. These disagreements could impact AI governance policies, public trust in AI systems, and the development of regulatory frameworks for AI deployment in government contexts.
Partial agreements
Similar viewpoints
Both speakers present an optimistic view of AI’s impact on society and work. Paavola argues that AI will enhance rather than replace human capabilities, while the Google representative demonstrates commitment to ensuring broad access to AI benefits through free and subsidized programs.
Speakers
– Aleksi Paavola
– Google representative
Arguments
AI will not replace human jobs but will free up human capacity to focus on more creative and interesting work
Many Google AI products like Gemini are available for free, with special programs for NGOs in over 165 countries
Topics
Economic | Development | Sociocultural
Both speakers recognize the continued importance of human oversight and judgment in AI systems. Paavola identifies specific areas where humans maintain advantages, while the Google representative describes technical measures that still require human ethical judgment and oversight.
Speakers
– Aleksi Paavola
– Google representative
Arguments
Humans excel in emotional intelligence, creativity, ethical reasoning, and complex decision-making
Google ensures representative datasets and uses AI to detect harmful content while implementing SynthID to label AI-generated content
Topics
Human rights | Sociocultural | Development
Takeaways
Key takeaways
AI systems require three fundamental building blocks: data (for training models), cloud infrastructure (for accessible computing power), and the AI engine itself
AI should be viewed as a tool that augments human capabilities rather than replacing them – humans excel in creativity, emotional intelligence, and ethical reasoning while AI excels in speed, efficiency, and scalability
Governments can leverage AI for routine task automation, data analysis, personal assistants for citizen services, predictive analytics, and policy formulation scenarios
Effective AI implementation requires developing prompting skills using the four-step approach: persona, task, context, and format
Google’s AI tools like Gemini and NotebookLM are available for free with special programs for NGOs, supporting multiple languages and dialects
AI adoption faces significant challenges including poor connectivity, strict data localization policies, potential economic divides, and misuse for harmful content
Data quality is crucial – AI systems are only as good as the data they are trained on, and biased data leads to biased AI outcomes
Building strong data infrastructure requires government commitment, public data availability, cross-organizational data flows, and improved internet access
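The four-step prompting approach mentioned above (persona, task, context, format) can be illustrated with a small sketch. The helper below simply assembles the four components into one prompt string; the function name and example wording are illustrative and are not part of any Google API:

```python
# A minimal sketch of the persona-task-context-format prompting pattern
# discussed in the session. This helper only assembles the four parts
# into a single prompt string; it is illustrative, not an official API.

def build_prompt(persona: str, task: str, context: str, fmt: str) -> str:
    """Combine the four prompt components into one instruction."""
    return (
        f"You are {persona}. "
        f"Your task: {task}. "
        f"Context: {context}. "
        f"Respond in the following format: {fmt}."
    )

prompt = build_prompt(
    persona="a policy analyst advising parliamentarians",
    task="summarise the main risks of AI adoption in government",
    context="the audience has no technical background",
    fmt="five plain-language bullet points",
)
print(prompt)
```

The resulting string can be pasted into Gemini or any other chat-based LLM; the value of the pattern is that each of the four components is stated explicitly rather than left for the model to infer.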
Resolutions and action items
Participants were encouraged to try Google NotebookLM and Gemini tools hands-on during and after the session
Attendees were asked to provide feedback by scanning a QR code
Google announced expansion of their nonprofit program to 100 additional countries in the next quarter
Participants were encouraged to experiment with AI as personal teachers for learning new topics
Unresolved issues
How to guarantee minimal bias in AI systems and prevent their use in warfare or political manipulation
Concerns about big tech companies’ political involvement and data privacy guarantees, especially regarding server hosting in home countries
Questions about AI transparency in policy recommendations and how conclusions are reached
Whether Google AI technologies mirror historical patterns of racial segregation – presenter acknowledged lack of expertise to fully address this
Specific technical details about global availability of Gmail AI features and regional rollout timelines
Long-term solutions for addressing the digital divide and ensuring equitable AI access globally
Suggested compromises
Google’s approach of publishing model scorecards to balance transparency with commercial interests
Using representative datasets and investing in comprehensive data acquisition to address bias concerns
Implementing SynthID technology to label AI-generated content while allowing AI capabilities
Providing free versions of AI tools alongside paid subscriptions to address accessibility concerns
Participating in international self-regulatory forums like G7 Hiroshima process while maintaining commercial viability
Thought provoking comments
AI will replace human jobs… I think the reality is that what AI is really doing is freeing up the capacity of humans to do more interesting stuff. When AI is automating the boring stuff, we can focus on more creative problem solving.
Speaker
Aleksi Paavola
Reason
This comment directly challenges one of the most pervasive fears about AI in society. Rather than accepting the common narrative of job displacement, Paavola reframes AI as a tool for human enhancement and liberation from mundane tasks. This perspective is particularly insightful because it shifts the focus from competition between humans and AI to collaboration.
Impact
This reframing set a positive, collaborative tone for the entire discussion and likely made participants more receptive to exploring AI applications rather than focusing on threats. It established a foundation for the later detailed exploration of human vs. AI strengths.
Do Google AI technologies mirror historical patterns of racial segregation?
Speaker
Audience member
Reason
This question cuts to the heart of one of the most critical ethical concerns in AI development – the perpetuation of historical biases and systemic inequalities. It’s particularly thought-provoking because it connects AI technology to broader patterns of social injustice, forcing consideration of AI’s role in either perpetuating or addressing historical wrongs.
Impact
This question shifted the discussion from technical capabilities to fundamental ethical concerns. It forced both presenters to address bias mitigation strategies and demonstrated that the audience was thinking critically about AI’s societal implications beyond just its practical applications.
How can big data and AI not be used in wars? And how can we guarantee minimal bias?… what guarantees the privacy of the servers? So, for example, I’m talking about the other side of the coin, which, for any parliamentarian or any politician, is quite considerable.
Speaker
Egyptian parliamentarian
Reason
This comment is exceptionally insightful because it brings real-world geopolitical concerns into the AI discussion. The speaker, having introduced AI governance legislation, raises critical questions about data sovereignty, military applications, and the concentration of AI power in US-based companies. The reference to specific events (Gaza, employee terminations) grounds abstract ethical concerns in concrete realities.
Impact
This comment fundamentally shifted the discussion from technical implementation to geopolitical power dynamics and sovereignty concerns. It forced acknowledgment that AI adoption isn’t just a technical decision but a strategic one with national security implications. The Google representative’s response showed the complexity of these issues and the ongoing challenges in addressing them.
As a politician, it’s difficult to come up with some sort of policy or regulations on this area without having a solid basic understanding of the technology. So this session is very, very important for people like myself.
Speaker
Audience member (politician)
Reason
This comment highlights a critical gap in AI governance – the disconnect between those making policy decisions and technical understanding. It’s insightful because it acknowledges the vulnerability of policymakers and the importance of education in responsible AI governance.
Impact
This comment validated the educational approach of the session and emphasized the importance of bridging the technical-policy divide. It reinforced the session’s value and likely encouraged other participants to engage more actively with the technical content.
I think there’s still a lot we can accomplish with the cloud. All right, let’s now switch gears a bit, and let’s talk about AI. So as I mentioned, the whole media thing around AI is, I think it has caused a lot of false beliefs.
Speaker
Aleksi Paavola
Reason
This observation about media-driven misconceptions is insightful because it acknowledges how public discourse shapes understanding of technology. By identifying media hype as a source of confusion, Paavola positions himself as providing a more balanced, realistic perspective.
Impact
This comment established credibility and set up the ‘myth-busting’ section of the presentation. It prepared the audience to reconsider their preconceptions about AI and created space for more nuanced discussion of AI capabilities and limitations.
I’m somewhat worried that I see two potential pathways forward. So one of which is that if everyone can get access to these AI tools, we are going to have a really amazing future… But on the other hand, if only some of the people get access to these models, then we are going to see this huge widening of the economic [divide].
Speaker
Aleksi Paavola
Reason
This comment is particularly thought-provoking because it presents AI development as being at a critical juncture with dramatically different possible futures. It moves beyond technical capabilities to consider AI’s role in either democratizing opportunity or exacerbating inequality.
Impact
This observation added urgency to the discussion about AI accessibility and policy decisions. It helped frame the technical discussion within broader questions of social justice and economic development, making the stakes of AI governance decisions more apparent to the government officials in the audience.
Overall assessment
These key comments transformed what could have been a purely technical presentation into a rich discussion about AI’s societal implications. The progression from myth-busting and capability demonstration to serious ethical and geopolitical concerns shows how the audience moved from basic understanding to sophisticated policy thinking. The Egyptian parliamentarian’s pointed questions about bias, warfare, and data sovereignty particularly elevated the discussion, forcing acknowledgment that AI adoption involves complex tradeoffs between capability and sovereignty. The comments collectively demonstrate that effective AI governance requires not just technical understanding but also consideration of power dynamics, historical context, and geopolitical realities. The discussion evolved from ‘how to use AI’ to ‘how to use AI responsibly while maintaining national interests and values.’
Follow-up questions
How can we embed AI email writing assistance inside Gmail, and is it globally available or still in beta?
Speaker
Audience member
Explanation
This is a practical implementation question about accessing AI tools within existing workflows that affects user adoption and availability across different regions
How can big tech companies guarantee minimal bias in AI systems and prevent their use in wars, political manipulation, fake propaganda, and election interference?
Speaker
Egyptian parliamentarian
Explanation
This addresses critical concerns about AI governance, ethical use, and the potential misuse of AI technology for harmful political purposes
Do Google AI technologies mirror historical patterns of racial segregation?
Speaker
Audience member
Explanation
This question raises important concerns about whether AI systems perpetuate historical biases and discrimination patterns
How can machines be made more transparent in how they reach their conclusions, especially for policy suggestions?
Speaker
Politician audience member
Explanation
This is crucial for policy makers who need to understand the reasoning behind AI recommendations to make informed decisions
Can all Google AI applications and resources be accessed through a single device like a Chromebook, providing centralized access for developing countries?
Speaker
Politician audience member
Explanation
This addresses accessibility and practical implementation challenges for users in developing countries with limited resources
What guarantees exist for data privacy when hosting servers in home countries, and how can governments ensure their data won’t be used for political purposes?
Speaker
Egyptian parliamentarian
Explanation
This relates to data sovereignty concerns and the need for governments to protect sensitive national data from potential misuse
How can AI principles be effectively implemented and lived up to, not just written down?
Speaker
Aleksi Paavola (implied)
Explanation
This addresses the gap between stated AI principles and their practical implementation in real-world applications
How can we ensure equitable access to AI tools to prevent widening economic divides?
Speaker
Aleksi Paavola
Explanation
This addresses concerns about AI creating or exacerbating inequality if access is limited to certain populations or regions
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.