Open Forum: Empowering Bytes / DAVOS 2025

21 Jan 2025 08:30h - 09:45h


Session at a Glance

Summary

The panel discussion focused on digital safety and empowerment in the age of AI and data proliferation. Participants explored the challenges and opportunities presented by increasing data collection and use, emphasizing the need for greater awareness and education about digital safety. The conversation highlighted concerns about data privacy, consent, and the potential misuse of personal information, particularly for vulnerable groups like children and indigenous communities.


Panelists discussed various approaches to addressing these issues, including policy reforms, legislation, and multi-stakeholder dialogues. They emphasized the importance of balancing the benefits of technological advancements with the need to protect individual rights and promote data equity. The discussion touched on the environmental impact of AI and data centers, noting both the energy consumption concerns and the potential for AI to optimize resource use in other sectors.


The panel stressed the need for a more sophisticated conversation around data usage, considering different types of data and their appropriate applications. They highlighted the importance of incorporating diverse perspectives, including indigenous knowledge, in developing data governance frameworks. The discussion also explored the role of businesses, governments, and international organizations in promoting digital safety and responsible AI development.


Participants expressed hope for the future, predicting continued rapid development in AI technologies while emphasizing the need for holistic approaches to security and safety. They called for greater education and awareness among users, as well as more robust systems for digital identity and cybersecurity. The panel concluded by underscoring the importance of public engagement and voice in shaping the future of digital safety and empowerment.


Key points

Major discussion points:


– The importance of digital safety and data protection, especially for children and vulnerable groups


– The need for greater awareness and education around data privacy and digital literacy


– Balancing the benefits of data sharing and AI with potential risks and environmental impacts


– The role of policy, legislation, and multi-stakeholder cooperation in addressing digital safety challenges


– Empowering communities and incorporating diverse perspectives in technological development


Overall purpose/goal:


The discussion aimed to explore ways to ensure data and technology are used responsibly to benefit society and promote social justice, while mitigating potential harms and risks.


Tone:


The overall tone was thoughtful and solution-oriented. Speakers acknowledged both the promises and perils of digital technologies. The tone became more hopeful towards the end as panelists shared predictions and aspirations for positive developments in 2025. There was an emphasis on the need for continued dialogue and cooperation to address challenges.


Speakers

– Helena Leurent: Moderator, consumer rights activist based in Geneva


– Peter Lucas Kaaka Jones: CEO of Te Hiku Media, works on teaching computers to speak Māori


– Amanda Graf: Student at Swiss Alpine Middle School


– Lauren Woodman: CEO of DataKind, focuses on using data science to combat global issues


– Bilel Jamoussi: Deputy Director at the International Telecommunication Union


Additional speakers:


– Audience members: Various unnamed audience members who asked questions


Full session report

Digital Safety and Empowerment in the Age of AI: A Comprehensive Discussion


This panel discussion, moderated by consumer rights activist Helena Leurent, brought together experts from diverse backgrounds to explore the challenges and opportunities presented by increasing data collection and use in the age of artificial intelligence (AI). The conversation centered on digital safety, data privacy, and the empowerment of individuals and communities in an increasingly data-driven world.


Key Themes and Discussions


1. Digital Safety and Data Protection


A primary focus of the discussion was the critical need for enhanced digital safety and data protection, particularly for vulnerable groups such as children and indigenous communities. Amanda Graf, a student at Swiss Alpine Middle School, emphasized the importance of educating young people about data usage and digital literacy. She shared insights on smartphone usage among her peers and stressed the need for better education on responsible digital media use.


Peter Lucas Kaaka Jones, CEO of Tehiku Media, highlighted the unique challenges faced by indigenous communities regarding data governance and consent. He stressed the importance of community ownership and control over data, asserting that “Māori data must be subject to Māori governance.” Jones also discussed Te Hiku Media’s groundbreaking work on teaching computers to speak Māori, demonstrating how AI can be used to preserve indigenous languages and knowledge.


Bilel Jamoussi, Deputy Director at the International Telecommunication Union (ITU), presented a nuanced view of data sharing as a spectrum, ranging from complete privacy to openness. He suggested that individuals should be able to choose their level of data sharing based on personal preferences and desired services. Jamoussi also introduced the ITU’s AI for Good platform, which aims to harness AI for achieving the UN Sustainable Development Goals.


Lauren Woodman, CEO of DataKind, noted a growing sensitivity to personal data usage and protection. She called for a more sophisticated conversation around data types, usage, consent, and agency, highlighting the complexity of data issues and the need for nuanced approaches. Woodman also emphasized the importance of data equity, ensuring that the benefits of AI and data analysis are accessible to all communities.


2. Environmental Impact of Technology


The discussion touched upon the environmental implications of AI and data centers. Bilel Jamoussi highlighted the potential of AI to optimize energy use across various sectors. Lauren Woodman emphasized the need to balance the environmental costs of data centers with the benefits they provide. Peter Lucas Kaaka Jones suggested harnessing traditional knowledge for sustainable tech infrastructure, bridging indigenous wisdom with modern technology.


3. AI and Data for Social Good


Panelists explored the potential of AI and data to address global challenges. Bilel Jamoussi highlighted applications in healthcare, agriculture, and disaster management. Lauren Woodman stressed the importance of including diverse voices and data in AI development to ensure equitable outcomes. Peter Lucas Kaaka Jones discussed using technology to preserve indigenous languages and knowledge, demonstrating how AI can serve cultural preservation goals.


4. Policy and Legislation for Digital Safety


The need for robust policy frameworks and legislation to address digital safety challenges was a recurring theme. Bilel Jamoussi called for multi-stakeholder dialogue on AI and data standards, emphasizing the role of international organizations like the ITU in improving cybersecurity. Peter Lucas Kaaka Jones highlighted the importance of balancing policy and legislation as AI use grows, particularly in the context of indigenous rights and data sovereignty. The panelists also discussed the role of businesses in protecting user data and the need for international cooperation in establishing digital safety standards.


5. Future of Digital Safety and AI


Looking towards the future, panelists shared both hopes and concerns. Bilel Jamoussi predicted growth in digital identity solutions for online protection and mentioned the upcoming International Year of Quantum in 2025. Lauren Woodman anticipated continued rapid development of AI models, emphasizing the need for a holistic security approach. Amanda Graf expressed hope for more benefits and education around digital media usage for younger generations.


Areas of Agreement and Disagreement


There was broad consensus among panelists on the need for increased awareness and education about data usage and digital safety. All speakers recognized the importance of balancing the benefits of AI and data usage with potential risks, including environmental impacts.


However, differences emerged in approaches to data governance and ownership. While Peter Lucas Kaaka Jones emphasized community ownership and consent, particularly for indigenous communities, Bilel Jamoussi described data sharing as a spectrum of individual choice. This highlights the tension between collective and individual approaches to data rights.


Thought-Provoking Insights


Several comments stood out for their ability to shift the conversation and introduce new perspectives:


1. Peter Lucas Kaaka Jones challenged the notion that modernization equates to Westernization, stating, “Being modern doesn’t mean being Western.” This comment broadened the discussion to consider non-Western approaches to technological progress and digital rights.


2. Lauren Woodman reframed digital safety as an enabler of personal goals, saying, “Part of being digitally safe is having an environment in which we can use these tools to accomplish what it is that our own respective personal goals may be.”


3. Bilel Jamoussi demystified AI by breaking it down into its core components of data and algorithms, facilitating a more structured discussion of its applications and implications.


4. Peter Lucas Kaaka Jones emphasized the importance of the “do no harm” principle in AI development, particularly in the context of indigenous communities.


Unresolved Issues and Future Directions


The discussion highlighted several unresolved issues requiring further attention:


1. Balancing rapid AI development with adequate safety measures and regulations


2. Addressing the digital divide and ensuring equitable access to technology benefits


3. Determining appropriate levels of data sharing and privacy for different contexts and cultures


4. Measuring and mitigating the full environmental impact of AI and data center growth


5. Establishing clear accountability and governance structures for AI decision-making systems


The panel suggested several action items, including developing international standards for digital safety, increasing education efforts around digital literacy, incorporating diverse voices in AI development, and exploring quantum computing for improved energy efficiency in data processing. The upcoming Open Wallet Forum was mentioned as an initiative to address digital identity and data sharing issues.


Conclusion


This comprehensive discussion underscored the complex interplay between digital safety, data privacy, and technological advancement. It highlighted the need for a nuanced, culturally sensitive approach to data governance and AI development that balances innovation with protection of individual and community rights. As we move forward, continued dialogue, education, and international cooperation will be crucial in shaping a digital future that is safe, inclusive, and empowering for all.


The interactive nature of the discussion was evident through audience questions, which touched on topics such as the role of businesses in data protection and the potential of AI in various sectors. This engagement underscored the widespread interest and concern surrounding digital safety and AI development across different stakeholder groups.


Session Transcript

Helena Leurent: Amazing, is my mic on now? You can all hear me? Wonderful. Thank you so much for being with us here this morning. Absolute pleasure to have you in this session, which is called Empowering Bytes. So the focus will be on the word empowering, because we’re trying to find those models that help make sure that the data we put out there into the marketplace is used for our benefit and for social justice, and not for all of the other dark and dubious ways that it might be and is being used at the moment. My name’s Helena, Helena Leurent. I’m based in Geneva. And in my day job, I’m a consumer rights activist. We’re a network of consumer rights organizations all over the world. And although we focus a lot on food and finance and energy, digital safety is one of the biggest issues that our network fights for. And so in my day job, I work with organizations that point out that data is being misused from our smart devices. Like, this is Which? in the UK. In Brazil, IDEC fights for legislation on privacy. In the US, Consumer Reports builds tools that help you understand where your data is being used. There’s this great tool called Permission Slip. So in my day job, I worry about this a lot. And so I’m sharing with you my bias, so that on this panel, you can challenge me with my biases, and we can build together and fact check each other, because that’s a bit of a theme at the moment. I’m really excited to understand, well, how do we think, in this room, we should be building that? But of course, this is live streamed as well. So hello out there in the audience. And I’m asked by the forum staff to say, if you’re going to share anything, please use a hashtag, hashtag WEF or hashtag OpenForum25. The way we’re going to run this is we’re going to hear from amazing experts who can really help us and help me think through this topic, and we’ll have about 20 minutes, and even more if I can get to that, of questions from the audience. So do please take advantage of that and let’s build together. I do want to say, it’s a bit dark out there, I can’t quite see, but I’d love to see who’s in the audience, because we’re here in the Swiss Alpine Middle School in Davos, and I know that Amanda, who is representing it here on the stage, also has some classmates in the room. Can we hear from them? You’re here in the front rows, I think. Hello, yay, congratulations. I understand you might be missing a maths class at the moment, is that right? With apologies to your maths teacher, and thank you for letting these wonderful students be part of the conversation. I think we may also have representatives who are residents of Davos, if you’d like to wave your hands, and if not, and I can’t see you, thank you very much for your hospitality this week to everybody coming to the World Economic Forum in Davos. I know it’s a bit of a disruption in your magical, beautiful town, and I hope anybody who’s coming in from outside Davos visits around the year and during the summer as well, when it’s just amazing. So, with that, let me introduce those on the panel. To my left, Amanda Graf, a student and excellent debater from the Swiss Alpine Middle School. Please welcome her to the stage. To her left is Bilel Jamoussi. He is the Deputy Director at the International Telecommunication Union, which is a global organization that really thinks about the standards for our digital safety. So welcome, thank you so much, Bilel.
To his left, Lauren Woodman is CEO of DataKind, which is an organization that really thinks about how we use data science to combat global issues or to build better. So thank you very much for joining us. And to her left, Peter Lucas-Jones has joined us from pretty much the other side of the world in New Zealand. He is the CEO of Te Hiku Media, which is an amazing organization, and we’ll hear about how he’s built that to empower people. So thank you for joining us. Fantastic. Okay. So I want to sort of do a quick round, because this session, you know, it started out with that point of, you know, how many of us are on the internet now. Of course, 2.4 billion people aren’t on the internet, so, you know, we need to have a global conversation here. But when we are on the internet, we’re spending six hours on average. I was seeing stats, you know, you pick up your phone between 50 and 150 times per day. What else do you do that frequently? You know, maybe I don’t want to know that, but, you know, it’s sort of like how much of our lives, and it’s not just there, right, it’s smart devices, it’s your car, it’s when you shop online. You know, when you think about digital safety, it’s all around. And this year, I actually, persuaded by my family, actually, my kids, I turned grayscale, so I don’t actually go on that much. I’ve gotten far more sort of conscious about geolocation. I’m just, you know, this notion that our data is then used, sold on. We really don’t know how that’s used. And the level of scams, you know, in particular that we’re seeing, the level of inequality that that exacerbates, that that cements into our society, has really struck me this year. So I don’t know, I wonder if on the panel, you know, have any of you become more aware of how you use your data, and what have you done? Lauren, perhaps, can I come to you first?


Lauren Woodman: Sure, thank you, and thank you all for joining us today. It’s one of those topics I think we could spend lots of time on and don’t spend enough time on. I think like you, I’ve gotten a lot more sensitive to how my data is used, and that’s a conversation that I think if you had asked me five or six years ago, I probably wouldn’t have had the same reaction. I think I’m very conscious now of rejecting all cookies, rejecting third-party cookies, routinely clearing those things out. I got on a bit of a kick last year and wrote letters to many companies saying, you no longer have the right to use my data, please strip my data out, because I got really frustrated with a bunch of phone calls I was getting to, I don’t know, tile my bathroom or something, but it came from an online inquiry form and I got on a bit of a kick about it. I think part of that has also been watching my teenagers, I have teenage children, watching them really be very sensitive around how their data is used and how their friends use their data. And that I think has spurred a bit of a conversation. And I think a lot of us are coming to that realization that it’s very personal, we all make different choices, there are good sides and bad sides to data usage, but really learning to balance that and knowing enough to make those choices personally, I think is really important.


Helena Leurent: Yeah, and when you sort of then link your own use of data to what’s happening in the world, what are the sort of trends or the dangers or the groups that perhaps you feel, well, this is why it’s so important that I care?


Lauren Woodman: Yeah, I think one of the things that I’ve spent a lot of time thinking about is this question of data equity. And I think we’ve all been part of the conversation that a lot of the underlying data that is being used to train large language models is not representative of the global community. It’s not really even representative of the internet-using community, much less the global community. And thinking about that, and thinking about how do we take steps so that the right data is used in order to get the right outcomes, becomes really important. And from a personal level, I wanna see us sort of up-level the way that we think about data usage and the equity questions, not only in the data that’s used to train models and the data that’s used to make decisions, but also how those outcomes are applied equitably across the board.


Helena Leurent: Perfect, thank you. Anybody else on the panel, would you like to talk about sort of how you think about your data and protect your data and how it’s used? Peter, Peter Lucas.


Peter Lucas Kaaka Jones: I like to think about it, first of all, by depersonalizing it and recognizing that as a person, I’m a member of a whanau or a family. And as a collection of families, we are members in Aotearoa of sub-tribes or hapu. Those hapu or sub-tribes form part of larger tribes. And collectively, data belongs to us rather than we owning that data. So the way we look at things, I think, is really important, because modernizing our perspective of data usage, data collection, and of course, data governance, means that we have to recognize that being modern does not necessarily mean being Western. So when we start to think about including others, including everyone, sometimes it means that we have to open up our mind. And so what I’ve found in the work that I do, teaching computers how to speak Māori, an indigenous language, is it’s a way of unlocking culture. It’s a way of unlocking traditional knowledge and applying those principles, values, concepts, and those types of things to the way that we make decisions, but also the way that we develop resilience strategies and provide new and meaningful ways of looking at the world. So when I think about how our data is being collected without our permission, I think it’s something that can be akin to the privatisation of land. And when indigenous peoples were colonised, for example in Aotearoa, we saw many of our people become landless. And if we do not take on board those lessons of the past, it’ll be difficult to understand fully what the digital world could hold for us. I see there’s an opportunity to bridge the gap between those that have and those that have not, but for us to do that and be part of a cooperative approach to making things better, I think it’s important that we understand and learn that there needs to be greater boundaries, there needs to be greater perspective on what the policy nationally and internationally around gathering data without people’s consent means now and into the future. Do we opt in or do we give people the option to opt out immediately? Kia ora.


Helena Leurent: Yeah, that’s beautiful, thank you. Amanda, coming to you, how do you think about your data? Do you and your classmates think the approach to how our information is used or owned or the thinking, is that evolving?


Amanda Graf: Well, I think I myself could be much more cautious with my data. I think there is a big problem with the education around it, because we don’t really get taught what it means for our data online, and when we accept the cookies, what that really means. We just do it because we’re used to it. But I think if people would educate us and our generation more, a lot of problems and risks online could be avoided.


Helena Leurent: Thank you. And Bilel, are you seeing that, folks? Well, I want you to share, you know, everybody is sharing their own sort of personal story here as well. I mean, I’d love to hear how you’d respond.


Bilel Jamoussi: I want to pick up on what Amanda is saying. It’s really a spectrum. On one hand, you can completely be out of the virtual world, not be connected to any social media, and not have any of your data available on the internet. Or, on the other hand, you can have no gatekeeping in terms of what data you’re willing to share and what transparency mechanisms exist in sharing your data. So in that spectrum, from completely being open to completely locked out, you need to find that sweet spot, which is based on knowledge and awareness and transparency in how your data is going to be used. So awareness of the user is a critical part of this conversation.


Helena Leurent: If we can go back to the audience, how many of you, I won’t ask specific ways of doing things, but how many of you do you think have become more aware, perhaps, in the last year and taken action to do something differently about how your data is used? Would you raise your hand if you think your attitudes have changed? Yeah, that’s quite a lot of the audience. If you have any tips and tricks and things that you think you can be doing and how you get your voice out there to make change happen and call for change, we’d love to hear those. OK, so let’s dig into some of the stories and dig into some of how this works. Amanda, I want to come back to you. As you think about, you’ve already touched on there needs to be greater awareness. What would you like to see built? What is your vision of a world with better digital safety and better social justice? What might that look like?


Amanda Graf: I think we should be taught, especially in school now, what the risks are of being online and on media or on social media. And I think there should also be a better communication between the users and the authorities of different kinds of media and apps, so that they can be helped better with their issues. And yeah, I think really the key to a better online and media security is education within our generation, but also older generations, just like overall, people should be more aware of what they are doing when they are online.


Helena Leurent: Okay, so it’s about awareness and then that representation into the system as well. Yeah, no, that’s crucial. Peter, Lucas, can I come to you? I’m fascinated by the way in which you’ve built an entirely new approach that sort of fits in with the values and the sort of the view of how things can be better that you described earlier. Can you tell us a bit of the story of Te Hiku, how that built up and sort of, yeah, teach us more about what happened.


Peter Lucas Kaaka Jones: Originally, we were an iwi radio station, a tribal radio station. We’re one of 21 in Aotearoa, New Zealand. We were born out of a response to the systemic language discrimination that Te Reo Māori experienced and the harm that had, intergenerationally, for transmission of the language. And of course, language and culture, they’re so connected. With the decline of language, so too came the decline of culture. Our station was a response to that. We wanted to hear our news, our current affairs, listen to Māori music, hear Māori voices. And so that became our data pipeline. After 30 years of growing trust in the community, we wanted to document and transcribe our archives, 30 years of native speaker, idiomatic expression, colloquialisms, the cornerstone of indigenous knowledge. And when we embarked on that journey, we realised quickly that we could not transcribe the information that we had collected, not quickly. And so we decided to teach computers how to speak our language. Our technology operates at a 92% accuracy rate, and with our philosophy of do no harm, for us, we always strive to do better. But a 92% accuracy rate for our speech-to-text technology has been part of our journey. We’ve now embarked on developing speech synthesis, and it’s bilingual, because the speakers of our language are bilingual. They’re code-switchers. But we also recognise that when we save a language, we’re contributing to saving the planet. When a language dies, it’s like, someone said, I can’t think of their name, but it’s like a national library has just burnt to the ground. But the world has taught us to be deaf to that. And so that has been our journey, because we understand that climate change and resilience requires us to understand the stories of the past. And so the observations that our ancestors have recorded in our language over hundreds, if not thousands of years, are things that we want to harness and share with the world. So that when we think about the Pacific and its place in the climate conversation, we are not forgotten, but together we remember.
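A note for readers on the figure above: speech-to-text accuracy of the kind Jones cites is conventionally reported as 1 minus the word error rate (WER) on a held-out test set. The sketch below shows the standard WER calculation; it is purely illustrative and is not Te Hiku Media’s code, and the function name and example strings are invented for demonstration.

```python
# Illustrative sketch: how a speech-to-text accuracy figure such as
# "92%" is commonly derived as 1 - word error rate (WER).
# This is NOT Te Hiku Media's code; names and data are hypothetical.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance: (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical reference transcript and model output:
reference = "tēnā koutou katoa e hoa mā"
hypothesis = "tēnā koutou katoa e hoa"
wer = word_error_rate(reference, hypothesis)
print(f"WER: {wer:.2%}, accuracy: {1 - wer:.2%}")  # WER: 16.67%, accuracy: 83.33%
```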


Helena Leurent: Yeah. And what I loved, I mean, there were so many amazing things about how you’ve built this. One of them, as I was reading, was that as you take the next step and start working with organisations to help you scale up or help you do this, you have looked at the licensing agreements that ensure that that value is not stripped out, that that value comes back, and that the people who are contributing that language continue to, not own, but continue to have a fair benefit from it. Can you talk a little bit about how you approach that? Because that’s going up against some big giants out there.


Peter Lucas Kaaka Jones: That’s correct. And I think for us, it’s very much been about accuracy and precision. It’s been about ensuring that benefit comes back to the community that the language belongs to, that the knowledge belongs to, and whilst we want to share, we also want to find ways to develop capability, to develop capacity, to grow talent, and that requires us to be innovative in our approach to data governance. Our philosophy, our way, and our belief is that Māori data must be subject to Māori governance: Māori data governance. Which is very similar to the way that other indigenous people view data, and one of those things is DNA. One of those things is our knowledge of bird migration. One of those things is our understanding about how animals move in the world. Our knowledge of the stars, the way that our people traveled throughout the Pacific, navigating by the stars, wayfinding, that story connects the other languages in our language group. What we have achieved for Te Reo Māori in terms of scaling would provide a backbone for other indigenous languages in the Pacific to benefit. They too could benefit from what we’ve created. Realistically, that would be an opportunity for those language groups, which are minority languages, that have the keys to their culture, but collectively we’re part of a bigger culture. Our methodology and the way in which we have developed our data license is about guardianship. It’s about protecting not just our past, but our present and our future, through applying kaitiakitanga, rangatiratanga, whakapapa, those concepts that are ingrained in our society as Māori people, and the way that we make decisions. And so it’s less about who we are as individuals, and more about the people that we belong to. So it’s less about the individual, and more about the community. It’s less about our rights as individuals, and more about our responsibility to those that we come from and those that we share whakapapa with. And one of the proverbs that we have that guides us is “e hoki whakamuri, kia anga whakamua”: look to the past and forge your path into the future.


Helena Leurent: It’s beautiful, thank you. Can I ask, I know you’ve been approached by a number of groups from around the world asking, well, how would we do the same? I mean, when I looked, you started in the 1990s, there’s a vast amount of technical expertise and knowledge, and, you know, there are a lot of factors. Could that be done, though? Could you see, well, anybody, any community, what would the factors be that would enable a community to take the same approach?


Peter Lucas Kaaka Jones: Understanding fully and committing to understand more the data that we use as data scientists. Many data scientists do not understand the data that they are using to create models, whether that’s machine learning, artificial intelligence, whatever we want to call it. But when we recognize the profound ability of community members to understand their own language data, their own cultural data and interpret that and tag and label the phonetic features of the language that belongs to their culture, we recognize an ability to grow talent within our community. And I think ensuring that we balance the way that we value people, not just those that have had professional training and an education that only money can buy, but those that have experienced life and understand the data from that perspective too.


Helena Leurent: Amanda, you got it exactly right, and we should all march upstairs and start that maths class right now, really. Yeah, so there we go. Thank you. Lauren, you started out sort of sharing how you have put your own voice out there. You’ve shared this sort of concept of data equity. Talk more about how we would build that, and what are the examples you can see, perhaps, which are emerging, where that is being put into practice?


Lauren Woodman: Yeah, I think, I mean, this is a beautiful story for lots of different reasons. But one of the things that strikes me is, you know, part of being digitally safe is having an environment in which, you know, we can use these tools. I mean, technology is just a tool, but we can use these tools, you know, to accomplish what it is that our own respective personal goals may be. And it could be studying for a maths class, it could be buying presents for the holidays, it could be learning something new, it could be any number of different things. But that environment, the internet that we want, the online experience that we want, really has to speak to the communities that comprise the internet, which is all of us. And if we don’t have those voices, you know, that really creates a scenario. I mean, I thought your comment that being modern doesn’t mean being Western, like I literally wrote it down. I was like, that encapsulates all of it. What is the technology world that we are trying to create in which we feel safe and secure to realize our own potential? We can only do that if that world represents who we are. And that means including those voices, it means including data that represents all of us in the models that we are building. It means that building in that community knowledge is really important. And, you know, in my role, I’ve heard lots of these stories over the years, but we were speaking briefly yesterday: it is important not to lose those local voices. And it’s important not to lose that local knowledge, because we end up making decisions that are bad decisions if we don’t do so. And, you know, a colleague of mine that I’ve worked with over the last couple of years was telling me this wonderful story, maybe terrible story, about some work that had been done in Kenya to fight desertification. And so the government had done some research into what trees grow well here. And so they had come into communities and shared seeds, and trees had been planted, and all of that kind of stuff. And everybody thought, yay, we’re making progress against desertification and reclaiming land, only to come back a couple of years later to find that they had all been dug up and thrown away and no one was doing this anymore. Because what local farmers knew was those trees in particular sucked a lot of water out of the water table. And so the cash crops that those communities depended on couldn’t grow and didn’t grow. And so everybody said, okay, well, we know what the problem is, let’s go fix that. It’s that kind of data exchange, that kind of local knowledge, that kind of local representation that becomes really critical, especially when decisions are being made by algorithms or in automated systems or in ways where we may not have direct personal agency. When decisions start getting made on behalf of all of us by systems, you wanna make darn sure that the data about your community and that local knowledge and your representation is part of that equation.


Helena Leurent: It feels, Bilel, I’m coming to you. It feels like decisions are being made, especially this week on our behalf without necessarily our involvement. It doesn’t feel like a very sort of good trajectory that we’re on in terms of actual representation and care and safety. Where are the points of hope perhaps in the system?


Bilel Jamoussi: Yes, I think the point of hope is that we have an international framework in which we operate. So the ITU is the UN digital agency, the specialized agency for digital technologies. And cybersecurity is not a new topic. It just evolves with new technologies. And the ITU, for example, has a number of frameworks that help countries develop and improve their cybersecurity posture and data protection posture. For instance, we issued the Global Cybersecurity Index. We issued one revision last year, where all the member states, the 194 countries in the ITU, report on various aspects of data protection and cybersecurity: whether there is a national cybersecurity strategy, if there is legislation in the country, if there is capacity building, data protection, computer incident response teams, all of the tools that are necessary to protect the country’s cybersecurity space. And the time we issued this cybersecurity index before that was 2021, and what we have seen in the last three years is quite an improvement in a number of countries, especially in Africa. There is a lot more awareness, there are a lot more actions in terms of having more cybersecurity strategies in the country. But what we saw, which is quite interesting, is a big gap between the legislation and the strategy and the capacity to implement, because you need cybersecurity engineers, you need awareness at the user level, especially the more vulnerable communities, whether it’s children that have not yet grasped the dangers of opening up everything, or the elderly that may not have the literacy to be online. And so that cybersecurity index work that we do with all the countries raises the awareness at the national level. And another pillar that helps countries implement cybersecurity are the standards, and the ITU issues a number of standards which are actually developed by the private sector and the governments hand in hand. The ITU is the only standards organization that has 194 member states, and it’s the only UN agency that has a thousand private sector entities and 200 universities and research institutes. So the outcome of the conversation and the guidelines and the standards are really a multi-stakeholder approach and fully representative. And from those standards, we can then implement child online protection. We have personally identifiable information standards, PII. And with those standards, governments have the tools to implement the cybersecurity legislation, the cybersecurity agenda and strategy. So by putting all those pieces together, that’s how we can help navigate what I call the sword and shield. Because you protect, and then there is a new technology that comes up, and you have to come up with new protection. Today, we talk about quantum computers and the fact that they can break the usual hash functions, back to your maths class, that encrypt the traffic on the network. We look at AI as a new tool that could either help in cybersecurity or could be a threat in cybersecurity. So we need to continue that dialogue. It cannot be governments alone. It cannot be the private sector alone. That continuous multi-stakeholder dialogue is the one that helps us keep the framework safe and raise the awareness and ensure that the consumer is aware of what their data is going to be used for. If their connection is not secure, what are the dangers of not being secure?
One example of a standard that has been around for 30 years and has been evolving is the public key infrastructure, which is the standard that encrypts the traffic on the internet. And that keeps evolving as new threats come up and new solutions are provided.
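For readers curious what that looks like in practice, the sketch below, using only Python’s standard library, performs a TLS handshake in which the server’s certificate is validated against the system’s trusted certificate authorities; that trust chain is the public key infrastructure Jamoussi describes. The host name is an arbitrary illustrative choice, not one mentioned in the session.

```python
# Minimal sketch of PKI in action: a TLS connection in which the
# server's X.509 certificate is verified against trusted root CAs.
# Standard library only; the host is an arbitrary example.
import socket
import ssl

host = "example.com"  # arbitrary illustrative host

# The default context loads the system's trusted CA certificates and
# enables both hostname checking and certificate verification.
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()  # parsed, already-verified certificate
        print("TLS version:", tls.version())
        print("Issuer:", dict(pair[0] for pair in cert["issuer"]))
        print("Valid until:", cert["notAfter"])
```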


Helena Leurent: OK, there’s more we could talk about on that, because I’m horrified by the growth in scams. So yeah, we can see how that goes. But we have got to the time where I really want to hear from the audience. Is there anybody who would like to pose a question? Would you please raise your hand, and we can take three or so at a time. So I have one here, I have one at the back, and one just there. Would you please stand up and ask your question, and if you can keep it quite brief, then we can hear from three. Yeah, this gentleman there with the blue lanyard, or whoever is closest to the mic. Go for it. Go for it.


Audience: Ah, thanks, I appreciate it, your discussion is great. I am Kunihara from Japan. I have a question for you. In Japan, children’s dependence on smartphones is a problem. What kind of problem is there in the AI era and in life? I’d like to hear your opinion. Thank you.


Helena Leurent: Awesome, thank you. Let’s take, there was somebody with their hand up in the middle, yes, the lady, yes, in the middle.


Audience: Yeah, hi, thank you so much. I’m an immersive technology specialist working for the UK government, and my interest is what your thoughts are on the collection of people’s biometric data off of VR and AR kind of devices, and also your thoughts on the fact that law enforcement worldwide really has no idea about what this stuff does. If you have any thoughts on that.


Helena Leurent: Yeah, lack of awareness all round, and the gentleman standing at the back, I believe, yes. Perfect, thank you.


Audience: Hi, oh, that’s a loud mic. Hi, guys, my name is Hugh. I’m from Canada, and I just have a question on just the, on more of the spectrum of, of course, this is more focused on, the overall thematic is Empowerment Bytes, and, but we’ve been talking about digital safety, so I wanted to pose the question of to, how do you. find a balance of properly engaging digital safety and incorporating those methods, but also being able to have a accessible network and a proper system towards developing nations? And is that different based on cultural backgrounds or contextualization, and how do you kind of approach those? Or is it more, are we dedicated in pursuing a more refined universal standard on digital safety? Or is it changing based on nation to nation, or country to country, or city to city? Thank you.


Helena Leurent: Awesome. Right, we could continue for the rest of the day on some of these. Let’s pick up, Australia is considering banning the use of smartphones for people under 16. So let’s start with that point about child safety, and especially given the oncoming AI. Is there anybody who would like to address that? Bilel, yeah, please.


Bilel Jamoussi: Thank you for the question on child safety. I mean, it includes the hours that kids spend on the phone. It impacts the hearing. With the WHO, we published the standards on safe listening. And many of the phone manufacturers are implementing that to reduce the volume and the duration. We just started some work on safe viewing, because with six hours looking in the same direction, there are impacts on the eyes. But also, in terms of being less on the phone, that is a real problem. Not only in Japan, I think we see it across the globe. And I think that’s where the schools and the younger settings of education are an opportunity to raise the awareness. I don’t think banning digital equipment would work. I think we talk about a new generation, a digitally connected generation. So there is more education, more awareness to the kids to know what dangers they put themselves into when they spend so much time, whether it’s to their hearing, to their eyesight, to their socializing, and having more real connections other than the virtual connections.


Helena Leurent: Any other? Yeah, please.


Peter Lucas Kaaka Jones: What we’ve noticed in Aotearoa, particularly with the members of our own tribe, Te Aupōuri, is that we have a very young population. The average age of our people is 21. But we’ve also noticed that many of our infants are exposed to devices as young as four months old. And also, the type of content that children are exposed to, the content created, may not actually be the sort of content you would want your children to be exposed to. So I think when we take that on board, we need to understand a range of different concerns. And the development of the brain, for example, how is that affected by continual screen time when we don’t really understand what the impact of that is on child development? So I think there are a lot of questions, and there is perhaps a need to be more cooperative in the way that we address that, not only regionally, but internationally.


Helena Leurent: And perhaps the biometrics question. Amanda, banning? No banning? You’d be OK if smartphones were banned for a certain age younger than you? What do you think? Would that work?


Amanda Graf: I think smartphones can also open up a big variety of benefits. For example, AI, we can use that to study, for example. Or it’s really helpful in daily life. So I think completely banning it until the age of 16 is a bit over. It’s a bit too much. I would say maybe until the age of 10 would be appropriate. Because I think from then on you can really be cautious, and if people would educate the younger ones, then it would even be better, because that would just mean that people are more cautious and can use the digital world as a benefit.


Helena Leurent: Awesome, thank you. Okay, new legislation coming, the Graf agenda, I like it. Biometrics, briefly, and I know that the team is like, Helena, you’re totally over time, but I think this is a great question. How would you approach the use of biometric data in this?


Bilel Jamoussi: So we had a focus group on the Metaverse at the ITU that brought, it was an open platform for every participant, and came up with a number of reports and guidelines on the use of AR, VR, XR, and I would invite you to look at, just search for the ITU focus group on the Metaverse, we have a number of reports that guide on the use of biometrics in the virtual space. In 2024 we held the first UN virtual day, and we have the second one in June in Torino, Italy. That’s the way that we approach it, by having a conversation that is multi-stakeholder on the benefits of AR and VR and virtual reality and the Metaverse, but also on the challenges and how to address them.


Peter Lucas Kaaka Jones: Just quickly, I’d like to just mention that in Aotearoa, New Zealand, for example, Māori people are very much incarcerated, and when we look around the world, that story is shared by other indigenous peoples that have experienced colonisation as well. Biometric data that is gathered without consent and then used inappropriately is a growing concern, particularly where law and order and decisions around people’s future are being made. So likewise, when we think about children, I think we need to think about their parents as well, because when parents are incarcerated, they don’t have caregivers, and they often find themselves in places of the state. And so I think we really need to take a well-rounded approach when we think about how this type of data informs decision making and the people it impacts on.


Lauren Woodman: I would just speak to the question about, you know, sort of the range of options that we have. This biometric question, I think, sort of fits into that a bit, which is: there are very appropriate times to use biometric data for very legitimate reasons. I’m not sure that it being collected without consent, you know, because I’m playing a video game, is the right level of consent. And I think we have to have a much more sophisticated conversation around what kind of data for what kind of usage, with what kind of consent, with what kind of agency. And we cannot assume that a single approach works for all data. It will not work for all environments in which that data might potentially be used. Even if it’s appropriate today, it may not be appropriate tomorrow as technology changes, and we have to think about how do we remedy those situations where data is being used for purposes for which it was not collected, intents for which it was not collected. And that is a much more sophisticated conversation. To Amanda’s point about education, I think a lot of us came into the technology age thinking, my data is not that important, what would anybody want to know about me? You start to break that apart, and all of a sudden that becomes, you know, a real question. There are questions around appropriate use, appropriate scenario, for what time period. And the one thing I would say is all of us have to lean into this conversation a lot more aggressively, because governments have frameworks, but they don’t actually know all, they can’t know all of the different use cases in which inventors and innovators are coming up with ways to use data. All of us as consumers can’t know all of those different ways, and those voices have to come forward, because if it’s technology that we want, and if it’s technology that we want to use to benefit us, we have to be part of that conversation, and that’s gonna mean that we all have to engage and understand, you know, how data is being used and what agency we maintain over it.


Helena Leurent: How would you, what would you want anybody in this room or listening to express that, to create their form of representation in that system?


Lauren Woodman: I think that’s a great question, and I think we don’t have great methodologies to do that. That having been said, local communities are a great place to start. I think, you know, consumer organizations, organizations with which you are particularly involved, are important. I think parents’ groups are important. I think talking to your local legislators is important. I think voting with your dollars is important. I think opting out when you don’t want your data to be used, or when you read the user agreement and you don’t understand how that data is being used, you can’t figure it out, opt out. It is actually not required for you to opt in. Sometimes it is, maybe you don’t need it, maybe you do, but I think people have to vote with the methodologies that they have, and they have to talk to their local policymakers and those that, you know, bring us all together and have those conversations. You know, my children will tell you that my favorite phrase, and they hate it, is to be a participant in your own rescue. If you don’t like what’s happening with your data, then we’ve got to figure out a way to shift that conversation, and it will not happen overnight. It won’t happen just because, you know, one of us opts out. But when people really start asking hard questions and start wanting to know answers, you know, the incentives that we need to put in place need to be different than they are right now, and until we start asking questions and voting with the way that we use technology, that conversation is going to stay out of our own hands.


Helena Leurent: How many of the standards, or, you know, the ways in which countries are addressing this, actually incorporate that representation and engage people’s voice, so that you can understand how this is actually working?


Bilel Jamoussi: I would say all of them, because the way we approached AI and data is by launching the AI for Good platform in 2017, way before ChatGPT and generative AI became fashionable, and the idea was to have a multi-stakeholder dialogue on AI and data. This year we’ll have our AI for Good in Geneva, 8 to 11 July, please mark your calendar and come, and from there we spawn a number of activities, including robotics competitions for youth. So in the robotics there is a lot of AI, and that exposes the children in the schools to the importance of data science, of what data could be used for. We created a number of focus groups looking at different AI applications, for health, for natural disaster management, for agriculture. So this multi-stakeholder dialogue in the AI for Good summit is a great way to continue the conversation that Lauren hinted at, that is an ongoing process that we need to maintain.


Helena Leurent: Okay, do I have time to go for another round of questions? A quarter to two? Excellent, all right, we’ve been given more time guys. All right, let’s go for another round of questions. Lady in the front here, two ladies in the center there, I can see with their hand up. Anyone? And gentleman to the right at the back, please.


Audience: Hi, my name is Marvi Maiman, I’m a former social protection minister from Pakistan, but in my current role here I’m the president of LAWEP, which is the first generative AI model legislative firm. My question really is, do you believe that moving from policy reforms to legislative reforms and legislation is actually useful, especially since we at LAWEP are designing from policy to legislation keeping 200 UN countries in mind, model legislation to do with online harassment in this particular case. This is one of the examples that we’re doing, but we could be doing anything. But again, I bring it back to, do you think it’s useful policy shift to legislative cementing? Because I’ve seen very little of that, not just because I’m a former parliamentarian, but just in my field of work. People, are they thinking enough in those terms?


Helena Leurent: Okay, thank you. Let’s bring the mic into the center here. Lady with the brown sweater first, I think.


Audience: Thank you. My name is Dr. Kiran. I’m here with TikTok. I’m a clinician and a creator. And it was a question about the digital literacy. And my view is that it’s kind of five prongs to this. The first one is parents knowing how to help children and support them in their digital use. And I think technology’s advanced a faster rate than parenting programs have. And parents don’t get a manual for, how much time should they be allowed? How much access should they be allowed? But they need to have it because that’s the world that we now live in. Parents then need to know how to teach those skills. Professionals like myself need to then help parents that then struggle or children that get addicted to tech. And then platforms have a responsibility to make the online world safe for the children and reduce harm and the legislation. And then also the wider education of people. And I think often, so my question is, do you think that it’s not just one thing? It is kind of a range of those things that all need to be looked at for digital wellbeing to be what we want it to be. And later on this week, I’m on a panel with the World Health Organization to talk a bit more about this. But yeah, that was the question. It’s not just the responsibility of one organization. Is it a shared thing?


Helena Leurent: Okay. Thank you. Brilliant, thank you. And gentleman over to the right here, on my right. Oh, sorry, sorry, I thought you were, please.


Audience: Hello, good morning. Hello, I’m Celine, I’m from the Philippines, also with TikTok, I’m a creator. My work focuses on biodiversity. And I understand and know that data science, AI and machine learning has such incredible impact on biodiversity monitoring, illegal wildlife trade and forest protection. But now we’re also seeing and learning its negative environmental impact. My question is, what are your thoughts on these? How do we reconcile this? And what do we do about it?


Helena Leurent: Great question, and over to the right here.


Audience: Hi, good morning. My name is Anirban, I’m a scientist and a drug developer. So my question is rather, you know, comment slash question, a rather provocative one. There’s this momentum that’s, you know, gained a lot of traction about, or against collection and sharing of personal data. I sometimes wonder whether, you know, when a momentum picks up, sometimes it has the tendency to just, you know, go ahead and pick more momentum, right? And I wonder if that’s happening with this topic. Remember that a lot of the biggest technological developments that are impacting society in a positive way and will continue to do that. And I’m talking about, for example, generative AI, which is based on machine learning, would not have happened if there was no sharing of personal data, right? Face facial recognition, digital recognition, all that kind of stuff. There’s a lot more going on. I’m not a specialist in that technology. Remember that sharing of personal data also leads to a lot of business activity. a lot of gainful employment, right? So is it, you know, are we not sometimes taking an extreme position about, you know, stopping all kinds of sharing of and collection of personal data? Should it not be a more moderated or more considered approach to the problem rather than a, you know, black and white approach? So I know there’s no right answer or wrong answer to this, but just want to, you know, throw it out in the open and get some opinions, comments. Thank you.


Helena Leurent: Thank you. Let’s take those four areas. So we have: are we in a hype cycle about privacy? My answer would be no, we’re nowhere near that yet. We need to be talking far more about, if we, ooh, the day you go down the promenade and there’s privacy on every storefront rather than AI, that would be interesting, but that would be my provocative response back. But let’s say, are we in a hype cycle where we’re just sort of caught up in conversation about data and we’re not really digging into how we can approach this? The environmental impact of all of this, which is obviously crucial. The range of risk, where does responsibility fully lie? I’m going to throw in my view: I think way too much responsibility for everything is being thrown back on us as individuals. We are expected to be aware of everything right now, from how to read a label to how to, you know, turn off geolocation. We’re going through massive change, but anyway, biased, take my, get off my soapbox, back to the panel. And the way in which policy versus legislation. All right, maybe Amanda, I’m going to come to you a little bit about the environmental impact of all of this. Do you bring together your digital life and concerns about climate? Do those two things come together or are they sort of separate for you? How do you bring those?


Amanda Graf: I think I am aware of what my digital footprint is doing to the environment, but I just don’t have enough knowledge to really combine those two things, because we’re not really taught, or I’m not really taught, how my environmental footprint is connected with my digital use.


Helena Leurent: Got it. So when you look at AI, are you conscious of the environmental implications of that?


Amanda Graf: Not really. I wouldn’t say I’m conscious of anything like that.


Helena Leurent: Okay. How do we approach this, and where should we start? Do you have any preferences, or should we guide?


Lauren Woodman: So I’ll jump in on the environmental question, because, to the other point, there are benefits from technology and there are risks from technology. One of the places we do see industry really leaning in is: how do you use AI to actually lower the energy demands of the technology we’re using? And we’ve seen huge gains in that. That, I think, is a really promising place where AI and machine learning will help us better manage the environmental impact. The other thing we see is technology companies parsing their workloads. I can ask ChatGPT to give me three restaurants to eat in tonight, or I could just go search, and the environmental impact of those two things is markedly different. It’s also markedly different in terms of the overhead for the companies providing them. So you start to see companies sorting that out: I can answer this with search, I don’t need generative AI to do this, and I can still provide a good answer to the consumer. So I think you’ll see companies leaning into that differentiation as well, from a platform’s perspective. That having been said, I do think we have to pay attention to the environmental impact, and systems designs are starting to, because those are not just environmental costs; they’re also costs to companies for the use of AI. And that is going to drive a lot of behavior that will have a more positive impact than if we went into gen AI wholeheartedly.


Peter Lucas Kaaka Jones: I think the question around policy versus legislation is a really important one, because it raises to my attention the need to better understand the changing public mood. When the mood of the public starts to change, I think that will inform an expectation of government to look at what legislation should, could and maybe will be, particularly when we think about how governments want to use generative AI to make decisions, strategic decisions, decisions they were elected to make and are responsible for. Because when that starts happening, the public want to know who’s being employed to make those decisions. I’d imagine that as a member of the public, and so I think it’s very important that we understand policy and the place of legislation when we see an uptake of generative AI in decision-making. The question about data sharing is very important as well, and the key there for me is the word sharing. Sharing is a decision, so whether something’s being harvested or being shared are two very different things. Information that’s harvested without consent is of course being used for a specific purpose, but it’s been taken without someone’s knowledge. I’m not saying we can turn back time, but it’s good to know where that information has come from, and that’s not always readily available. I think it’s important for us to understand consent, and when we think about privacy it’s also good to think about authorship. If we think about content, we’re asking: who’s making this content? Where’s this content derived from? When we think about biodiversity, that’s something I’m particularly interested in as a Māori person from Te Aupōuri. What we understand in terms of our language is that we’ve named every rock, we’ve named every river, we know every plant and every animal; all of that is so closely connected with our culture and our language. Taking that on board, when we look at the environmental impact of AI and the need to address community needs, perhaps we could cooperatively take a new approach. When we think that there are communities in the Pacific and elsewhere that are still without electricity, is there an opportunity to harness traditional knowledge about where the sunbelts are, where solar and geothermal energy are opportunities, where energy costs less to make or create or develop? And how can we have data centres, data infrastructure, data facilities that not only address the need for generative AI or data storage, but address problems hand-in-hand with the places that data comes from or originates from? Those are things that could be developed in partnership with communities: things we could address together, perhaps through a network of smaller data centres that contribute to the digital wellbeing, the physical wellbeing, and the community that those people live in. Kia ora.


Bilel Jamoussi: Yes, it’s important to try to demystify AI. AI is basically two things: there is the data, and there are the algorithms, the machine learning, that operate on the data in a massive way. And there are all kinds of data. There is personal data, which is a minuscule amount of the data that’s out there. There’s a lot of industrial data and a lot of sector-specific data, and harvesting that data is tremendous, because that’s the focus of using AI for good, data for good. When you have thousands of radiology scans and you sift through them in an anonymized way, the healthcare sector can be augmented in detecting diseases that only AI could see, that a radiologist would not be able to see with the naked eye. That’s AI for good; that’s data for good. If you look at agricultural data, having drones taking pictures, having Internet of Things sensors in the ground to tell you whether the ground is wet enough or not, all of that data being harvested allows us to produce more crops, which is good. If you look at satellite imaging and being able to detect a natural hazard that’s coming, whether it’s a fire, a landslide or flooding, that’s data for good, another AI for good approach. And then there is the personal data. Personal data, as I said at the beginning, is a spectrum: it’s about where you’re comfortable, because the more data you give, the more services you’re going to get. You need to be conscious that there is a transaction there. You’re selling something: you’re maybe not paying money to get a service, but you’re giving your data to get a service. Everybody has a different threshold of acceptance of how much personal data they’re willing to give and how much service they’re going to get, but you need to know that those free services are not really free. Then, to the question on the environment: certainly data centers are consuming more and more power, and generative AI searches and queries are quite intense in energy use. What we’re doing in the ITU is we have a number of standards that measure how much electricity is used by all the information and communication technologies, from networks to wireless networks to data centers and so on. So the first thing that’s important is to be able to measure the footprint in the same way, and to be cognizant that there are two sides to the coin. Yes, data centers and AI are going to use a lot of power to produce results. But at the same time, look at all of those AI for good applications. For instance in Switzerland, Swisscom produced a lot of data on mobile phone presence around bus routes in the morning, and that data was used anonymously to optimize the bus routes. When you optimize the bus route, you’re using less fuel, and that’s good. So we need to have a more educated discussion on the good and the bad. Certainly there is more use of electricity for the data center, but you have a lot of AI applications that reduce energy use in other sectors. In the ITU, we published a report just a few months ago, AI and the Environment, and I invite you to look at it; it looks at both sides of the coin and stresses the importance of having international standards to keep everyone at the same benchmark and the same measure.
Just to close on this: being in a high school, with students and teachers, I think all of us owe it to the teaching community to be up to speed on data, on AI, on cybersecurity, even just at the level of a framework. There are so many generative AI tools: there is ChatGPT, there is Google Gemini, there is Meta Llama, there are all kinds of tools. One basic step is to have a table showing the application of each tool and how much electricity it might be using, like, to your point, a search versus a ChatGPT query. Having that knowledge with teachers and students will educate us on how much energy we’re using, to your point on your digital footprint. Thank you.


Helena Leurent: Thank you. Can I check with those who’ve asked questions, because we’re coming to a close at this point: if you’ve asked a question, do you feel it’s been appropriately touched on and responded to as best as possible? Thank you, got a thumbs up there. Thank you very much. Okay. We recognize there’s a range of approaches we need here. There is hope, but we also need to be aware of the potential risks, and that’s both a personal responsibility and right, and a global responsibility and right, and everywhere in between. That is the joy and the beauty of systems, right? Are there any final questions? They’re going to have to be super brief. We do have one here. Okay, over to you. All right, a brief question, and then we’ll wrap up and I’ll ask each of our panelists their predictions and hopes for 2025, the key thing they’d like to see happen.


Audience: Hi, I’m Charu Tripathi, I’m from Zurich. I head a startup which provides marketing technology solutions and services to companies. Very insightful session, thank you so much. Taking a cue from Anirban’s question: sometimes data is good or bad, and we’re talking about its pros and cons. There are marketing technology tools which help companies with data protection and which actually prevent data leaks, and yet we can see there are so many online frauds. So are companies not using these tools efficiently and effectively? That was just my question.


Helena Leurent: Okay, all right. Are businesses stepping up in the appropriate way to protect people from fraud and from the worst cases? Where do we see that happening?


Bilel Jamoussi: Some businesses are, because they are conscious of their reputation, being listed on the stock market, and conscious of their social responsibility. Some others may not be, to the extent that they should, and I think that’s where policy and legislation come into play, with certain standards that need to be implemented to uphold those policies.


Helena Leurent: Okay, that’s one. Okay.


Peter Lucas Kaaka Jones: Yeah, I totally agree, and if trust is diminished in that relationship between the business owner and the people they’re serving, so too is their business. So I think that’s an evolving conversation.


Helena Leurent: Okay. Well, it’s a hopeful response in a week where we’ve had folks stepping back from fact-checking and raising questions about their responsibilities. Okay.


Peter Lucas Kaaka Jones: Well, I think that’s a totally different conversation. It raises to our awareness the concerns that people are having, but it also raises to our awareness the jurisdiction that we have over those decisions to fact-check or not fact-check, which is pretty much minuscule at the moment. Yeah.


Helena Leurent: Yeah. For me, that concept of digital safety is a surround-sound safety, our whole way of understanding safety. We tend to silo it: well, that’s about content, this is about fraud. But for the average person in the street, if any of us are average, it’s whether I feel safe or not, and whether I have enough protection from fraud.


Peter Lucas Kaaka Jones: Yeah. Did you get faked, or did you get defrauded? It’s an interesting question, the fake or the fraud.


Bilel Jamoussi: We launched last May a collaboration on multimedia authenticity, because you can create fake text, fake images and fake videos, but there are also standards and technical tools that allow you to watermark content and to have AI actually tell you whether a video or audio is fake or authentic. So with the international standards organizations, ISO, the IEC and the ITU in Geneva, we created a collaboration on multimedia authenticity to bring together all of the standards and technical tools that are available, and to raise awareness with policymakers so they know, if they’re going to legislate against deepfakes and fake content, what tools they can actually implement to make that happen.


Peter Lucas Kaaka Jones: I think the public are asking questions, and rightly so, about whether those who want to ensure safety are moving fast enough. What is the expectation of fast enough, how do we measure that speed, what do we hope to achieve, and why are we doing it? There are so many questions right now that I don’t think people fully understand, and it’s important that we start to learn to understand the impact that’s having on people in their daily lives.


Helena Leurent: All right. Predictions for 2025: what do you think will happen? I’ll start with you, Peter Lucas, and come this way. What’s your prediction, or hope, around this topic for 2025?


Peter Lucas Kaaka Jones: Well, one of my hopes is really that we understand the opportunity while learning to manage the risk, and that whilst we move forward, we’re not forgetting the concerns that are raised to our awareness, and that we learn new ways to address them, not just as individuals or as specific businesses, but as part of a network that collectively makes important decisions to strategically address the issues that can cause harm: harm to people, harm to places, harm to the environment, harm to the way we make decisions, whether those are policy decisions or decisions around lawmaking. My prediction for this year is that there’s going to be a growing interest and a growing expectation that governments balance their approach around rules, regulations and the moderation of AI use, and not just tell the world that they’re going to be using it to make decisions or address more issues faster. As that evolves, so too will people expect them to better understand policy and balance it with lawmaking, so that AI for good and do no harm are not just slogans, but part of our approach to the work that we do.


Helena Leurent: Thank you. Lauren?


Lauren Woodman: I’ll respond to both. My prediction, super general: I think you’ll see continued rapid development in the quality of the AI models that are out there, and so the conversation around how we use these things for good and for benefit will continue to evolve, and that’s encouraging; you want to see those conversations encouraged. My hope is that we use the opportunity of the continued development of the technology to take a very holistic view of security and safety, because, to your point exactly, all of us have a different definition of that. It ranges across scenarios: I want a safe computing environment, I want a safe food supply, I want a safe energy environment. All of those are based on different kinds of data, and we all have to lean in, from the legislative perspective all the way down to the individual level, thinking about what that means for us personally, and then asking questions and demanding the types of systems that would allow us to feel safe and to trust the systems on which we rely.


Bilel Jamoussi: Probably three things. One, we are seeing growth in digital identity, and digital identity is important for child online protection, because if you know that a person is under a certain age, then certain content and certain websites would not be accessible. And we are seeing growing interest. Tomorrow I’ll be on a panel with the Open Wallet Forum with the Minister of Justice from Switzerland and the Secretary-General of the ITU, where we are launching the Open Wallet Forum, a joint venture between the ITU and the Linux Foundation, to create digital wallets where you have your eID, your driver’s license, your vaccination record and so on, in an interoperable, cross-border system. I think that’s key for any digital public infrastructure service or digital protection of citizens and users. The second point is that 2025 is the International Year of Quantum. In the first week of February in Paris, UNESCO is hosting a big seminar on quantum and its applications, and at AI for Good we have a quantum for good focus. The reason I bring quantum into this conversation is that quantum computers are very capable at going through data in a very energy-efficient way, compared to the farms of CPUs we see today in computing centers, and we’re very hopeful that quantum will evolve in a very positive direction. And the third one is the interplay of AI and cybersecurity. On one hand, we have the deepfakes and the frauds and so on, but AI is also a mechanism to detect fraud very quickly. If your credit card is being used in the wrong place, the various credit card providers can use AI to quickly detect that there is fraud. So AI is an amazing tool that can help us improve cybersecurity.


Helena Leurent: Thank you. Amanda, the last word is yours.


Amanda Graf: Yeah, I think the others pretty much summed it up. I think, or I hope, that there will be more benefits from using digital media, as well as more education around it, so that people know what they’re consenting to and what’s happening with their data right now. So I just hope that education really reaches more people, and also that we can benefit from AI and all the growing media right now.


Helena Leurent: Perfect. Thank you very much. So there is peril, but there is promise. What’s key is our awareness and our involvement and our voice. So thank you very much. Please join me in thanking our panellists who’ve been amazing. And sincere thanks to all of you for joining here today and for your great questions and your interest in this topic. So stay safe, stay well, and may all of you enjoy digital safety in 2025. Take care. Thank you. Thank you.


A

Amanda Graf

Speech speed

145 words per minute

Speech length

435 words

Speech time

179 seconds

Need for greater awareness and education about data usage

Explanation

Amanda Graf argues that there is a lack of education about what happens to personal data online. She believes people often consent to data usage without fully understanding the implications.


Evidence

She mentions that people accept cookies without really knowing what it means.


Major Discussion Point

Digital Safety and Data Protection


Agreed with

– Peter Lucas Kaaka Jones
– Bilel Jamoussi
– Lauren Woodman

Agreed on

Need for increased awareness and education about data usage


Hope for more benefits and education around digital media usage

Explanation

Amanda expresses hope for increased benefits from digital media use and improved education about it. She emphasizes the importance of people understanding what they’re consenting to when using digital platforms.


Major Discussion Point

Future of Digital Safety and AI


P

Peter Lucas Kaaka Jones

Speech speed

140 words per minute

Speech length

2454 words

Speech time

1047 seconds

Importance of consent and data governance for indigenous communities

Explanation

Peter Lucas Kaaka Jones emphasizes the need for indigenous communities to have control over their data. He argues that data belongs to the community rather than individuals and should be governed accordingly.


Evidence

He draws a parallel between data harvesting without consent and the historical privatization of indigenous lands.


Major Discussion Point

Digital Safety and Data Protection


Agreed with

– Amanda Graf
– Bilel Jamoussi
– Lauren Woodman

Agreed on

Need for increased awareness and education about data usage


Differed with

– Bilel Jamoussi

Differed on

Approach to data governance and ownership


Opportunity to harness traditional knowledge for sustainable tech infrastructure

Explanation

Peter suggests using indigenous knowledge to develop sustainable technology infrastructure. He proposes creating networks of smaller data centers that benefit local communities.


Evidence

He mentions the potential use of traditional knowledge about sunbelts, geothermal energy, and other natural resources to inform the placement of data centers.


Major Discussion Point

Environmental Impact of Technology


Agreed with

– Bilel Jamoussi
– Lauren Woodman

Agreed on

Balancing benefits and risks of AI and data usage


Using technology to preserve indigenous languages and knowledge

Explanation

Peter discusses how technology can be used to preserve and revitalize indigenous languages and knowledge. He argues that this approach can unlock cultural insights and provide new ways of looking at the world.


Evidence

He mentions his work in teaching computers to speak Māori and the development of speech-to-text technology with 92% accuracy for the Māori language.


Major Discussion Point

AI and Data for Social Good


B

Bilel Jamoussi

Speech speed

159 words per minute

Speech length

2419 words

Speech time

911 seconds

Spectrum of data sharing, from complete privacy to openness

Explanation

Bilel Jamoussi describes data sharing as a spectrum ranging from complete privacy to full openness. He emphasizes the need for users to find a balance based on their comfort level and the services they want to receive.


Evidence

He mentions the trade-off between giving personal data and receiving services, noting that ‘free’ services often come at the cost of personal data.


Major Discussion Point

Digital Safety and Data Protection


Agreed with

– Amanda Graf
– Peter Lucas Kaaka Jones
– Lauren Woodman

Agreed on

Need for increased awareness and education about data usage


Differed with

– Peter Lucas Kaaka Jones

Differed on

Approach to data governance and ownership


AI can help optimize energy use in various sectors

Explanation

Bilel argues that while AI and data centers consume significant energy, they can also help optimize energy use in other sectors. He suggests that the benefits may outweigh the costs in some cases.


Evidence

He provides an example of Swisscom using mobile phone data to optimize bus routes in Switzerland, resulting in reduced fuel consumption.


Major Discussion Point

Environmental Impact of Technology


Agreed with

– Peter Lucas Kaaka Jones
– Lauren Woodman

Agreed on

Balancing benefits and risks of AI and data usage


Potential of AI to improve healthcare, agriculture, and disaster management

Explanation

Bilel highlights the potential of AI to bring significant improvements in various sectors. He argues that AI can enhance capabilities in healthcare diagnostics, agricultural productivity, and natural disaster prediction.


Evidence

He mentions examples such as AI analyzing radiology scans to detect diseases, drones and IoT sensors optimizing crop production, and satellite imaging for early detection of natural hazards.


Major Discussion Point

AI and Data for Social Good


Need for multi-stakeholder dialogue on AI and data standards

Explanation

Bilel emphasizes the importance of ongoing multi-stakeholder discussions to develop AI and data standards. He argues that this approach ensures a balanced and representative outcome.


Evidence

He mentions the AI for Good platform launched in 2017 and the upcoming AI for Good summit in Geneva as examples of multi-stakeholder dialogues.


Major Discussion Point

Policy and Legislation for Digital Safety


Role of international frameworks in improving cybersecurity

Explanation

Bilel discusses the importance of international frameworks in enhancing cybersecurity. He argues that these frameworks help countries develop and improve their cybersecurity posture.


Evidence

He mentions the ITU’s global cybersecurity index and the development of standards for personal identifiable information protection.


Major Discussion Point

Policy and Legislation for Digital Safety


Growth in digital identity solutions for online protection

Explanation

Bilel predicts growth in digital identity solutions as a means of enhancing online protection. He argues that digital identities can help in areas such as child online protection and cross-border digital services.


Evidence

He mentions the upcoming launch of the Open Wallet Forum, a joint venture between ITU and the Linux Foundation, to create interoperable digital wallets for various personal documents.


Major Discussion Point

Future of Digital Safety and AI


L

Lauren Woodman

Speech speed

193 words per minute

Speech length

2170 words

Speech time

672 seconds

Growing sensitivity to personal data usage and protection

Explanation

Lauren Woodman notes an increasing awareness and sensitivity towards personal data usage. She argues that people are becoming more conscious of how their data is used and are taking steps to protect it.


Evidence

She mentions her personal actions such as rejecting cookies, clearing data, and writing to companies to remove her data.


Major Discussion Point

Digital Safety and Data Protection


Agreed with

– Amanda Graf
– Peter Lucas Kaaka Jones
– Bilel Jamoussi

Agreed on

Need for increased awareness and education about data usage


Need to balance environmental costs with benefits of AI and data centers

Explanation

Lauren discusses the need to balance the environmental impact of AI and data centers with their potential benefits. She suggests that companies are starting to differentiate between necessary and unnecessary use of AI to manage environmental impact.


Evidence

She gives an example of companies choosing between using search or generative AI based on the environmental impact and overhead costs.


Major Discussion Point

Environmental Impact of Technology


Agreed with

– Peter Lucas Kaaka Jones
– Bilel Jamoussi

Agreed on

Balancing benefits and risks of AI and data usage


Importance of including diverse voices and data in AI development

Explanation

Lauren emphasizes the need for diverse representation in AI development. She argues that the data used to train AI models should represent the global community to ensure equitable outcomes.


Evidence

She points out that current data used in AI training is not representative of the global or even the internet-using community.


Major Discussion Point

AI and Data for Social Good


Continued rapid development of AI models and need for holistic security approach

Explanation

Lauren predicts continued rapid development in AI model quality. She argues for a holistic view of security and safety that encompasses various aspects of life, from computing to food supply and energy.


Major Discussion Point

Future of Digital Safety and AI


Agreements

Agreement Points

Need for increased awareness and education about data usage

speakers

– Amanda Graf
– Peter Lucas Kaaka Jones
– Bilel Jamoussi
– Lauren Woodman

arguments

Need for greater awareness and education about data usage


Importance of consent and data governance for indigenous communities


Spectrum of data sharing, from complete privacy to openness


Growing sensitivity to personal data usage and protection


summary

All speakers emphasized the importance of educating users about data usage, consent, and privacy implications in the digital world.


Balancing benefits and risks of AI and data usage

speakers

– Peter Lucas Kaaka Jones
– Bilel Jamoussi
– Lauren Woodman

arguments

Opportunity to harness traditional knowledge for sustainable tech infrastructure


AI can help optimize energy use in various sectors


Need to balance environmental costs with benefits of AI and data centers


summary

Speakers agreed on the need to balance the benefits of AI and data usage with potential risks, including environmental impacts.


Similar Viewpoints

Both speakers emphasized the importance of including diverse perspectives and data in AI development, particularly from underrepresented communities.

speakers

– Peter Lucas Kaaka Jones
– Lauren Woodman

arguments

Importance of consent and data governance for indigenous communities


Importance of including diverse voices and data in AI development


Both speakers highlighted the potential benefits of AI in various sectors while emphasizing the need for a comprehensive approach to security and development.

speakers

– Bilel Jamoussi
– Lauren Woodman

arguments

Potential of AI to improve healthcare, agriculture, and disaster management


Continued rapid development of AI models and need for holistic security approach


Unexpected Consensus

Environmental impact of AI and data centers

speakers

– Peter Lucas Kaaka Jones
– Bilel Jamoussi
– Lauren Woodman

arguments

Opportunity to harness traditional knowledge for sustainable tech infrastructure


AI can help optimize energy use in various sectors


Need to balance environmental costs with benefits of AI and data centers


explanation

Despite coming from different backgrounds, all three speakers acknowledged the environmental impact of AI and data centers, and suggested ways to mitigate or balance these impacts. This consensus was unexpected given the diverse perspectives represented.


Overall Assessment

Summary

The main areas of agreement included the need for increased awareness and education about data usage, the importance of balancing benefits and risks of AI and data usage, and the recognition of environmental impacts of technology.


Consensus level

There was a moderate level of consensus among the speakers, particularly on broad principles. This suggests a growing recognition of key issues in digital safety and AI development across different sectors and perspectives. However, specific approaches and solutions varied, indicating the need for continued dialogue and collaboration to address these complex challenges.


Differences

Different Viewpoints

Approach to data governance and ownership

speakers

– Peter Lucas Kaaka Jones
– Bilel Jamoussi

arguments

Importance of consent and data governance for indigenous communities


Spectrum of data sharing, from complete privacy to openness


summary

Peter Lucas Kaaka Jones emphasizes community ownership and consent for data, especially for indigenous communities, while Bilel Jamoussi describes data sharing as a spectrum where individuals can choose their level of openness based on personal preferences and desired services.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around data governance, ownership, and the approach to increasing digital awareness and safety.


difference_level

The level of disagreement among the speakers is relatively low. Most speakers agree on the importance of digital safety, data protection, and the need for education. The differences mainly lie in the specific approaches and perspectives, which are often complementary rather than contradictory. This low level of disagreement suggests a general consensus on the importance of the topic and the need for action, which could facilitate the development of comprehensive strategies for digital safety and data protection.


Partial Agreements

All speakers agree on the need for increased awareness and education about data usage, but they differ in their approaches. Amanda Graf emphasizes formal education, Lauren Woodman focuses on personal actions and choices, while Bilel Jamoussi suggests a spectrum approach where users find their own balance.

speakers

– Amanda Graf
– Lauren Woodman
– Bilel Jamoussi

arguments

Need for greater awareness and education about data usage


Growing sensitivity to personal data usage and protection


Spectrum of data sharing, from complete privacy to openness


Takeaways

Key Takeaways

There is a growing need for greater awareness and education about data usage and digital safety, especially for younger generations


Indigenous and diverse perspectives are crucial in developing ethical AI and data governance frameworks


AI and data have significant potential for social good in areas like healthcare, agriculture, and disaster management, but environmental impacts must be considered


A balance needs to be struck between leveraging data/AI benefits and protecting privacy and security


Multi-stakeholder dialogue and international cooperation are essential for developing effective digital safety standards and policies


Resolutions and Action Items

Continue developing international standards and frameworks for digital safety through organizations like ITU


Increase education and awareness efforts around data usage and digital literacy


Incorporate more diverse voices and data in AI development


Develop digital identity solutions to enhance online protection


Explore use of quantum computing to improve energy efficiency of data processing


Unresolved Issues

How to effectively balance rapid AI development with adequate safety measures and regulations


Addressing the digital divide and ensuring equitable access to technology benefits


Determining appropriate levels of data sharing and privacy for different contexts and cultures


Measuring and mitigating the full environmental impact of AI and data center growth


Establishing clear accountability and governance structures for AI decision-making systems


Suggested Compromises

Finding a middle ground between complete data privacy and openness based on user awareness and consent


Balancing environmental costs of data centers with potential energy optimization benefits of AI


Combining traditional knowledge with modern technology infrastructure for sustainable development


Using AI to both create content and detect fraudulent/fake content


Thought Provoking Comments

Being modern doesn’t mean being Western

speaker

Peter Lucas Kaaka Jones


reason

This comment challenges the dominant paradigm of technological progress and suggests that indigenous perspectives have valuable contributions to make in shaping our digital future.


impact

It shifted the conversation to consider non-Western approaches to data ownership and digital rights, leading to a discussion of collective rather than individual data ownership.


Māori data must be subject to Māori governance. Māori data governance.

speaker

Peter Lucas Kaaka Jones


reason

This introduces the concept of data sovereignty for indigenous peoples, highlighting the importance of cultural context in data governance.


impact

It deepened the conversation around data rights and governance, prompting consideration of how different cultural perspectives can be incorporated into global data standards and practices.


Part of being digitally safe is having an environment in which we can use these tools to accomplish what it is that our own respective personal goals may be.

speaker

Lauren Woodman


reason

This reframes digital safety as not just protection from harm, but as enabling positive outcomes and personal empowerment.


impact

It broadened the discussion from focusing solely on risks to considering how to create an enabling digital environment that serves diverse needs and goals.


We need to have a much more sophisticated conversation around what kind of data for what kind of usage, with what kind of consent, with what kind of agency.

speaker

Lauren Woodman


reason

This comment highlights the complexity of data issues and the need for nuanced approaches rather than one-size-fits-all solutions.


impact

It prompted a more detailed discussion of different types of data, consent models, and the varying contexts in which data is used, moving beyond simplistic notions of data privacy.


AI is basically two things. There is the data, and there is the algorithms or the machine learning that operates on the data in a massive way.

speaker

Bilel Jamoussi


reason

This comment demystifies AI by breaking it down into its core components, making the topic more accessible and easier to discuss concretely.


impact

It led to a more structured discussion of AI’s components and applications, allowing for a clearer examination of both benefits and risks in different sectors.


Overall Assessment

These key comments shaped the discussion by broadening its scope beyond Western-centric views of digital rights and safety, introducing cultural and indigenous perspectives on data governance, and reframing digital safety as an enabler of personal and collective goals rather than just a protective measure. They also helped to demystify complex concepts like AI, allowing for a more nuanced and accessible discussion of its implications. The conversation evolved from focusing on individual data protection to considering broader societal and cultural impacts of digital technologies, emphasizing the need for diverse voices in shaping our digital future.


Follow-up Questions

How can we balance digital safety measures with accessibility for developing nations?

speaker

Hugh from Canada


explanation

This question addresses the need to find a balance between implementing digital safety measures and ensuring accessibility to digital technologies for developing countries, which is crucial for global digital equity.


What are the implications of collecting biometric data from VR and AR devices, especially given law enforcement’s lack of understanding?

speaker

Audience member (immersive technology specialist)


explanation

This raises important concerns about privacy, data protection, and potential misuse of biometric data collected through emerging technologies, particularly in the context of law enforcement.


How can we address the environmental impact of AI and data centers while leveraging their benefits for biodiversity monitoring and conservation?

speaker

Celine from the Philippines


explanation

This question highlights the need to balance the positive applications of AI in environmental protection with its negative environmental impacts, particularly energy consumption.


Is there a need for a more moderated approach to personal data collection and sharing, considering the benefits it brings to technological advancements and business activities?

speaker

Anirban (scientist and drug developer)


explanation

This question challenges the current narrative around data privacy and suggests a need for a more nuanced approach that considers both the risks and benefits of data sharing.


How can we develop a comprehensive approach to digital literacy that involves parents, professionals, platforms, legislation, and wider education?

speaker

Dr. Kiran (TikTok)


explanation

This question addresses the need for a multi-faceted approach to digital literacy, involving various stakeholders to ensure comprehensive digital wellbeing.


How effective is the transition from policy reforms to legislative reforms in addressing online issues such as harassment?

speaker

Marvi Maiman (former social protection minister from Pakistan)


explanation

This question explores the effectiveness of translating policy into legislation for addressing digital safety issues, which is crucial for creating enforceable standards.


How can we better understand and address the changing public mood regarding the use of generative AI in government decision-making?

speaker

Peter Lucas Kaaka Jones


explanation

This area of research is important for ensuring public trust and accountability in the use of AI technologies for governance.


Are companies using marketing technology tools efficiently and effectively to protect data and prevent online frauds?

speaker

Charu Tripathi from Zurich


explanation

This question addresses the role and responsibility of businesses in implementing data protection measures and preventing digital fraud.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.