Open Forum #18 World Economic Forum – Building Trustworthy Governance
Session at a Glance
Summary
This panel discussion focused on the future of the internet and the development of digital technologies, exploring regulatory, ethical, and practical considerations. Participants emphasized the importance of building a global infrastructure to support emerging technologies like AI and the metaverse. They discussed the need for adaptable, interoperable regulations that promote digital connectivity while respecting data privacy and security.
The conversation highlighted Greece’s digital transformation journey, showcasing how investment in digital public infrastructure can lead to economic growth and improved governance. Panelists stressed the importance of creating regulatory frameworks that are flexible enough to keep pace with rapidly evolving technologies while addressing cross-border challenges and accountability issues.
Ethical considerations for the private sector were explored, with emphasis on integrating ethical principles into product development and building user trust. The discussion touched on data stewardship and sovereignty, noting the tension between maintaining national digital sovereignty and preventing internet fragmentation. Participants agreed on the need for collaborative, multi-stakeholder approaches to governance that prioritize user privacy, security, and consent.
The panel also addressed the importance of cultural engagement in new digital spaces and the challenges posed by evolving hardware standards. They concluded by emphasizing that all stakeholders have an active role in shaping the future internet, and that a principled approach focusing on user needs and economic opportunities is essential for positive development.
Keypoints
Major discussion points:
– The importance of building trust, transparency and user control into emerging internet technologies and platforms
– The need for adaptable and interoperable regulatory frameworks that can keep pace with rapid technological change
– The role of digital public infrastructure in enabling economic growth and improved governance
– Balancing data sovereignty with the need for global data flows and interoperability
– Ethical considerations and accountability in AI and other emerging technologies
Overall purpose:
The discussion aimed to explore key considerations for shaping the future of the internet and digital technologies in a way that promotes trust, economic opportunity, and good governance while addressing potential risks and challenges.
Tone:
The tone was largely collaborative and optimistic, with panelists from different sectors sharing perspectives on how to responsibly develop emerging technologies. There was a sense of shared purpose in wanting to create a better internet future, even while acknowledging complexities and challenges. The tone became more action-oriented towards the end, with calls for active participation in shaping the future of the internet.
Speakers
– Judith Espinoza: Governance Specialist, World Economic Forum (Moderator)
– Hoda Al Khzaimi: Advisor to multiple industries and companies
– Brittan Heller: Senior Fellow of Technology and Democracy, Atlantic Council
– Robin Green: Representative from Meta
– Apostolos Papadopoulos: Chief Technology Officer, Hellenic Republic of Greece
Additional speakers:
– Ibrahim (audience): Representative from the Digital Impact Alliance (DIAL)
Full session report
The Future of the Internet: Navigating Emerging Technologies and Governance Challenges
This panel discussion, moderated by Judith Espinoza, brought together experts from various sectors to explore the future of the internet and the development of digital technologies. The conversation focused on regulatory, ethical, and practical considerations for shaping a digital landscape that promotes trust, economic opportunity, and good governance while addressing potential risks and challenges.
Key Themes and Discussion Points
1. Emerging Technologies and Their Impact
The panelists emphasized that the future internet will be shaped by a constellation of emerging technologies, including artificial intelligence (AI), extended reality (XR), blockchain, and quantum computing. Judith Espinoza highlighted that AI should be viewed as an enabler for other technologies rather than a standalone product. This perspective underscores the interconnected nature of technological advancements and their collective impact on the digital landscape.
The discussion touched upon the need for a global infrastructure to support these emerging technologies, with particular emphasis on the development of the metaverse. Panelists agreed that building trust, transparency, and user control into these platforms is crucial for their successful integration into society.
2. Regulatory Frameworks and Governance
A significant portion of the discussion centered on the need for adaptable and interoperable regulatory frameworks that can keep pace with rapid technological change. Robin Green, representing Meta, stressed the importance of technology-neutral legal frameworks that can evolve alongside innovations. This view was echoed by Brittan Heller, who emphasized the need for cross-border regulation and coordination for effective internet governance.
The panel highlighted the challenges of balancing data sovereignty with the need for global data flows and interoperability. Robin Green argued for the importance of maintaining an open, interoperable internet while respecting national digital sovereignty concerns. Hoda Al Khzaimi emphasized the importance of respecting legal sovereignty rights when developing technology regulations across different jurisdictions.
The panelists agreed on the need for adaptable and flexible governance frameworks, with Hoda Al Khzaimi suggesting sandboxing approaches for developing regulations for emerging technologies.
3. Digital Public Infrastructure and Economic Growth
Apostolos Papadopoulos, representing the Greek government, shared insights from Greece’s digital transformation journey. He cited concrete figures from the session: after the Ministry of Digital Governance was created in 2019, the number of digital transactions grew from just over 8 million in 2018 to 1.4 billion in 2023, in a country of about 10 million people, and the strategy has attracted roughly $2.5 billion in foreign direct investment. This real-world example illustrated the potential benefits of embracing digital technologies at a national level.
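To make the scale of that growth concrete, a back-of-the-envelope calculation (assuming a 2018 baseline of roughly 8.5 million transactions, since only “8 point something million” was stated in the session) gives the implied compound annual growth rate over the five years to 2023:

```latex
\mathrm{CAGR} = \left(\frac{1.4\times 10^{9}}{8.5\times 10^{6}}\right)^{1/5} - 1 \approx 165^{1/5} - 1 \approx 1.78
```

That is, transaction volumes grew by roughly 178% per year, almost tripling annually, which is the exponential trajectory Papadopoulos described.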
The panel agreed that digital public infrastructure, including payment systems and digital identity, serves as a crucial pathway for connection and economic opportunity. Judith Espinoza emphasized the alignment of interests between users, human rights advocates, and economic development stakeholders in building a robust digital ecosystem.
4. Trust, Ethics, and User-Centric Design
Hoda Al Khzaimi stressed the importance of incorporating ethical considerations into product functionality from the outset. She advocated for transparency and accessibility in AI algorithms, as well as the implementation of user-centric dashboards that clearly show how personal data is being used and processed. Al Khzaimi also highlighted the need for a single source of truth in trust stack guidelines.
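A user-facing view of this kind is, in essence, a queryable log of data-access events. The sketch below is a minimal, illustrative Python model of that idea; the service names, data categories, and purposes are hypothetical assumptions for illustration, not a description of any existing platform’s implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataAccessEvent:
    """One entry in a user-visible log: who touched which data, and why."""
    accessor: str       # hypothetical service name, e.g. "recommendation-service"
    data_category: str  # e.g. "location", "watch-history"
    purpose: str        # e.g. "personalisation", "fraud-detection"
    consented: bool     # whether the user's current consent covers this purpose
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def dashboard_view(events: list[DataAccessEvent]) -> list[str]:
    """Render the 'at the tip of your finger' view described in the session:
    which data was used, by whom, for what purpose, and under what consent."""
    return [
        f"{e.timestamp:%Y-%m-%d %H:%M} | {e.accessor} read {e.data_category} "
        f"for {e.purpose} | consented: {'yes' if e.consented else 'NO'}"
        for e in sorted(events, key=lambda e: e.timestamp, reverse=True)
    ]

# Example with hypothetical entries:
events = [
    DataAccessEvent("recommendation-service", "watch-history", "personalisation", True),
    DataAccessEvent("ad-measurement", "location", "ad-targeting", False),
]
for line in dashboard_view(events):
    print(line)
```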
Robin Green echoed these sentiments, highlighting Meta’s commitment to responsible innovation principles that focus on user trust and safety. Green cited practical examples of how these principles are applied, such as the LED recording indicator on Meta’s smart glasses, privacy-by-default settings and parental controls for young users, on-device data processing, and data minimisation. Green also emphasized the importance of accessibility in technology design.
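The smart-glasses example Green gave can be read as a simple interlock: capture is permitted only while bystanders can see that capture is happening. The following is a minimal sketch of that “never surprise people” logic, written against assumed state names purely for illustration; it is not Meta’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class IndicatorState:
    led_on: bool       # the external capture-indicator LED is lit
    led_covered: bool  # an (assumed) sensor reports the LED is obstructed

def capture_allowed(state: IndicatorState) -> bool:
    """Capture only while bystanders can see the recording indicator."""
    return state.led_on and not state.led_covered

def handle_capture_request(state: IndicatorState) -> str:
    # Hypothetical control flow: pause and prompt if the indicator is covered.
    if capture_allowed(state):
        return "capturing"
    return "paused: uncover the recording indicator to continue"

print(handle_capture_request(IndicatorState(led_on=True, led_covered=False)))  # capturing
print(handle_capture_request(IndicatorState(led_on=True, led_covered=True)))   # paused
```

The point of the sketch is that the transparency signal is coupled to the capability itself rather than left as an optional setting, which is what makes it a trust-building design choice.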
5. Challenges and Opportunities in the Digital Age
The discussion touched upon potential risks associated with emerging technologies, including increased surveillance capabilities and the erosion of privacy. Brittan Heller raised concerns about accountability and transparency in automated systems, emphasizing the need for robust safeguards.
The panel explored the evolution of consent mechanisms for new computing platforms, recognizing that traditional models may not be sufficient in immersive or AI-driven environments. Brittan Heller highlighted the potential loss of cultural engagement spaces in the next iteration of the internet and stressed the importance of hardware floor considerations in emerging technologies like XR.
Hoda Al Khzaimi pointed out the potential of government technologies as a growing industry, suggesting opportunities for innovation in this sector.
6. Multi-stakeholder Approach to Governance
A key takeaway from the discussion was the importance of a collaborative, multi-stakeholder approach to internet governance. The panelists agreed that all stakeholders—including governments, private sector entities, civil society organizations, and users—have an active role in shaping the future internet.
The discussion also touched on the position of developing nations, with Ibrahim of the Digital Impact Alliance asking how African countries can develop data governance frameworks that enable engagement with the private sector without stifling innovation. Hoda Al Khzaimi responded that Africa is not a latecomer to this space, pointing to Kenya’s M-Pesa and Rwanda’s government-technology programmes, and recommended jointly run public–private sandboxing approaches rather than copy-pasting regulation from other jurisdictions.
Unresolved Issues and Future Directions
While the panel reached consensus on many points, several unresolved issues emerged:
1. Effectively balancing data sovereignty with cross-border data flows
2. Addressing potential increased surveillance and privacy erosion in new technologies
3. Resolving hardware floor issues in emerging technologies like XR
4. Evolving consent mechanisms for new computing platforms
5. Ensuring accessibility and inclusivity in the future internet across different regions and demographics
6. Developing appropriate data governance frameworks for developing nations
The discussion concluded with a call for continued dialogue and collaboration among stakeholders to address these challenges. The panelists emphasized the need for a principled approach that focuses on user needs, economic opportunities, and ethical considerations in shaping the future of the internet.
In summary, this thought-provoking discussion highlighted the complex interplay between technology, regulation, user rights, and societal values in the digital age. It underscored the need for adaptable frameworks, trust-building mechanisms, and the preservation of cultural spaces as we navigate the evolving landscape of the internet and emerging technologies.
Session Transcript
Robin Green: changing, but in order to make this happen, it’s going to be essential to have the global infrastructure that supports it. Data centers are a great example of some of the kinds of infrastructure that we’re going to need, but in order to really, I’m so sorry, I think some people online couldn’t hear me. In order to grow that infrastructure, it’s going to be really important that we have a regulatory and legal environment that supports it. This means having globally predictable, interoperable, and adaptable regulations that promote digital connectivity and really bridge the digital divide, and that promote data flows and secure communications, like encryption of data in transit.
Judith Espinoza: I really appreciate what you said about AI always being part of these technologies, right? I think it’s easy, perhaps from a consumer perspective, to look at things as siloed developments, but as we move into the next phase of the internet, we can see that none of this is developed on its own. These are things that have to go together. AI is an enabler for lots of these technologies, but it’s not a product on its own, so I think this is perfect. With that, I also want to share, part of the way that at the forum we’re envisioning the future of the internet is that these are digital intermediaries for connection, whether it be through social media, whether it be to commerce, whether it be to health, gen AI, you name it, right? And one of those ways, one of those pathways forward is through digital public infrastructure. So the way that people can connect to each other, also economic opportunity, growth. And with that, I want to turn to Apostolos, and I want to ask you, how has Greece advanced the next iteration of the internet experience through digital public infrastructure? How are you developing DPI in Greece, and what are some of the, maybe, the governance opportunities that that presents, right? DPI as an enabler of good governance, as a means of connection.
Apostolos Papadopoulos: Thank you very much for your question, and I’m excited to be here. So in the Greek context, I think, the digital transformation journey of the country is in two stages, in two phases. We’re currently in a stage where we are doing a lot more work in AI and working with emerging technologies, and you were talking about experiences, and I think the permeating words that would delineate this would be trust and directness and transparency. So citizens would like to interact with governments and to have a direct and easy way to do that. So currently, we’re doing a lot of work in AI. We are doing work in LLMs, where we created a government chatbot, so citizens can interact with the government portal and figure out easy ways to interact with every service and have access to digital services. We’re doing work in AI and education with digital tutoring and homework assignments. So in this phase, we’re investing a lot in new and emerging digital public infrastructure, emerging technologies. The first phase that allowed us to do that starts in 2019 with the creation of a digital ministry, a digital transformation ministry, and that was because up to that point, some of that did not exist in Greece, and that created the baseline for the second phase to be able to be executed. So from 2019 to 2023, there’s been a digital tiger leap, as people have called it, in the sense that digital adoption was very low in Greece. In 2018, we had 8 point something million digital transactions in total. Greece is a country of about 10 million people, so it’s a very low number of adoption. But 2023 ended with 1.4 billion. So if you chart that, it’s exponential growth, both in terms of supply as well as demand. So this stage, this first stage, created the regulatory framework, the engineering framework, the platforms for us to be able to go into the second phase and do more work with emerging technologies. And the regulatory framework, speaking of that, is a crucial layer of this stack. So you have to have common-sense, light-touch approaches to regulation, so people can transact both internally, inside the government, as well as with external partners. And overall, I would say, DPI in Greece currently is very much a given, and digital is considered something that is, you know, by default something that people and businesses expect of the government.
Judith Espinoza: Thank you so much. I want to follow up with one more question for you. You’re talking about exponential growth in usership, and in following this model, do you see this as an essential way, I guess, also for financial growth for the country, right? You’re connecting, it’s peer-to-peer, it’s also services-to-peer, and also, I guess, for businesses as well. How do you see this growth?
Apostolos Papadopoulos: Yes, very positively. One of the deliverables of this approach has been $2.5 billion in investment in FDI. We have, we are the, Greece is the only European member state, the only European member state, along with Poland, a major high-risk country. So, okay. Can you hear me better now? Perfect. Okay, great. All right, thank you. Sorry. So, FDI is a crucial part of this equation. Can you hear me better? Yes, that is fantastic. Okay. So, we had a microphone problem. So, I was just saying, FDI is crucial, and it is a direct byproduct of the strategy, and of the execution of the strategy. So, the Greek government has been working with international and local partners, and there has been a great synergy between all the stakeholders, and both in terms of job growth, as well as in terms of investments, has been a very positive story so far. Thank you so much.
Judith Espinoza: With that, you know, there is an interesting narrative that we are starting to weave here, right, which is investment, and that leads to growth, and that leads to opportunity. And that builds good governance, right? This is an opportunity to build better governance, to build better trust among stakeholders. And with that, I really want to pivot now to Britain. You know, we are talking about the Internet evolving, and as these technologies evolve, I wonder, what do you think are the core regulatory and policy obstacles that we must overcome to really make a better Internet, right? What have we done wrong? Where can we do things better? And are there really any new risks that you think regulators should be paying attention to? Thank you.
Brittan Heller: Can you all hear me? Great. So, I teach international law and AI regulation, and have worked in emerging technologies for about eight years now. So, I’m going to give you the conclusion first. The conclusion is that emerging technologies are a constellation, and if your regulatory approach focuses on one aspect in lieu of the others, you’re going to miss the bigger picture. So, you have to think about the way that AI will be interacting with immersive technologies. We’ll look at new payment systems like blockchain. We’ll look at the new petrol of the Internet, quantum computing, and see how all of those systems will feed off each other, will interact with each other, and how existing law may not be a clean fit for these new technologies. There are four things that I think can be valuable when you’re trying to figure out this puzzle about whether your existing law will fit, and how to determine what needs to be addressed first in a regulatory regime. The first obstacle is ensuring that these regimes, which were designed primarily in the late 1990s and early 2000s, are adaptable enough to keep pace with the rapid evolution of these technologies. One example that I work a lot on is virtual reality, or XR, systems. We put on a conference at Stanford Law School last year called Existing Law and Extended Reality, because you can’t just take laws formulated for 2D computing, put them into 3D spaces, and expect that they’re going to work the same way. Think about the way that you formulate jurisdiction, or the way that privacy concerns operate in a technology that is different from your laptop because it has sensors that must reach out into the environment to calibrate your devices. Your privacy looks different when it’s not just based on the words that are going in and out of servers, when it’s actually location-based and based on your biometric data. So looking at that, how adaptable is your legal system? Second is the question of cross-border regulation, and I know I sound like I’m coming straight at you from 2006, but it’s a very important issue, and when you look at all of this, look at it, with all puns intended, as a second bite at the apple. All of the things that you wish could be different about the way internet governance works and manifests in your jurisdiction, in your company, in your stakeholder group, you have a chance to do differently this time. Take that opportunity. So look at the way that data protection laws align with regulations in other parts of the world so we don’t create another fragmentary regulatory landscape, and how you create the coordination necessary to make this work across different countries. Third is a question of accountability and transparency. As we rely more on automated systems, the question of who is responsible when something inevitably goes wrong becomes much more complicated. So when I evaluate AI regulatory regimes, it’s not just the robustness or strength of the laws that I look at, it’s the actual enforceability of those regulations. And laws that are cut and pasted from one country and placed into another legal context may not have the same impact on the ground and in the business sector, based on the way your corporate laws are structured. So you can’t expect the same results by cutting and pasting. And finally, in terms of new risks, one of the most pressing concerns is the potential for increased surveillance and erosion of privacy.
As these technologies are evolving, they enable more granular tracking and profiling of individuals, oftentimes without knowledge or consent. And in new technologies where AI grows legs and walks out in the world amongst us, you need this type of information to calibrate the device. So your conception of privacy, of consent, of freedom of information, all of these things need to shift from the type of understanding that you see embedded in earlier generations of laws. Overall, regulators need to think about these risks on a broad scale, focusing on fundamental rights while fostering innovation. And the nice thing about these new ecosystems is that what is good for users is also good for human rights. So they don’t have to develop at odds with each other when you’re starting to create these systems anew. Thank you.
Judith Espinoza: Thank you so much. And I think this is a perfect segue to you, Dr. Hoda. We’ve heard now what those policy gaps are. I wonder, this is governance and policy, right? But from your perspective, you’ve advised multiple industries, multiple companies. What do you think the most important ethical considerations are for the private sector when developing these technologies? How do you think that this can be built in a trustworthy way? And then also, we always talk about trust at the forum, right? We want to talk about how you build trust with users, with society at large. But what are the metrics then to know that something is trustworthy, right? We can all say that something is trustworthy, but how can we prove that there is trust there, right? Whether it’s the government level, whether it’s at the product level.
Hoda Al Khzaimi: I think one of the most important aspects that faces the private sector is how you can bring the ethical stack, the trust component, into the functionality of the final product that you’re putting into the market. We have talked about several trust frameworks that exist internationally, with the OECD, with the UN, and as well with the World Economic Forum and TASSI, which mostly address accountability, transparency, security, inclusivity, and interoperability. But when you look at the technology that’s being produced in the market today, you don’t see that kind of holistic deployment of ethical components across the map and the technology stack. So how we can encourage that at the algorithmic level is very important. And I think right now, in 2024, when we are trying to publish, in my research group, in any of the top-tier AI conferences, what I see as very positive is the fact that they encourage you to make sure that your algorithm is accessible and that transparency is available in the system. And that’s quite important, because then you start changing the system; you don’t get access to publication unless you do that. And I would like to see this kind of requirement existing at the platform level as well. Because when we talk about the current social media platforms, for example, we don’t see the same level of transparency. I mean, I’m not talking about annual reporting or reporting that exists at a specific periodic level, but that kind of dynamic, quick, at-the-tip-of-your-finger level of ethical transparency that will tell you who used your data and for what purpose your data was used, that kind of end-user dashboard that should exist for users. And I think in the research space, we do a lot to improve security; we do a lot to make sure that we have, you know, privacy aspects, zero-trust systems, homomorphic encryption, federated learning, these big tools that take us sometimes years to develop in order to bring trustworthiness and a level of reliability and security into the technology, but we don’t necessarily always see them used or transformed into the product cycle. So that’s kind of concerning on the map, on the holistic map. And I think this era of 2025 to 2030 would be the period where we perfect this kind of transitioning of ethical components into technology. That’s the first, I think, challenge that we see across the map. And the second challenge is for us to understand that bringing ethical and trustworthy digital solutions into the platform is a multi-layer, multi-stack kind of challenge. So you’re not dealing only with the technology or with the ethical stack, but also with the regulation aspects, with the harmonization efforts that exist across the globe. You’re dealing with how we should write those into policies and, as well, regulations that would bring data acts into action in different jurisdictions. Respecting the indigenous differences of those jurisdictions is very important, because, as Brittan just said, it’s quite different to bring activation of laws when you’re dealing with one specific jurisdiction versus another. And we should respect that, and we should allow those kinds of legal sovereignty rights of developing the law when it comes to technology to exist across different markets. So this is the second, I think, challenge that I see existing.
And it worries me at the moment that everybody is looking at the EU AI Act, for example, as the grand flagship regulation to be used across jurisdictions, which is not going to work the same way everywhere, because it’s a risk-oriented framework of legislation that might not work for Asian countries, for example, where they are much more concerned with a value- and principle-based kind of approach, and they want that to be translated into the platform as well. So the interpretation of ethics and legislation into the platform is very important. And your second question is, what do we have to include when we are talking about the trust stack across the board? I mean, if you ask a technology-oriented person, the answers will be different than if you asked a legal kind of entity, or a different stakeholder who’s coming from the policy framework or from the implementation, industrial framework. In my opinion, the first thing we should have is a single source of truth for this, like a governance structure and best-practice guidelines that would tell you what a trust stack should include across these different layers. And to me, the first layer is the ability to allow users to have accessibility to their data, and also visibility of the data transactions that exist across the map, and authentication of who actually accessed those data transactions at different layers of the map. And this is something that we have had in conversation in research communities, as well as industrial communities, since 2009, because we had this massive technological crisis where users woke up one day and realized that they want to have ownership of their own data. As you have said, and as our colleague from Greece just highlighted, accessibility to data and access to the data market is considered today an economy by itself. The work we’re seeing around DPIs and government technologies, which is the new rise of technologies that we are gonna see until 2030, is gonna be amassing to over 6 trillion US dollars. So it’s a huge industry that’s being developed on the back of the data that’s being provided by the citizens. So how can we make sure that the first layer, providing accessibility to those data in a secure manner, is available for the users? The second layer is about the security stack, and this is what we already have, and I think we’ve done quite rigorous work around it. We just have to perfect the adaptability of those security stacks onto different platforms, especially if we’re talking about the metaverse, or if we’re talking about these kinds of real-time transactions; then we need to make sure that they are, I would say, fast enough, and as well light in operations, to be able to be computed on different devices. And the third layer is the layer of legislation and regulation, because, I don’t know, we have discussed this several times across the map, but I think I just wanna reiterate this for people who don’t understand that legislation takes time. Legislation takes, I mean, a cycle of three years, or a cycle of more than three years in certain jurisdictions, to take effect. And technology development is not waiting for legislation to be passed. We see new models of AI being deployed and pushed across markets, so how can we protect users through legislation unless we can produce something that’s faster than what we’re having in the current cycle?
Judith Espinoza: It’s very important. I wanna come back to a couple of your points, especially on data and open-source modeling, but I wanna, in the interest of time, open this up now to the audience. I wanna see if anyone has any questions. We can go ahead and pass the microphone around, and I’m also gonna ask that we monitor the chat online to see if there’s some questions. But we have a question over here. I can pass you the mic. Please tell us what your name is and where you’re from, and please address. Sure, fantastic, thank you.
Audience: My name is Ibrahim, and I’m from the Digital Impact Alliance, DIAL. We work in supporting countries in Africa to deal with or develop data governance frameworks which are in line and up to standard with global best practices. Now, with that in mind, and Dr. Hoda, I’m looking at you for this question, probably. Brittan said this legal framework development is a second bite at that apple, which I think is quite exciting. But with countries in Africa, which are latecomers into this digital governance space, and with the advent of fast-paced development of technologies that consume, ingest, but at the same time produce a whole lot of data, how do you expect countries in Africa, or how do you advise for them, to deal with private sector actors at this point in time with enabling legal frameworks, with supportive legal frameworks that are not stifling innovation, but at the same time creating that ability to drive value out of engagement with the private sector? Ibrahim, right?
Hoda Al Khzaimi: Thank you so much for the question. The first thing I would say is that Africa is not a latecomer to this conversation, because Africa itself has produced the first, I would say, payment infrastructure. Like, within DPI infrastructure, you care about payment scales, you care about digital identity, you care about, as well, accessibility to healthcare services and other types of services on the platform, and regulation. And Africa, as I said, is not a latecomer to this conversation. Africa, with examples that happened in Kenya, like M-Pesa, for example, and the payment structure, was pioneering in this space, even globally, I would say. And I think it’s one of the first one or two global payment systems that existed. And as well, Rwanda itself, at the moment, is building loads of good stack when it comes to government tech, which is also pioneering on a government level. So I think there is a lot to learn from Africa when it comes to their mass deployment of those structures. And also, when we talked, in, I think, 2023, to the Minister of Technology and Infrastructure in Rwanda, they were also trying to pass this knowledge from Rwanda to other African countries, which is great to see. My advice, when it comes to developing legislation or regulations for government technologies in general, touching emerging technologies, not just one aspect of technology, is to try to embody what we have already seen become the global de facto, which is a sandboxing approach. A sandboxing approach normally is something that we see mostly in the financial sector, because you’re trying to de-risk the threat that might come into the financial space from adopting a new technology, or adopting as well new emerging aspects, into the mass deployment of a system. So a sandboxing approach to those technologies, between the private sector and the public sector, is quite important. And this is what we have tried to push for with the World Economic Forum in the UAE as well. We have established this kind of global trade regulatory structure where countries are encouraged to come and be onboarded onto it to understand how they can deploy specific technologies like AI into different domains, not just in the government, but in the public sector as well as industry. So I think learning from those global examples and building your own niche, localized example is quite important for you to understand the current pressing needs in your markets, and to try to keep that kind of indigenous space of solution-making and build your own jurisdiction of regulations and policies. Because this is something you should not, as Brittan said, and I do agree on this 100%, you should not copy-paste from a global structure. You should try to understand the nuances and the problems and the challenges that you have on the ground and that you’re trying to solve for, because it’s part of the sovereignty aspects of technology, sovereignty aspects of data, and sovereignty aspects as well of the infrastructure that you will be developing for these types of technologies across the map.
Judith Espinoza: Do we have anyone else in the room with a question? If not, can we pull up maybe the chat from the Zoom room so we can also look at that? Okay, while we wait for that to come in, I wanna… Okay, we’ve touched on some of these, but I wanna touch on something that came up here in this conversation. And I wanna really, there seems to be a tension, right? In most bodies of research and some work about having sovereignty, right? But also making sure that the internet that we develop isn’t fragmented. And a large part of that is this data economy that you touched on. And a lot of that is this really just global data stewardship, right? I mean, we’re talking about tech and we’re talking about platforms, whether decentralized or centralized, that really span multiple physical jurisdictions, right? Across countries, across nations, regionally. So I wanna come in and I wanna open this up. First, I wanna direct it to Robin. How is Meta thinking about this data stewardship aspect of this technology, of this future internet? All of these technologies are sort of changing the way users either produce data or interact with data. So yeah, how is Meta thinking about this? And how do you see it maybe changing or affecting, again, building on that user trust?
Robin Green: Thanks, that’s such an important question. And I think it applies not only when you’re thinking about the metaverse and AI and things like that, but really to the way that we are interacting with the internet in general. I think we really need to get crisp on what we mean by sovereignty, right? Because there are a lot of different approaches to and different definitions of digital sovereignty. For some, it can mean sovereignty of government, and often that historically has been very territorial in nature and physical in nature. But then the internet sort of shifts all of that. And then there’s also the concept of personal sovereignty, digital sovereignty. And so I think one of the most important things to do is make sure that as we are creating different governance frameworks, we’re doing two things. One, making sure that they’re interoperable with one another, so that we are not creating frameworks that are incompatible, such that you can’t offer more or less the same services in two separate jurisdictions at the same time. And I think one of the key things essential to ensuring that is making sure, as I mentioned earlier, that we’re promoting things that are foundational to an open, interoperable and secure internet, in particular the free flow of data across borders, digital security, and broader adoption of some of the best technologies and tools that we have to augment digital security, like encryption of data in transit and data at rest. The second thing is we need to make sure that governance is adaptable. And that is a really hard needle to thread. I think we do this in every space of digital governance the best we can, but we’re still really trying to get to good. And the reason for that is because it’s really hard to know what the future’s gonna look like. I think Brittan was absolutely hitting the nail on the head when she was talking about how these laws that we’re often applying today, that were created in the 80s, 90s, and early aughts, don’t necessarily seamlessly fit with the technologies of today. So let’s take that as a cautionary tale, not only around making sure that we are not just copy-pasting and making the mistakes of yesterday, but also making sure that as we’re creating legal frameworks, we’re building them with, sorry, this keeps going out on me. We’re building them with enough flexibility and adaptability, and in a way that in some sense is really technology-neutral, even though we’re still talking about tech governance, so that in 20, 30 years, we’re not in the same position where we have a legal framework that develops more slowly than the technology being adopted and that really is not fit for purpose. To that end, I think governance has to be collaborative, cooperative, and multi-stakeholder. One of the most essential things in how we think about not only product and, excuse me, product and service governance, but also what we think the policy frameworks and legal frameworks around the world should look like, is making sure that we’re collaborating with other private sector peers, not only within our sector, but with other kinds of companies in different sectors as well, collaborating with government, civil society, academia, and users, and I think that’s one of the great examples of why fora like the IGF are so critical.
It gives us this opportunity to come together and to really promote that kind of multi-stakeholderism. And then I think the last thing is we have responsible innovation principles, and one of the things that’s really important about those principles is that we’ve developed them in a way that is meant to be adaptable, in just the same way that I’m suggesting our legal frameworks need to be adaptable. They’re high-level principles that we have to execute on in a way that users trust, and the way that we know we’re doing that right is because users are happy with it, and it’s exactly like Brittan said. What’s good for users is good for human rights, and frankly, what’s good for users and human rights is also good for economic development and digital transformation. So the first of our responsible innovation principles is never surprise people. A good example of that is on our smart glasses, the Meta Ray-Bans: if they’re turned on, you can see a little LED light, so people will know if a person in their vicinity is using these glasses to take pictures or to livestream or something like that, and if the user actually tries to cover up the LED, they’ll get a prompt that they have to uncover it in order to continue using the product as they want. In addition to that, we wanna provide controls that matter. This is especially important as it applies to youth using our products, not only making sure that youth have those controls and that we’re starting with built-in privacy by default, but also making sure that parents have the kinds of controls that they want so that they can play a really active role in guiding the experiences that their children are having online using these technologies. In addition to that, consider everybody. Consider everybody is our third principle, and it’s really meant to ensure accessibility. It’s meant to ensure that this is an internet and these are technologies for everybody. An example of how we do that is by making sure that we have adjustable height, for example, on our Meta Horizon operating system, which means that whether you are standing up or sitting down, you can have the same really comfortable experience in VR. We also have a put people first principle. This is all about privacy and security. Oh, I’m sorry, I’m not good at holding microphones. You’d think I was a digital native, and so some of this would be easier, but I’m not great with technology, although I guess this isn’t really digital technology. So anyways, put people first: privacy, security, I could go on about that for a very long time. In the context of VR in particular, in the metaverse, well, VR and augmented reality and XR, I think we think a lot first and foremost about the youth experience and making sure that we’re building privacy and security into that, but then the other aspect of that is making sure that adults have that same kind of control over their experiences and autonomy. We implement this through a lot of different approaches that range from the kinds of user controls that we’ve talked about, but also privacy-enhancing techniques like processing data on device. And then we also try to minimize data collection as much as we can. And then we do safety and integrity as one of the major things, and I think you’ll notice that safety and integrity sort of principles are woven throughout some of our other principles, but it’s also its own standalone principle.
And we really try to live that and make sure that our users can experience that principle by fostering safe and healthy communities. We want to make sure that we are promoting communities where people can gather with shared intent incentives and establish positive norms to connect online. We want to be empowering people, developers, creators, and users with the kinds of tools to create the experiences they want, but we also need to make sure that people with bad intentions are not able to just do whatever they want on services. And so with that in mind, we have a code of conduct for virtual experiences that makes sure that we do things like prohibit illegal, abusive behavior, or excuse me, behavior that promotes illegal activity, behavior that is abusive, or behavior that could actually lead to physical harm. And then we’re also doing things to promote admins and their ability to moderate their spaces. And so we just want to make sure that as we’re thinking about these things, those high-level values, those principles are really adapted into governance structures that governments are considering so that we can really be maximizing voice, safety, authenticity, dignity, and privacy in the growing adoption of these new technologies.
Judith Espinoza: Thank you, Robin. I think that was very comprehensive. And I want to touch on one thing that I think is really important, right? So when you’re developing these frameworks, right, you really do need a whole-of-society approach, but there’s also something interesting here that I think we can all take away, which is there really is an alignment of interest, right? And it’s an alignment of interest for everyone because trust makes things work, right? When a user trusts a technology or trusts a platform or a service, that can expand, that can grow. That’s an opportunity for growth for everyone. And with that, I want to pass this on to Apostolos now. You’re sort of the example of what private-public cooperation can do. It’s kind of like the bread and butter of what we do at the forum. So I want to ask you, how does Greece approach this, right, this issue of data? How do you approach data stewardship? How do you come up with these frameworks that work, that are trustworthy, that are interoperable, and that leverage all of these sort of new technological innovations so that people can have better access to opportunities through digital intermediaries? And then I’m going to pass on to Brittan after that on a similar question, but I’ll let Apostolos go first. Please, go ahead.
Apostolos Papadopoulos: Thank you, Judith. Fantastic question. I think in the Greek context, trust, privacy, and data security are defining axioms and characteristics of the digital transformation strategy. Everything that was done and is still being done has always put users first, citizens first, their data first, and everything happens with consent. So my colleagues here mentioned a bunch of great words earlier. Transparency, consent, these are important. So anytime a digital service, whether that’s commenced by the citizen or by another government organization, has to access data, the citizen has to consent to that data processing. Other than that, from an institutional perspective, when the Ministry of Digital Governance was created, it was designed so that the minister was endowed with CIO-type roles, let’s say. That means he or they had the unilateral power to connect any data set they want. But I think connect is the operative keyword here, because it’s not about owning the data sets. It’s not about owning the data. It’s about simply connecting different registries with the intent of producing a digital service outcome for the citizen, and the citizen has explicitly asked for that. So it’s not about the government going out there on its own and processing data and creating new registries and creating, you know, stuff like that. But it’s about creating the experience and creating the trust culture, so that people know: oh, I want to do X, Y, Z. Here’s how I do it. Here’s one platform to do it. And it’s being done in a way that is transparent to me and to my understanding. So trust, openness, trustworthiness are defining characteristics of the digital transformation strategy.
Judith Espinoza: Okay, thank you so much. You know, when we talk about traditional digital public infrastructure, the things that kind of come up really always are, you know, data exchange, online payment systems, and digital identity. And so, you know, across the stage, we see how people approach that in different ways, right? Whether you’re building soft digital identities and footprints through, like, a Meta account or, you know, your Google account or whatever it is. But these all sort of build on this aspect of connection. And I want to pass on to you now, Brittan. What do you think are those gaps really? Because we’re talking about, you know, theoretically, and we see this alignment, right? This is a good alignment of incentives. But what do you think is the gap there then to take us there? And then you can talk about it from a regulatory standpoint, but what do you think are the gaps there to make sure that we sort of all align and take this work forward?
Brittan Heller: Three things. Number one, I think if we are not deliberate about creating spaces for cultural engagement and education in the next iteration of the internet, we will not have them in the same way that we did in the first. When you look at the people who created the internet the first time, they all were professors who were trying to share information. They worked for government organizations. They got their funding from government organizations. With the next iteration having extensive private investment into it, it is not a natural evolution to have a cultural space emerge if civil society does not ask for it and if governments aren’t aware that that is a gap. You can look at this with the metaverse, where you saw certain countries starting to create cultural properties. Barbados created an embassy in the metaverse. South Korea had a widespread presence. And if you look at Saudi Arabia, there are actually augmented reality aspects of their cultural tours when you go to some of their UNESCO World Heritage Sites. So you have to think about how the things that make people unique, the things that your people value, the things that make you special, translate into the new mediums of computing. The second is you have to think about the hardware floor, because the hardware floor for some of these new technologies is not solidified yet. What this means is that we risk creating fragmentation via technical means when we may not intend for that to happen. The example for that is Magic Leap just announced that they are going to stop supporting the first edition of their XR headset. So all of the content that was created for the last eight years will no longer be accessible in a matter of weeks. This is happening again and again and again, and there are many industry groups and user groups within the XR community who are very, very concerned about the loss of their data, the loss of their creative energy, because the hardware floor is not settled. We don’t know the format. There are groups working on that now that are just starting to emerge, like the Metaverse Standards Forum. Most people are very surprised to learn that it was just this year that the file format for 3D assets to actually move between worlds and function between worlds was created by Adobe, so the equivalent of a PDF-type format for digital assets. We’re really at that phase in some of these new computing platforms, and so you have to think about what that means and what will be lost if we don’t bring it along. I think the final piece is looking at ways that concepts like consent can be evolved with new computing platforms. I did a study that was published and presented at ISMAR, which is a big conference about spatial computing. Kind of strange for an international law professor to be there, but we were looking at different ways that the notice and consent mechanisms that you have in flat-screen, traditional computing could be adapted to 3D computing, and whether the affordances of 3D technology meant you could do it differently. And we found that, yes, you could do it differently. Users liked the mechanism that we built that showed them that their eyes were being tracked and how the eye tracking was working. They responded really, really positively to that, and then they felt like they were able to consent to the use of their data in more meaningfully informed ways.
That’s kind of anathema to what a lot of companies thought, that if you showed people that their eyes were being tracked, it might freak them out, to be honest. But they liked understanding what the data flows were… We visualized the data flows for them and explained to them how the device worked. That was the basis for meaningfully informed consent that you couldn’t do on a flat screen. You had to do it in 3D. I think those are the three pieces that might get overlooked if we’re just looking at it through a pure kind of platform policy or regulatory lens.
Judith Espinoza: That’s fantastic. Thank you. And we have now had the three-minute warning, but I want to wrap up. And I think there are some good takeaways to this, right? First, I think when we think about the future Internet, all of us are active participants in how we build that future together, right? None of us are, like, passive users of the Internet or online or digital intermediaries. We all have an active role in how we shape that. And I want us all to feel empowered and walk away knowing that what we do matters, right, from a user standpoint or through your own personal capacities in whatever way you join us. I see Jeff from Amazon Web Services here, and we’ll chat in a bit with him. But the second takeaway is, regardless of what the future Internet looks like, right, we have to make sure that we’re taking a principled approach to how we build this, right? We want to make sure that the users are at the center, that digital public infrastructure really is a means to further, whether it’s economic opportunity or connectivity, whether it’s the metaverse, whether it’s projects like the ones that Brittan mentioned. And there’s also, you know, there’s the DEWAVerse now, which the Dubai Electricity and Water Authority created, like…
Hoda Al Khzaimi: I mean, in the UAE, we have many. We have, as well, the one with MR and the land authority, where you can pay and actually co-pay for real estate assets on the spot. We have, as well, developed a strategy that is extremely applicable to a wide scale of industries, and we are encouraging the industry to build that kind of metaverse collaborative space that reflects back into the economy and different FDI structures. So I think it is about how the leadership of this space will happen. I mean, we have advocacy across the map from the leaders of the country, which translates to building economies and building companies and building solutions that translate across the map. But this is exactly what we talked about, right?
Judith Espinoza: So we, in these examples, see how metaverse or AI is being built into DPI, right? This is really pushing forth how people are going to experience the future of the internet. And I think, lastly, right, all of our incentives align. No one advocates. No one wants, like, a bad future internet. So it’s important to all come together. And I want to thank, to close up, I want to thank the IGF for hosting us and allowing us to have this space. I want to thank all of you for being wonderful supporters of our work, but also really great collaborators in what we do. And, you know, the final takeaway is this is kind of the example of what we want moving forward, right? This is all of society represented on this panel and through the work that we’ve been doing here for the last couple of days. So I encourage you to take that with you and be active participants in the future internet that we want to create, right? It’s not static. It’s a product that keeps evolving. And we keep evolving with it. So, again, thank you so much. I’ll let all of us go. Again, thank you for spending the last day of the forum with us. We’re super grateful. And if you have questions and you want to hang around, please do so. We’ll be here for a couple more minutes. Thank you. Round of applause for our wonderful panelists. Thank you. Thank you. Thank you.
Judith Espinoza
Speech speed
211 words per minute
Speech length
1761 words
Speech time
500 seconds
AI as an enabler for other technologies, not a standalone product
Explanation
Judith Espinoza argues that AI is not developed in isolation but is integrated with other technologies. She emphasizes that AI acts as an enabler for various technologies rather than being a standalone product.
Major Discussion Point
The Future of the Internet and Emerging Technologies
Digital public infrastructure as a pathway for connection and economic opportunity
Explanation
Judith Espinoza highlights the importance of digital public infrastructure in facilitating connections and creating economic opportunities. She views DPI as a crucial pathway for advancing digital connectivity and fostering growth.
Major Discussion Point
The Future of the Internet and Emerging Technologies
Alignment of interests between users, human rights, and economic development
Explanation
Judith Espinoza highlights the alignment of interests between users, human rights, and economic development in building the future internet. She emphasizes that trust is crucial for the growth and expansion of technologies and platforms.
Major Discussion Point
Building the Future Internet
Brittan Heller
Speech speed
147 words per minute
Speech length
1453 words
Speech time
591 seconds
Need for adaptable legal frameworks to keep pace with rapid technological evolution
Explanation
Brittan Heller emphasizes the importance of creating legal frameworks that can adapt to rapidly evolving technologies. She argues that current laws, often designed for earlier tech generations, may not fit seamlessly with new technologies.
Evidence
Example of laws from the 80s, 90s, and early 2000s not fitting well with current technologies
Major Discussion Point
The Future of the Internet and Emerging Technologies
Agreed with
Robin Green
Agreed on
Need for adaptable and interoperable legal frameworks
Importance of cross-border regulation and coordination for internet governance
Explanation
Brittan Heller stresses the need for coordination in cross-border regulation for effective internet governance. She highlights the importance of aligning data protection laws globally to avoid a fragmented regulatory landscape.
Major Discussion Point
The Future of the Internet and Emerging Technologies
Constellation of emerging technologies (AI, XR, blockchain, quantum computing) shaping the future internet
Explanation
Brittan Heller describes the future internet as being shaped by a constellation of emerging technologies. She emphasizes that focusing on one technology in isolation will miss the bigger picture of how these technologies interact and influence each other.
Evidence
Mentions AI, XR, blockchain, and quantum computing as examples of interconnected emerging technologies
Major Discussion Point
The Future of the Internet and Emerging Technologies
Potential for increased surveillance and erosion of privacy with new technologies
Explanation
Brittan Heller warns about the potential for increased surveillance and privacy erosion with new technologies. She points out that emerging technologies enable more granular tracking and profiling of individuals, often without their knowledge or consent.
Major Discussion Point
Challenges and Opportunities in Digital Transformation
Importance of accountability and transparency in automated systems
Explanation
Brittan Heller emphasizes the need for accountability and transparency in automated systems. She argues that as reliance on automated systems increases, it becomes more complex to determine responsibility when things go wrong.
Major Discussion Point
Challenges and Opportunities in Digital Transformation
Need for deliberate creation of cultural engagement spaces
Explanation
Brittan Heller stresses the importance of deliberately creating spaces for cultural engagement in the next iteration of the internet. She argues that without intentional effort, these spaces may not naturally emerge as they did in the first iteration of the internet.
Evidence
Examples of countries creating cultural properties in the metaverse, such as Barbados creating an embassy and Saudi Arabia using augmented reality for cultural tours
Major Discussion Point
Building the Future Internet
Importance of addressing hardware floor issues in new technologies
Explanation
Brittan Heller highlights the need to address hardware floor issues in new technologies to prevent unintended fragmentation. She warns that unsettled hardware standards can lead to loss of content and creative energy.
Evidence
Example of Magic Leap discontinuing support for its first-generation XR headset, making years of content inaccessible
Major Discussion Point
Building the Future Internet
Evolution of consent mechanisms for new computing platforms
Explanation
Brittan Heller discusses the need to evolve consent mechanisms for new computing platforms. She argues that 3D computing environments offer new possibilities for obtaining meaningful informed consent from users.
Evidence
Study presented at ISMAR showing users responded positively to visualizations of eye tracking and data flows in 3D environments
Major Discussion Point
Building the Future Internet
Robin Green
Speech speed
152 words per minute
Speech length
1506 words
Speech time
591 seconds
Importance of interoperable governance frameworks to avoid fragmentation
Explanation
Robin Green emphasizes the need for interoperable governance frameworks to prevent fragmentation of the internet. She argues that frameworks should be compatible across jurisdictions to allow consistent service offerings.
Major Discussion Point
Data Governance and Digital Sovereignty
Agreed with
Brittan Heller
Agreed on
Need for adaptable and interoperable legal frameworks
Need for technology-neutral and adaptable legal frameworks
Explanation
Robin Green stresses the importance of creating legal frameworks that are technology-neutral and adaptable. She argues that this approach will ensure the frameworks remain relevant as technology evolves rapidly.
Major Discussion Point
Data Governance and Digital Sovereignty
Agreed with
Brittan Heller
Agreed on
Need for adaptable and interoperable legal frameworks
Balancing data sovereignty with an open, interoperable internet
Explanation
Robin Green discusses the challenge of balancing data sovereignty with maintaining an open and interoperable internet. She emphasizes the need to promote free flow of data across borders while ensuring digital security.
Major Discussion Point
Data Governance and Digital Sovereignty
Differed with
Hoda Al Khzaimi
Differed on
Approach to data sovereignty and internet governance
Need for cross-border data flows and digital security measures
Explanation
Robin Green highlights the importance of promoting cross-border data flows and implementing strong digital security measures. She specifically mentions the need for encryption of data in transit and at rest.
Major Discussion Point
Data Governance and Digital Sovereignty
Importance of regulatory frameworks supporting digital infrastructure
Explanation
Robin Green emphasizes the need for regulatory frameworks that support digital infrastructure development. She argues that such frameworks are essential for the growth of technologies like AI and the metaverse.
Evidence
Mentions data centers as an example of necessary infrastructure
Major Discussion Point
Challenges and Opportunities in Digital Transformation
Need for globally predictable, interoperable, and adaptable regulations
Explanation
Robin Green stresses the importance of creating globally predictable, interoperable, and adaptable regulations. She argues that such regulations are crucial for promoting digital connectivity and bridging the digital divide.
Major Discussion Point
Challenges and Opportunities in Digital Transformation
Responsible innovation principles focusing on user trust and safety
Explanation
Robin Green discusses Meta’s responsible innovation principles that prioritize user trust and safety. She emphasizes the importance of providing controls that matter and considering everyone in the development of new technologies.
Evidence
Example of the LED indicator light on Ray-Ban Meta smart glasses that signals when they are recording or livestreaming
Major Discussion Point
Trust and Ethics in Technology Development
Importance of privacy, security, and user controls in new technologies
Explanation
Robin Green highlights the importance of privacy, security, and user controls in new technologies, especially for youth. She emphasizes Meta’s approach of starting with built-in privacy by default and providing parental controls.
Evidence
Mentions privacy enhancing techniques like processing data on device and minimizing data collection
Major Discussion Point
Trust and Ethics in Technology Development
Agreed with
Apostolos Papadopoulos
Hoda Al Khzaimi
Agreed on
Importance of user privacy and consent in data processing
Multi-stakeholder approach to internet governance
Explanation
Robin Green advocates for a multi-stakeholder approach to internet governance. She emphasizes the importance of collaboration between private sector, government, civil society, academia, and users in shaping policy frameworks.
Evidence
Mentions the Internet Governance Forum (IGF) as an example of a platform for multi-stakeholder collaboration
Major Discussion Point
Building the Future Internet
Apostolos Papadopoulos
Speech speed
131 words per minute
Speech length
838 words
Speech time
381 seconds
Greece’s digital transformation journey and exponential growth in digital adoption
Explanation
Apostolos Papadopoulos describes Greece’s rapid digital transformation, which he calls a ‘digital tiger leap’. He highlights the exponential growth in digital transactions and adoption in the country since 2019.
Evidence
Increase from 8 million digital transactions in 2018 to 1.4 billion in 2023
Major Discussion Point
Challenges and Opportunities in Digital Transformation
Importance of user consent and transparency in data processing
Explanation
Apostolos Papadopoulos emphasizes the importance of user consent and transparency in data processing in Greece’s digital transformation strategy. He states that all data access and processing requires explicit citizen consent.
Evidence
Mentions that citizens must consent to data processing for any digital service
Major Discussion Point
Data Governance and Digital Sovereignty
Agreed with
Robin Green
Hoda Al Khzaimi
Agreed on
Importance of user privacy and consent in data processing
Hoda Al Khzaimi
Speech speed
154 words per minute
Speech length
1843 words
Speech time
714 seconds
Incorporating ethical considerations into product functionality
Explanation
Hoda Al Khzaimi emphasizes the importance of integrating ethical considerations into the core functionality of technology products. She argues that ethical components should be deployed across the entire technology stack.
Major Discussion Point
Trust and Ethics in Technology Development
Importance of transparency and accessibility in AI algorithms
Explanation
Hoda Al Khzaimi stresses the need for transparency and accessibility in AI algorithms. She highlights the positive trend in academic conferences encouraging researchers to make their algorithms accessible and transparent.
Evidence
Mentions the requirement in top-tier AI conferences for algorithm accessibility and transparency
Major Discussion Point
Trust and Ethics in Technology Development
Need for user-centric dashboards showing data usage
Explanation
Hoda Al Khzaimi advocates for user-centric dashboards that provide real-time information about data usage. She argues for a level of transparency that allows users to easily see who used their data and for what purpose.
Major Discussion Point
Trust and Ethics in Technology Development
Agreed with
Robin Green
Apostolos Papadopoulos
Agreed on
Importance of user privacy and consent in data processing
Differed with
Robin Green
Differed on
Approach to data sovereignty and internet governance
Agreements
Agreement Points
Need for adaptable and interoperable legal frameworks
Brittan Heller
Robin Green
Need for adaptable legal frameworks to keep pace with rapid technological evolution
Need for technology-neutral and adaptable legal frameworks
Importance of interoperable governance frameworks to avoid fragmentation
Both speakers emphasize the importance of creating legal frameworks that can adapt to rapidly evolving technologies and remain interoperable across jurisdictions to prevent fragmentation.
Importance of user privacy and consent in data processing
Robin Green
Apostolos Papadopoulos
Hoda Al Khzaimi
Importance of privacy, security, and user controls in new technologies
Importance of user consent and transparency in data processing
Need for user-centric dashboards showing data usage
These speakers agree on the critical importance of user privacy, consent, and transparency in data processing, emphasizing the need for clear user controls and information about data usage.
Similar Viewpoints
These speakers share the view that transparency and accountability are crucial in the development and deployment of AI and automated systems, emphasizing the need for responsible innovation that prioritizes user trust and safety.
Brittan Heller
Robin Green
Hoda Al Khzaimi
Importance of accountability and transparency in automated systems
Responsible innovation principles focusing on user trust and safety
Importance of transparency and accessibility in AI algorithms
Unexpected Consensus
Cultural engagement in the future internet
Brittan Heller
Judith Espinoza
Need for deliberate creation of cultural engagement spaces
Digital public infrastructure as a pathway for connection and economic opportunity
While not explicitly discussed by other speakers, both Brittan Heller and Judith Espinoza touch on the importance of cultural engagement and connection in the future internet, suggesting an unexpected consensus on the need for deliberate efforts to create spaces for cultural and social interaction in digital environments.
Overall Assessment
Summary
The speakers generally agree on the need for adaptable and interoperable legal frameworks, the importance of user privacy and consent, and the necessity of transparency and accountability in AI and automated systems. There is also a shared recognition of the interconnected nature of emerging technologies and their impact on the future internet.
Consensus level
There is a high level of consensus among the speakers on core principles such as user-centric approaches, the need for adaptable regulations, and the importance of transparency. This consensus suggests a shared vision for the future internet that prioritizes user rights, innovation, and responsible development of technologies. However, there are some variations in emphasis and specific approaches, particularly in how different countries or organizations are implementing these principles.
Differences
Different Viewpoints
Approach to data sovereignty and internet governance
Robin Green
Hoda Al Khzaimi
Balancing data sovereignty with an open, interoperable internet
Need for user-centric dashboards showing data usage
Robin Green emphasizes the need for interoperable governance frameworks and cross-border data flows, while Hoda Al Khzaimi focuses more on user-centric control and transparency in data usage.
Unexpected Differences
Cultural engagement in the future internet
Brittan Heller
Robin Green
Need for deliberate creation of cultural engagement spaces
Responsible innovation principles focusing on user trust and safety
While both speakers discuss the future of the internet, Brittan Heller unexpectedly emphasizes the deliberate creation of cultural spaces, a point not directly addressed by other speakers, who focus more on technical and regulatory aspects.
Overall Assessment
Summary
The main areas of disagreement revolve around the balance between data sovereignty and internet openness, the approach to user data control and transparency, and the emphasis on cultural aspects in the future internet.
Difference level
The level of disagreement among the speakers is relatively low, with more emphasis on complementary perspectives rather than direct contradictions. This suggests a generally aligned view on the future of the internet, with differences mainly in specific focus areas and implementation strategies.
Partial Agreements
Both speakers agree on the need for adaptable legal frameworks, but Brittan Heller emphasizes the importance of considering the constellation of emerging technologies, while Robin Green focuses more on technology-neutral approaches.
Brittan Heller
Robin Green
Need for adaptable legal frameworks to keep pace with rapid technological evolution
Need for technology-neutral and adaptable legal frameworks
Takeaways
Key Takeaways
The future internet will be shaped by a constellation of emerging technologies including AI, XR, blockchain, and quantum computing.
There is a need for adaptable and interoperable legal frameworks to keep pace with rapid technological evolution.
Data governance and digital sovereignty must be balanced with maintaining an open, interoperable internet.
Incorporating ethical considerations and user trust is crucial in developing new technologies.
Digital public infrastructure and digital transformation offer significant opportunities for economic growth and improved governance.
A multi-stakeholder, collaborative approach is essential for effective internet governance.
Resolutions and Action Items
Develop governance frameworks that are interoperable across jurisdictions
Implement responsible innovation principles focusing on user trust and safety
Create user-centric dashboards showing data usage and processing
Establish regulatory sandboxes for testing new technologies
Deliberately create spaces for cultural engagement in new computing platforms
Unresolved Issues
How to effectively balance data sovereignty with cross-border data flows
Addressing potential increased surveillance and privacy erosion in new technologies
Resolving hardware floor issues in emerging technologies like XR
How to evolve consent mechanisms for new computing platforms
Ensuring accessibility and inclusivity in the future internet across different regions and demographics
Suggested Compromises
Adopting technology-neutral legal frameworks to allow for future adaptability
Balancing innovation with user protection through responsible development principles
Using sandboxing approaches to test new technologies within existing regulatory structures
Implementing privacy-enhancing techniques like on-device data processing to balance functionality with data protection
Thought Provoking Comments
The conclusion is that emerging technologies are a constellation, and if your regulatory approach focuses on one aspect in lieu of the others, you’re going to miss the bigger picture.
speaker
Brittan Heller
reason
This comment introduces a holistic perspective on regulating emerging technologies, emphasizing the interconnected nature of different innovations.
impact
It shifted the discussion towards considering the broader ecosystem of technologies rather than isolated innovations, setting the stage for a more comprehensive analysis of regulatory challenges.
Overall, regulators need to think about these risks on a broad scale, focusing on fundamental rights while fostering innovation. And the nice thing about these new ecosystems is that what is good for users is also good for human rights.
speaker
Brittan Heller
reason
This insight aligns user interests with human rights, suggesting a win-win approach to regulation and innovation.
impact
It reframed the discussion around finding solutions that benefit both users and broader societal interests, encouraging a more balanced approach to technology governance.
The first thing we should have is a single source of truth: a governance structure that would tell you what a trust stack should include, and the best guidelines would cover these different layers.
speaker
Hoda Al Khzaimi
reason
This comment proposes a concrete solution to the complex issue of building trust in digital systems across different jurisdictions.
impact
It sparked a more detailed discussion about the specific components needed in a trust framework, moving the conversation from theoretical concerns to practical implementation.
Let’s take that as a cautionary tale, not only around making sure that we are not just copy-pasting and making the mistakes of yesterday, but also making sure that as we’re creating legal frameworks, we’re building them with enough flexibility and adaptability, and in a way that in some sense is really technology neutral.
speaker
Robin Green
reason
This insight highlights the need for flexible, future-proof regulatory approaches that can adapt to rapid technological change.
impact
It encouraged participants to think more critically about long-term implications of current regulatory efforts and how to create more adaptable frameworks.
Number one, I think if we are not deliberate about creating spaces for cultural engagement and education in the next iteration of the internet, we will not have them in the same way that we did in the first.
speaker
Brittan Heller
reason
This comment brings attention to the often-overlooked cultural and educational aspects of internet development.
impact
It broadened the scope of the discussion beyond technical and regulatory concerns to include cultural preservation and education in the digital age.
Overall Assessment
These key comments shaped the discussion by encouraging a more holistic, user-centric, and culturally aware approach to internet governance and emerging technologies. They moved the conversation from siloed thinking about individual technologies or regulations to considering the broader ecosystem and long-term implications. The discussion evolved to emphasize the importance of adaptable frameworks, trust-building mechanisms, and the preservation of cultural spaces in the digital realm. This comprehensive perspective highlighted the complex interplay between technology, regulation, user rights, and societal values in shaping the future of the internet.
Follow-up Questions
How can we ensure that governance frameworks for new technologies are interoperable across jurisdictions while still respecting local needs?
speaker
Robin Green
explanation
This is important to avoid creating incompatible frameworks that prevent offering consistent services across different jurisdictions.
How can we make governance frameworks for digital technologies more adaptable to keep pace with rapid technological change?
speaker
Robin Green
explanation
This is crucial to avoid the problem of outdated laws not fitting new technologies, as happened with laws from the 1980s through the early 2000s being applied to current technologies.
How can we create spaces for cultural engagement and education in the next iteration of the internet?
speaker
Brittan Heller
explanation
This is important to ensure cultural aspects are not overlooked in the development of new internet technologies, which are largely driven by private investment.
How can we address the issue of the unsettled hardware floor in new technologies like XR?
speaker
Brittan Heller
explanation
This is crucial to prevent the loss of content and creative work due to rapid obsolescence of hardware platforms.
How can concepts like consent be evolved for new computing platforms?
speaker
Brittan Heller
explanation
This is important to ensure users can provide meaningful informed consent in new technological environments like 3D computing.
How can African countries develop supportive legal frameworks for digital governance that enable innovation while creating value from private sector engagement?
speaker
Audience member (Ibrahim)
explanation
This is important for countries that are newer to digital governance to effectively manage rapid technological development and data issues.
What metrics can be used to prove that a technology or system is trustworthy?
speaker
Judith Espinoza
explanation
This is important for building and measuring trust with users and society at large in new technologies.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online