WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age
10 Jul 2025 09:00h - 09:45h
Session at a glance
Summary
This UNESCO session focused on integrating ethics into the development and deployment of emerging technologies, particularly artificial intelligence, neurotechnology, and quantum computing. Dafna Feinholz, UNESCO’s acting director of Research, Ethics and Inclusion, emphasized that ethics should be foundational rather than an afterthought in technological development, highlighting UNESCO’s role in promoting innovation while protecting human rights and fundamental freedoms.
The panelists shared diverse perspectives on implementing ethical frameworks across different sectors. Mira Wolf-Bauwens, a philosopher working in tech ethics, argued that while individual developers often have good intentions, institutional and economic pressures frequently override ethical considerations. She stressed the need to find ways to create a return on investment for ethical practices and advocated for anticipatory governance that considers potential negative outcomes early in development processes.
Ryota Kanai, a neuroscientist and entrepreneur, discussed the challenges of maintaining public trust in emerging technologies while balancing commercial pressures. He emphasized the importance of scientific validation and transparent communication about technological capabilities, particularly in neurotechnology where personal data extraction raises significant privacy concerns.
Chaichana Mitrpant from Thailand’s Electronic Transactions Development Agency shared practical implementation experiences, describing how Thailand adapted UNESCO’s AI ethics recommendations to local contexts through multi-stakeholder engagement, working with regulators, and creating governance frameworks for different sectors.
The discussion revealed that effective ethics implementation requires collaboration across government, private sector, and civil society, with governance models that can adapt to rapid technological change while maintaining core ethical principles. The panelists agreed that ethics differs from regulation by being broader, more anticipatory, and focused on motivations rather than just compliance requirements.
Key points
## Major Discussion Points:
– **Ethics as foundational rather than an afterthought**: The panelists emphasized that ethics should be embedded from the very beginning of technology development, not added later. UNESCO’s approach advocates for ethics throughout the entire lifecycle of technology development, with all stakeholders involved at each stage.
– **Challenges of implementing ethics in commercial environments**: Multiple speakers highlighted the tension between good intentions at the individual level and institutional/economic pressures. The discussion revealed how profit motives and corporate dynamics can override ethical principles, even when developers have genuine ethical motivations.
– **Governance frameworks and keeping pace with rapid technological development**: The conversation explored how governance can remain relevant amid fast-evolving technologies like AI, neurotechnology, and quantum computing. Speakers discussed the need for anticipatory approaches and whether universal ethical principles can be applied across different technologies with specific customizations.
– **Multi-stakeholder collaboration and localized implementation**: The Thailand case study demonstrated the importance of working with various stakeholders (regulators, private sector, SMEs, citizens) and adapting international ethical frameworks to local contexts while maintaining core principles – described as “localized, customized, but not compromised.”
– **Distinction between ethics and regulation**: The panel addressed fundamental differences between ethical principles and legal frameworks, with ethics being broader, more agile, and focused on motivations and ideals, while laws provide specific, enforceable implementations of ethical concepts.
## Overall Purpose:
The discussion aimed to examine how ethics can be effectively integrated into emerging and converging technologies (AI, neurotechnology, quantum computing) from the development stage onward. The session sought to advocate for embedding ethics throughout the technology lifecycle and explore practical implementation strategies, drawing from UNESCO’s experience with non-binding ethical frameworks and real-world case studies.
## Overall Tone:
The discussion maintained a collaborative and constructive tone throughout, with speakers building upon each other’s insights rather than debating opposing viewpoints. The atmosphere was academic yet practical, with participants sharing genuine challenges and uncertainties they face in their work. There was a sense of shared commitment to ethical technology development, though speakers were candid about the difficulties of implementation, particularly regarding economic pressures and institutional constraints. The tone remained consistently engaged and solution-oriented, with speakers offering concrete examples and practical approaches to complex ethical challenges.
Speakers
– **Dafna Feinholz** – Acting Director of the Division of Research, Ethics and Inclusion at UNESCO; in charge of ethics of science and technology and bioethics at UNESCO for 16 years; led the elaboration of the Recommendation on the Ethics of Artificial Intelligence
– **Mira Wolf-Bauwens** – Head of Initiatives Development, Geneva Science and Diplomacy Anticipator (GESDA) and the Open Quantum Institute; philosopher by background who works on applying philosophy in tech; worked on digital ethics, quantum and blockchain; formerly worked at IBM Quantum
– **Ryota Kanai** – Founder and CEO of Araya; neuroscience background, former university teacher and researcher; leads a large research grant program; independent expert who participated in drafting the Recommendation on the Ethics of Neurotechnology
– **Chaichana Mitrpant** – Executive Director of the Electronic Transactions Development Agency (ETDA) in Thailand, under the Ministry of Digital Economy and Society
– **Audience** – Various attendees who asked questions during the session
Additional speakers:
None identified beyond the speaker list provided.
Full session report
# UNESCO Session on Ethics in Emerging Technologies: Comprehensive Discussion Report
## Introduction and Context
This UNESCO session brought together leading experts to examine the critical challenge of integrating ethics into the development and deployment of emerging technologies, particularly artificial intelligence, neurotechnology, and quantum computing. The 9 AM session, held in a full but intimate room, featured three distinguished panellists alongside moderator Dafna Feinholz, UNESCO’s Acting Director of Research, Ethics and Inclusion, who has been in charge of ethics of science and technology and bioethics at UNESCO for 16 years.
The panellists included Mira Wolf-Bauwens, a philosopher who founded a responsible quantum computing research group at IBM and conducted extensive interviews with quantum computing colleagues; Ryota Kanai, a neuroscience professor who started a company 10 years ago focusing on AI and neurotechnology, specifically brain-computer interface technology to help people with disabilities; and Chaichana Mitrpant, who oversees digital policy implementation in Thailand and leads the country’s AI Governance Center and international policy advisory panel. Their diverse perspectives created a rich dialogue spanning international policy development, academic research, commercial pressures, and national implementation challenges.
## Foundational Context: UNESCO’s Mission and Approach
Feinholz opened the session by emphasizing UNESCO’s foundational mission, established after World War II to promote peace through collaboration in education, science, and culture. She explained that UNESCO’s core approach to emerging technologies centers on promoting innovation while protecting human rights and fundamental freedoms, with ethics at the center of science and technology development throughout the entire lifecycle.
A key principle underlying UNESCO’s approach is the recognition that “everybody will have a very different view and appreciation on what the benefit or harm can be,” making inclusive discussion essential. This philosophy shaped the session’s multi-stakeholder perspective and emphasis on diverse viewpoints in technology governance.
## Individual Perspectives: From Research to Implementation
### The Academic-Commercial Bridge: Neurotechnology Challenges
Kanai provided insights from his dual experience as both scientist and entrepreneur, describing the specific pressures that emerge when transitioning from academic research to commercial development. “I’m running a company and then I get investment. So investors push us to make money. I think that’s how our current economic system works,” he explained. This pressure leads some companies to market neurotechnology products that lack proper scientific validation, potentially undermining public trust in the entire field.
He emphasized that trust in science and technology is crucial for public acceptance and requires transparent communication about expert intentions and scientific validation. The challenge becomes particularly acute in neurotechnology, where the extent of information that can be extracted from neural signals remains an active research question with significant implications for privacy and regulatory frameworks.
### Quantum Computing and the Ethics-Industry Disconnect
Wolf-Bauwens shared findings from her year-long interview project with quantum computing colleagues, revealing a troubling pattern where individual ethical intentions become systematically undermined by institutional dynamics. She described how the same researchers who privately express strong ethical concerns about their work publicly compromise those values when faced with corporate pressures, funding requirements, and profit demands.
Her research revealed what she termed a “slaughtered or butchered conception of ethics in industry,” where companies use the term “ethics” to mean compliance with existing law rather than genuine ethical reflection. This distinction between authentic ethics and compliance-based pseudo-ethics helps explain why many corporate ethics initiatives fail to address real ethical concerns.
### Thailand’s Implementation Experience
Mitrpant presented Thailand’s practical experience implementing UNESCO’s AI ethics recommendations, demonstrating how international ethical principles can be adapted to local contexts through what he described as “localized, customized, but not compromized” implementation that remains “localized, customized, but not compromised.” Thailand recently hosted UNESCO’s third Global Forum on the Ethics of Artificial Intelligence, with over 1,000 participants, reflecting the country’s commitment to international collaboration on these issues.
The Thai experience revealed the complexity of multi-stakeholder engagement, requiring different approaches for different groups: enforcement mechanisms for government agencies, collaborative relationships with regulators for private sector companies, educational programs for small and medium enterprises, and awareness campaigns for citizens. This differentiated approach acknowledges that various stakeholders have different capacities, motivations, and constraints.
## Key Themes and Consensus Areas
### Ethics as Foundation, Not Afterthought
All participants agreed that ethics must be embedded from the very beginning of technology development rather than treated as an afterthought. This requires all stakeholders to be involved at each stage, from initial research through deployment and ongoing monitoring. Wolf-Bauwens emphasized that foundational ethical considerations are essential for creating truly beneficial technologies, while acknowledging that good individual intentions frequently become compromised when institutional and economic pressures emerge.
### Anticipatory Governance and Future Scenarios
The panellists demonstrated agreement on the need for anticipatory governance that can keep pace with rapidly evolving technologies. Wolf-Bauwens argued that effective governance must imagine potential consequences 2-10 years ahead rather than merely reacting to current developments. She noted that while she initially thought governance structures themselves needed fundamental change, she now advocates for working within existing democratic and inclusive structures, despite their inherent time-lagging nature.
Kanai complemented this perspective by emphasizing the need for a portfolio of different future scenarios to prepare for various technological developments, including considering even remote possibilities that experts might dismiss as unrealistic, noting that the pace of technological change often exceeds expert predictions.
## Audience Questions and Key Discussions
### Risk Management Approaches
An audience question about risk management prompted discussion of distributed approaches that avoid placing excessive burdens on individual stakeholders. Mitrpant explained that rather than requiring each developer to conduct comprehensive risk assessments, Thailand explores models where global-level threat observation can cascade down to developers and users, reducing individual assessment burdens while maintaining effective oversight.
The panellists agreed that risk management should begin early in development processes, similar to ethics committees for human research, and that new technologies often create “gray zones” that require careful navigation.
### The Distinction Between Ethics and Regulation
A significant audience question focused on clarifying the relationship between ethical principles and legal frameworks. The panellists reached consensus that ethics is broader and more fundamental than regulation, though they emphasized different aspects of this distinction.
Wolf-Bauwens argued that ethics is more agile than law and addresses motivations and positive actions rather than just prohibitions. Kanai highlighted the practical dilemma that actions can be legally permissible while remaining ethically problematic, emphasizing that laws represent specific implementations of ethical ideals but cannot capture all important aspects of ethics.
Feinholz added that ethics provides ongoing reflection for dilemmas that laws cannot always address, particularly in rapidly evolving technological contexts. Mitrpant offered a useful analogy, comparing ethics to vaccination in that it raises the bar for all stakeholders, while laws establish minimum agreed practices.
### Technology-Specific vs. Cross-Cutting Frameworks
The discussion explored whether separate ethical frameworks are needed for each emerging technology or whether cross-cutting principles can be effectively adapted. Wolf-Bauwens advocated for establishing cross-cutting ethical principles with technology-specific customizations rather than reinventing frameworks for each new technology, arguing this would be more efficient and create more coherent governance across the technology landscape.
The participants agreed that while core ethical principles such as justice, inclusivity, and accessibility remain constant across technologies, their specific applications require careful adaptation to address the unique characteristics and risks of different technological domains.
## Implementation Challenges and Practical Solutions
### Economic Pressures and Institutional Dynamics
Perhaps the most significant challenge identified was the systematic way in which individual ethical intentions become undermined by institutional dynamics. This pattern extends beyond individual companies to entire sectors and even national policy implementation. Mitrpant shared Thailand’s experience of drafting AI legislation, where those experiencing fraud and defects supported regulation while developers opposed it due to cost concerns.
Wolf-Bauwens raised the critical unresolved question of how to put a return on investment on ethics, making ethical practices economically viable in profit-driven environments. This challenge requires developing business models that demonstrate how ethical principles can coexist with profitability.
### Flexible Frameworks and Multi-Stakeholder Engagement
Feinholz highlighted the effectiveness of non-binding normative instruments, such as UNESCO’s AI ethics recommendations, which allow adaptation by different member states and stakeholders while maintaining core principles. This approach provides flexibility for local implementation while preserving fundamental ethical commitments.
The session emphasized that effective ethics implementation requires collaboration across government, private sector, and civil society. This multi-stakeholder approach is essential but complex, requiring diverse expertise and perspectives while managing different interests and priorities.
## Conclusion and Future Directions
The session demonstrated both the complexity of integrating ethics into emerging technology development and the sophisticated understanding that experts from diverse backgrounds bring to these challenges. While significant challenges remain, particularly around making ethics economically viable and maintaining ethical commitments under institutional pressure, the discussion revealed promising approaches to addressing them.
The conversation evolved from abstract principles to practical implementation challenges, ultimately revealing that the central question is not what ethical principles to adopt, but how to create systems that can maintain ethical commitments under economic and institutional pressure. The participants’ shared commitment to foundational ethics, anticipatory governance, and multi-stakeholder engagement provides a foundation for continued progress in this critical area.
Key takeaways include the need for continued development of technology-specific ethical frameworks building on established cross-cutting principles, maintaining multi-stakeholder dialogue platforms to capture signals about technological development directions, and creating networks of experts to share knowledge across different technological domains. The session’s insights suggest that effective technology ethics requires not just good intentions or sound principles, but systematic approaches to addressing the institutional and economic pressures that consistently undermine ethical commitments.
Session transcript
Dafna Feinholz: Okay, good morning, good morning to all. Recording in progress. Thank you very much, be very welcome and thank you very much for coming to this session. We’re happy to see that we have a full room, even if it’s small, but at nine o’clock on ethics, we are very happy that we have you with us. And we are very happy about having the opportunity to have this session because, well, first, let me introduce myself. I’m Dafna Feinholz and I am the acting director of the Division of Research, Ethics and Inclusion at UNESCO. And I’m also in charge of ethics of science and technology and bioethics in UNESCO since 16 years ago. And I was also leading the process of the elaboration of the recommendation of ethics of artificial intelligence. And we’re also now going to have a recommendation on ethics of neurotechnology. So we have been working, I mean, this is one of the mandates of UNESCO, putting ethics at the center of the development of science and technology. The idea of UNESCO, as you know, UNESCO was founded with the aim of promoting peace. It was after the Second World War. And the idea is to have collaboration in the different areas of the work of UNESCO, which is education and culture and natural sciences, social and human sciences, communication, in order to foster peace. So the idea of UNESCO is to promote innovation, to promote research. But most importantly, is that this development should not be at the expense of. protection of human rights and fundamental freedoms, and that’s the departing point. And the second important departing point of UNESCO is that there needs to be this reflection ahead of what could be, I mean, if we want to develop a technology, what would be the impact of this technology? For whom? And what would be the benefits or the harms? And what kind of society will be building or we’re heading with this technology? 
And of course, another very important point of UNESCO is like, these questions are very complex and the answers are very complex, so there is no one unique voice that can answer these questions. So the most important part is to have a very inclusive discussion and debate about these responses, because everybody will have a very different view and appreciation on what the benefit or harm can be, depending on their needs, on their expectations, on the way they understand technology, on the kind of values and societies they envision and they want to live in. So that’s why having everyone around the table, and everyone really means everyone in the sense of everybody that is going to be affected by these technologies. And we have also always to think that sometimes people might not wish to be part of these technologies when we also have to take into account that, which is not easy in the case of some of them that are very pervasive, such as AI. But in any case, we do have to take that into account. So all that was behind the recommendation of AI and is always behind every document that we have. UNESCO has lots of documents related to this area on genomics as well, on climate change. And as I said, now, AI and also neurotechnology. So there are lots of important issues about artificial intelligence, as we know, bias, surveillance, erosion of privacy, deepening digital divides. This is also a very important ethical point. Accessibility of these technologies is one of the main ethical issues. Because sometimes ethics is thought about something very philosophical, very abstract, that we don’t know how to really interpret. But it’s very clear. I mean, it’s like, there has to be justice, there has to be no harm, there has to be respect for different understandings of technology, the respect of the good data. Because for technology to be ethical, it needs to be scientifically sound. So this is very concrete. Ethics is very concrete. 
So just to say that we are also working on quantum technology, we’re also working on exploration and exploitation of space. We are working on mental health of children and adolescents in the digital age, and on synthetic biology. So as you can see, we’re really trying to cover the intersections with AI. And I think one of the lessons that we have learned also is that when we create a normative instrument that is non-binding, it’s also very useful. Sometimes people think that you have to have a convention or a legally binding instrument because otherwise, nobody will comply. But that is not our experience; our experience is that they are very useful because, first of all, these instruments are always built with a high-level understanding of the ethical issues of the technologies, not on the technologies themselves, because technologies change very quickly. So that’s why the idea is, what are the ethical issues behind them? So that’s why they continue to be relevant. And this kind of soft law also allows different member states and stakeholders to adapt themselves and to adapt the kind of framework that they need to make sure that these technologies are still governed in a way that protects human rights. So I think with this I will start zooming in on the session. I just wanted to frame a little bit why we put this session together. Now, since we are having this WSIS meeting and reflecting on what can happen in 20 years, because it has been a while already, we thought this session could be very important, because we want to try to advocate to include ethics across all the action lines, for the reasons I’m just trying to explain. We want to advocate for ethics as something that is embedded from the very beginning, when everything is conceived. That is the way we conceive it in the normative instruments that we have.
We always speak about ethics through the whole life cycle of the technology and all the stakeholders that are involved in each of the stages. So this is what we want to advocate for: to make sure that ethics is not an afterthought, when it’s most of the time too late, but there from the very beginning. So what we want is to examine, as I said, a bit how these non-binding instruments can be very useful and can be paired with implementation tools that we have also developed, and can influence, as I said, national policies, inspire institutional reforms, provide a common ethical foundation. And again, to emphasise the need of having ethics in the design already, ethics in the key actions of connectivity, data governance, education, data inclusion, innovation. I mean, we have heard so many sessions about data that is missing to be collected, but then what are we going to do with this data? Who is going to collect it? So ownership of data is very important. And of course, the interplay of artificial intelligence with many other technologies such as neurotechnology and data systems, etc. So I’m very honoured to have excellent and the highest quality possible of speakers with me, accompanying me. So we have Professor Ryota Kanai. He is the founder and CEO of Araya. Araya, that’s the way. Then we have Mira Wolf-Bauwens. She’s Head of Initiatives Development, Geneva Science and Diplomacy Anticipator, GESDA, and the Open Quantum Institute. And of course, we have Dr Chaichana Mitrpant, Executive Director of the Electronic Transactions Development Agency, ETDA, in Thailand. So all of them are also very good partners of us. Mira has been one of the experts that we have been consulting, because one of our expert bodies has developed a report on ethics of quantum and is currently working on ethics of quantum computing. And Mira has been participating a lot. Ryota is one of the independent experts that participated in the drafting of the recommendation on the ethics of neurotechnology.
And Professor, I don’t know how to pronounce Chai. He has been with all his team. They have been the greatest hosts of the third Global Forum on the Ethics of Artificial Intelligence two weeks ago. We had more than 1,000 participants. But it was not only the amount of people and important representations of different stakeholders, but the quality and the excellent organization. So thank you again. So without further ado, what we will do is a round of three questions, and then we would like to also open the interaction with you at the end, so you can also ask questions to the panelists. So let me start, if you can do like five minutes each. So first of all, I would like to ask, maybe I start with Mira, but the same question is for everyone: what role should ethics play in shaping the development and deployment of emerging and converging technologies? Because this convergence is really at the heart. How can we make sure that this is not an afterthought? How can we ensure that principles like inclusion, accountability, and human dignity are part of it? And what have you seen? Because you have a lot of experience. What have you seen that works and does not work? What do you suggest?
Mira Wolf-Bauwens: Thank you. First of all, thank you very much for the kind invitation. It’s always a pleasure. And I say this and I really, really mean it. These kinds of discussions where we can truly speak about ethics are the ones that I like most because I feel most at home. By background, I’m a philosopher who’s ventured into applying philosophy and tech. So I worked on digital ethics since a while before quantum and blockchain. So it’s really, really good to kind of comment on why I’m critical of how ethics is understood in the industry. So I’m always very happy when we’re in fora where we can speak about true ethics and not about the slaughtered or butchered conception of ethics in industry. So that kind of foreshadows a little bit also some of the remarks I’m going to make. So with respect to the role that ethics should play, when you were talking about UNESCO and the role of ethics and all the initiatives you’re having, you said it and I couldn’t agree more, but ethics should be foundational. In a way, to me, it is hard to understand how we could possibly develop anything without having ethics first. And I think it is more that the way that then institutionally ethics is becoming an afterthought. So in terms of the should be, it’s very, very clear. It has to be because in a way, it’s hard. How can we have a motivation without having sort of guiding principles? So if we don’t think kind of consequentialist, but more deontologist, we need guiding principles. That’s ethics. And so we need to find them before we even start. And I would argue that implicitly the tech sector does that by kind of defining what do they want to do, what is kind of the solution they want to provide. And in answering those questions, they’re all ethical. In a way, they’re all normative ethical answers. The question I think then is how do we bring in, in a way, I guess, good ethics or the right principles into these decisions? 
How do we ensure that it’s not in a way the sort of purely profit driven principles, but as principles that are, since we’re at the AI for good and in the context of the SDGs, but are driven by principles that are for humanity and also that are realizing that doing something for society benefit does not have to be in contrast to economic benefit. And I think that is something. we’ve been saying for a good 10 years, probably even longer, but something we still haven’t figured out how to really convince the private sector that this can be done. And I think we’re lacking actually good models. And we’re hearing of examples of companies that used to have that as their common goal. I think you all know what I’m gesturing at, and are changing at the moment that a lot of dollars are flowing in, flying in. So I think, unfortunately, the answer in principle is very clear. It should be foundational, the question of how do we resist the economic pressures and also the institutional pressures to ensure that good principles remain at the foundation of motivation. That is the challenge. And unfortunately, I think it’s the discussion we need to have. How do we ensure that? But I wanted to also comment on what works in the sense of starting to a process that I started to see how we could instill kind of ethics from the start of the development of a technology. And I know with AI, I personally came in too late. So by the time I came in, AI was already kind of in full flux, and the discussions of ethics had to be after ethics. But with quantum computing, we started this very early. And so I used to work at IBM Quantum, who are one of the leading quantum computing companies. And it doesn’t really matter if you’re not familiar with quantum computing, it doesn’t really matter, novel computing technology. But basically, they’re developing it, and they weren’t thinking about ethics. But IBM tried to say, well, we’re a global citizen, we’re trying to be good. 
And IBM tries to kind of also say, long standing tech. So in a way, I tried to use that motivation and said, well, okay, if you’re serious about that, and if you’re serious about now doing this truly for society, and you’re telling all of us you’re developing quantum computers for society, well, then embed ethics from the start. And so they allowed me to found a research group on quantum ethics. So I called it responsible quantum computing. And what I did is I thought, okay, the wrongest thing I could do is now come in as the philosopher and tell my colleagues, the physicists who haven’t thought about ethics and who are not trained to think about ethics, to tell them this is what you should do. Because then it’ll be the typical alienation of, oh, these are the ethical principles, they’re not embedded in my thinking, and this is what this philosopher told me, and that wouldn’t work. So instead, what I did is that I did a one-year process of interviewing my colleagues who were working on quantum, and I asked them, what is your motivation? And this goes back to what I said at the beginning, the motivation. And I found that, and it was one-on-one confidential interviews, but I can abstract from this. I found that all of them said they work on quantum computing because they’re seeing that there are challenges in the world that cannot be solved with classical computing. So among the simulation of nature, you can’t do this well with classical computing. And if we could simulate nature, we could get better, potentially better materials for carbon capture or develop drugs much faster. So this was at the heart of their motivation. They’re spending too much time in the labs. They’re spending all their leisure time in the labs. And so they very clearly said, this is why we work in quantum computing. This is why we’re doing this. We see that at the core. And so it was really across the board. 
Even the business developers would tell me: this is why I’m in this business, why I want to do it. And then it’s interesting that once you get to the more institutional level, once the individuals become groups and the institutional and power dynamics come into play, suddenly this is a business and you have to sell. Suddenly you have to tell your clients that you’re selling a machine that is still under development, and you have to make promises as to what it can do, which it cannot do yet. So basically, you have to hype. Suddenly the same people who told me, no, we don’t want to overhype this machine, we don’t want to promise something that it cannot do, suddenly I saw them at conferences with clients, telling clients: oh yes, you need to buy this machine because it can solve your problems now. For me, this was really, really insightful: acknowledging that as individuals, the good motivations and ethical principles are there, but once the power dynamics of who someone is in a team, how much power they have, and the pressures of a corporation having to sell, having to make profit, once those come into play, they very clearly fade. So the trick is, on the one hand, as has been said a lot before, how can we instill that culture? But for me, the point is really: how can we put an ROI, a return on investment, onto ethics? So I’ll stop with that open question: in a world that is clearly so driven by private industry, how can we put this return on investment on ethics? That’s the question I haven’t figured out.
Dafna Feinholz: Thank you. Thank you very much. I think you really put the light where it has to be, but I don’t want to take more time. Maybe we continue with Ryota because you also have these two halves of academia and private sector.
Ryota Kanai: OK, so my background is neuroscience. I used to teach at a university, doing research on how the brain works. But I started my company about 10 years ago because I wanted to demonstrate that the kind of research I was doing is actually useful to society. My company focuses on the combination of AI and neurotechnology. We use AI to decode what people want to say from brain activity, and we also try to help people with disabilities by creating brain-computer interfaces, to help them with physical immobility, things like that. But in this space there are also a lot of concerns, around the general trust in science and technology. As part of my work, I also lead a large research grant program, and in that program we encourage researchers to translate their research in neuroscience into real-world applications. But that made the public worried, because laypeople don’t know what’s currently possible. They were worried that we might implant electrodes in everyone and then control their thoughts, things like that. So I think there’s a general concern in the public about how technology might be used. In that sense, it’s very important to communicate that experts have good intentions about how we want to apply our knowledge and technologies. And in general, I think people have good intentions. There might be bad people, but I believe most people are good. But the tricky part starts when there’s some sort of conflict, especially in the commercial setting, where there’s a strong demand to make profit. I’m running a company and I take investment, and investors push us to make money. I think that’s how our current economic system works.
But because of this, as a scientist, I felt some companies were trying to sell neurotechnology products that are not scientifically validated. That made me really worried because, as I said, I think trust in science and technology is very important. Otherwise, people may not accept new technologies. Partly this might also be related to individuals’ personalities, but for example, you see many people who are against vaccination even when it has been reasonably validated and its safety has been tested. That kind of concern comes from a lack of trust. Personally, I really want to promote neurotechnology and AI because I think they can be beneficial for many people. But at the same time, I think it’s very important that experts and international organizations like UNESCO and others lead discussions on the ethical implications of neurotechnology. Especially when new technologies emerge, there are gray zones. For most things, we can judge whether something is good or bad, but in certain areas there are new kinds of uncertainties. For example, data sharing was not even a concept maybe a hundred years ago; now we know the benefit of sharing data, but at the same time it generates concerns about privacy and other things. And that’s particularly true for neurotechnology in my case, because it’s very personal. We don’t know how much information we can extract from neural signals, but it seems like, especially in combination with AI, it’s becoming more and more feasible to extract personal information from brain activity. So in that sense, there’s a lot of uncertainty. And although the field is developing very fast, I think ethics is a very fundamental thing which stays the same over time. So maybe we should discuss the high-level ethics first, and then break it down into more practical things, and that part could change faster.
So in that sense, I think it’s very important to consolidate the ethical foundation first.
Dafna Feinholz: Thank you. Thank you very much. I will not make any comments; we just move on, I think, to a probably more institutional point of view.
Chaichana Mitrpant: Good morning. My name is Chai. I’m currently the executive director of ETDA, the Electronic Transactions Development Agency, which is an organization under the Ministry of Digital Economy and Society. We really commend UNESCO and the UNESCO member states for adopting the ethics of AI recommendation in 2021. I think it’s very important, and the process was multi-stakeholder and engaged all the parties involved in creating this framework. I think it’s very clear and gives almost an instruction of what to do. But we still found difficulties in applying it to our environment, because we have to adapt the principles to the context of our own countries. So we are now working very hard to understand how to implement all the recommendations by UNESCO. What is good is that the values, the principles and the policy areas specified in the recommendation serve as a tool to guide our implementation decisions within that framework. So, based on our belief that this is our guiding tool for Thailand to navigate AI adoption and maybe the creation of regulation, we tried to really make it happen. We have a national AI committee as the body responsible for developing our national strategy. One pillar of the strategy is ethics, standards, laws and regulations. We believe that is the basis for AI development, adoption, deployment and use. We decided to study AI laws and regulations more than two years ago. We actually drafted our AI bill two years ago, but there were conflicting opinions about how Thailand should navigate AI regulation at that time. People facing fraud and deepfake issues supported the law, while the developers in Thailand were opposing it and asked a lot of questions: why impose duties on them? Because AI developers are quite new in Thailand, and imposing laws and regulations would add costs to their activities. So we decided to wait and see.
At that time, the EU AI Act was not yet in effect, but we really needed a tool to monitor the risks and possibly find something that could control them. So we set up an AI governance center to monitor AI risk in Thailand. But at that time we did not have a complete global understanding of the AI risk landscape. So we engaged international experts and created an international policy advisory panel, drawing on several areas of expertise: for example, legal, business, technical, political science, medical and healthcare. These are just examples of the expertise we tried to curate for our panel, which serves as our advisor to navigate Thailand through the different issues coming up as we adopt AI technologies. I think that works for us. It helps us understand the risks and create tools to try to control them. So we published an AI Governance Framework and, at a later stage, a Generative AI Governance Framework as tools for accomplishing the ethical values specified by the recommendation. So how do we implement that? We need to engage different stakeholders with different mechanisms. For government agencies, we can use enforcement, because the cabinet can order government agencies to do things. So we proposed the AI governance guideline to the cabinet for adoption by government agencies; that work is still ongoing. For the private sector, it’s a bit more difficult, so we broke our work down into several parts. For private sectors that have regulators, we work with the regulators to understand and adopt the UNESCO recommendation on ethics and see how it can be implemented and customized to their context, with the strong belief that the recommendation should be localized and customized, but not compromised. So we really stress the values and the principles and see what risks are foreseen by the regulator.
And then we try to propose controls that keep the risks at a proportionate level. That’s still ongoing. Our central bank is now drafting an AI governance guideline for the banks in Thailand, and that is based on our guidelines as well. That’s for private sectors that have regulators: we work with the regulators. We could never imagine working alone; I think it would not work if we thought we alone were responsible for AI development, AI adoption, and so on, and tried to work on it all ourselves. We don’t have the expertise, especially sectoral expertise in medical services, in the energy sector, in the banking sector. We are not the domain experts. So we really need to work with the domain experts to define the dos and don’ts in the particular domains. We facilitate the development of these guidelines, localized and customized to each sector. That’s for the private sector with regulators. But SMEs don’t have regulators, so we have to promote good practice among them. I’m sorry for taking more time. For SMEs, we created tools for adoption with good governance and organized workshops for them. And for general citizens, we try to educate and raise awareness, so that we can raise the bar, because the weakest link in AI adoption could cause the collapse of the whole system. Thank you.
Dafna Feinholz: Thank you. Maybe what we can do is the second round, but I will ask you to really stick to three minutes each. That way we can open the floor, because we promised to open the floor for questions, and then before we close, after the questions, I will ask you for a takeaway message, if that is okay. So for the second question: first, thank you very much for these very important inputs and insights on the first. We have been hearing a lot about the challenges of integrating ethics into policies, about the institutional and economic pressures, and about how to make sure that the right values are in the right place, because there are good intentions. So my question would be: how can this governance also keep pace with the rapid development of technologies? Because this is also something we have to face, and there are many areas: neurotech, quantum, synthetic biology, and their intersections. Are there governance models that you think can ensure coherence across all these converging domains? Do we really need a specific model for each of the new technologies, or is there anything that can cut across all of them, with just some specifics? Maybe I will follow the same order.
Mira Wolf-Bauwens: I think keeping pace is also a matter of being anticipatory. What I’ve seen in particular in the tech sector is an unwillingness to anticipate the negative, and I think it is on all of us, not just the tech sector, to put more pressure on anticipating the potential negative and unintended outcomes. That is a good start, because when we put timelines to that, and ideally evidence, governance also sees that it needs to act. By having those insights, you put top-down pressure on regulators to act, because otherwise, what is currently happening is that a lot of these technological developments appear so far away, and this is why it appears as if governance weren’t agile enough. But I think it’s more a matter of not having communicated early enough that, no, it’s actually not far away, and you need to put this on the priority list now. And then there is the structure. Initially I came to this thinking, yes, we need to change the structures, but now I think it’s harder to change all the governance structures, so let’s work with what we have, because they are effective. The benefit of what is often criticized as time-lagging structures is that they’re inclusive and democratic. I’ve also seen processes that are not democratic, that then put out governance principles that are not at all inclusive, done by a round of 20 people from different sectors who were part of a members’ club.
So, yeah, I would say rather let’s make sure we communicate the timelines and the unintended effects well, put that on the priority list, and then govern for that. And regarding cross-cutting or not: I really liked that you mentioned Thailand is working on “localized, customized, but not compromised”, and I’m wondering whether that slogan can also be adopted for ethics principles. I think we need tech ethics principles that are overarching, because there are a lot of commonalities, but then, with regard to certain specificities of certain technologies, they need to be localized and customized, to use that slogan, but not everything needs to be reinvented. Issues like justice, inclusivity and accessibility are cross-cutting. Then for technologies like quantum computing, there might have to be a bit more focus on access to knowledge about quantum computing and access to the actual hardware; the hardware question is not as predominant with AI, for instance. I think that would be an approach, and it would also spare us a lot of the work of writing all of these principles over again, if we could agree on a basic set. You mentioned as well that these don’t change, right? And then we can adapt them for the specific technologies. The ethics also gets better for the specific technologies, because they have a foundation and an overarching model. And I think that’s what you’re doing at UNESCO: you have the ethics principles, and the rest follows, at least chronologically, and uses the insights from those processes as well. I think that’s a way forward. So I’ll stop there.
Dafna Feinholz: Thank you. And apologies for being nasty. I hate that part of being a moderator. I really hate it.
Ryota Kanai: Okay, I will try to be quick. Three minutes. About keeping pace: yes, as Mira said, anticipation is important. We cannot keep pace with the speed of technological development if we just react to what’s happening now; we need to imagine what might happen in the next two years, five years, ten years. I think it’s particularly important to consider remote possibilities as well. A lot of the time, when you ask experts, they might give you a conservative estimate. For example, maybe 10 years ago, if you talked about the possibility of AGI in AI governance meetings, people didn’t take you seriously, but now it seems to be becoming a real possibility. In that sense, some possible consequences of new technology may feel like science fiction, but anticipation also includes such considerations, and we need to make sure we have some agreement about where we are now. I think it’s useful to have a portfolio of different future scenarios; that way we can be prepared and keep pace with technology.
Dafna Feinholz: For the second part, do you think we will need different ethical scenarios, I mean, ethical backgrounds, or, like Mira said, is there something cross-cutting and then some specificities? Because now we had one for AI, now one for neurotechnology. Do we have to have one for synthetic biology and another one for what comes next?
Ryota Kanai: I think for different scenarios we might have specific ones, but I think it’s more important to have combined expertise. For example, I work with my colleagues on futuristic scenarios of applications of new technologies, but often they are experts in that technology and don’t have a projection of how AI might develop at the same time.
So we need to think about future scenarios by combining different technologies, because they all develop fast at the same time. Thank you.
Chaichana Mitrpant: I’ll try to compensate for the time I took, almost three minutes. I would like to build on what Mira has said: anticipation is quite important. But I would like to enrich some concepts based on that. First, I think we need to stick to the basic values and principles as our guiding light. Second, we need to hear voices: platforms that encourage dialogue are important for us to capture signals about which direction AI is going, so we have the information to adjust and adopt the tools that are necessary. Third, we should have knowledge and expertise, because AI is quite deep and requires good understanding to make sensible measures; a network of experts can be helpful to share knowledge. Fourth is working in a friendly, trustworthy, multi-stakeholder environment: working with the private sector, government organizations, consumer organizations, human rights NGOs. We need to work with all these different stakeholders to understand every aspect of the landscape.
Dafna Feinholz: Thank you very much. So now, as promised, because I think we are already late, we can take at least one question, or two. Can I take two? Yes, please. Can you introduce yourself, please? Thank you. Can you repeat with the microphone?
Audience: Yeah, yeah. The question was: do you all call for risk management to be done during the development and the life cycle of these systems, whether quantum, neuro or AI? Thank you.
Mira Wolf-Bauwens: Yes, but more than that. Risk management is about mitigating the unintended consequences, but to embed ethics is also to go back to being anticipatory about the desired goals we have. So on the one hand it is risk management, but it is also bringing both the positive and the negative into anticipation. So yes, but more, is the short answer.
Ryota Kanai: I don’t have a clear answer, but I think in practice we should be able to anticipate potential risks at the early phase of development. When you start developing a new product or a new kind of development, that may be the point where it’s good to think about potential risks. In research, especially human research, it’s common to have this consideration at the beginning; for human experiments, we have ethics committees to assess potential risks and whether it’s ethical to actually carry out that research. So that kind of practice could be adopted in other domains as well.
Chaichana Mitrpant: Only a few words. Risk management and assessment are very important, but they should not put too much burden on particular stakeholders, so we should try to synergize. For example, at the global level, we can observe threats and vulnerabilities to identify emerging threats, and that can then cascade down in a structured way to developers or users, so that not all the SMEs and developers have to do the risk assessment themselves.
Dafna Feinholz: Thank you very much. I will take one on this side.
Audience: If I might. Sorry, Philip Marnik here. We talked about regulators doing guidelines and regulations, and we talked about building ethics into the way people think and the way they do things, which is very different from regulations and laws. What do you feel is the difference between ethics and regulation, from one side of the panel to the other?
Chaichana Mitrpant: Well, ethics, as Dafna said, has to be understood at all levels, by everybody. That, I think, provides a vaccine to different persons, natural persons or even legal persons; it raises the bar. If we are all vaccinated, then we are immune to all the different threats. But laws are something that requires people to get vaccinated, because a law, after all the stakeholder processes, is what has already been agreed within the country as the minimum practice of the ethical concepts.
Dafna Feinholz: Thank you for the question, by the way. I love it.
Mira Wolf-Bauwens: I can quickly go next. For me, there are different answers to this, but in the tech sector, importantly, the difference between true ethics and the law is that ethics is much wider, and the law is often, as we discussed, the one that is lagging behind and not capturing a lot. For instance, and I mentioned this to you earlier, in the tech sector, when you hear “ethics”, it is not ethics, it is compliance: looking into complying with the existing law. On that understanding, quantum ethics, and that’s why I never called it quantum ethics, can’t exist, because there’s no regulation around quantum. Or it could exist, but then it means you can do whatever you want. So I think we need to uphold an understanding of ethics that is, importantly, what Dafna also said at the beginning, the soft governance: more agile, individualized, something that as individuals we can all relate to, and often broader. And I don’t think everything that is ethics needs to be law. Law, and this is because I’m not a lawyer, I typically think of as disallowing, telling us what not to do, whereas ethics is often also about the motivations and what we are allowed to do, and I don’t think we need to put all the motivations into law. So that’s how I differentiate.
Ryota Kanai: Okay, this is a very difficult question. I’m sure there’s a lot of theory about legal systems, but my take is that ethics is a very high-level thing, and laws are one specific implementation of that ideal. In that sense, laws are a more practical implementation of ethics. On the other hand, it’s very difficult to capture all the important aspects of ethics in law, because laws can end up prohibiting all kinds of freedom. So in that sense, maybe laws are not sufficient, because as a company or a private person, I feel that if something is not forbidden by law, I should be free to do it; and yet those actions can be unethical. So there’s some sort of dilemma: we can never have perfect laws, but I think it’s good to have ethical principles.
Dafna Feinholz: And if I may just add to what Mira and my colleagues said: laws indeed will tell us what to do and what not to do, but ethics will always be there for reflection, and the answers will not always be in the law, so there will always be dilemmas. We won’t have time for the wrap-up, so I just want to thank you so much. Please join me in giving a round of applause to the speakers. Thank you very much.
Mira Wolf-Bauwens
Speech speed
184 words per minute
Speech length
2404 words
Speech time
782 seconds
Ethics should be foundational and embedded from the beginning, not an afterthought
Explanation
Wolf-Bauwens argues that ethics must be foundational to technology development because it’s hard to understand how anything can be developed without having ethics first. She emphasizes that guiding principles (ethics) are needed before starting any development, as the tech sector implicitly makes ethical decisions when defining what they want to do and what solutions they want to provide.
Evidence
She notes that implicitly the tech sector does make ethical decisions by defining what they want to do and what solutions they want to provide, and these are all normative ethical answers.
Major discussion point
Role of Ethics in Technology Development and Deployment
Topics
Legal and regulatory | Human rights principles
Agreed with
– Dafna Feinholz
Agreed on
Ethics should be foundational and embedded from the beginning of technology development
Individual researchers have good ethical motivations, but institutional and economic pressures can compromise these principles
Explanation
Through her research at IBM Quantum, Wolf-Bauwens discovered that individual researchers have strong ethical motivations for their work, but these get compromised when institutional power dynamics and corporate pressures to sell and make profit come into play. She observed the same people who privately expressed ethical concerns would make exaggerated promises to clients in business settings.
Evidence
She conducted one-year confidential interviews with quantum computing colleagues who all said they work on quantum because they see challenges that can’t be solved with classical computing, like simulating nature for better materials or faster drug development. However, she then observed these same people making unrealistic promises to clients at conferences about what the technology could currently do.
Major discussion point
Challenges in Implementing Ethical Frameworks
Topics
Economic | Legal and regulatory
Agreed with
– Ryota Kanai
Agreed on
Economic pressures compromise ethical principles in commercial settings
The challenge is putting a return on investment (ROI) on ethics to make it economically viable
Explanation
Wolf-Bauwens identifies the core challenge as figuring out how to make ethics economically attractive in a world driven by private industry. She argues that while good motivations exist at the individual level, the key question is how to resist economic pressures and ensure that good principles remain foundational when profit motives dominate.
Evidence
She observed that individuals have good ethical motivations, but once power dynamics and corporate pressures to sell and make profit come into play, these ethical principles clearly fade.
Major discussion point
Challenges in Implementing Ethical Frameworks
Topics
Economic | Legal and regulatory
Anticipatory governance is crucial – need to imagine potential consequences 2-10 years ahead rather than just reacting
Explanation
Wolf-Bauwens argues that keeping pace with technology requires being anticipatory rather than reactive. She emphasizes the need to anticipate potential negative outcomes and unintended consequences, and to communicate these with timelines to put pressure on regulators to act proactively.
Evidence
She notes that technological developments often appear far away, making governance seem insufficiently agile, but the real issue is not communicating early enough that developments are actually not far away and need to be prioritized now.
Major discussion point
Governance Models for Emerging Technologies
Topics
Legal and regulatory | Development
Agreed with
– Ryota Kanai
Agreed on
Anticipatory governance is essential for keeping pace with rapidly evolving technologies
Disagreed with
Disagreed on
Approach to changing governance structures vs. working within existing systems
Cross-cutting ethical principles should be established with technology-specific customizations rather than reinventing everything
Explanation
Wolf-Bauwens proposes adopting the Thai approach of ‘localized, customized, but not compromised’ for ethics principles. She argues for overarching tech ethics principles that address commonalities, with specific customizations for different technologies rather than reinventing principles for each new technology.
Evidence
She gives the example that issues like justice, inclusivity, and accessibility are cross-cutting, while quantum computing might need more focus on access to knowledge and hardware compared to AI, which doesn’t have the same hardware access issues.
Major discussion point
Governance Models for Emerging Technologies
Topics
Legal and regulatory | Human rights principles
Risk management should occur throughout the entire lifecycle of technology development
Explanation
Wolf-Bauwens argues that risk management should happen during development and throughout the lifecycle of systems, but emphasizes it should go beyond just mitigating unintended consequences to also anticipate desired goals and positive outcomes.
Evidence
She explains that risk management is about mitigating unintended consequences, but embedding ethics requires being anticipatory of both positive and negative aspects.
Major discussion point
Risk Management and Assessment
Topics
Legal and regulatory | Development
Agreed with
– Ryota Kanai
– Chaichana Mitrpant
Agreed on
Risk management should occur early in development phases
Ethics is broader and more agile than law, addressing motivations and positive actions, not just prohibitions
Explanation
Wolf-Bauwens distinguishes ethics from law by arguing that ethics is much wider and more agile than law, which often lags behind and doesn’t capture everything. She notes that in the tech sector, ‘ethics’ often becomes mere compliance with existing law rather than true ethical consideration.
Evidence
She points out that in the tech sector, when you hear ‘ethics,’ it’s actually compliance with existing law. She gives the example that quantum ethics technically can’t exist because there’s no regulation around quantum, meaning under a compliance-only approach, you could do whatever you want.
Major discussion point
Distinction Between Ethics and Regulation
Topics
Legal and regulatory | Human rights principles
Agreed with
– Ryota Kanai
– Dafna Feinholz
Agreed on
Ethics is broader and more fundamental than legal regulation
Dafna Feinholz
Speech speed
143 words per minute
Speech length
2086 words
Speech time
869 seconds
UNESCO promotes ethics at the center of science and technology development to protect human rights and fundamental freedoms
Explanation
Feinholz explains that UNESCO’s mandate is to put ethics at the center of science and technology development, ensuring that innovation and research do not come at the expense of protecting human rights and fundamental freedoms. UNESCO promotes reflection ahead of technology development to consider impacts, benefits, harms, and what kind of society we’re building.
Evidence
She notes that UNESCO was founded after WWII to promote peace through collaboration in education, culture, sciences, and communication. UNESCO has developed numerous documents on genomics, climate change, AI, and neurotechnology, and is working on quantum technology and space exploration ethics.
Major discussion point
Role of Ethics in Technology Development and Deployment
Topics
Human rights principles | Legal and regulatory
Agreed with
– Mira Wolf-Bauwens
Agreed on
Ethics should be foundational and embedded from the beginning of technology development
Non-binding normative instruments can be very useful and allow adaptation by different member states and stakeholders
Explanation
Feinholz argues that non-binding instruments are valuable because they are built on a high-level understanding of the ethical issues rather than on specific technologies, which keeps them relevant as technologies change quickly. These soft-law approaches allow different member states and stakeholders to adapt frameworks to their needs while still governing technologies in ways that protect human rights.
Evidence
She explains that UNESCO’s experience shows these instruments are useful because they focus on ethical issues behind technologies rather than the technologies themselves, which change very quickly, and they allow adaptation while maintaining human rights protection.
Major discussion point
Governance Models for Emerging Technologies
Topics
Legal and regulatory | Human rights principles
Laws are specific implementations of ethical ideals but cannot capture all important aspects of ethics
Explanation
Feinholz argues that while laws tell us what to do and what not to do, ethics will always be there for reflection, and the answers won’t always be found in law. She emphasizes that there will always be dilemmas that laws cannot address, requiring ongoing ethical reflection.
Major discussion point
Distinction Between Ethics and Regulation
Topics
Legal and regulatory | Human rights principles
Agreed with
– Mira Wolf-Bauwens
– Ryota Kanai
Agreed on
Ethics is broader and more fundamental than legal regulation
Ryota Kanai
Speech speed
112 words per minute
Speech length
1328 words
Speech time
705 seconds
Trust in science and technology is crucial for public acceptance, requiring transparent communication about expert intentions
Explanation
Kanai argues that public trust in science and technology is essential for acceptance of new technologies. He emphasizes the importance of experts communicating their good intentions, as public concerns often stem from lack of understanding about what’s currently possible versus future possibilities.
Evidence
He provides examples of public worry about neurotechnology, with laypeople concerned about electrode implants and thought control, and mentions vaccine hesitancy as another example of trust issues. He also notes concerns about companies selling unvalidated neurotechnology products.
Major discussion point
Role of Ethics in Technology Development and Deployment
Topics
Sociocultural | Human rights principles
Economic pressures and profit demands can override ethical considerations in commercial settings
Explanation
Kanai explains that while most people have good intentions, conflicts arise in commercial settings due to strong demands to make profit. He describes how investors push companies to make money, and this economic pressure can lead to companies selling products that aren’t scientifically validated.
Evidence
He gives the example of companies trying to sell neurotechnology products that are not scientifically validated, which worries him because it threatens trust in science and technology.
Major discussion point
Challenges in Implementing Ethical Frameworks
Topics
Economic | Legal and regulatory
Agreed with
– Mira Wolf-Bauwens
Agreed on
Economic pressures compromise ethical principles in commercial settings
Lack of scientific validation in some commercial neurotechnology products threatens public trust
Explanation
Kanai expresses concern that some companies are selling neurotechnology products without proper scientific validation, which he believes threatens the crucial trust that the public needs to have in science and technology for widespread acceptance.
Evidence
He mentions seeing companies trying to sell neurotechnology products that are not scientifically validated, which made him worried about maintaining public trust.
Major discussion point
Challenges in Implementing Ethical Frameworks
Topics
Economic | Sociocultural
Portfolio of different future scenarios needed to prepare for various technological developments
Explanation
Kanai argues that keeping pace with technology requires anticipation and consideration of remote possibilities, not just reaction to current developments. He suggests maintaining a portfolio of different future scenarios to stay prepared, noting that expert predictions can sometimes be too conservative.
Evidence
He gives the example that 10 years ago, discussions about AGI (Artificial General Intelligence) weren’t taken seriously in AI governance meetings, but now it seems like a real possibility.
Major discussion point
Governance Models for Emerging Technologies
Topics
Legal and regulatory | Development
Agreed with
– Mira Wolf-Bauwens
Agreed on
Anticipatory governance is essential for keeping pace with rapidly evolving technologies
Risk assessment should happen at early phases of development, similar to ethics committees for human research
Explanation
Kanai suggests that potential risks should be anticipated at an early phase of development, when a new product or development process begins. He draws a parallel to human research, where ethics committees assess potential risks before research starts.
Evidence
He notes that in human research, it is common for an ethics committee to review a project at the outset, assessing potential risks and determining whether the research is ethical to carry out.
Major discussion point
Risk Management and Assessment
Topics
Legal and regulatory | Human rights principles
Agreed with
– Mira Wolf-Bauwens
– Chaichana Mitrpant
Agreed on
Risk management should occur early in development phases
Laws are specific implementations of ethical ideals but cannot capture all important aspects of ethics
Explanation
Kanai views ethics as high-level principles and laws as specific implementations of those ideals. He acknowledges the dilemma that laws can be prohibitive of freedoms and cannot be perfect, but notes that actions can be legal yet still unethical.
Evidence
He explains the dilemma that if something isn’t forbidden by law, people feel free to do it, but those actions can still be unethical, showing that laws are insufficient to capture all ethical considerations.
Major discussion point
Distinction Between Ethics and Regulation
Topics
Legal and regulatory | Human rights principles
Agreed with
– Mira Wolf-Bauwens
– Dafna Feinholz
Agreed on
Ethics is broader and more fundamental than legal regulation
Chaichana Mitrpant
Speech speed
110 words per minute
Speech length
1249 words
Speech time
676 seconds
Thailand uses UNESCO’s AI ethics recommendations as a guiding framework, adapting principles to local context while maintaining core values
Explanation
Mitrpant explains that Thailand adopted UNESCO's AI ethics recommendations as its guiding tool but encountered difficulties applying them directly, requiring adaptation to the local context. Thailand established a national AI committee with ethics and standards as one pillar of its national strategy.
Evidence
Thailand created a national AI committee with ethics, standards, laws, and regulations as one pillar. It studied AI laws for over two years and drafted an AI bill, but faced conflicting opinions: people affected by fraud and defective products supported regulation, while Thai developers opposed it over cost concerns.
Major discussion point
Role of Ethics in Technology Development and Deployment
Topics
Legal and regulatory | Development
Different stakeholders require different engagement mechanisms – enforcement for government, collaboration with regulators for private sector
Explanation
Mitrpant describes Thailand’s multi-pronged approach to implementation: using enforcement mechanisms for government agencies through cabinet orders, working with regulators for private sectors that have oversight, and using promotion and education for SMEs and citizens.
Evidence
For government agencies, they use cabinet orders for enforcement. For private sectors with regulators, they work with regulators like the central bank which is drafting AI governance guidelines. For SMEs without regulators, they create tools and workshops. For citizens, they provide education and awareness.
Major discussion point
Challenges in Implementing Ethical Frameworks
Topics
Legal and regulatory | Development
Multi-stakeholder engagement is essential but complex, requiring diverse expertise and perspectives
Explanation
Mitrpant emphasizes that they cannot work alone and need domain experts across different sectors. They created an international policy advisory panel with diverse expertise including legal, business, technical, political science, and healthcare experts to navigate different AI issues.
Evidence
They established an international policy advisory panel drawing from legal expertise, business expertise, technical expertise, political science, and medical/healthcare areas. They work with regulators because they don’t have sectoral expertise in medical services, energy, or banking.
Major discussion point
Challenges in Implementing Ethical Frameworks
Topics
Legal and regulatory | Development
Principles should be ‘localized, customized, but not compromised’ when adapting to different contexts
Explanation
Mitrpant advocates for adapting ethical principles to local contexts and specific sectors while maintaining core values and principles. This approach allows for contextual relevance while preserving fundamental ethical standards.
Evidence
Thailand works with regulators to understand and adopt UNESCO’s AI ethics recommendations, customizing them to specific contexts like banking, while stressing that values and principles should not be compromised.
Major discussion point
Governance Models for Emerging Technologies
Topics
Legal and regulatory | Development
Risk management should not place excessive burdens on individual stakeholders – global threat observation can cascade down to developers
Explanation
Mitrpant argues that while risk management and assessment are important, they should not create excessive burdens for particular stakeholders such as SMEs and developers. He suggests a coordinated approach in which threat observation at the global level cascades down to developers and users in a structured way.
Evidence
He notes that not all SMEs and developers should have to do risk assessment themselves, suggesting that global-level observation of threats and vulnerabilities can be structured to cascade down to developers and users.
Major discussion point
Risk Management and Assessment
Topics
Legal and regulatory | Development
Agreed with
– Mira Wolf-Bauwens
– Ryota Kanai
Agreed on
Risk management should occur early in development phases
Ethics raises the bar for all stakeholders like vaccination, while laws establish minimum agreed practices
Explanation
Mitrpant uses a vaccination metaphor to explain that ethics should be understood at all levels by everybody, providing immunity to threats when all stakeholders are ‘vaccinated’ with ethical understanding. Laws, in contrast, represent the minimum practices agreed upon within a country after stakeholder processes.
Evidence
He compares ethics to vaccination: if all stakeholders are 'vaccinated' with ethical understanding, society becomes immune to various threats, whereas laws are like mandatory vaccination because they represent the minimum practices a country has agreed upon.
Major discussion point
Distinction Between Ethics and Regulation
Topics
Legal and regulatory | Human rights principles
Audience
Speech speed
123 words per minute
Speech length
92 words
Speech time
44 seconds
Risk management should be implemented during development and throughout the lifecycle of AI, quantum, and neural systems
Explanation
An audience member asked whether the panelists advocate for risk management to be conducted during the development phase and throughout the entire lifecycle of emerging technology systems. This question addresses the timing and scope of risk assessment in technology development.
Evidence
The question specifically mentioned quantum, neural, and AI systems as examples of technologies that should have lifecycle risk management.
Major discussion point
Risk Management and Assessment
Topics
Legal and regulatory | Development
There is a meaningful distinction between ethics and regulation that needs clarification
Explanation
An audience member questioned the difference between ethics and regulation, noting that the panel discussed both regulatory guidelines and building ethics into people’s thinking and practices. The question sought to understand how these two approaches differ and relate to each other.
Evidence
The questioner noted that the panel talked about regulators creating guidelines and regulations on one hand, and building ethics into the way people think and do things on the other hand, recognizing these as very different approaches.
Major discussion point
Distinction Between Ethics and Regulation
Topics
Legal and regulatory | Human rights principles
Agreements
Agreement points
Ethics should be foundational and embedded from the beginning of technology development
Speakers
– Mira Wolf-Bauwens
– Dafna Feinholz
Arguments
Ethics should be foundational and embedded from the beginning, not an afterthought
UNESCO promotes ethics at the center of science and technology development to protect human rights and fundamental freedoms
Summary
Both speakers strongly advocate that ethics must be integrated from the very start of technology development rather than being added as an afterthought. They emphasize that ethical considerations should guide the entire development process.
Topics
Human rights principles | Legal and regulatory
Anticipatory governance is essential for keeping pace with rapidly evolving technologies
Speakers
– Mira Wolf-Bauwens
– Ryota Kanai
Arguments
Anticipatory governance is crucial – need to imagine potential consequences 2-10 years ahead rather than just reacting
Portfolio of different future scenarios needed to prepare for various technological developments
Summary
Both speakers agree that governance cannot simply react to technological developments but must anticipate future scenarios and potential consequences years in advance to be effective.
Topics
Legal and regulatory | Development
Economic pressures compromise ethical principles in commercial settings
Speakers
– Mira Wolf-Bauwens
– Ryota Kanai
Arguments
Individual researchers have good ethical motivations, but institutional and economic pressures can compromise these principles
Economic pressures and profit demands can override ethical considerations in commercial settings
Summary
Both speakers identify a common pattern where individuals have good ethical intentions, but institutional and economic pressures, particularly the need to generate profit, can override these ethical considerations.
Topics
Economic | Legal and regulatory
Risk management should occur early in development phases
Speakers
– Mira Wolf-Bauwens
– Ryota Kanai
– Chaichana Mitrpant
Arguments
Risk management should occur throughout the entire lifecycle of technology development
Risk assessment should happen at early phases of development, similar to ethics committees for human research
Risk management should not place excessive burdens on individual stakeholders – global threat observation can cascade down to developers
Summary
All three speakers agree that risk management is crucial and should begin early in the development process, though they offer different perspectives on implementation approaches.
Topics
Legal and regulatory | Development
Ethics is broader and more fundamental than legal regulation
Speakers
– Mira Wolf-Bauwens
– Ryota Kanai
– Dafna Feinholz
Arguments
Ethics is broader and more agile than law, addressing motivations and positive actions, not just prohibitions
Laws are specific implementations of ethical ideals but cannot capture all important aspects of ethics
Laws are specific implementations of ethical ideals but cannot capture all important aspects of ethics
Summary
All speakers agree that ethics encompasses broader principles and motivations than what can be captured in legal frameworks, with laws being specific implementations that cannot address all ethical considerations.
Topics
Legal and regulatory | Human rights principles
Similar viewpoints
Both speakers advocate for a framework approach where core ethical principles remain consistent but are adapted to specific technologies and local contexts without compromising fundamental values.
Speakers
– Mira Wolf-Bauwens
– Chaichana Mitrpant
Arguments
Cross-cutting ethical principles should be established with technology-specific customizations rather than reinventing everything
Principles should be ‘localized, customized, but not compromised’ when adapting to different contexts
Topics
Legal and regulatory | Development
Both speakers emphasize the importance of building and maintaining public trust through transparent communication and inclusive engagement with diverse stakeholders.
Speakers
– Ryota Kanai
– Chaichana Mitrpant
Arguments
Trust in science and technology is crucial for public acceptance, requiring transparent communication about expert intentions
Multi-stakeholder engagement is essential but complex, requiring diverse expertise and perspectives
Topics
Sociocultural | Legal and regulatory
Both speakers demonstrate the practical value of non-binding international frameworks that can be adapted to local contexts while maintaining core ethical principles.
Speakers
– Dafna Feinholz
– Chaichana Mitrpant
Arguments
Non-binding normative instruments can be very useful and allow adaptation by different member states and stakeholders
Thailand uses UNESCO’s AI ethics recommendations as a guiding framework, adapting principles to local context while maintaining core values
Topics
Legal and regulatory | Human rights principles
Unexpected consensus
The effectiveness of non-binding ethical frameworks over rigid legal requirements
Speakers
– Dafna Feinholz
– Mira Wolf-Bauwens
– Chaichana Mitrpant
Arguments
Non-binding normative instruments can be very useful and allow adaptation by different member states and stakeholders
Ethics is broader and more agile than law, addressing motivations and positive actions, not just prohibitions
Thailand uses UNESCO’s AI ethics recommendations as a guiding framework, adapting principles to local context while maintaining core values
Explanation
It’s unexpected that speakers from both international organizations and national implementation perspectives would so strongly favor flexible, non-binding approaches over traditional regulatory frameworks. This consensus suggests a shift toward more adaptive governance models.
Topics
Legal and regulatory | Human rights principles
The challenge of making ethics economically viable in private sector contexts
Speakers
– Mira Wolf-Bauwens
– Ryota Kanai
Arguments
The challenge is putting a return on investment (ROI) on ethics to make it economically viable
Economic pressures and profit demands can override ethical considerations in commercial settings
Explanation
Both speakers, despite coming from different backgrounds (philosophy/policy and neuroscience/business), independently identified the same core challenge of aligning ethical principles with economic incentives, suggesting this is a fundamental systemic issue.
Topics
Economic | Legal and regulatory
Overall assessment
Summary
The speakers demonstrated strong consensus on foundational principles: ethics must be embedded from the beginning of technology development, anticipatory governance is essential, economic pressures pose significant challenges to ethical implementation, and flexible frameworks are more effective than rigid regulations. They also agreed on the need for multi-stakeholder engagement and the importance of building public trust.
Consensus level
High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers approached the same issues from different angles (international policy, academic research, national implementation, and private sector experience) but arrived at remarkably similar conclusions. This strong alignment suggests these principles represent well-established best practices in technology ethics governance, with significant implications for developing more effective and widely adoptable ethical frameworks for emerging technologies.
Differences
Different viewpoints
Approach to changing governance structures vs. working within existing systems
Speakers
– Mira Wolf-Bauwens
Arguments
Anticipatory governance is crucial – need to imagine potential consequences 2-10 years ahead rather than just reacting
Summary
Wolf-Bauwens initially believed governance structures themselves needed to change, but has come to see wholesale structural reform as impractical, so she advocates working within existing democratic and inclusive structures despite their inherent time lag. The other speakers do not explicitly address this question of structural reform.
Topics
Legal and regulatory | Development
Unexpected differences
Scope of risk management implementation
Speakers
– Mira Wolf-Bauwens
– Chaichana Mitrpant
Arguments
Risk management should occur throughout the entire lifecycle of technology development
Risk management should not place excessive burdens on individual stakeholders – global threat observation can cascade down to developers
Explanation
While both support risk management, they have different views on implementation burden. Wolf-Bauwens advocates for comprehensive lifecycle risk management, while Mitrpant is concerned about not overburdening individual stakeholders and suggests a more distributed approach. This disagreement is unexpected because both are generally aligned on the importance of risk management.
Topics
Legal and regulatory | Development
Overall assessment
Summary
The speakers show remarkable consensus on fundamental principles but differ on implementation approaches and emphasis.
Disagreement level
Low level of fundamental disagreement with moderate differences in implementation strategies. The speakers largely agree on core issues like the importance of ethics being foundational, the challenges of economic pressures, and the need for anticipatory governance. Their differences lie primarily in tactical approaches rather than strategic goals, which suggests a strong foundation for collaborative policy development while allowing for diverse implementation pathways.
Partial agreements
Takeaways
Key takeaways
Ethics must be foundational and embedded from the beginning of technology development, not treated as an afterthought
Individual researchers and developers generally have good ethical motivations, but institutional and economic pressures often compromise these principles in practice
Non-binding normative instruments like UNESCO’s AI ethics recommendations can be highly effective when adapted to local contexts while maintaining core values
Anticipatory governance is essential – stakeholders must imagine potential consequences 2-10 years ahead rather than merely reacting to current developments
Cross-cutting ethical principles should be established with technology-specific customizations rather than creating entirely new frameworks for each emerging technology
Multi-stakeholder engagement is crucial but requires different approaches for different groups (enforcement for government, collaboration with regulators for private sector, education for SMEs and citizens)
Trust in science and technology is fundamental for public acceptance and requires transparent communication about expert intentions and scientific validation
Ethics is broader and more agile than regulation – it addresses motivations and positive actions while laws typically focus on prohibitions and minimum standards
The challenge of putting a return on investment (ROI) on ethics remains a critical unresolved issue for making ethical practices economically viable
Resolutions and action items
Continue developing technology-specific ethical frameworks (neurotechnology, quantum computing) building on established cross-cutting principles
Maintain multi-stakeholder dialogue platforms to capture signals about AI development directions
Develop networks of experts to share knowledge across different technological domains
Implement risk assessment processes at early phases of technology development, similar to ethics committees for human research
Create tools and educational programs for SMEs and citizens to promote ethical AI adoption
Work with sectoral regulators to customize ethical guidelines for specific industries while maintaining core principles
Unresolved issues
How to put a return on investment (ROI) on ethics to make it economically viable in profit-driven environments
How to resist economic and institutional pressures that compromise ethical principles when significant funding is involved
How to balance the need for democratic, inclusive governance processes with the speed required to keep pace with rapid technological development
How to effectively anticipate and prepare for remote possibilities and science fiction-like scenarios that experts may dismiss as unrealistic
How to ensure risk management doesn’t place excessive burdens on smaller stakeholders while maintaining effective oversight
How to maintain public trust when some commercial products lack proper scientific validation
How to effectively combine expertise across multiple rapidly developing technologies (AI, neurotechnology, quantum computing, etc.)
Suggested compromises
Adopt the approach of ‘localized, customized, but not compromised’ – adapting ethical principles to specific contexts and technologies while maintaining core values
Use existing democratic governance structures rather than trying to change them entirely, but improve communication of timelines and priorities
Implement a portfolio approach with different future scenarios to prepare for various technological developments rather than trying to predict one specific outcome
Establish global-level threat and vulnerability observation that can cascade down to developers and users, reducing individual assessment burdens
Focus on cross-cutting ethical principles (justice, inclusivity, accessibility) with technology-specific adaptations rather than creating entirely separate frameworks
Balance individual ethical motivations with institutional mechanisms that can withstand economic pressures
Thought provoking comments
So I think what I’m like, just for me, this was really, really insightful kind of acknowledging that as individuals. The sort of good motivations and ethical principles are there, but the kind of trick that happens once power dynamics of who someone is in a team, how much power they have in a team, the pressures of a corporate having to sell, having to make profit, once those come into play, they very clearly fade. And I think so the trick is in a way, how can we instill, it’s been said before a lot, but on the one hand, how can we instill that culture? But for me, the point is really, how can we put an ROI, return on invest, onto ethics?
Speaker
Mira Wolf-Bauwens
Reason
This comment is deeply insightful because it identifies the core paradox in tech ethics: individual good intentions systematically fail when institutional pressures emerge. Her concrete example from quantum computing research demonstrates how the same people who privately express ethical motivations publicly compromise those values under business pressures. The ROI question reframes ethics from a philosophical ideal to a practical business challenge.
Impact
This comment fundamentally shifted the discussion from theoretical ethics to practical implementation challenges. It introduced the critical tension between individual values and institutional pressures, which became a recurring theme. Both subsequent speakers (Ryota and Chaichana) built upon this insight, with Ryota acknowledging similar pressures in his company and Chaichana addressing how Thailand tries to balance regulatory enforcement with business concerns.
But I think the tricky thing starts when there’s some sort of conflict. So especially in the commercial setting, there’s a strong demand to make profit. So I’m running a company and then I get investment. So investors push us to make money. I think that’s how our current economic system works. But because of this, as a scientist, I felt some companies are trying to sell neurotechnology products that are not scientifically validated.
Speaker
Ryota Kanai
Reason
This comment is particularly powerful because it comes from someone living the dual reality of scientist and entrepreneur. He articulates the specific mechanism by which ethical compromises occur – investor pressure leading to premature or unvalidated product claims. His concern about trust in science adds another layer, showing how individual ethical failures can undermine entire fields.
Impact
This comment validated and deepened Mira’s earlier observation about institutional pressures, but added the crucial dimension of scientific integrity. It shifted the conversation toward the specific challenge of maintaining scientific rigor under commercial pressure, and introduced the concept that ethical failures can erode public trust in entire technological domains.
So based on our belief that this is our guiding tool for Thailand to navigate through AI adoption and maybe regulation creation, we try to really make it happen… We decided to study AI laws and regulations more than two years ago. We actually drafted our AI bill two years ago, but there were conflicting opinions at that time about how Thailand should navigate AI regulation. People facing fraud and defect issues supported the law, while the developers in Thailand were opposed and raised a lot of questions.
Speaker
Chaichana Mitrpant
Reason
This comment provides crucial real-world evidence of the implementation challenges discussed theoretically by the other speakers. It shows how even well-intentioned government efforts face the exact stakeholder conflicts that Mira and Ryota identified – those experiencing harms want regulation while developers resist it due to cost concerns.
Impact
This comment grounded the entire discussion in practical governance reality. It demonstrated that the theoretical tensions between ethics and economics play out even at the national policy level, and introduced the concept of ‘localized, customized, but not compromised’ implementation, which became a key framework referenced by other speakers.
So, for me, there are different answers to this. But in the tech sector, importantly, the difference between true ethics and the law is that ethics is much wider, and the law, as we discussed, is often the one that is lagging behind and that is not capturing a lot. So, for instance, and I mentioned that earlier to you, in the tech sector, when you hear ethics, it is not ethics, it is compliance.
Speaker
Mira Wolf-Bauwens
Reason
This comment is intellectually provocative because it challenges the entire premise of how ethics is understood in the technology sector. By distinguishing ‘true ethics’ from compliance-based pseudo-ethics, she exposes how the term ‘ethics’ itself has been co-opted and diluted. This reframing is crucial for understanding why many corporate ‘ethics’ initiatives fail to address real ethical concerns.
Impact
This comment elevated the entire discussion by introducing a meta-level critique of how ethics discourse itself has been corrupted. It provided a framework for understanding why many well-intentioned ethics initiatives fail – they’re actually compliance exercises rather than genuine ethical reflection. This insight influenced the final exchanges about the relationship between ethics and regulation.
Overall assessment
These key comments transformed what could have been a theoretical discussion about ethics principles into a nuanced examination of the systemic barriers to ethical technology development. Mira’s insights about institutional pressure and the corruption of ethics discourse provided the analytical framework, while Ryota’s personal experience as a scientist-entrepreneur validated these observations with concrete examples. Chaichana’s policy implementation experiences demonstrated that these challenges exist at every level, from individual companies to national governments. Together, these comments created a progression from identifying the problem (good intentions undermined by institutional pressures) to understanding its mechanisms (investor demands, regulatory conflicts) to exploring potential solutions (anticipatory governance, multi-stakeholder engagement). The discussion evolved from abstract principles to practical implementation challenges, ultimately revealing that the central question isn’t what ethical principles to adopt, but how to create systems that can maintain ethical commitments under economic and institutional pressure.
Follow-up questions
How can we put a return on investment (ROI) on ethics in a world driven by private industry?
Speaker
Mira Wolf-Bauwens
Explanation
This addresses the fundamental challenge of making ethics economically viable and attractive to profit-driven organizations, which is crucial for embedding ethics from the start of technology development.
How can we resist economic pressures and institutional pressures to ensure that good principles remain at the foundation of motivation?
Speaker
Mira Wolf-Bauwens
Explanation
This explores the systemic challenges that cause individuals with good ethical intentions to compromise when faced with corporate and financial pressures.
How much information can we extract from neural signals, especially in combination with AI?
Speaker
Ryota Kanai
Explanation
This is a critical research area for neurotechnology ethics, as the extent of information extraction capabilities directly impacts privacy concerns and regulatory needs.
How can we better anticipate remote possibilities and future scenarios in technology development?
Speaker
Ryota Kanai
Explanation
This addresses the need for more sophisticated forecasting methods to keep governance pace with rapid technological development, particularly for scenarios that may seem like science fiction but could become reality.
How can we create effective governance models that work across converging technologies while maintaining local customization?
Speaker
Implied by discussion between all speakers
Explanation
This explores whether separate ethical frameworks are needed for each technology or if cross-cutting principles can be adapted, which is important for efficient and coherent governance.
How can we better capture signals about AI development direction through dialogue platforms?
Speaker
Chaichana Mitrpant
Explanation
This addresses the need for systematic monitoring and early warning systems to track technological developments and adjust governance measures accordingly.
How can risk management be distributed fairly without placing excessive burdens on particular stakeholders like SMEs?
Speaker
Chaichana Mitrpant
Explanation
This explores how to create equitable risk assessment frameworks that don’t disadvantage smaller players while maintaining effective oversight.
What are effective models for companies that balance societal benefit with economic benefit?
Speaker
Mira Wolf-Bauwens
Explanation
This addresses the lack of successful business models that demonstrate how ethical principles can coexist with profitability, which is needed to convince the private sector.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.