Main Topic 2: Neurotechnology and privacy: Navigating human rights and regulatory challenges in the age of neural data

13 May 2025 09:00h - 10:30h


Session at a glance

Summary

This discussion focused on the challenges and implications of neurotechnology and mental privacy in the digital age. The keynote speaker, Ana Brian Nougrères, emphasized the need to recognize neurodata as a special category of personal data requiring the highest level of protection. She outlined principles for regulating neurotechnologies, including human dignity, mental privacy, and informed consent. The panelists debated whether existing legal frameworks like the GDPR and Convention 108+ adequately address the unique sensitivities of neural data. Some argued for creating a new legal category for mental data, while others suggested adapting existing privacy rights. The discussion highlighted the potential risks of neurotechnologies, including cognitive control, behavioral intervention, and surveillance. Participants stressed the importance of interdisciplinary approaches and multi-stakeholder discussions in developing appropriate governance frameworks. The need for a broader understanding of neurotechnologies beyond just brain activity was emphasized, including consideration of other physiological indicators. Questions were raised about accountability mechanisms across the lifecycle of neurodata processing. The discussion touched on the global implications of neurotechnology governance, with calls to consider diverse cultural values and avoid perpetuating colonial power dynamics in technology development. Participants also debated the relationship between neurodata and other forms of sensitive data like genetic information. Overall, the session underscored the urgency of addressing the ethical, legal, and societal challenges posed by advancing neurotechnologies to protect mental privacy and human rights.


Keypoints

Major discussion points:


– Whether existing legal frameworks like GDPR adequately address the unique challenges of neurotechnology and neural data


– If a new legal category or right for “mental privacy” is needed, or if existing privacy rights are sufficient


– The need for a broad, interdisciplinary approach to regulating neurotechnology that considers diverse cultural values


– Concerns about potential manipulation, surveillance and loss of cognitive autonomy as neurotechnology advances


– Challenges in defining and categorizing neural data for regulatory purposes


Overall purpose:


The discussion aimed to explore the legal, ethical and societal implications of emerging neurotechnology and how to effectively govern its development and use while protecting human rights, particularly mental privacy.


Tone:


The tone was primarily serious and analytical, with speakers emphasizing the gravity and complexity of the issues. There was a sense of urgency about the need to address these challenges proactively. The discussion became more nuanced and multifaceted as different perspectives were shared, moving from broad principles to more specific regulatory and definitional questions.


Speakers

– Damian Eke: Assistant Professor at the University of Nottingham, Chair of the International Brain Initiatives Data Sharing and Standards Group, founder of the African Brain Data Network and African Data Governance Initiative


– Ana Brian Nougrères: UN Special Rapporteur on privacy


– Petra Zandonella: Pre-doctoral assistant at the University of Graz law faculty, part of interdisciplinary research group on law, ethics and neurotechnologies


– Doreen Bogdan-Martin: Secretary General of the International Telecommunication Union


– Moritz Taylor: Moderator


Additional speakers:


– Jan Kleijssen


– Torsten Krause: Political scientist and child rights researcher at the Digital Opportunities Foundation


– George: From HRI, Civil Society


– Kristin: From the University of Oslo


– Ankita: Lawyer from India


– Lars Lundberger: From the World Federalist Movement


Full session report

Revised Summary of Neurotechnology and Mental Privacy Discussion


This discussion explored the challenges and implications of neurotechnology and mental privacy in the digital age, bringing together experts to examine the legal, ethical, and societal impacts of emerging neurotechnologies.


Keynote Address by Ana Brian Nougrères, UN Special Rapporteur on Privacy:


Ana Brian Nougrères opened the discussion with a keynote speech emphasizing the critical importance of protecting neurodata. She argued that neurodata should be recognized as a special category of personal data requiring the highest level of protection, stating that “neurodata constitute windows into the cognitive, emotional, and psychological fabric of the human being.” Nougrères highlighted the potential risks of neurotechnologies to mental autonomy and cognitive liberty.


She also discussed the significance of Convention 108 and its modernized version, Convention 108+, as important frameworks for data protection in the context of neurotechnology. Nougrères stressed the need for international harmonization of neurodata regulations and called for multi-stakeholder, international discussions to develop appropriate governance frameworks.


Panel Discussion and Key Themes:


1. Regulation of Neurotechnology and Neurodata


Damian Eke, Assistant Professor at the University of Nottingham, argued that current regulations are insufficient, stating that “existing regulations or ethical and legal frameworks are not addressing the issues of neural data and neurotechnology as they should be.” He noted that recognition of neural data in the GDPR and Convention 108 Plus is largely implicit rather than explicit.


In contrast, Petra Zandonella, a pre-doctoral assistant at the University of Graz law faculty, contended that there is no need for a new “right to mental privacy,” suggesting that existing privacy rights are sufficient. She highlighted the broad interpretation of the right to privacy in the European Convention on Human Rights as potentially adequate for addressing neurotechnology concerns.


The panel discussed the potential need for creating a special category of data for neurodata, similar to genetic and biometric data, to ensure appropriate protection.


2. Ethical Implications and Risks


The discussion highlighted significant concerns about the potential risks of neurotechnologies, including cognitive control, behavioral intervention, and surveillance. Specific examples of neurotechnologies mentioned included brain-computer interfaces and mood-tracking devices.


Eke raised the issue of ethics dumping in neurotechnology research, emphasizing the need for ethical considerations in cross-border studies. He also stressed the importance of decoloniality as a requirement for trustworthiness in AI and neurotechnology development.


3. Defining and Categorizing Neurodata


The challenge of legally defining neurodata emerged as a significant point of discussion. Eke highlighted the debate over how to categorize neural data within existing legal frameworks. Lars Lundberger from the World Federalist Movement suggested considering a broader definition beyond just brain activity, including other physiological indicators like muscle activity and skin conductance.


The question of whether genetic data should be included as neurodata was also raised, highlighting the complexity of defining the boundaries of this category. Zandonella emphasized the need for an interdisciplinary approach to defining neurodata.


4. Global Perspectives and Cultural Considerations


The discussion underscored the importance of considering diverse cultural values and avoiding perpetuating colonial power dynamics in technology development. Eke argued that European-centric approaches may not work globally and called for a value-based approach to governance that considers diverse cultural interpretations of human rights.


5. Ongoing Initiatives and Future Discussions


The panel mentioned the ongoing UNESCO debate on recommendations for ethics in neurotechnologies. Eke also noted an upcoming UN workshop in Berlin focused on neurotechnology, indicating continued international efforts to address these challenges.


Audience Contributions:


Audience members raised important points about the potential impact of neurotechnology on freedom of expression and freedom of thought. There was also a call for accountability mechanisms across the lifecycle of neurodata processing, highlighting the need for comprehensive governance structures.


Unresolved Issues and Future Directions:


Several key issues remained unresolved, including:


– The precise legal definition of neurodata and its scope


– Whether existing legal frameworks are sufficient or if new regulations are needed


– How to ensure truly informed consent for neurodata collection given the complexity of these technologies


– Balancing innovation with adequate protections for mental privacy and human rights


– Applying governance approaches globally while respecting diverse cultural perspectives


Conclusion:


The discussion emphasized the urgency of addressing the ethical, legal, and societal challenges posed by advancing neurotechnologies to protect mental privacy and human rights. It highlighted the need for interdisciplinary approaches, international cooperation, and consideration of diverse cultural perspectives in developing governance frameworks for neurotechnology. As the field continues to evolve, ongoing multi-stakeholder discussions and research will be crucial to navigate the complex landscape of mental privacy in the digital age.


Session transcript

Moritz Taylor: I’m going to ask you all to take a seat, come back in from your coffee break, if you can hear us outside. Welcome back, those of you who are already in main session one. Welcome back to those of you who were here yesterday and had your socks rocked off by the Mariam Chaladze band and are finally coming in. And welcome to all of you who’ve been in other sessions and workshops before and are coming into the main session, into the hemicycle for the first time today. Now I’m going to start with main session two, which is neurotechnology and mental privacy, regulating the mind in the digital age. We’ll start off with a video, followed by a keynote, followed by statements, the panel, questions and the messages. I’ll hand over now to our online moderator, Joao, who will explain to you the rules of the session. Thank you very much.


Online moderator: So, for those who join online, please always raise your hand to speak, to ask for a speaking slot. Then we’ll ask you to switch your video and unmute when the time comes. For those who join on-site, please always join with your microphone muted and your speaker from the device also disabled. Thank you. Back to you.


Moritz Taylor: Thank you, Joao. I hope that was clear for everyone. I’m here to help you. For those of you in the room, when you wish to speak, you’ll be able to press a button next to your microphone. In any case, we will begin today’s session with a video message from Doreen Bogdan-Martin, Secretary General of the International Telecommunication Union. Please play it. Thank you.


Doreen Bogdan-Martin: Hello, everyone. Let me start by thanking the Council of Europe and Luxembourg for inviting me to share a few words with you today. I would love to be with you in Strasbourg, especially at a time when digital is top of mind all over Europe and around the world. The Internet is a mirror that reflects humanity at our best and at our worst. Digital technologies evolve rapidly, as do the associated opportunities and risks. Human rights do not. As we achieve breakthroughs in science and technology, our outlook on the future changes, but human rights remain a constant. They are a guidepost for our actions across tech frontiers, especially now, with powerful technologies like AI and quantum poised to drive the next phase of the Internet's evolution. To safeguard human rights, by balancing regulation and innovation, all voices are needed at the governance table. To understand how policy and regulation, whether nascent or enforced, can impact everyone's aspirations as innovators and users. That's agile and adaptive governance, keeping everyone involved in designing and fine-tuning our policy actions. And that's why open, multi-stakeholder forums like the World Summit on the Information Society and EuroDIG are so important. They give everyone a say in how public policy could best reflect our shared values. We will keep this conversation moving at our back-to-back WSIS Plus 20 High-Level Meeting and our AI for Good Global Summit in July. We are reviewing 20 years of the WSIS process and looking to strengthen this multi-stakeholder framework for global action on digital development. 2025 also marks the 160th anniversary of the ITU. As our founding members, European countries are deeply experienced in building the collaboration and consensus that powers our work. Now, more than ever, the world needs to see this spirit of cooperation in action. Thank you.


Moritz Taylor: Thank you so much, Doreen. Now, I'd like to begin with the session proper. The idea that our thoughts are private has long been considered a cornerstone of personal freedom. But with neurotechnology rapidly evolving, from brain-computer interfaces to mood-tracking devices, that assumption is being challenged. This session tackles one of the most provocative frontiers of digital governance, the legal and ethical implications of decoding, collecting, or even influencing brain activity. We will explore whether our current regulatory frameworks are fit for purpose, and whether we need to rethink privacy in a world where even our minds can become data streams. I'd like to therefore invite our keynote speaker today, UN Special Rapporteur on the right to privacy, or on privacy only, I think, Ana Brian Nougrères. Please take the floor.


Ana Brian Nougrères: Thank you so much. Thank you so much. Good morning to you all. Well, this is because I'm trying to be strict with my timing. So, distinguished guests and colleagues, it is a great honor to address you today. I'm also grateful to the organizers for their invitation. Today, I will invite you to reflect on neuroscience, neurotechnology, neurodata. So, why is privacy at the frontiers of neuroscience? As you know, neurotechnologies are devices and systems capable of monitoring, interpreting, or even modifying brain activity. They are no longer a theoretical promise. They are here, and they are advancing rapidly. What only a decade ago seemed inconceivable is now being tested in laboratories, deployed in medical settings, and increasingly explored for commercial and security-related purposes. As these technologies develop, so do the risks they pose to fundamental rights, particularly the right to privacy. We are entering a new frontier in the domain of privacy. This frontier appears like a boundary between the self and the outside world. This boundary is becoming porous. It is a frontier where our inner thoughts, intentions, and emotions may be inferred, stored, transmitted, and even altered by external technologies. The frontier demands not only technical safeguards, but a reaffirmation of the foundational principles that protect human dignity and autonomy. In my recent report to the UN Human Rights Council on Neurotechnologies and Neurodata, I have laid out a framework for action, one that calls for regulatory clarity, for ethical commitment, and for international solidarity. The stakes are high. If neurotechnologies are used wisely, they hold the potential to advance science, improve human health, and empower individuals. But if misused, they could erode mental freedom, facilitate surveillance of thought, and deepen existing inequalities. We must act now, together, to ensure that our values evolve alongside our innovations. The question before us is not whether neurotechnologies will become part of our life. They are already here. The question is whether we will be governing them wisely, justly, in a way that protects what is most sacred, which is the integrity of the human mind. So, what is so important about neurotechnologies? To understand the significance of protecting personal data in the context of neuroscience, we must first define what we mean by neurotechnologies, and more importantly, why they matter for human rights in general and for the right to privacy in particular. Neurotechnologies refer to the tools, systems, and devices capable of accessing, monitoring, interpreting, or altering brain activity and the nervous system. These include invasive methods such as brain implants used in medical treatments for conditions like epilepsy or Parkinson's disease, as well as non-invasive techniques such as electroencephalography, functional magnetic resonance imaging, or emerging wearable brain-computer interfaces. Although many of these tools are initially developed for therapeutic or research purposes, their application has expanded far beyond clinical settings. Today, neurotechnologies are entering the commercial domain, integrated into consumer products, educational platforms, workplace monitoring systems, and even digital entertainment. They are also being explored for potential use in law enforcement, military operations, and criminal justice, raising deeply troubling ethical and legal questions. Why do these technologies matter so much?
I mean, I think it's because the brain is not just another organ. It is the seat of our consciousness, our identity, our thoughts, emotions, memories, and intentions. The activity of our neural circuits encodes not only what we do, but who we are. And for the first time in history, we now possess tools that can reach into this most intimate and private space. Let us be clear, neurodata, the data collected through neurotechnologies, is fundamentally different from other types of personal data. While a fingerprint may identify us, and geolocalization data may track us, neurodata can reveal how we feel, what we fear, what we desire, or what we intend to do, sometimes even before we are consciously aware of it ourselves. This kind of information is not merely sensitive. It is existentially revealing. This is why the Global Privacy Assembly and numerous human rights bodies, including the UN Human Rights Council, have recognized neurodata as a special category of personal data that requires the highest level of protection. Neurotechnologies also raise the prospect of mental manipulation. We must reckon with the real possibilities that once brain activity can be decoded, it can also be influenced through electrical or digital stimulation, algorithmic feedback, or even predictive behavioral nudging. This raises profound concerns regarding free will, mental autonomy, and cognitive liberty. And yet, we must not overlook the enormous benefits that these technologies offer. Neurotechnologies can revolutionize medical diagnostics, facilitate communication for individuals with severe disabilities, and deepen our scientific understanding of the brain. They can bring hope to people suffering from neurological or psychiatric conditions for which current treatments are inadequate. The challenge, then, is not to resist innovation, but to guide it ethically, to ensure that the development and deployment of neurotechnologies respect human rights, protect mental privacy, and empower individuals, rather than expose them to surveillance, discrimination, or exploitation. Neurodata as a special category of personal data. At the heart of the ethical and legal debate surrounding neurotechnologies lies a fundamental question. How should we treat the data generated by the human brain and nervous system? The answer is unequivocal. Neurodata must be recognized and regulated as a special category of personal sensitive data. This recognition is not merely symbolic. It reflects the unique nature, sensitivity, and potential consequences of processing neurodata. Neurodata are not just health data. They are not merely biometric data. Neurodata go far beyond what we traditionally understand as personally identifiable information. Neurodata constitute windows into the cognitive, emotional, and psychological fabric of the human being. And like other data, neurodata can provide deep insights into a person's mood, personality, memory patterns, decision-making, and even unconscious mental states. These are not inferences drawn from online behavior or wearable devices. They are recordings of the brain's actual electrical and psychological activity. This data can be used not only to identify a person, but to analyze, predict, or even alter their thoughts and behaviors. For these reasons, neurodata meet and exceed the criteria that define sensitive personal data under international privacy standards. As such, they require enhanced legal safeguards. They must be subject to
strict access controls, strong encryption, cybersecurity protocols, explicit, informed, and revocable consent mechanisms, and clear limits on their collection, retention, and sharing. Furthermore, the mere collection of neurodata should trigger a high-risk processing assessment. This is particularly vital in contexts involving vulnerable populations, such as persons with disabilities, children, older persons, or individuals in institutional settings, where the potential for coercion, manipulation, or misuse is even greater. Mental privacy emerged as a necessary evolution of the right to privacy, emphasizing that thoughts and mental states, absent a compelling legal justification and strict safeguards, must remain off-limits to external surveillance or intrusion. The mind is the final frontier of human privacy, and it must be treated as such. International bodies such as the Ibero-American Data Protection Network, the Global Privacy Assembly, and the Berlin Group have all recognized the need to establish special frameworks for the processing of neurodata. Their recommendations are aligned with a precautionary principle, calling for clear legal definitions, transparency, accountability, human rights impact assessments, and prerequisites for any neurodata-driven activities. In addition, these bodies emphasize that neurodata may imply a power of anticipation, capable of revealing information not only about the current mental state of an individual, but about their future behavior, psychological predispositions, or cognitive performance. This introduces unprecedented risks of profiling, stigmatization, and discriminatory treatment, particularly in employment, insurance, education, and criminal justice settings. This may lead to biased hiring processes, unequal access to services, or unjustified exclusion from opportunities, deepening social inequalities. We must therefore demand equitable access and non-discrimination procedures. To regulate neurodata adequately, we must adopt forward-looking regulatory frameworks that incorporate the complexity and the implications of this new form of personal information. Our legal systems must reflect that neurodata are not merely data about the brain, but deeply personal representations of the self. I will refer next to the principles and safeguards for the use of neurotechnologies and the processing of neurodata. In my most recent report to the Human Rights Council, I outlined a set of principles that should serve as the ethical and legal compass for regulating the use of neurotechnologies and the processing of neurodata. These principles are not abstract ideals. They are concrete tools for building legal, institutional, and technological systems that respect privacy, autonomy, and equality in the context of the brain. The first and most essential principle refers to human dignity. The mind is where human dignity resides. It is the source of self-awareness, of decision-making, of creativity, of morality. Any attempt to access or alter brain activity without the individual's informed and voluntary consent should be considered a violation of this dignity. We need to safeguard the integrity of every individual's cognitive and emotional self. Neurotechnologies must never be used in ways that reduce the person to a target of data extraction or behavioral engineering. We must acknowledge mental privacy as an emerging but indispensable dimension of the right to privacy. The human mind must be treated as an inviolable space.
Any intrusion, whether through decoding brain signals, stimulating specific regions, or interpreting patterns of cognition, must be subject to the most rigorous legal scrutiny and ethical oversight. The processing of neurodata must be based on a freely given, informed, specific, and revocable consent of the individual. This ensures that individuals remain in full control of their thoughts and decisions. Then, the precautionary principle must apply. Where the risks of harm to mental integrity, cognitive liberty, or psychological well-being are not fully understood, the default must be restraint. A lack of scientific certainty should never be used as a justification for experimentation or commercial exploitation. Especially in the face of technologies that could irreversibly affect the brain, caution must be the rule, not the exception. Privacy by design and by default must be embedded at every stage of development. From the earliest research and design phases to deployment and commercialization, new technologies must be shaped by ethical values, human rights norms, and transparency requirements. This includes conducting human rights impact assessments prior to implementation and involving diverse stakeholders, including civil society, persons with disabilities, and neuroscientists, in the process of oversight. We must also ensure accountability. Developers, manufacturers, healthcare providers, employers, and public institutions that use new technologies must be held responsible for ensuring data protection, transparency, and compliance with legal standards. Governments must establish independent regulatory bodies equipped to monitor the use of new technologies and provide accessible remedies in case of rights violations. Then, we need to prohibit discrimination and manipulation. Neurodata must never be used to categorize, profile, or exclude individuals based on psychological characteristics, emotional responses, or neural patterns. Nor must they be used to manipulate thoughts, alter beliefs, or induce behaviors for purposes of political, commercial, or punitive control. The principle of free will ensures that individuals remain in full control of their thoughts and decisions. Finally, individuals must have enforceable rights. They must be able to access their neurodata, challenge unlawful processing, and seek redress. Privacy rights in this context are not a luxury. They are a shield against the commodification of the self and the erosion of mental autonomy. In fact, the regulation of neurotechnologies is not only a legal imperative, it is a moral one. If we do not act now to build a rights-based framework, we risk creating a future in which the last domain of privacy, the mind itself, is no longer protected. But if we succeed, we can ensure that neurotechnologies are developed and used to enhance human flourishing, not to diminish it. There is another issue at hand. Neurodata do not respect national borders. Brain-computer interfaces, cognitive monitoring tools, and neural wearables are being developed and deployed. Yeah, I'm okay. I'm calculating, okay. And being developed and deployed by transnational actors. Their regulation, therefore, cannot be fragmented or isolated. It must be coordinated, comprehensive, and coherent. In this context, the Council of Europe's Convention 108 and its modernized version, Convention 108+, offers a critical foundation for global convergence. It is the only legally binding international treaty dedicated specifically to the protection of personal data.
And crucially, it is open to countries beyond the Council of Europe, allowing for true international alignment. Convention 108+ embodies many of the values that are essential to the regulation of neurotechnologies: transparency, accountability, proportionality, data minimization, and the protection of sensitive data. It recognizes that certain categories of personal data, such as those related to health, require special treatment under the law. By building on the principles of Convention 108+, we can create a shared normative baseline for regulating neurodata. This is especially important given the rapidly evolving technological landscape. National laws vary significantly in scope, substance, and enforcement capacity. Yet, if we are to protect individuals from harmful or discriminatory uses of neurotechnologies, we must avoid a patchwork of weak protections and regulatory loopholes. Convention 108+ also provides mechanisms for institutional cooperation. It fosters dialogue, mutual assistance, and the exchange of best practices. These tools are essential as we confront common challenges, such as how to define neurodata in legal terms, how to apply informed consent in cognitively vulnerable populations, or how to regulate cross-border flows of neural information. In my view, Convention 108+ should serve as a platform for international leadership in shaping how we treat the privacy of the human mind. It offers a flexible yet principled framework that can inspire national reforms and influence regional and global initiatives. Already, it has helped shape modern data protection standards beyond Europe, in Latin America, Africa, and in various international fora. But we must go further. We must ensure that the unique features of new technologies are explicitly addressed within data protection regimes. We must expand the reach of Convention 108+ by encouraging more states, especially those at the forefront of technological innovation, to ratify and implement its provisions. And we must integrate the Convention's principles into the design of future specific instruments on emerging technologies. Let us be clear, we are not starting from zero. Convention 108+ already gives us the legal vocabulary, the ethical principles, and the cooperative tools we need. What we must do now is apply them boldly, to ensure that these protections exist also for new technology. The brain must be the final frontier of privacy, but it is one we must defend together, and with the same spirit of solidarity and shared responsibility that underpins Convention 108+. Now, a final conclusion. As we stand at the intersection of neuroscience, data protection, and human rights, we are compelled to confront a profound truth. The future of privacy is not only digital, it is mental. In a world where technology can increasingly peer into our thoughts, predict our behaviors, and influence our decisions, the protection of mental autonomy is emerging as a defining frontier of the 21st century. Faced with this reality, our path forward must be both principled and pragmatic. We must be clear that technological progress cannot come at the expense of human dignity. That innovation, to be legitimate, must be bound by law. And that privacy, in its fullest sense, includes not only the protection of our personal data, but the protection of the self, of our identity, our thoughts, our emotions, and our freedom to be who we are without interference.
This is why the recognition of neurodata as a special category of personal data is not just a technical adjustment. It is a moral imperative. This is why informed consent must be more than a checkbox. It must be a process of genuine understanding and free will. And this is why the principles of human dignity, mental privacy, personal identity, free will, equitable access, non-discrimination, and accountability must all be woven into every law, policy, and device that touches the human mind. The challenges ahead are immense. The speed of technological innovation is outpacing legal and institutional responses in nearly every region of the world. But we are not starting from zero. We have frameworks like Convention 108+ that can serve as a foundation for international convergence. We have regional and global bodies committed to rights-based governance. And we have a growing awareness across disciplines and sectors that the mind must be protected as sacred ground. Let us act with urgency, but also with care. Let us regulate, not to obstruct, but to elevate the promise of new technology. Let us educate, not to alarm, but to empower. Let us legislate, not in isolation, but in solidarity with each other, and with those whose rights are most at risk. Above all, let us remember that the right to privacy is not simply the right to be left alone. It is the right to control our personal space, our bodies, and yes, our minds. It is the right to define who we are, free from coercion, manipulation or surveillance. In this new cognitive era, that right must be defended with renewed determination. Thank you very much.


Moritz Taylor: Thank you very much, Ana. You can take a seat also. I'd invite the panellists also to come forward so that we can start introducing you. After I give them each a little chance to talk about, to answer a question, we'll also have the statements before moving on to a question round. So if you listen to what Ana has said, if you listen to what the panellists say and have some questions, write them down so that they can be asked after the prepared statements. Thank you very much. So, I'd like to begin the session with Damian Eke. He is Assistant Professor at the University of Nottingham, Chair of the International Brain Initiatives Data Sharing and Standards Group, and founder of the African Brain Data Network and African Data Governance Initiative as well. Currently, in his role as PI on a Wellcome Trust project, he is co-creating Responsible International Data Governance for Neuroscience. So the way I'm going to do this is I'm going to introduce them all and give them all a chance to answer one question. So Dr. Eke, how well do existing data protection frameworks like Convention 108+ or the GDPR address the unique sensitivities of neural or cognitive data? Should we be thinking about a new legal category for mental data? And are some of the current neural rights debates distracting from more immediate and under-acknowledged risks posed by today's neurotechnologies?


Damian Eke: Thank you very much for that question, and thanks a lot for the presentation, Ana. That was very comprehensive. The answer, I'll give you the simple answer and maybe the complicated answer. The simple answer is that existing regulations or ethical and legal frameworks are not addressing the issues of neural data and neurotechnology as they should be. Most of the recognition of neural data in the GDPR and also the Convention 108 Plus is implied rather than explicit. There's no specific mention of brain data or neural data or mental health data in these legal provisions. So the special category data in Article 9 of the GDPR includes data concerning health and biometric and genetic data, while EEG, fMRI, and also most of the neural data sets that are generated from neurotechnology may fall under health data in certain contexts. This is not explicitly clear in the regulation. And also, Article 22 on profiling and automated decision-making may apply if cognitive data is used, and principles of data minimization, purpose limitation and consent mechanisms also apply, but it is not clear to both researchers and also people in the industry how to address some of the issues neural data raise. What I will say is that also in the second question you asked, whether the debates about neural rights are distracting from other issues, yes, the intense focus on neural rights as a distinct new category of human rights, while it captures public imagination, could indeed overshadow some very tangible and immediate risks associated with the development and application of neural technology, and Ana was also right to highlight the issues of ethics and ethical obligation involved here. It is important to realize that there are also other issues that we need to focus on, like ethics dumping, like the safety issues and bias issues that are involved in neural technology. Ethics dumping, what do I mean by ethics dumping? As neural technologies advance, as you mentioned, they are not just developed in one location, they actually cross boundaries. There's a risk that research and development that might be ethically questionable or face stricter regulations in one region may be outsourced to regions or areas with less stringent oversight, and that is a critical problem. Ethics dumping could lead to the exploitation of vulnerable populations, and also disregard for ethical principles in pursuit of scientific progress. And also, I will also mention this, exploitative labour practices that characterize the extraction of resources that shape the fundamental infrastructure of neural technology: the extraction of lithium, the extraction of rare earths for the development of neurotechnology. The conditions of work in terms of mining these resources are not as they should be, so the right to human dignity should also be focused on that rather than just data. Should we have a special category for neural data? I would say yes, but not as the discussion is happening in the neural rights debate, but as a special category that can be at the same level with genetic data and biometric data, because neural data can also be classed as hidden biometrics. Datasets like fMRI or MRI have brain prints in them that are unique or maybe more unique than biometrics or genetic data, so it is important for us to discuss how we can change the language of regulations or provisions in the regulations to attribute the same level of sensitivities and sensibilities to neural data as we do to genetic data and biometric data.


Moritz Taylor: Thank you so much, Damian. I think that was a very good start for people to have their brains activated. Speaking of brains and neural activity, next I’d like to welcome Petra Zandonella, a pre-doctoral assistant at the University of Graz at the law faculty and part of an interdisciplinary research group working on the intersection of law, ethics and neurotechnologies for the last two years. In her dissertation, she focuses on the protection of health data in the EU, which has now already popped up a couple of times in the last couple of minutes. From your legal research and your interdisciplinary perspective, I’d like to hear your point of view on whether you believe the existing legal framework already adequately addresses the challenges of neurotechnologies and, of course, maybe to expand on that, do you think there are gaps perhaps in enforcement, in scope or conceptual clarity even that we still need to fill?


Petra Zandonella: Okay, thank you very much. Thank you for the invitation and the opportunity to be here today and also for the introduction. So, I will give you a legal perspective and then we can go quite in the same way as you did already. So, you heard before that there's a call for mental privacy and in the interdisciplinary group we've also focused on the mental privacy issue and we do not recommend to add a new right to mental privacy. So, there is an ongoing debate if the right to mental privacy is needed or if it will overburden our existing and well-established legal system and also framework. So, the question is what will exactly, oh sorry, what exactly will the scope of the mental privacy be and is it about mental, neurocognitive or brain data? So, the existing right to privacy, not to mental privacy, it's the right to privacy as enshrined in Article 8 of the European Convention on Human Rights. It's really a broad right to privacy. So there's a really broad understanding of it, and it's also interpreted in a really broad sense. So I will mention one case in front of the European Court of Human Rights, and it took place last spring, so in spring 2024, and it's the case of the KlimaSeniorinnen in Switzerland. So it's not a case about new technologies, but it's a case about how broad the understanding of the existing right to privacy within the Convention on Human Rights is already. So the KlimaSeniorinnen claimed that Switzerland had violated the right to private and family life, so the Article 8 of the Convention on Human Rights, by failing to take measures against climate change. And the court ruled in favor of the KlimaSeniorinnen. So you see how broad the understanding of the existing right already is. So what benefit will we gain by cutting off the right to mental privacy from the already broad existing right to privacy? And where will the existing right to privacy end, and where will the mental right to privacy start? So, of course, there will be legal uncertainty if we create a new right to mental privacy. There is, of course, no established case law, because it's a new right, and it's also a question if the privacy right will be interpreted in the broad sense if we explicitly mention mental privacy in the Convention on Human Rights. So if we mention mental privacy there, there's a question if there should be also other privacies, because the problem is that if we explicitly mention one privacy, what about the other privacies? Are they still within the broad scope of the right to privacy? We don't know. So to summarize, the existing right to privacy in the Convention on Human Rights is already a good foundation, I would say. So in our opinion, there shouldn't be a split up right to mental privacy. But nevertheless, you already mentioned that there should be a discussion on how we should deal with these neurotechnologies and with neural data. So for example, in the data protection law, as you already mentioned, there is the Convention 108 and 108 plus and also the GDPR, and there is health data inside. So if neurotechnologies are used for health purposes, of course, it will be in the scope of the health data. But nowadays, the neurotechnologies are expanding into the non-medical domain, such as human enhancement or gaming. And in this context, there's no medical purpose. So there is a gap. And as you mentioned in the keynote, it's really important that we also protect these data because neural data is not only specific when it's about the medical purpose or medical data.
It's also really specific data when it's used for another purpose. So for example, we could implement neural data within the scope of Article 9 of the GDPR or in Article 6 of the Convention 108 or 108 plus. And that has already been done with biometric data. So in the Convention 108, there wasn't biometric data. And now, with the 108 plus and before in the GDPR, biometric data was added. So it isn't a big deal to implement a new category of data there. Of course, it is not that easy, because you need consent in the member states. But it could be an idea how we can deal with these new challenges we have with neurotechnologies. And so I will come to a conclusion, if it's OK. Sorry for the long statement. In our opinion, or my opinion and also the opinion of our interdisciplinary group, the Convention on Human Rights and also the Charter of Fundamental Rights already are a good and robust legal framework. And mental privacy should explicitly not be added. So we should stick to the already existing right to privacy. But of course, we need action to tackle the challenges that arise with neurotechnologies, so for example, as already mentioned, by adapting the data protection law. So, thank you.


Moritz Taylor: Thank you very much, Petra. Right. Give them all a round of applause. Thank you. I'm going to allow the statements to happen. Meanwhile, perhaps, so that you can digest that information, listen to the statements and ask one or two questions after. I'll try to collect them because we're starting to be a bit short on time. Classically. May I have the statements? No, there are no statements. No, no online statements at all. And so, on site, do we have UNESCO's Women for Ethical AI present? Kokse Kobanashoy-Hizal, are you here? I'm going to assume no. Is Lazar Simona from the Union Romani Voices, the CEO here, to speak, make a statement? Berna Tepe. Jan Kleijssen, a recognisable name in the building. Please, Jan. It's number 94.


Kleijssen Jan: Good afternoon, or good morning, rather, still, with 10 minutes to go. Thank you very much for the very interesting presentations, and also for drawing attention to the already existing validity of Convention 108, 108+, when it comes to protecting neural rights, as they have been labelled, and the very self, as was so pointedly stated a moment ago. I have a question relating to the use of, or the interpretation of, neural rights when it comes to AI systems, sentient computing. There's a big debate about whether this will remain pure fiction, science fiction, or come into reality, but what would your position be on the research guiding this, and on the limits, perhaps, on the regulation that needs to be there in time if we do not want to find ourselves facing something quite abominable? Thank you.


Moritz Taylor: Thank you, Jan. Can we do this as a quickfire round? Do you want to give your quick answers, maybe each one after the other? You don’t want to. So, I’ll start with Damian, and we’ll go down the line.


Damian Eke: Okay. So, you’re right. AI complicates the ecosystem. With the convergence of neurotechnology and AI, and the predictive inferential power of neural data, when combined with AI and big data, it’s uniquely dangerous, which enables maybe preemptive profiling, maybe neuromarketing at an advanced level, and also cognitive surveillance. This is a problem that needs to be addressed also, because the question is, does it then warrant a special category of data, of convergence of data sets? Now, it’s not just neural data, but then it’s combined with other data sets, biomedical data, combined with AI. It is a problem. Thank you very much. So, we have a challenge that needs to be addressed, but just as my colleague here pointed out earlier, there are provisions in the law to address some of these things, but it’s just that the ecosystem of regulations are a bit diverse. There’s the AI Act, there’s the GDPR, and there are other data regulations in the EU. It’s a case of trying to harmonize these provisions to address a specific problem of convergence of neural data and AI.


Petra Zandonella: So maybe I can just add a sentence to this. It’s not about the technology, it should always be about the human being. So how can we make sure that the human being is still in the focus of the regulation? So it’s not that easy to regulate each technology. So we should have a broader approach to this. So of course there is an interference with other technologies, I guess that’s normal and that’s already existing, and as you mentioned there are already really good legal frameworks on that, but when we come back to the Convention on Human Rights, there is really a broad understanding and it’s really a good reflection in the human rights and also in the fundamental rights when you go on the European perspective.


Moritz Taylor: Thank you, Petra. It’s okay, only if you have a short on time, so if you have, yeah.


Ana Brian Nougrères: Okay, I believe that there are difficult topics like the one you brought now, and I believe also that there is one moment in which one can feel there is an important risk, and that the risk might manipulate society. And then I think, well, what shall we do? Because we are looking at the process, we are looking at our people, how they might be manipulated. We have concrete examples of people who received a little bit of money to have a picture of their eye. and then all that is a whole mess. But well, it’s not to come to examples, the moment is not this one. But I think that when we are seeing that problem, it is because the problem has advanced in our society. And that is a moment in which we need to do something. We need to do awareness first of all. But regulation, I won’t discard it. I think that regulation is important. It gives a before and an afterwards. But before regulation to come, we need to have a real conversation, multi-stakeholder conversation on these topics in which we can have the opinions of the different professions that are involved in these topics. So I think there is a point in neurotechnologies, there’s a point that is of our concern and that something has to be done. Perhaps it’s not the moment for a regulation, but we need to have those strong conversations. We need to attend to see how the social movements are feeling the impact of all this. And maybe the regulation appears, maybe it doesn’t. But well, we have to be open to it, I think that. I think that we as lawyers or professionals of the law, we all see that when technical advancements appear, then the law is always back, back, back. And when we decide to act, then the moment passed. So I think that that is an important thing that we have to take into consideration also in this moment. Thank you.


Moritz Taylor: We have to go through some statements and we’re running out of time. So thank you, Jan, for the insightful question. I think that already caused some more neurons to fire. Next on the list of prepared statements was Redon Pilinci from Albania. Are they in the room or are they online? Next, Torsten Krause from the Stiftung Digitale Chancen, number 61. Give me the floor, it’s on. Number 62, yeah, okay.


Panelist: Thanks, hello. Thanks for your interesting presentations and statements. I'm Torsten Krause, I'm working as a political scientist and child rights researcher at the Digital Opportunities Foundation based in Berlin, Germany. And I would like to briefly introduce you to a legal concept implemented in the Youth Protection Act in Germany with the second amendment in 2021. It was, it is personal integrity. And it added a kind of third layer to the previously existing concepts of integrity. And you know, the first layer is the physical. So it's not allowed to beat someone because it harms the physical integrity of a person. And the second layer means that it's also not allowed to harm someone by words or bully someone, because of the mental layer. And the legislator implemented the third layer. And it means also to protect the data in the digital environment, because through the data we're presenting ourselves. So if someone is violating my data in the digital environment, he is violating me. So that's the concept of personal integrity. And it was implemented four years ago. So it was in mind to regulate, well, time is running really fast. It deals with existing, collected data and then prohibits using this data to influence you in a particular direction. When we think about neurotechnologies, that takes it to another level, because it's not existing data, it's data that, when it arises, when I'm maybe not recognizing yet that I have this thought or this feeling, in this moment I can be manipulated. And so I think, I'm not sure if we need to have a special category, but I think we need a kind of guarantee of a really broad understanding to protect the data and, yeah, the personality of us as human beings. I hope that was helpful. Thanks.


Moritz Taylor: Thank you. Okay. Thank you, Torsten, for this great thing. I think a broad understanding of protection is clearly one of the things that is coming up, whether it is broadly understood specifically in the national legislation, it seems, or on a wider international context is also one of the questions that comes up quite often is how can national legislators interpret international rules. The next speaker prepared statement is from Amira Saber from the. I will swiftly move on then to Sana Bhatia from VIPS-TC, Kuram Shuktai from Enox Centre of Innovation, Transformation and Intelligence, Souheila Soulkia, and last on the list before I can open the floor is Karin Kaunas from DigiHumanism, Centre for AI and Digital Humanism. Well, Joao mentioned that someone would like to participate and ask a question from online, so I’d like to give them the floor please.


Online moderator: Yes indeed, so it was both a question during the keynote presentation and then some comments added to the panel. I will be the one reading the points raised from Siva Supramanian Muthusamy, and I’m sorry if I pronounced it incorrectly. The question was, will these frameworks and safeguards or even regulation work adequately well to sufficiently prevent negative aspects such as cognitive control and behavioral intervention? These positive aspects of neurotechnologies are not summarily opposed in this question. And then to the panelists, it was a remark that in the future, once neural electronics are in place, it merely requires the access or technical expertise to get into the network, and as easy as sending ones and zeros to someone’s brain or into that of a group of people to alter their behavior or even to trigger them, in theory, at least. Yeah, there was some buzzing, so we didn’t really hear a question as such. Let’s give it one go. Yeah, perfect. So I will straight go into the point again, and I’ll repeat, will these frameworks and safeguards or even regulation work adequately well to sufficiently prevent negative aspects such as cognitive control and behavioral intervention? And the follow-up remark was, in the future, once neural electronics are in place, it merely requires the access or technical expertise to get into the network, and as easy as sending ones or zeros to someone’s brain or into that of a group to alter their behavior or even to trigger them, in theory, at least.


Moritz Taylor: Thank you for your contribution from online. Before we answer questions, I was thinking that we can open the floor and collect one or two, so that we’re not constantly going back and forth. Were there any other people who wanted to ask a question? At this point, number 007, James Bond, please.


Panelist: Hello, I am George from HRI, Civil Society. I would like to ask, as brain data becomes increasingly valuable for governments and tech companies, how do we avoid a future where the right to mental privacy is sacrificed for profit or control? And on the other hand, if someone's thoughts can be decoded and stored, where do we draw the line between consent and surveillance in a neural age? Thank you.


Moritz Taylor: Actually, that sounds like a very exciting question I want to hear answers to immediately. It highlights some very real risks.


Damian Eke: Yeah, maybe I will go first, but I will also try to address the first question from online, which was, are these legal frameworks adequate to address some of the risks and issues that neurotechnology raises? And this is maybe a question against a rights-based approach to governance. Is it always the best approach to governing neurotechnology or any technology at all, including AI? There are so many strategic paradigms of governance of technology. One is a rights-based approach. Another is a value-based approach. Because technologies are value-laden. They're not neutral. But the question is, whose values are embedded in these technologies? Looking at the value-based approach in some regions might be the best way to govern these technologies rather than the rights-based approach, because of the diversity of interpretations or implementations of human rights. So when we look at the values that should inform the technology, if they are adequately embedded in the systems, then they can address the issues. One reason why I'm pointing this out is oftentimes when we have these discussions in Europe or the global North, we forget that some of the values that shape technologies are not understood the same way in all regions. They are not interpreted the same way in all regions. Whether it is privacy, in Europe we'll think about individual privacy. Maybe in some communities in Africa we'll think about collective privacy. But the GDPR is informed by individual privacy, the concept of individual privacy. So understanding governance of these technologies from the value-based approach is sometimes something that we need to consider in order to address some of the culturally aligned issues that these technologies raise.


Moritz Taylor: Thank you, Damien. I think sitting in Europe, and it’s a regional and European approach generally in preparation for the IGF, etc., I think there’s always a danger of forgetting that the global majority is not Europe and that the approaches are indeed very, very different, not even that far away from Europe. If we want to have global standards, then we need to take other people’s standards into account as well. So definitely a good first answer. Petra, if you wanted to add something.


Petra Zandonella: Thank you also for pointing out that we are speaking from the Global North and from Europe. But I will come back to the questions first. Do the existing frameworks address the legal challenges? I would say yes and no. As mentioned before, there is a real need for discussion and debate, such as we have the opportunity for here, or as is happening at UNESCO or at the UN, and we really need an interdisciplinary exchange. Coming back to your question, you spoke about brain data. In our interdisciplinary group we discussed at length what this data should be called, because our neuropsychologists say the best way to cover most mental and cognitive states is to call the data neurodata and then to talk about mental states, because cognitive states are part of mental states but not every mental state is cognitive. So I am not the right person to settle what the name should be; that should be an interdisciplinary question. Then you pointed out consent, and that is really a big issue. It is already an issue when it comes to health data, because if you need something, it is quite obvious that you will say, yes, go for it, because I need it. With neurodata it is a similar situation. Our neuropsychologists also told us that when something is about the brain, we tend to believe everything. So if there is neuromarketing or neuroenchantment, we tend to believe the promises, and therefore there is no real consent, because if we do not know about the limitations, the consent is not informed. Informed consent is about the opportunities and the limitations as well, and if we do not know the limitations, that is a big issue. And on the surveillance part: yes, of course, absolutely. We are just about to finish a project on neurotechnologies in dementia care. Of course that is in the healthcare sector and not the commercial sector, and already there surveillance is, and will be, a big issue when it comes to neurodata. In the commercial sphere it is even worse, because there is no actual need for neurotechnologies. Maybe I can also add something on European law, may I?


Moritz Taylor: Sure, but quickly.


Petra Zandonella: Yes. Take, for example, the Medical Device Regulation; I don’t know how many of you are familiar with it. Neurotechnologies with a medical purpose of course fall within that regulation. But the Medical Device Regulation already acknowledges that neurotechnologies are a bit special, I would say, because in its Annex XVI, point 6, it also covers neurotechnologies, though only a specific category of them: non-invasive brain stimulation devices are also within its scope. That is an example of regulators being somewhat aware of the specificity of neurotechnologies. So thank you.


Moritz Taylor: Thank you very much. I’ll just move on because we have to collect other questions. Number 118, you have the floor, and I’ll also collect another question from 195 afterwards. Thank you.


Panelist: Thank you. Lars Lundberger from the World Federalist Movement; I speak in my personal capacity. Three quick thoughts. Thank you for the insights and especially the definition of neurotechnologies. Your focus on brain activity, I think, might be a bit too narrow. If you recorded my finger muscles, you would see that I am a bit nervous. If you recorded my skin conductance, you would see that I am sweating a bit. And if you looked at the pupils of my eyes, you would probably also see that I am a bit nervous. So brain activity in the narrow neural sense, meaning intracellular and extracellular recordings, will be insufficient to capture the mental aspect. A large part of the discussion was about privacy, that is, protecting data from being read out. There was a remark on altering brain activity, which would be manipulation; I think that should be emphasized a bit more, and there is also the boundary with more traditional or conventional technologies such as visual or acoustic stimuli. Overall, I very much liked the discussion about whether this is a new human right or a new challenge to existing human rights, and I think that discussion has to be continued in a multi-stakeholder approach that includes practitioners and policymakers: engineers, neuroscientists and lawyers. Thank you.


Moritz Taylor: Thank you very much. 195, please.


Panelist: Thank you for giving me the floor, and thank you for a very interesting, and worrisome, debate. I’m Kristin from the University of Oslo, and my question relates to the human rights lens you have analyzed this through, namely privacy, where there are obviously major issues. My question is whether any of you, in your work on this, have also encountered discussions of other rights, such as freedom of expression, mainly the right to freely form opinions, and freedom of thought, which is an absolute right. So my question is whether you have seen any such discussions, especially given what was presented earlier: that these technologies can also influence the mind and not only extract data from it.


Moritz Taylor: And just before I let you answer, I’ll also take the floor from number 100, please. You just press the round button next to the microphone.


Panelist: Am I audible? Yes? Hi, I’m Ankita, a lawyer from India. Many of you mentioned that it is crucial to recognize the complex ethical and societal implications of neurotechnology and the processing of personal neurodata. But I would also like to hear the speakers’ and panelists’ thoughts on accountability mechanisms across the entire lifecycle of neurodata processing, from collection to storage to analysis. Do you think any particular stage should bear greater accountability than others, and if yes, why? Thank you.


Moritz Taylor: All right. Are you ready to answer already, Damian?


Damian Eke: I’ll try to answer the first one. Okay. So the discussion on neurorights, and on whether we need a special category of data called neurodata, also involves discussions about freedom of thought and freedom of expression, because manipulation of neurodata can surely breach those rights as well.


Petra Zandonella: I totally agree with you. We also wrote a study for STOA, and we partially addressed these rights there, although the focus was of course on privacy. And thank you over there for pointing out that we have had a rather limited understanding of neurotechnologies in the discussion so far, because a broader understanding is really needed; I totally agree with you. UNESCO also takes a really broad view of these technologies, and there is at the moment an ongoing debate in Paris on a recommendation on the ethics of neurotechnology; we will see the outcomes soon, I guess. Fingers crossed that this broad understanding stays in.


Moritz Taylor: Do you want to add anything?


Damian Eke: I wanted to add something on the definition of neurodata, or brain data. It is actually a difficult concept to pin down, and I don’t think the debate on the definition of neurodata will end any time soon, because everything can become neurodata when you combine it with other sets of data. So delimiting what we refer to as neurodata is important to governing it, and it is a critical discussion that we need to keep having.


Petra Zandonella: Although if you adopt a legal definition of neurodata, it can also narrow what counts as neurodata. That is another reason why we need an interdisciplinary approach, including the ethics side. And you mentioned before that values should also be considered, and I totally agree. So maybe you can go further into that, if I may ask Damian a question.


Moritz Taylor: Sure. Please do, if you wanted to add something. Okay, great. Answer, please.


Damian Eke: In terms of values, as I mentioned earlier, all technologies are value-laden. If it is not the values of the designers or developers, it will be the values of the deployers or the users that are embedded in these systems. But whose values are embedded in the neurotechnologies used worldwide? These technologies are being developed mainly in the Global North, and the values embedded in them are those of the Global North. An example is EEG devices: I don’t think usability for African populations was considered as a value when they were being developed. So that is an important point. We may develop these technologies, but they are not generalizable to all populations of the world. And the fact that some people’s values are neglected, or relegated to the background, in the development of these technologies raises questions about the coloniality embedded in these systems. I always point out that among the principles of trustworthiness for AI and other technologies, what is missing is decoloniality as a requirement for trustworthiness. Because if these technologies are seen by the Global South as tools of epistemic dominance, that can lead to rejection or non-acceptance of these technologies. So we need to consider all these values.


Moritz Taylor: Thank you.


Ana Brian Nougrères: Okay, so I want to say thank you to everybody who made comments; they are all very, very welcome. Thank you so much. Just a few additional remarks. I totally agree with 118’s comment, and I feel this is a good moment to begin that discussion. 061’s point was very interesting. Yours was very much to the point on the risks, very interesting too. And then there was 195, which I would like to say something specific about; 094’s point was also very interesting, so thank you. Well, you asked whether another important issue has been brought to the discussion in terms similar to neurotechnology, and I would say no, at least not that I know of. But I would say that when we came to terms with artificial intelligence and saw how it changed the world, we realized that we could have done something earlier; at the very least, we could have considered the possibility of bringing in ethics wherever it was possible. We did not do it. So when a topic as important as neurotechnology comes to the table, I personally think we should not wait. We should study the point, consider possibilities, and try to come to terms in a multi-stakeholder way. That is what I think. And as people who work with the law, I feel we have to make the effort not to let the law come last. The law has to try to stay up to date, and we have to try to be in the right place at the right moment. That is what I think. Artificial intelligence changed the world; it continues to change it, and we don’t know when that will finish or what will happen. So if you add neurotechnologies on top of that, you may arrive at the absolute-risk scenario our colleague described. So it is important that people are aware; awareness is very important. And a way of building awareness is to bring these topics onto our agendas, not only onto our own agendas, but onto the general agenda of everyone involved.


Moritz Taylor: Please keep the microphone close to your mouth.


Ana Brian Nougrères: Oh, sorry.


Moritz Taylor: I think for the participants online it is difficult to understand. OK, it was just the end. I still have a statement or question from online, which will go first, and then also 463. My astigmatism is failing me; I think it’s 463. OK, I invite Samir Gallo, if he still wants to raise his question, to unmute. His mic may not be working.


Online moderator: I had a comment from him that, since it is at the end, he is more than welcome to connect virtually. So perhaps he is already doing that.


Moritz Taylor: OK, well, then let’s give the floor to 463 first, and maybe he will come back in the meantime.


Panelist: Thank you. Thank you very much. And I want to thank the keynote speaker; it was a great intervention. Another frontier of knowledge, not so recent but very important, is the mapping of the human genome. I want to know whether data from DNA can be, or is, classified as neurodata and could be protected under the same kind of laws or not. Thank you.


Ana Brian Nougrères: You know what I think? I think that we lawmakers all use lots of definitions; we need definitions to produce a law. But I think we are not yet at the point of deciding which definition is the correct one and which is not. We first need the multi-stakeholder study. In my personal opinion, I would say maybe; I don’t rule it out. But I think that before such classifications, we need to have discussions with everybody.


Moritz Taylor: Just before you answer, did the online participant come back? Then maybe just a very quick one, because we have to move on.


Damian Eke: Technically, no. I can imagine what would happen in the neuroscience research community if we introduced the idea that genetic data is now also neural data; they would be up in arms. But I think what we are saying is that the special recognition genetic data has in regulation is something neural data also needs, because it is still very confusing for a lot of people. If you have been involved in big projects, you will know this: I was the data governance coordinator for the EU Human Brain Project, a big EU project with over 500 neuroscientists, and when you introduce the idea of protecting neural data as special category data, they resist it. For them, this is not genetic data, this is just research data; it is difficult for them to see it otherwise. But when we have specific wording in the regulation…


Ana Brian Nougrères: It is very difficult to classify it if we have not yet settled on a definition. That is why, in laws that deal with technological matters, you will often see that some sort of thesaurus of terms has to be introduced. So it is difficult, it is difficult.


Damian Eke: I completely agree with you, and there is so much happening in this space at the moment: UNESCO with its guidance, the WHO, the OECD, and now the United Nations, which is also assembling an interdisciplinary team to discuss this. There will actually be a three-day workshop on this in Berlin, organized by the United Nations, and I will be part of it. These different initiatives are offering different definitions, which might become a bit of a problem; there needs to be harmonization.


Petra Zandonella: Yes, absolutely. If there is no harmonization, it will be a struggle. That is also why we need these discussions beforehand and not afterwards, because afterwards it is too late.


Moritz Taylor: Okay. Well, thank you so much to the panelists and our keynote speaker for your contributions and participation, and thank you to the audience for their very active participation. Before we finish the session at half past, we will move on to the messages of this session. Minda looks less than delighted with the messages she has put together, but we will get some preliminary messages and then move on. I think everyone can take their seats again.


Ana Brian Nougrères: There is a short version of my speech here if you are interested, okay?


Moritz Taylor: Thank you. And please don’t leave, we’ll take a photo afterwards. Later though.


Damian Eke

Speech speed: 122 words per minute
Speech length: 1612 words
Speech time: 790 seconds

Existing frameworks inadequate for neurotechnology challenges

Explanation

Damian Eke argues that current legal and ethical frameworks are not sufficient to address the issues raised by neurotechnology. He suggests that existing regulations do not explicitly cover neural data or the unique challenges posed by brain-computer interfaces and other neurotechnologies.


Evidence

Eke mentions his experience as a data governance coordinator for the EU Human Brain Project, where he encountered resistance from neuroscientists to treating neural data as a special category.


Major discussion point

Regulation of neurotechnology and neurodata


Agreed with

– Ana Brian Nougrères
– Petra Zandonella

Agreed on

Need for special protection of neurodata


Disagreed with

– Petra Zandonella

Disagreed on

Need for new legal rights or frameworks


Need for international harmonization of neurodata regulations

Explanation

Eke emphasizes the importance of harmonizing definitions and regulations for neurodata across different international initiatives. He points out that various organizations are offering different definitions, which could lead to problems in governance.


Evidence

Eke mentions ongoing initiatives by UNESCO, WHO, OECD, and the United Nations to discuss and define neurodata and related concepts.


Major discussion point

Regulation of neurotechnology and neurodata


Consider value-based approach to governance, not just rights-based

Explanation

Eke suggests that a value-based approach to governing neurotechnology might be more effective than a purely rights-based approach. He argues that technologies are value-laden and that the values embedded in them should be considered in governance frameworks.


Evidence

Eke points out that technologies developed in the Global North may not reflect the values or usability needs of populations in other parts of the world, such as Africa.


Major discussion point

Global perspectives on neurotechnology governance


Agreed with

– Ana Brian Nougrères
– Petra Zandonella

Agreed on

Need for interdisciplinary and multi-stakeholder approach


Disagreed with

– Ana Brian Nougrères

Disagreed on

Approach to neurotechnology governance


Petra Zandonella

Speech speed: 130 words per minute
Speech length: 1670 words
Speech time: 765 seconds

No need for new “right to mental privacy” – existing privacy rights sufficient

Explanation

Zandonella argues against creating a new specific right to mental privacy. She suggests that the existing right to privacy, as enshrined in human rights conventions, is already broad enough to cover mental privacy concerns.


Evidence

Zandonella cites the KlimaSeniorinnen case against Switzerland, in which the European Court of Human Rights interpreted the right to private life broadly enough to cover climate change impacts.


Major discussion point

Regulation of neurotechnology and neurodata


Disagreed with

– Damian Eke

Disagreed on

Need for new legal rights or frameworks


Importance of informed consent for neurodata collection

Explanation

Zandonella emphasizes the critical nature of informed consent in the context of neurodata collection. She points out that the complexity of neurotechnology and its potential impacts make it challenging to ensure truly informed consent.


Evidence

Zandonella mentions that people tend to believe promises related to brain technology, which can compromise their ability to give informed consent.


Major discussion point

Ethical implications of neurotechnology


Agreed with

– Damian Eke
– Ana Brian Nougrères

Agreed on

Need for special protection of neurodata


Need interdisciplinary approach to defining neurodata

Explanation

Zandonella advocates for an interdisciplinary approach to defining neurodata. She suggests that input from various fields, including neuroscience, psychology, law, and ethics, is necessary to develop a comprehensive understanding of neurodata.


Major discussion point

Defining and categorizing neurodata


Agreed with

– Damian Eke
– Ana Brian Nougrères

Agreed on

Need for interdisciplinary and multi-stakeholder approach


Ana Brian Nougrères

Speech speed: 115 words per minute
Speech length: 3727 words
Speech time: 1941 seconds

Neurotechnology poses risks to mental autonomy and cognitive liberty

Explanation

Nougrères highlights the potential dangers of neurotechnology to individual mental autonomy and cognitive freedom. She argues that these technologies could allow unprecedented access to and manipulation of people’s thoughts and mental states.


Major discussion point

Ethical implications of neurotechnology


Neurodata should be recognized as special category of sensitive data

Explanation

Nougrères argues for the classification of neurodata as a special category of sensitive personal data. She emphasizes that neurodata is fundamentally different from other types of personal data due to its intimate and revealing nature.


Evidence

Nougrères mentions that neurodata can reveal emotions, fears, desires, and intentions, sometimes before the individual is consciously aware of them.


Major discussion point

Regulation of neurotechnology and neurodata


Agreed with

– Damian Eke
– Petra Zandonella

Agreed on

Need for special protection of neurodata


Call for multi-stakeholder, international discussions

Explanation

Nougrères emphasizes the need for broad, multi-stakeholder discussions on neurotechnology governance at an international level. She argues that these conversations should happen proactively, before the technology becomes widely implemented.


Evidence

Nougrères draws a parallel with artificial intelligence, suggesting that earlier ethical considerations could have been beneficial in AI development.


Major discussion point

Global perspectives on neurotechnology governance


Agreed with

– Damian Eke
– Petra Zandonella

Agreed on

Need for interdisciplinary and multi-stakeholder approach


Disagreed with

– Damian Eke

Disagreed on

Approach to neurotechnology governance


Doreen Bogdan-Martin

Speech speed: 124 words per minute
Speech length: 304 words
Speech time: 146 seconds

Need to balance innovation with human rights protections

Explanation

Bogdan-Martin emphasizes the importance of balancing technological innovation with the protection of human rights. She argues that as digital technologies evolve rapidly, human rights must remain a constant guidepost for actions across technological frontiers.


Evidence

Bogdan-Martin mentions powerful technologies like AI and quantum computing as examples of innovations that need to be balanced with human rights considerations.


Major discussion point

Ethical implications of neurotechnology


Panelist

Speech speed: 137 words per minute
Speech length: 908 words
Speech time: 395 seconds

Concerns about manipulation and surveillance of thoughts

Explanation

A panelist raises concerns about the potential for neurotechnology to be used for manipulation and surveillance of thoughts. They suggest that once neural interfaces are in place, it could become relatively easy to alter behavior or trigger responses in individuals or groups.


Major discussion point

Ethical implications of neurotechnology


Consider broader impacts beyond just privacy, like freedom of thought

Explanation

A panelist suggests that the discussion on neurotechnology should extend beyond privacy concerns to include other fundamental rights. They specifically mention the right to freely form opinions and the freedom of thought as areas that could be impacted by neurotechnology.


Major discussion point

Ethical implications of neurotechnology


Consider broader definition beyond just brain activity

Explanation

A panelist argues for a broader definition of neurotechnology and neurodata that goes beyond just brain activity. They suggest that other physiological indicators, such as muscle activity or skin conductance, can also provide insights into mental states.


Evidence

The panelist gives examples of finger muscle activity, skin conductance, and pupil dilation as indicators of nervousness.


Major discussion point

Defining and categorizing neurodata


Question of whether genetic data should be included as neurodata

Explanation

A panelist raises the question of whether genetic data, particularly from DNA mapping, should be classified as neurodata. They ask if such data could or should be protected under the same laws as other forms of neurodata.


Major discussion point

Defining and categorizing neurodata


Jan Kleijssen

Speech speed: 135 words per minute
Speech length: 138 words
Speech time: 61 seconds

Concerns about AI systems and sentient computing

Explanation

Kleijssen raises concerns about the implications of AI systems and sentient computing in relation to neural rights. He questions the need for research guidelines and regulatory limits to prevent potentially harmful outcomes.


Major discussion point

Ethical implications of neurotechnology


Online moderator

Speech speed: 130 words per minute
Speech length: 341 words
Speech time: 156 seconds

Concerns about cognitive control and behavioral intervention

Explanation

The online moderator relays a question about whether existing frameworks and safeguards can adequately prevent negative aspects of neurotechnology such as cognitive control and behavioral intervention. They highlight the potential ease of altering behavior through neural electronics in the future.


Evidence

The example of sending ones and zeros to someone’s brain to alter their behavior is mentioned.


Major discussion point

Ethical implications of neurotechnology


Moritz Taylor

Speech speed: 142 words per minute
Speech length: 1623 words
Speech time: 683 seconds

Neurotechnology challenges privacy assumptions

Explanation

Taylor introduces the session by highlighting how neurotechnology is challenging long-held assumptions about the privacy of thoughts. He emphasizes that this is a provocative frontier of digital governance with significant legal and ethical implications.


Evidence

Taylor mentions brain-computer interfaces and mood-tracking devices as examples of evolving neurotechnology.


Major discussion point

Regulation of neurotechnology and neurodata


Agreements

Agreement points

Need for special protection of neurodata

Speakers

– Damian Eke
– Ana Brian Nougrères
– Petra Zandonella

Arguments

Existing frameworks inadequate for neurotechnology challenges


Neurodata should be recognized as special category of sensitive data


Importance of informed consent for neurodata collection


Summary

All speakers agree that neurodata requires special protection and consideration in legal and ethical frameworks, beyond current regulations.


Need for interdisciplinary and multi-stakeholder approach

Speakers

– Damian Eke
– Ana Brian Nougrères
– Petra Zandonella

Arguments

Need interdisciplinary approach to defining neurodata


Call for multi-stakeholder, international discussions


Consider value-based approach to governance, not just rights-based


Summary

Speakers agree on the importance of involving diverse stakeholders and disciplines in discussions and decision-making around neurotechnology governance.


Similar viewpoints

Both express concerns about the potential for neurotechnology to be used for manipulation and surveillance of thoughts, posing risks to mental autonomy.

Speakers

– Ana Brian Nougrères
– Panelist

Arguments

Neurotechnology poses risks to mental autonomy and cognitive liberty


Concerns about manipulation and surveillance of thoughts


Both emphasize the need for international cooperation and harmonization in addressing neurotechnology governance challenges.

Speakers

– Damian Eke
– Ana Brian Nougrères

Arguments

Need for international harmonization of neurodata regulations


Call for multi-stakeholder, international discussions


Unexpected consensus

Broad definition of neurotechnology and neurodata

Speakers

– Panelist
– Petra Zandonella

Arguments

Consider broader definition beyond just brain activity


Need interdisciplinary approach to defining neurodata


Explanation

There was unexpected agreement on the need for a broader, more inclusive definition of neurotechnology and neurodata, extending beyond just brain activity to include other physiological indicators.


Overall assessment

Summary

Main areas of agreement include the need for special protection of neurodata, interdisciplinary and multi-stakeholder approaches to governance, and concerns about potential misuse of neurotechnology.


Consensus level

Moderate consensus on core issues, with some divergence on specific approaches. This suggests a shared recognition of the challenges posed by neurotechnology, but ongoing debate on how best to address them.


Differences

Different viewpoints

Need for new legal rights or frameworks

Speakers

– Damian Eke
– Petra Zandonella

Arguments

Existing frameworks inadequate for neurotechnology challenges


No need for new “right to mental privacy” – existing privacy rights sufficient


Summary

Eke argues that current frameworks are inadequate for neurotechnology, while Zandonella believes existing privacy rights are sufficient and no new right to mental privacy is needed.


Approach to neurotechnology governance

Speakers

– Damian Eke
– Ana Brian Nougrères

Arguments

Consider value-based approach to governance, not just rights-based


Call for multi-stakeholder, international discussions


Summary

Eke suggests a value-based approach to governance, while Nougrères emphasizes the need for multi-stakeholder, international discussions.


Unexpected differences

Definition and scope of neurodata

Speakers

– Damian Eke
– Panelist

Arguments

Need for international harmonization of neurodata regulations


Consider broader definition beyond just brain activity


Explanation

While most discussions focused on brain activity, a panelist unexpectedly argued for including other physiological indicators in the definition of neurodata, which could significantly expand the scope of regulation.


Overall assessment

Summary

The main areas of disagreement revolve around the adequacy of existing legal frameworks, the approach to neurotechnology governance, and the definition and scope of neurodata.


Disagreement level

The level of disagreement is moderate. While there is general consensus on the importance of addressing neurotechnology challenges, speakers differ on specific approaches and definitions. These disagreements have significant implications for how neurotechnology and neurodata will be regulated and protected in the future.



Takeaways

Key takeaways

Existing legal frameworks are inadequate to fully address the challenges posed by neurotechnology and neurodata


There is debate over whether a new ‘right to mental privacy’ is needed or if existing privacy rights are sufficient


Neurodata should be recognized as a special category of sensitive data requiring strong protections


International harmonization of neurodata regulations is needed


A value-based approach to governance, not just rights-based, should be considered


Neurotechnology poses risks to mental autonomy and cognitive liberty that must be addressed


There needs to be a balance between innovation and human rights protections in neurotechnology development


Informed consent for neurodata collection is crucial but challenging


The definition and categorization of neurodata requires further interdisciplinary discussion


Global perspectives and diverse cultural values must be considered in neurotechnology governance


Resolutions and action items

Continue multi-stakeholder, international discussions on neurotechnology governance


Work towards harmonization of definitions and regulations for neurodata across different initiatives


Conduct further interdisciplinary research to define neurodata and its scope


Unresolved issues

Precise legal definition of neurodata


Whether genetic data should be included as neurodata


How to ensure truly informed consent for neurodata collection


How to balance innovation with adequate protections


How to address potential manipulation and surveillance of thoughts


How to apply European-centric approaches globally


Suggested compromises

Use existing privacy rights frameworks but recognize neurodata as a special category requiring enhanced protections


Combine rights-based and value-based approaches to neurotechnology governance


Develop broad, flexible definitions of neurotechnology and neurodata to allow for future developments


Thought provoking comments

Neurodata constitute windows into the cognitive, emotional, and psychological fabric of the human being. And, like no other data, neurodata can provide deep insights into a person’s mood, personality, memory patterns, decision-making, and even unconscious mental states.

Speaker

Ana Brian Nougrères


Reason

This comment highlights the unique and sensitive nature of neurodata, distinguishing it from other types of personal data.


Impact

It set the tone for the discussion by emphasizing the high stakes and ethical concerns around neurotechnology and neurodata. This led to further exploration of regulatory needs and privacy protections.


Existing regulations or ethical and legal frameworks are not addressing the issues of neural data and neurotechnology as they should be. Most of the recognition of neural data in the GDPR and also the Convention 108 Plus is implied rather than explicit.

Speaker

Damian Eke


Reason

This comment identifies a key gap in current regulatory frameworks regarding neurotechnology.


Impact

It sparked discussion about whether new legal categories or rights are needed for neurodata, or if existing frameworks can be adapted.


We do not recommend to add a new right to mental privacy. So, there is an ongoing debate if the right to mental privacy is needed or if it will overburden our existing and well-established legal system and also framework.

Speaker

Petra Zandonella


Reason

This comment challenges the idea that new rights are needed, offering a counterpoint to calls for special neurodata protections.


Impact

It shifted the discussion towards how existing rights and legal frameworks could be interpreted or adapted to address neurotechnology concerns.


In some regions, a value-based approach might be a better way to govern these technologies than a rights-based approach, because of the diversity of interpretations or implementations of human rights.

Speaker

Damian Eke


Reason

This comment introduces a new perspective on governance approaches, highlighting cultural differences in understanding privacy and rights.


Impact

It broadened the discussion beyond a purely Western/European legal framework, considering global implications and diverse cultural values.


Your focus on brain activity, I think, might be a bit too narrow. If you recorded my finger muscles, you would see that I am a bit nervous. If you recorded my skin conductance, you would see that I am sweating a bit.

Speaker

Lars Lundberger


Reason

This comment challenges the narrow definition of neurotechnology focused solely on brain activity.


Impact

It prompted reconsideration of how to define neurotechnology and neurodata, expanding the scope of the discussion.


Overall assessment

These key comments shaped the discussion by highlighting the unique challenges posed by neurotechnology, questioning the adequacy of existing legal frameworks, debating the need for new rights versus adapting existing ones, considering diverse cultural perspectives on governance, and expanding the definition of neurotechnology beyond just brain activity. The discussion evolved from initially focusing on the need for special protections for neurodata to a more nuanced exploration of how to govern these technologies within existing systems while accounting for global and cultural differences.


Follow-up questions

How can we ensure that the development and deployment of neurotechnologies respect human rights, protect mental privacy, and empower individuals rather than expose them to surveillance, discrimination, or exploitation?

Speaker

Ana Brian Nougrères


Explanation

This question addresses the core ethical and legal challenges posed by neurotechnologies and is crucial for developing appropriate regulatory frameworks.


How should we treat the data generated by the human brain and nervous system in legal and ethical terms?

Speaker

Ana Brian Nougrères


Explanation

This question is fundamental to establishing appropriate protections and regulations for neurodata.


How can we address the issue of ethics dumping in neurotechnology research and development?

Speaker

Damian Eke


Explanation

This question highlights the need to consider global ethical implications and prevent exploitation of vulnerable populations in neurotechnology research.


Should neural data be classified as a special category of data, similar to genetic and biometric data in existing regulations?

Speaker

Damian Eke and Petra Zandonella


Explanation

This question is important for determining the appropriate level of protection for neural data in legal frameworks.


How can we ensure genuine informed consent in the context of neurotechnologies, especially given the tendency to believe promises related to brain-related technologies?

Speaker

Petra Zandonella


Explanation

This question addresses a critical ethical concern in the deployment of neurotechnologies.


How do we balance the potential benefits of neurotechnologies with the risks of cognitive control and behavioral intervention?

Speaker

Online participant (Siva Supramanian Muthusamy)


Explanation

This question highlights the need to consider both positive and negative aspects of neurotechnologies in regulatory frameworks.


How do we avoid a future where the right to mental privacy is sacrificed for profit or control?

Speaker

George from HRI, Civil Society


Explanation

This question addresses a key concern about the potential misuse of neurotechnologies by governments and tech companies.


Where do we draw the line between consent and surveillance in a neural age?

Speaker

George from HRI, Civil Society


Explanation

This question highlights the complex ethical issues surrounding consent and privacy in the context of neurotechnologies.


How do neurotechnologies impact other human rights beyond privacy, such as freedom of expression and freedom of thought?

Speaker

Kristin from the University of Oslo


Explanation

This question broadens the discussion to consider the wider human rights implications of neurotechnologies.


What accountability mechanisms should be in place across the entire lifecycle of neurodata processing, and should any particular stage bear greater accountability?

Speaker

Ankita from India


Explanation

This question addresses the need for comprehensive governance and accountability in neurodata processing.


How can we ensure that the values embedded in neurotechnologies are not solely from the Global North, and how can we address issues of coloniality in these systems?

Speaker

Damian Eke


Explanation

This question highlights the importance of considering diverse global perspectives in the development and regulation of neurotechnologies.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.