Main Topic 2: Neurotechnology and privacy: Navigating human rights and regulatory challenges in the age of neural data

13 May 2025 09:00h - 10:30h


Session at a glance

Summary

This discussion focused on neurotechnology and mental privacy, examining how to regulate the mind in the digital age as a main session of the EuroDIG conference in Strasbourg. The session opened with a video message from ITU Secretary General Doreen Bogdan-Martin emphasizing the need to balance regulation and innovation as emerging technologies like AI and quantum computing evolve.


UN Special Rapporteur on the right to privacy Ana Brian Nougrères delivered the keynote address, arguing that neurotechnologies represent a new frontier for privacy rights since they can access, monitor, and potentially alter brain activity. She emphasized that neurodata should be classified as a special category of personal data requiring the highest level of protection, as it reveals not just what we do but who we are at the most fundamental level. Brian Nougrères advocated for principles including human dignity, mental privacy, informed consent, and the precautionary principle, while suggesting that Convention 108+ provides a foundation for international cooperation on these issues.


The panel discussion featured contrasting perspectives on whether new legal frameworks are needed. Assistant Professor Damian Eke argued that existing regulations like the GDPR don’t explicitly address neural data and warned about “ethics dumping”, where questionable research moves to regions with weaker oversight. He also emphasized that neurotechnology governance must consider diverse cultural values, particularly those from the Global South. Legal researcher Petra Zandonella countered that the existing privacy right under the European Convention on Human Rights is already broad enough and that creating a new “mental privacy” right could create legal uncertainty.


The discussion revealed ongoing debates about definitions, with participants struggling to clearly delineate what constitutes neurodata and whether it should include broader physiological indicators beyond direct brain activity. Questions from the audience addressed concerns about cognitive control, behavioral manipulation, and the intersection of neurotechnology with AI systems. The session concluded with calls for continued multi-stakeholder dialogue and international harmonization of approaches before these technologies become more widespread.


Key points

## Major Discussion Points:


– **Neurotechnology and Mental Privacy Regulation**: The session focused on whether current legal frameworks like GDPR and Convention 108+ are adequate to address the unique challenges posed by neurotechnologies that can decode, collect, or influence brain activity, with debate over whether new “mental privacy” rights are needed.


– **Classification and Protection of Neurodata**: Extensive discussion on whether neural/brain data should be recognized as a special category of sensitive personal data, similar to genetic or biometric data, given its ability to reveal thoughts, emotions, intentions, and even unconscious mental states.


– **Global vs. Regional Approaches to Governance**: Panelists highlighted the tension between European/Global North perspectives on individual privacy rights versus other cultural approaches (such as collective privacy concepts in some African communities), emphasizing the need for inclusive, multi-stakeholder governance that doesn’t impose Western values globally.


– **Immediate vs. Future Risks**: Discussion of whether current debates about “neural rights” distract from more pressing immediate concerns like ethics dumping, exploitation in resource extraction for neurotechnology manufacturing, safety issues, and bias in current applications.


– **Interdisciplinary Collaboration and Definitions**: Strong emphasis on the need for interdisciplinary dialogue between lawyers, neuroscientists, ethicists, and policymakers to properly define terms like “neurodata” and establish harmonized international standards before fragmented regulations create loopholes.


## Overall Purpose:


The discussion aimed to examine the legal and ethical implications of rapidly advancing neurotechnologies, assess whether existing privacy and data protection frameworks are sufficient, and explore what new regulatory approaches might be needed to protect mental privacy and human dignity in the digital age.


## Overall Tone:


The discussion maintained a serious, academic tone throughout, with speakers expressing both fascination with the technological possibilities and genuine concern about the risks. There was a sense of urgency about the need for proactive rather than reactive regulation, with multiple references to lessons learned from AI development where ethical considerations came too late. The tone was collaborative and respectful, even when panelists disagreed on approaches, and there was consistent emphasis on the need for inclusive, global dialogue rather than top-down regulatory solutions.


Speakers

**Speakers from the provided list:**


– **Ana Brian Nougrères** – UN Special Rapporteur on the right to privacy


– **Online moderator** – Joao


– **Petra Zandonella** – Pre-doctoral assistant at the University of Graz law faculty, part of interdisciplinary research group on law, ethics and neurotechnologies, focuses on protection of health data in the EU


– **Panelist** – Speaker label used in the transcript for audience members who took the floor (e.g., Torsten Krause, George)


– **Moritz Taylor** – Session moderator/chair


– **Jan Kleijssen** – (Role/title not specified in transcript)


– **Doreen Bogdan-Martin** – Secretary General of the International Telecommunication Union


– **Damian Eke** – Assistant Professor at the University of Nottingham, Chair of the International Brain Initiatives Data Sharing and Standards Group, founder of the African Brain Data Network and African Data Governance Initiative, PI at Wellcome Trust project on Responsible International Data Governance for Neuroscience


**Additional speakers:**


– **Torsten Krause** – Political scientist and child rights researcher at the Digital Opportunities Foundation based in Berlin, Germany


– **George** – From HRI, Civil Society


– **Lars Lundberger** – From the World Federalist Movement (speaking in personal capacity)


– **Kristin** – From the University of Oslo


– **Ankita** – Lawyer from India


– **Siva Supramanian Muthusamy** – Online participant (question read by moderator)


– **Samir Gallo** – Online participant (attempted to participate but had technical issues)


– Various other audience members identified only by numbers (007, 118, 195, 100, 463, etc.)


Full session report

# Neurotechnology and Mental Privacy: Regulating the Mind in the Digital Age


## Executive Summary


This session at the EuroDIG conference examined the challenges posed by neurotechnologies to fundamental human rights, particularly privacy. The discussion brought together experts in law, neuroscience, and digital rights to address whether current regulatory frameworks are adequate to govern technologies that can access and monitor human brain activity.


The session revealed significant disagreements about regulatory approaches. While speakers agreed that neurotechnologies pose unique risks, they diverged on whether to create new “mental privacy” rights or build upon existing frameworks, and on how to address cultural differences in privacy concepts globally.


## Opening Context


The session opened with a video message from Doreen Bogdan-Martin, Secretary General of the International Telecommunication Union, who emphasized that all voices are needed at the governance table to balance regulation and innovation while safeguarding human rights. She pointed to powerful technologies such as artificial intelligence and quantum computing as drivers of the next phase of the Internet’s evolution.


Session moderator Moritz Taylor framed the discussion around the tension between technological advancement and human rights protection, noting that neurotechnologies represent an intimate frontier as they directly interface with human consciousness.


## Keynote Address: The Case for Enhanced Neural Data Protection


UN Special Rapporteur on the right to privacy Ana Brian Nougrères delivered the keynote address, arguing that neurotechnologies open a new frontier for privacy because they are capable of accessing, monitoring, interpreting, or altering brain activity. She distinguished neural data from other personal data by emphasizing that while traditional data reveals what we do, neural data reveals thoughts, emotions, and unconscious mental states.


Brian Nougrères proposed treating neurodata as a special category of personal data requiring the highest level of protection, comparable to genetic and biometric data. However, she was cautious about immediate regulatory action, stating: “Perhaps it’s not the moment for a regulation, but we need to have those strong conversations”, and emphasized the need for multi-stakeholder discussions before deciding on definitions.


She suggested that Convention 108+ could provide a foundation for international cooperation, noting that neurodata do not respect national borders and that the Convention offers a critical foundation for global convergence.


## Panel Discussion: Contrasting Regulatory Approaches


### Building on Existing Frameworks


Legal researcher Petra Zandonella from the University of Graz presented a different perspective, arguing that existing privacy rights might be sufficient. She noted that her interdisciplinary group recommends against adding mental privacy as a new right, questioning: “What benefit will we gain by cutting off the right to mental privacy from the already broad existing right to privacy?”


However, Zandonella acknowledged gaps in current frameworks and suggested that adding neural data to the existing special categories in GDPR Article 9 would be feasible, as was done previously with biometric data. She also mentioned that the EU Medical Device Regulation (Annex XVI, point 6) already recognizes neurotechnologies as requiring special consideration.


### Global Justice and Cultural Perspectives


Assistant Professor Damian Eke from the University of Nottingham introduced often-overlooked global perspectives. He challenged the focus on neural rights by highlighting immediate risks: “The intense focus on neural rights as a distinct new category of human rights… could indeed overshadow some very tangible and immediate risks associated with development and application of neural technology… like ethics dumping, like the safety issues and bias issues.”


Eke raised concerns about “ethics dumping” – conducting ethically questionable research in regions with less stringent oversight. He also questioned whether rights-based approaches are universally appropriate, noting cultural differences: “in Europe we’ll think about individual privacy. Maybe in some communities in Africa we’ll think about collective privacy. But the GDPR is informed by individual privacy.”


He highlighted technical challenges, explaining that neural data can function as “hidden biometrics”, and warned that the convergence of neurotechnology with artificial intelligence enables preemptive profiling and cognitive surveillance.


## Definitional Challenges


A significant portion of the discussion centered on what constitutes “neurodata.” Brian Nougrères offered a technology-focused definition centered on “brain activity and the nervous system,” but this faced challenges from participants arguing for broader inclusion of physiological indicators.


Lars Lundberger from the World Federalist Movement illustrated the complexity: “If you would record my finger muscles, you would see that I’m a bit nervous. If you would record my skin conductance, you would see that I’m sweating a bit… brain activity in the neural sense of recording of intracellular and extracellular recordings will be insufficient to address the mental aspect.”


Eke further complicated matters by observing that “everything can be neurodata when you combine it with other sets of data,” highlighting how data fusion techniques can transform seemingly innocuous information into neural insights.


## Additional Perspectives and Contributions


Torsten Krause from the Digital Opportunities Foundation contributed insights about Germany’s approach, mentioning the concept of “personal integrity” in the Youth Protection Act, which adds a third layer of protection beyond physical and mental integrity to include data protection.


Audience questions revealed broader concerns about children’s rights, accountability mechanisms across the neurodata lifecycle, and impacts on rights beyond privacy such as freedom of thought and expression.


## International Initiatives


The discussion revealed multiple ongoing international initiatives. UNESCO is developing recommendations on ethics of neurotechnologies, and various UN bodies are working on frameworks. Brian Nougrères mentioned that the United Nations is organizing an interdisciplinary workshop in Berlin to discuss neurotechnology governance.


However, speakers noted the risk of fragmentation without proper coordination between these various initiatives.


## Unresolved Questions


Several critical questions remain unresolved. The definitional challenge of what constitutes neurodata emerged as fundamental – without clear definitions, regulatory frameworks risk being either too narrow or too broad.


The question of informed consent in neurotechnology contexts remains problematic, particularly when dealing with cognitively vulnerable populations or when the implications of neural data collection are not fully understood.


The relationship between neurotechnology and artificial intelligence presents ongoing challenges, as current regulatory frameworks address these technologies separately despite their increasing convergence.


## Key Takeaways


The discussion revealed both the urgency and complexity of neurotechnology governance challenges. While there was recognition of unprecedented risks posed by technologies that can access human consciousness, significant disagreements remain about regulatory approaches.


Key areas requiring further work include:


– Harmonizing definitions of neurodata across international initiatives


– Addressing cultural differences in privacy concepts for global governance


– Preventing “ethics dumping” in neurotechnology research


– Developing frameworks that can address the convergence of neurotechnology with AI


– Balancing innovation with protection of fundamental rights


The path forward requires continued multi-stakeholder dialogue and international cooperation, though the specific mechanisms for achieving this remain under discussion. As Brian Nougrères emphasized, the focus should be on “having those strong conversations” before rushing to regulatory solutions.


Session transcript

Moritz Taylor: I’m going to ask you all to take a seat, come back in from your coffee break, if you can hear us outside. Welcome back, those of you who are already in main session one. Welcome back to those of you who were here yesterday and had your socks rocked off by the Mariam Chaladze band and are finally coming in. And welcome to all of you who’ve been in other sessions and workshops before and are coming into the main session, into the hemicycle for the first time today. Now I’m going to start with main session two, which is neurotechnology and mental privacy, regulating the mind in the digital age. We’ll start off with a video, followed by a keynote, followed by statements, the panel, questions and the messages. I’ll hand over now to our online moderator, Joao, who will explain to you the rules of the session. Thank you very much.


Online moderator: So, for those who join online, please always raise your hand to speak, to ask for a speaking slot. Then we’ll ask you to switch your video and unmute when the time comes. For those who join on-site, please always join with your microphone muted and your speaker from the device also disabled. Thank you. Back to you.


Moritz Taylor: Thank you, Joao. I hope that was clear for everyone. I’m here to help you. For those of you in the room, when you wish to speak, you’ll be able to press a button next to your microphone. In any case, we will begin today’s session with a video message from Doreen Bogdan-Martin, Secretary General of the International Telecommunication Union. Please play it. Thank you.


Doreen Bogdan-Martin: Hello, everyone. Let me start by thanking the Council of Europe and Luxembourg for inviting me to share a few words with you today. I would love to be with you in Strasbourg, especially at a time when digital is top of mind all over Europe and around the world. The Internet is a mirror that reflects humanity at our best and at our worst. Digital technologies evolve rapidly, as do the associated opportunities and risks. Human rights do not. As we achieve breakthroughs in science and technology, our outlook on the future changes, but human rights remain a constant. They are a guidepost for our actions across tech frontiers, especially now, with powerful technologies like AI and quantum poised to drive the next phase of the Internet’s evolution. To safeguard human rights, by balancing regulation and innovation, all voices are needed at the governance table. To understand how policy and regulation, whether nascent or enforced, can impact everyone’s aspirations as innovators and users. That’s agile and adaptive governance, keeping everyone involved in designing and fine-tuning our policy actions. And that’s why open, multi-stakeholder forums like the World Summit on the Information Society and EuroDIG are so important. They give everyone a say in how public policy could best reflect our shared values. We will keep this conversation moving at our back-to-back WSIS Plus 20 High-Level Meeting and our AI for Good Global Summit in July. We are reviewing 20 years of the WSIS process and looking to strengthen this multi-stakeholder framework for global action on digital development. 2025 also marks the 160th anniversary of the ITU. As our founding members, European countries are deeply experienced in building the collaboration and consensus that powers our work. Now, more than ever, the world needs to see this spirit of cooperation in action. Thank you.


Moritz Taylor: Thank you so much, Doreen. Now, I’d like to begin with the session proper. The idea that our thoughts are private has long been considered a cornerstone of personal freedom. But with neurotechnology rapidly evolving, from brain-computer interfaces to mood-tracking devices, that assumption is being challenged. This session tackles one of the most provocative frontiers of digital governance, the legal and ethical implications of decoding, collecting, or even influencing brain activity. We will explore whether our current regulatory frameworks are fit for purpose, and whether we need to rethink privacy in a world where even our minds can become data streams. I’d like to therefore invite our keynote speaker today, UN Special Rapporteur on the right to privacy, or on privacy only, I think, Ana Brian Nougrères. Please take the floor.


Ana Brian Nougrères: Thank you so much. Thank you so much. Good morning to you all. Well, this is because I’m trying to be strict with my timing. So, distinguished guests and colleagues, it is a great honor to address you today. I’m also grateful to the organizers for their invitation. Today, I will invite you to reflect on neuroscience, neurotechnology, neurodata. So, why is privacy at the frontiers of neuroscience? As you know, neurotechnologies are devices and systems capable of monitoring, interpreting, or even modifying brain activity. They are no longer a theoretical promise. They are here, and they are advancing rapidly. What only a decade ago seemed inconceivable is now being tested in laboratories, deployed in medical settings, and increasingly explored for commercial and security-related purposes. As these technologies develop, so do the risks they pose to fundamental rights, particularly the right to privacy. We are entering a new frontier in the domain of privacy. This frontier appears like a boundary between the self and the outside world. This boundary is becoming porous. It is a frontier where our inner thoughts, intentions, and emotions may be inferred, stored, transmitted, and even altered by external technologies. The frontier demands not only technical safeguards, but a reaffirmation of the foundational principles that protect human dignity and autonomy. In my recent report to the UN Human Rights Council on Neurotechnologies and Neurodata, I have laid out a framework for action, one that calls for regulatory clarity, for ethical commitment, and for international solidarity. The stakes are high. If neurotechnologies are used wisely, they hold the potential to advance science, improve human health, and empower individuals. But if misused, they could erode mental freedom, facilitate surveillance of thought, and deepen existing inequalities. We must act now, together, to ensure that our values evolve alongside our innovations. The question before us is not whether neurotechnologies will become part of our life. They are already here. The question is whether we will be governing them wisely, justly, in a way that protects what is most sacred, which is the integrity of the human mind. So, what is so important about neurotechnologies? To understand the significance of protecting personal data in the context of neuroscience, we must first define what we mean by neurotechnologies, and more importantly, why they matter for human rights in general and for the right to privacy in particular. Neurotechnologies refer to the tools, systems, and devices capable of accessing, monitoring, interpreting, or altering brain activity and the nervous system. These include invasive methods such as brain implants used in medical treatments for conditions like epilepsy or Parkinson’s disease, as well as non-invasive techniques such as electroencephalography, functional magnetic resonance imaging, or emerging wearable brain-computer interfaces. Although many of these tools are initially developed for therapeutic or research purposes, their application has expanded far beyond clinical settings. Today, neurotechnologies are entering the commercial domain, integrated into consumer products, educational platforms, workplace monitoring systems, and even digital entertainment. They are also being explored for potential use in law enforcement, military operations, and criminal justice, raising deeply troubling ethical and legal questions. Why do these technologies matter so much?
I mean, I think it’s because the brain is not just another organ. It is the seat of our consciousness, our identity, our thoughts, emotions, memories, and intentions. The activity of our neural circuits encodes not only what we do, but who we are. And for the first time in history, we now possess tools that can reach into this most intimate and private space. Let us be clear, neurodata, the data collected through neurotechnologies, is fundamentally different from other types of personal data. While a fingerprint may identify us, and geolocalization data may track us, neurodata can reveal how we feel, what we fear, what we desire, or what we intend to do sometimes, even before we are consciously aware of it ourselves. This kind of information is not merely sensitive. It is existentially revealing. This is why the Global Privacy Assembly and numerous human rights bodies, including the UN Human Rights Council, have recognized neurodata as a special category of personal data that requires the highest level of protection. Neurotechnologies also raise the prospect of mental manipulation. We must reckon with the real possibilities that once brain activity can be decoded, it can also be influenced through electrical or digital stimulation, algorithmic feedback, or even predictive behavioral nudging. This raises profound concerns regarding free will, mental autonomy, and cognitive liberty. And yet, we must not overlook the enormous benefits that these technologies offer. Neurotechnologies can revolutionize medical diagnostics, facilitate communication for individuals with severe disabilities, and deepen our scientific understanding of the brain. They can bring hope to people suffering from neurological or psychiatric conditions for which current treatments are inadequate. The challenge, then, is not to resist innovation, but to guide it ethically, to ensure that the development and deployment of neurotechnologies respect human rights, protect mental privacy, and empower individuals, rather than expose them to surveillance, discrimination, or exploitation. Neurodata as a special category of personal data. At the heart of the ethical and legal debate surrounding neurotechnologies lies a fundamental question. How should we treat the data generated by the human brain and nervous system? The answer is unequivocal. Neurodata must be recognized and regulated as a special category of personal sensitive data. This recognition is not merely symbolic. It reflects the unique nature, sensitivity, and potential consequences of processing neurodata. Neurodata are not just health data. They are not merely biometric data. Neurodata go far beyond what we traditionally understand as personally identifiable information. Neurodata constitute windows into the cognitive, emotional, and psychological fabric of the human being. And unlike other data, neurodata can provide deep insights into a person’s mood, personality, memory patterns, decision-making, and even unconscious mental states. These are not inferences drawn from online behavior or wearable devices. They are recordings of the brain’s actual electrical and physiological activity. This data can be used not only to identify a person, but to analyze, predict, or even alter their thoughts and behaviors. For these reasons, neurodata meet and exceed the criteria that define sensitive personal data under international privacy standards. As such, they require enhanced legal safeguards. They must be subject to:
strict access controls, strong encryption, cybersecurity protocols, explicit, informed, and revocable consent mechanisms, and clear limits on their collection, retention, and sharing. Furthermore, the mere collection of neurodata should trigger a high-risk processing assessment. This is particularly vital in contexts involving vulnerable populations, such as persons with disabilities, children, older persons, or individuals in institutional settings, where the potential for coercion, manipulation, or misuse is even greater. Mental privacy emerged as a necessary evolution of the right to privacy, emphasizing that thoughts and mental states, absent a compelling legal justification and strict safeguards, must remain off-limits to external surveillance or intrusion. The mind is the final frontier of human privacy, and it must be treated as such. International bodies such as the Ibero-American Data Protection Network, the Global Privacy Assembly, and the Berlin Group have all recognized the need to establish special frameworks for the processing of neurodata. Their recommendations are aligned with the precautionary principle, calling for clear legal definitions, transparency, accountability, and human rights impact assessments as prerequisites for any neurodata-driven activities. In addition, these bodies emphasize that neurodata may imply a power of anticipation, capable of revealing information not only about the current mental state of an individual, but about their future behavior, psychological predispositions, or cognitive performance. This introduces unprecedented risks of profiling, stigmatization, and discriminatory treatment, particularly in employment, insurance, education, and criminal justice settings. This may lead to biased hiring processes, unequal access to services, or unjustified exclusion from opportunities, deepening social inequalities. We demand, at this point, equitable access and non-discrimination procedures. To regulate neurodata adequately, we must adopt forward-looking regulatory frameworks that incorporate the complexity and the implications of this new form of personal information. Our legal systems must reflect that neurodata are not merely data about the brain, but deeply personal representations of the self. I will refer next to the principles and safeguards for the use of neurotechnologies and the processing of neurodata. In my most recent report to the Human Rights Council, I outlined a set of principles that should serve as the ethical and legal compass for regulating the use of neurotechnologies and the processing of neurodata. These principles are not abstract ideals. They are concrete tools for building legal, institutional, and technological systems that respect privacy, autonomy, and equality in the context of the brain. The first and most essential principle refers to human dignity. The mind is where human dignity resides. It is the source of self-awareness, of decision-making, of creativity, of morality. Any attempt to access or alter brain activity without the individual’s informed and voluntary consent should be considered a violation of this dignity. We need to safeguard the integrity of every individual’s cognitive and emotional self. Neurotechnologies must never be used in ways that reduce the person to a target of data extraction or behavioral engineering. We must acknowledge mental privacy as an emerging but indispensable dimension of the right to privacy. The human mind must be treated as an inviolable space.
Any intrusion, whether through decoding brain signals, stimulating specific regions, or interpreting patterns of cognition, must be subject to the most rigorous legal scrutiny and ethical oversight. The processing of neurodata must be based on a freely given, informed, specific, and revocable consent of the individual. This ensures that individuals remain in full control of their thoughts and decisions. Then, the precautionary principle must apply. Where the risks of harm to mental integrity, cognitive liberty, or psychological well-being are not fully understood, the default must be restraint. A lack of scientific certainty should never be used as a justification for experimentation or commercial exploitation. Especially in the face of technologies that could irreversibly affect the brain, caution must be the rule, not the exception. Privacy by design and by default must be embedded at every stage of development. From the earliest research and design phases to deployment and commercialization, neurotechnologies must be shaped by ethical values, human rights norms, and transparency requirements. This includes conducting human rights impact assessments prior to implementation and involving diverse stakeholders, including civil society, persons with disabilities, and neuroscientists, in the process of oversight. We must also ensure accountability. Developers, manufacturers, healthcare providers, employers, and public institutions that use neurotechnologies must be held responsible for ensuring data protection, transparency, and compliance with legal standards. Governments must establish independent regulatory bodies equipped to monitor the use of neurotechnologies and provide accessible remedies in case of rights violations. Then, we need to prohibit discrimination and manipulation. Neurodata must never be used to categorize, profile, or exclude individuals based on psychological characteristics, emotional responses, or neural patterns. Nor must they be used to manipulate thoughts, alter beliefs, or induce behaviors for purposes of political, commercial, or punitive control. The principle of free will ensures that individuals remain in full control of their thoughts and decisions. Finally, individuals must have enforceable rights. They must be able to access their neurodata, challenge unlawful processing, and seek redress. Privacy rights in this context are not a luxury. They are a shield against the commodification of the self and the erosion of mental autonomy. In fact, the regulation of neurotechnologies is not only a legal imperative, it is a moral one. If we do not act now to build a rights-based framework, we risk creating a future in which the last domain of privacy, the mind itself, is no longer protected. But if we succeed, we can ensure that neurotechnologies are developed and used to enhance human flourishing, not to diminish it. There is another issue at hand. Neurodata do not respect national borders. Brain-computer interfaces, cognitive monitoring tools, and neural wearables are being developed and deployed. Yeah, I’m okay. I’m calculating, okay. And being developed and deployed by transnational actors. Their regulation, therefore, cannot be fragmented or isolated. It must be coordinated, comprehensive, and coherent. In this context, the Council of Europe’s Convention 108 and its modernized version, Convention 108+, offer a critical foundation for global convergence. It is the only legally binding international treaty dedicated specifically to the protection of personal data.
And crucially, it is open to countries beyond the Council of Europe, allowing for true international alignment. Convention 108+ embodies many of the values that are essential to the regulation of neurotechnologies: transparency, accountability, proportionality, data minimization, and the protection of sensitive data. It recognizes that certain categories of personal data, such as those related to health, require special treatment under the law. By building on the principles of Convention 108+, we can create a shared normative baseline for regulating neurodata. This is especially important given the rapidly evolving technological landscape. National laws vary significantly in scope, substance, and enforcement capacity. Yet, if we are to protect individuals from harmful or discriminatory uses of neurotechnologies, we must avoid a patchwork of weak protections and regulatory loopholes. Convention 108+ also provides mechanisms for institutional cooperation. It fosters dialogue, mutual assistance, and the exchange of best practices. These tools are essential as we confront common challenges, such as how to define neurodata in legal terms, how to apply informed consent in cognitively vulnerable populations, or how to regulate cross-border flows of neural information. In my view, Convention 108+ should serve as a platform for international leadership in shaping how we treat the privacy of the human mind. It offers a flexible yet principled framework that can inspire national reforms and influence regional and global initiatives. Already, it has helped shape modern data protection standards beyond Europe, in Latin America, Africa, and in various international fora. But we must go further. We must ensure that the unique features of neurotechnologies are explicitly addressed within data protection regimes. We must expand the reach of Convention 108+ by encouraging more states, especially those at the forefront of technological innovation, to ratify and implement its provisions. And we must integrate the Convention’s principles into the design of future specific instruments on emerging technologies. Let us be clear, we are not starting from zero. Convention 108+ already gives us the legal vocabulary, the ethical principles, and the cooperative tools we need. What we must do now is apply them boldly, to ensure that these protections exist also for neurotechnologies. The brain must be the final frontier of privacy, but it is one we must defend together, and with the same spirit of solidarity and shared responsibility that underpins Convention 108+. Now, a final conclusion. As we stand at the intersection of neuroscience, data protection, and human rights, we are compelled to confront a profound truth. The future of privacy is not only digital, it is mental. In a world where technology can increasingly peer into our thoughts, predict our behaviors, and influence our decisions, the protection of mental autonomy is emerging as a defining frontier of the 21st century. Faced with this reality, our path forward must be both principled and pragmatic. We must be clear that technological progress cannot come at the expense of human dignity. That innovation, to be legitimate, must be bound by law. And that privacy, in its fullest sense, includes not only the protection of our personal data, but the protection of the self, of our identity, our thoughts, our emotions, and our freedom to be who we are without interference.
This is why the recognition of neurodata as a special category of personal data is not just a technical adjustment. It is a moral imperative. This is why informed consent must be more than a checkbox. It must be a process of genuine understanding and free will. And this is why the principles of human dignity, mental privacy, personal identity, free will, equitable access, non-discrimination, and accountability must all be woven into every law, policy, and device that touches the human mind. The challenges ahead are immense. The speed of technological innovation is outpacing legal and institutional responses in nearly every region of the world. But we are not starting from zero. We have frameworks like Convention 108+ that can serve as a foundation for international convergence. We have regional and global bodies committed to rights-based governance. And we have a growing awareness across disciplines and sectors that the mind must be protected as sacred ground. Let us act with urgency, but also with care. Let us regulate, not to obstruct, but to elevate the promise of neurotechnology. Let us educate, not to alarm, but to empower. Let us legislate, not in isolation, but in solidarity with each other, and with those whose rights are most at risk. Above all, let us remember that the right to privacy is not simply the right to be left alone. It is the right to control our personal space, our bodies, and yes, our minds. It is the right to define who we are, free from coercion, manipulation or surveillance. In this new cognitive era, that right must be defended with renewed determination. Thank you very much.


Moritz Taylor: Thank you very much, Ana. You can take a seat also. I’d invite the panellists also to come forward so that we can start introducing you. After I give them each a little chance to talk about, to answer a question, we’ll also have the statements before moving on to a question round. So if you listen to what Ana has said, if you listen to what the panellists say and have some questions, write them down so that they can be asked after the prepared statements. Thank you very much. So, I’d like to begin the session with Damian Eke. He is Assistant Professor at the University of Nottingham, Chair of the International Brain Initiatives Data Sharing and Standards Group, and founder of the African Brain Data Network and African Data Governance Initiative as well. Currently, in his role as PI at a Wellcome Trust project, he is co-creating Responsible International Data Governance for Neuroscience. So the way I’m going to do this is I’m going to introduce them all and give them all a chance to answer one question. So Dr. Eke, how well do existing data protection frameworks like Convention 108+ or the GDPR address the unique sensitivities of neural or cognitive data? Should we be thinking about a new legal category for mental data? And are some of the current neural rights debates distracting from more immediate and under-acknowledged risks posed by today’s neurotechnologies?


Damian Eke: Thank you very much for that question, and thanks a lot for the presentation, Ana. That was very comprehensive. The answer, I’ll give you the simple answer and maybe the complicated answer. The simple answer is that existing regulations or ethical and legal frameworks are not addressing the issues of neural data and neurotechnology as they should be. Most of the recognition of neural data in the GDPR and also the Convention 108+ is implied rather than explicit. There’s no specific mention of brain data or neural data or mental health data in these legal provisions. So the special category data in Article 9 of the GDPR includes data concerning health and biometric and genetic data, while EEG, fMRI, and also most of the neural data sets that are generated from neurotechnology may fall under health data in certain contexts. This is not explicitly clear in the regulation. And also, Article 22 on profiling and automated decision-making may apply if cognitive data is used, and also principles of data minimization, purpose limitation and consent mechanisms also apply, but this is not clear to both researchers and also people in the industry on how to address some of the issues neural data raise. What I will say is that also in the second question you asked, whether the debates about neural rights are distracting from other issues, yes, the intense focus on neural rights as a distinct new category of human rights, while it captures public imagination, could indeed overshadow some very tangible and immediate risks associated with development and application of neural technology, and Ana was also right to highlight the issues of ethics and ethical obligation involved here. It is important to realize that there are also other issues that we need to focus on, like ethics dumping, like the safety issues and bias issues that are involved in neural technology. Ethics dumping, what do I mean by ethics dumping? As neural technologies advance, as you mentioned, it’s not just developed in one location, it actually crosses boundaries. There’s a risk that research and development that might be ethically questionable or face stricter regulations in one region may be outsourced to regions or areas with less stringent oversight, and that is a critical problem, and ethics dumping could lead to the exploitation of vulnerable populations, and also disregard for ethical principles in pursuit of scientific progress. And also, I will also mention this, exploitative labour practices that characterize the extraction of resources that shape the fundamental infrastructure of neural technology: the extraction of lithium, the extraction of rare earths for the development of neurotechnology. The conditions of work in terms of mining these resources are not as they should be, so the right to human dignity should also be focused on that rather than just data. Should we have a special category for neural data? I would say yes, but not as the discussion is happening in the neural rights debate, but as a special category that can be at the same level with genetic data and biometric data, because neural data can also be classed as hidden biometrics. Datasets like fMRI or MRI have brain prints in them that are unique or maybe more unique than biometrics or genetic data, so it is important for us to discuss how we can change the language of regulations or provisions in the regulations to attribute the same level of sensitivities and sensibilities to neural data as we do to genetic data and biometric data.


Moritz Taylor: Thank you so much, Damian. I think that was a very good start for people to have their brains activated. Speaking of brains and neural activity, next I’d like to welcome Petra Zandonella, a pre-doctoral assistant at the University of Graz at the law faculty and part of an interdisciplinary research group working on the intersection of law, ethics and neurotechnologies for the last two years. In her dissertation, she focuses on the protection of health data in the EU, which has now already popped up a couple of times in the last couple of minutes. From your legal research and your interdisciplinary perspective, I’d like to hear your point of view on whether you believe the existing legal framework already adequately addresses the challenges of neurotechnologies and, of course, maybe to expand on that, do you think there are gaps perhaps in enforcement, in scope or conceptual clarity even that we still need to fill?


Petra Zandonella: Okay, thank you very much. Thank you for the invitation and the opportunity to be here today and also for the introduction. So, I will give you a legal perspective and then we can go quite in the same way as you did already. So, you heard before that there’s a call for mental privacy and in the interdisciplinary group we’ve also focused on the mental privacy issue and we do not recommend to add a new right to mental privacy. So, there is an ongoing debate if the right to mental privacy is needed or if it will overburden our existing and well-established legal system and also framework. So, the question is what will exactly, oh sorry, what exactly will the scope of the mental privacy be and is it about mental, neurocognitive or brain data? So, the existing right to privacy, not to mental privacy, it’s the right to privacy as enshrined in Article 8 of the European Convention on Human Rights. It’s really a broad right to privacy. So it’s really a broad understanding about it, and it’s also interpreted in a really broad sense. So I will mention one case in front of the European Court of Human Rights, and it took place last spring, so in spring 2024, and it’s the case of the KlimaSeniorinnen in Switzerland. So it’s not a case about new technologies, but it’s a case about how broad the understanding of the existing right to privacy within the Convention on Human Rights is already. So the KlimaSeniorinnen claimed that Switzerland had violated the right to private and family life, so Article 8 of the Convention on Human Rights, by failing to take measures against climate change. And the court ruled in favor of the KlimaSeniorinnen. So you see how broad the understanding of the existing right already is. So what benefit will we gain by cutting off the right to mental privacy from the already broad existing right to privacy? And where will the existing right to privacy end, and where will the mental right to privacy start? So, of course, there will be legal uncertainty if we create a new right to mental privacy. So there is, of course, no established case law, because it’s a new right, and it’s also a question if the privacy right will be interpreted in the broad sense if we explicitly mention mental privacy in the Convention on Human Rights. So if we mention mental privacy there, there’s a question if there should be also other privacies, because the problem is that if we explicitly mention one privacy, what about the other privacies? Are they still within the broad scope of the right to privacy? We don’t know. So to summarize, the existing right to privacy in the Convention on Human Rights is already a good foundation, I would say. So in our opinion, there shouldn’t be a split-up right to mental privacy. But nevertheless, you already mentioned that there should be a discussion on how we should deal with these neurotechnologies and with neural data. So for example, in the data protection law, as you already mentioned, there is the Convention 108 and 108+ and also the GDPR, and there is health data inside. So if neurotechnologies are used for health purposes, of course, it will be in the scope of the health data. But nowadays, the neurotechnologies are expanding into the non-medical domain, such as human enhancement or gaming. And in this context, there’s no medical purpose. So there is a gap. And as you mentioned in the keynote, it’s really important that we also protect these data because neural data is not only specific when it’s about the medical purpose or medical data.
It’s also really specific data when it’s used for another purpose. So for example, we could implement neural data within the scope of Article 9 of the GDPR or in Article 6 of the Convention 108 or 108+. And it already had been done with biometric data. So in the Convention 108, there wasn’t biometric data. And now, with the 108+ and before in the GDPR, biometric data was added. So it isn’t a big deal to implement a new category of data there. Of course, it is not that easy because you need consensus among the member states. But it could be an idea how we can deal with these new challenges we have with neurotechnologies. And so I will come to a conclusion, if it’s OK. Sorry for the long statement. In our opinion, or my opinion and also the opinion of our interdisciplinary group, the Convention on Human Rights and also the Charter of Fundamental Rights already is a good and robust legal framework. And mental privacy should explicitly not be added. So we should stick to the already existing right to privacy. But of course, we need action to tackle the challenges that arise with neurotechnologies, so for example, as already mentioned, by adapting the data protection law. So, thank you.


Moritz Taylor: Thank you very much, Petra. Right. Give them all a round of applause. Thank you. I’m going to allow the statements to happen. Meanwhile, perhaps, so that you can digest that information, listen to the statements and ask one or two questions after. I’ll try to collect them because we’re starting to be a bit short on time. Classically. May I have the statements? No, there are no statements. No, no online statements at all. And so, on site, do we have UNESCO’s Women for Ethical AI present? Kokse Kobanashoy-Hizal, are you here? I’m going to assume no. Is Lazar Simona from the Union Romani Voices, the CEO here, to speak, make a statement? Berna Tepe. Jan Kleijssen, a recognisable name in the building. Please, Jan. It’s number 94.


Kleijssen Jan: Good afternoon, or good morning, rather, still, with 10 minutes to go. Thank you very much for the very interesting presentations, and also for drawing attention to the already existing validity of Convention 108 and 108+ when it comes to protecting neural rights, as they have been labelled, and the very self, as was so pointedly stated a moment ago. I have a question relating to the use of, or the interpretation of, neural rights when it comes to AI systems, sentient computing. There’s a big debate about whether this will remain pure fiction, science fiction, or come into reality, but what would your position be on the research guiding this, and on the limits, perhaps, on the regulation that needs to be there in time if we do not want to find ourselves facing something quite abominable? Thank you.


Moritz Taylor: Thank you, Jan. Can we do this as a quickfire round? Do you want to give your quick answers, maybe each one after the other? You don’t want to. So, I’ll start with Damian, and we’ll go down the line.


Damian Eke: Okay. So, you’re right. AI complicates the ecosystem. With the convergence of neurotechnology and AI, the predictive inferential power of neural data, when combined with AI and big data, is uniquely dangerous, which enables maybe preemptive profiling, maybe neuromarketing at an advanced level, and also cognitive surveillance. This is a problem that needs to be addressed also, because the question is, does it then warrant a special category of data, of convergence of data sets? Now, it’s not just neural data, but then it’s combined with other data sets, biomedical data, combined with AI. It is a problem. Thank you very much. So, we have a challenge that needs to be addressed, but just as my colleague here pointed out earlier, there are provisions in the law to address some of these things, but it’s just that the ecosystem of regulations is a bit diverse. There’s the AI Act, there’s the GDPR, and there are other data regulations in the EU. It’s a case of trying to harmonize these provisions to address a specific problem of convergence of neural data and AI.


Petra Zandonella: So maybe I can just add a sentence to this. It’s not about the technology, it should always be about the human being. So how can we make sure that the human being is still in the focus of the regulation? So it’s not that easy to regulate each technology. So we should have a broader approach to this. So of course there is an interference with other technologies, I guess that’s normal and that’s already existing, and as you mentioned there are already really good legal frameworks on that, but when we come back to the Convention on Human Rights, there is really a broad understanding and it’s really a good reflection in the human rights and also in the fundamental rights when you go on the European perspective.


Moritz Taylor: Thank you, Petra. It’s okay, only if you wish, as we’re a bit short on time, so if you have something, yeah.


Ana Brian Nougrères: Okay, I believe that there are difficult topics like the one you brought now, and I believe also that there is one moment in which one can feel there is an important risk, and that the risk might manipulate society. And then I think, well, what shall we do? Because we are looking at the process, we are looking at our people, how they might be manipulated. We have concrete examples of people who received a little bit of money to have a picture of their eye, and then all that is a whole mess. But well, it’s not to come to examples, the moment is not this one. But I think that when we are seeing that problem, it is because the problem has advanced in our society. And that is a moment in which we need to do something. We need to do awareness first of all. But regulation, I won’t discard it. I think that regulation is important. It gives a before and an afterwards. But before regulation comes, we need to have a real conversation, a multi-stakeholder conversation on these topics in which we can have the opinions of the different professions that are involved in these topics. So I think there is a point in neurotechnologies, there’s a point that is of our concern and that something has to be done. Perhaps it’s not the moment for a regulation, but we need to have those strong conversations. We need to attend to see how the social movements are feeling the impact of all this. And maybe the regulation appears, maybe it doesn’t. But well, we have to be open to it, I think that. I think that we as lawyers or professionals of the law, we all see that when technical advancements appear, then the law is always back, back, back. And when we decide to act, then the moment passed. So I think that that is an important thing that we have to take into consideration also in this moment. Thank you.


Moritz Taylor: We have to go through some statements and we’re running out of time. So thank you, Jan, for the insightful question. I think that already caused some more neurons to fire. Next on the list of prepared statements was Redon Pilinci from Albania. Are they in the room or are they online? Next, Torsten Krause from the Stiftung Digitale Chancen, number 61. Give me the floor, it’s on. Number 62, yeah, okay.


Panelist: Thanks, hello. Thanks for your interesting presentations and statements. I’m Torsten Krause, I’m working as a political scientist and child rights researcher at the Digital Opportunities Foundation based in Berlin, Germany. And I would like to introduce you shortly to a legal concept implemented in the Youth Protection Act in Germany with the second amendment in 2021. It was, it is personal integrity. And it put kind of a third layer to the previously existing concepts of integrity. And you know, the first layer is the physical. So it’s not allowed to beat someone because it harms the physical integrity of a person. And the second layer means that it’s also not allowed to harm someone by words or bully someone because of the mental layer. And the legislator implemented the third layer. And it means also to protect the data in the digital environment because in the data we’re presenting ourselves. So if someone is violating my data in the digital environment, he is violating me. So that’s the concept of personal integrity. And it was implemented four years ago. So it was in mind to regulate, well, time is running really fast. It’s handling with existing, collected data and then to prohibit to use this data to influence you in a special direction. When we think about neurotechnologies, that levels it up, because it’s not existing data, it’s data that, when it arises, when I’m maybe not recognizing yet that I have the thought or this feeling, in this moment I can be manipulated. And so I think, I’m not sure if we need to have a special category, but I think we need a kind of guarantee of a really broad understanding to protect the data and, yeah, the personality of us as human beings. I hope that was helpful. Thanks.


Moritz Taylor: Thank you. Okay, thank you, Torsten, for this great contribution. I think a broad understanding of protection is clearly one of the things that keeps coming up, whether it is understood broadly in national legislation, as it seems here, or in a wider international context. Another question that comes up quite often is how national legislators can interpret international rules. The next prepared statement was from Amira Saber from the… I will swiftly move on then to Sana Bhatia from VIPS-TC, Kuram Shuktai from Enox Centre of Innovation, Transformation and Intelligence, Souheila Soulkia, and last on the list before I can open the floor is Karin Kaunas from DigiHumanism, Centre for AI and Digital Humanism. Well, Joao mentioned that someone online would like to participate and ask a question, so I’d like to give them the floor, please.


Online moderator: Yes indeed, so there was both a question during the keynote presentation and then some comments added to the panel. I will be the one reading the points raised by Siva Supramanian Muthusamy, and I’m sorry if I pronounced it incorrectly. The question was: will these frameworks and safeguards, or even regulation, work well enough to sufficiently prevent negative aspects such as cognitive control and behavioral intervention? The positive aspects of neurotechnologies are not being dismissed in this question. And then to the panelists there was a remark that in the future, once neural electronics are in place, it merely requires access or technical expertise to get into the network, and it is as easy as sending ones and zeros to someone’s brain, or to that of a group of people, to alter their behavior or even to trigger them, in theory at least. Yeah, there was some buzzing, so we didn’t really hear the question as such. Let’s give it one more go. Yeah, perfect. So I will go straight to the point again and repeat: will these frameworks and safeguards, or even regulation, work well enough to sufficiently prevent negative aspects such as cognitive control and behavioral intervention? And the follow-up remark was: in the future, once neural electronics are in place, it merely requires access or technical expertise to get into the network, and it is as easy as sending ones or zeros to someone’s brain, or to that of a group, to alter their behavior or even to trigger them, in theory at least.


Moritz Taylor: Thank you for your contribution from online. Before we answer questions, I was thinking that we can open the floor and collect one or two, so that we’re not constantly going back and forth. Were there any other people who wanted to ask a question? At this point, number 007, James Bond, please.


Panelist: Hello, I am George from HRI, civil society. I would like to ask: as brain data becomes increasingly valuable for governments and tech companies, how do we avoid a future where the right to mental privacy is sacrificed for profit or control? And on the other hand, if someone’s thoughts can be decoded and stored, where do we draw the line between consent and surveillance in a neural age? Thank you.


Moritz Taylor: Actually, that sounds like a very exciting question I want to hear answers to immediately. It captures the risks very well.


Damian Eke: Yeah, maybe I will go first, but I will also try to address the first question from online, which was: are these legal frameworks adequate to address some of the risks and issues that neurotechnology raises? And this is perhaps a question about the rights-based approach to governance: is it always the best approach to governing neurotechnology, or any technology at all, including AI? There are several strategic paradigms for the governance of technology. One is the rights-based approach; another is the value-based approach. Because technologies are value-laden; they are not neutral. But the question is: whose values are embedded in these technologies? In some regions, a value-based approach might be the best way to govern these technologies, rather than a rights-based approach, because of the diversity of interpretations and implementations of human rights. So if the values that should inform the technology are adequately embedded in the systems, then they can address the issues. One reason why I’m pointing this out is that when we have these discussions in Europe or the Global North, we often forget that some of the values that shape technologies are not understood, or interpreted, the same way in all regions. Take privacy: in Europe we think about individual privacy, while in some communities in Africa we think about collective privacy. But the GDPR is informed by the concept of individual privacy. So approaching the governance of these technologies from a value-based perspective is sometimes something we need to consider in order to address some of the culturally aligned issues that these technologies raise.


Moritz Taylor: Thank you, Damien. Sitting in Europe, with a generally regional and European approach in preparation for the IGF and so on, there’s always a danger of forgetting that the global majority is not Europe and that approaches are indeed very, very different, even not that far away from Europe. If we want to have global standards, then we need to take other people’s standards into account as well. So definitely a good first answer. Petra, if you wanted to add something.


Petra Zandonella: Thank you also for pointing out that we live in the North, and in Europe. But I will come back to the questions first. Do existing frameworks address the legal challenges? I would guess yes and no. And as you mentioned before, there is a real need for discussion and debate, such as the opportunity we have here, for example, or as there is in UNESCO or the UN, and we really need an interdisciplinary exchange. So coming back to your question: you said it’s about brain data. In our interdisciplinary group we discussed a lot what the data should be named, because our neuropsychologists say the best way to cover most mental and cognitive states is to call the data neurodata and then to talk about mental states, because cognitive states are part of mental states but not every mental state is cognitive. So I’m not the right person to answer what the name should be; that should be an interdisciplinary question. And then you pointed out consent, and that’s really a big issue. It’s already an issue when it comes to health data, because if you need something, it’s quite obvious that you will say: yes, go for it, because I need it. With neurodata it’s a similar point. And our neuropsychologists also told us that when something is about the brain, we tend to believe everything. So if there is neuromarketing or neuroenhancement, we tend to believe the promises, and therefore there is no real consent, because if we don’t know about the limitations, the consent is not informed; it’s about the opportunities and about the limitations as well. And if we don’t know the limitations, that’s a big issue. And on the surveillance part: yes, of course, absolutely. We are just about to finish a project on neurotechnologies in dementia, which is of course in the healthcare sector and not in the commercial sector, and already there surveillance is, and will remain, a big issue when it comes to neurodata. In the commercial sphere it’s even worse, because there is no actual need for neurotechnologies. So maybe I can also add something on European law, may I?


Moritz Taylor: Sure, but quickly.


Petra Zandonella: Yeah. For example, the Medical Device Regulation. I don’t know who of you is familiar with that regulation. Of course, neurotechnologies with a medical purpose fall under the regulation. But the Medical Device Regulation already acknowledges that neurotechnologies are a bit special, I would say, because in its Annex XVI, point 6, it also brings a specific category of neurotechnologies, non-invasive brain stimulation devices, within its scope. And that is an example of the regulators being somewhat aware of the specificity of neurotechnologies. So thank you.


Moritz Taylor: Thank you very much. I’ll just move on, because we have to collect other questions. Number 118, you have the floor, and I’ll also collect another question from 195 afterwards. Thank you.


Panelist: Thank you. Lars Lundberger from the World Federalist Movement; I speak in my personal capacity. Three quick thoughts. Thank you for the insights and especially for the definition of neurotechnologies. Your focus on brain activity, I think, might be a bit too narrow. If you recorded my finger muscles, you would see that I’m a bit nervous. If you recorded my skin conductance, you would see that I’m sweating a bit. And if you looked at the pupil of my eye, you would probably also see that I’m a bit nervous. So brain activity in the neural sense of intracellular and extracellular recordings will be insufficient to address the mental aspect. A large part of the discussion was about privacy, so the protection of data being read out. There was a remark on the altering of brain activity, which would be manipulation; I think that should be emphasized a bit more, and there is also the boundary with more traditional or conventional technologies like visual or acoustic stimuli. Overall, I liked a lot the discussion about whether this is a new human right or a new challenge to existing human rights, and I think that discussion has to be continued in a multi-stakeholder approach that includes practitioners and policy makers, so engineers, neuroscientists and lawyers. Thank you.


Moritz Taylor: Thank you very much. 195, please.


Panelist: Thank you for giving me the floor, and thank you for a very interesting, and worrisome, debate. I’m Kristin from the University of Oslo, and my question relates to the human rights lens you’ve analyzed this through, namely privacy; there are obviously major issues here connected to privacy. My question is whether any of you, in your work on this, have also encountered discussions framed around other rights, such as freedom of expression, mainly the right to freely form opinions, and freedom of thought, which is an absolute right. So my question is whether you’ve seen any discussions on this, specifically given what was presented earlier: that these technologies can also influence the mind and not only extract data from it. That’s my question.


Moritz Taylor: And just before I let you answer, I’ll also give the floor to number 100, please. You just press the round button next to the microphone.


Panelist: Am I audible? Yeah. Yeah. Hi. So I’m Ankita. I’m from India. I’m a lawyer. And many of you mentioned that it is crucial to recognize the complex ethical and societal implications of neurotechnology and the processing of personal neurodata. But I would also like to hear the thoughts of the speakers and the panelists on accountability mechanisms across the entire lifecycle of neurodata processing from collection to storage to analysis. Do you think that there should be any particular stage which should bear greater accountability than others? And if yes, then why? Thank you.


Moritz Taylor: All right. Are you ready to answer already, Damien?


Damian Eke: I’ll try to answer the first one. Okay. So the discussion on neurorights, and also on whether we need a special category of data called neurodata, also involves discussions about freedom of thought and freedom of expression, because manipulation of neurodata can surely breach…


Petra Zandonella: I totally agree with you. We also wrote a study for STOA, and we partially addressed these rights there as well, although the focus was of course on privacy. And thank you over there for pointing out that we have had a rather limited understanding of neurotechnologies in the discussion so far, because a broader understanding of neurotechnologies is really needed; I totally agree with you. UNESCO also has a really broad understanding of these technologies, and there is at the moment an ongoing debate on a recommendation on the ethics of neurotechnology in Paris. We will see the outcomes soon, I guess. Fingers crossed that the broad understanding stays in.


Moritz Taylor: Do you want to add anything?


Damian Eke: I wanted to add something on the definition of neurodata, or brain data. It is actually a difficult concept to pin down, and I don’t think the debate on the definition of neurodata is going to end very quickly, because everything can be neurodata when you combine it with other sets of data. So drawing the limits of what we refer to as neurodata is important to governing it, and that is a critical discussion we all need to continue having.


Petra Zandonella: Although a legal definition of neurodata can also limit it. That’s also why we need an interdisciplinary approach, including the ethics part. And you mentioned before that values should also be considered, and I totally agree. So maybe you can go further into that, if I may ask Damian a question.


Moritz Taylor: Sure, please do, if you wanted to add something. Okay, great. Answer, please.


Damian Eke: So, in terms of values, just as I mentioned earlier, all technologies are value-laden. If it is not the values of the designers or developers, it will be the values of the deployers or the users that are embedded in these systems. But whose values are embedded in the neurotechnologies used worldwide? We have to understand that these technologies are being developed mainly in the Global North, so the values embedded in them are from the Global North. An example would be EEG devices: I don’t think the value of usability within African populations was considered when they were being developed. So that is an important one. We might develop these technologies, but they are not generalizable to all populations of the world. And the fact that people’s values are neglected, or relegated to the background, in the development of these technologies raises questions of coloniality embedded in these systems. And I always point out that among the principles of trustworthiness of AI and of technologies, what is missing is decoloniality as a requirement for trustworthiness. Because if these technologies are seen by the Global South as tools of epistemic dominance, that can lead to their rejection or non-acceptance. So we need to consider all these values.


Moritz Taylor: Thank you.


Ana Brian Nougrères: Okay, so I want to say thank you to everybody who made comments; they are all very, very welcome. Thank you so much. Just a few additional remarks. I totally agree with 118’s comment, and I feel that this is a good moment to begin that discussion. Your opinion, 061, was very interesting. Yours went straight to the point of the risk, very interesting too. And 094, your opinion was very interesting as well, thank you. Then there was 195, about which I would like to say something special. You asked whether another important issue has been brought to the discussion in terms similar to neurotechnology, and I would say no, at least not that I know of. But I would say that when we came to terms with artificial intelligence and saw how it changed the world, we noticed that we could have done something earlier. At least we could have considered the possibility of putting ethics in place wherever it was possible. At least that. We didn’t do it. So now that a topic as important as neurotechnology is at stake, I personally think that we should not wait. We should study the point, consider possibilities, and try to come to terms in a multi-stakeholder way. That’s what I think. And as people who work with the law, I feel we have to make the effort not to let the law come last. The law has to try to stay updated, and we have to try to be in the correct place at the correct moment. That’s what I think. Artificial intelligence changed the world; it changed it and continues to change it, and we don’t know when that will finish or what is going to happen. So if you consider that plus neurotechnologies, you might arrive at the absolute-risk scenario our colleague described. So it is important that people know; awareness is very important. And a way of reaching awareness is to bring these topics to our agendas, and not only to our agendas, but to the general agenda of all those who are involved.


Moritz Taylor: The microphone, please keep the microphone close to your mouth.


Ana Brian Nougrères: Oh, sorry.


Moritz Taylor: I think for the participants online, it’s difficult to understand. OK, it was just the end. I still have a statement or question from online, which will go first, and then also 463. My astigmatism is failing me; I think it’s 463. OK, I invite Samir Gallo, if he still wants to raise his question, to unmute. His mic may not be working.


Online moderator: I had a comment from him that, since we are at the end, he would be more than happy to connect virtually. So perhaps he is already doing that.


Moritz Taylor: OK, well, then let’s give the floor to 463 first, and maybe they’ll be back by then.


Panelist: Thank you. Thank you very much, and I want to thank the keynote speaker; it was a great intervention. Another frontier of knowledge, not so recent but very important, is human genome mapping. I want to know whether DNA data can be, or is, classified as neurodata and could be protected under the same kind of laws or not. Thank you.


Ana Brian Nougrères: You know what I think? I think that we lawmakers all use lots of definitions; we need definitions to produce a law. But I think that we are not yet at the moment to decide which definition is the correct one and which is not. We first need the multi-stakeholder study. In my personal opinion, I would say maybe; I don’t discard it. But before the classifications, we need to have discussions with everybody.


Moritz Taylor: Also, just before you answer, did the online participant come back? Then maybe just a very quick one, because we have to move on.


Damian Eke: Technically, no. I can imagine what would happen in the neuroscience research community if we introduced the idea that genetic data is now also neural data; they would be up in arms. But I think what we are saying is that the special recognition genetic data has in regulation, neural data needs to have as well, because it is still very confusing for a lot of people. If you have been involved in big projects… I was the data governance coordinator for the EU Human Brain Project, a big EU project with over 500 neuroscientists. When you introduce the idea of protecting neural data as special category data, they resist it: for them, this is not genetic data, this is just research data. It’s difficult for them to understand that. But when we have specific wording in the regulation…


Ana Brian Nougrères: It is very difficult to classify if we haven’t yet decided on the definition. And that is why, many times, in laws that refer to technological aspects you always need to introduce some sort of glossary. So it is difficult, it is difficult.


Damian Eke: I completely agree with you, and there are so many things happening in this space at the moment: UNESCO with its guidance, the WHO, the OECD, and now the United Nations is also assembling an interdisciplinary team to discuss this. There will actually be a three-day UN workshop on this in Berlin, which I will be part of. These different initiatives are offering different definitions, which might become a bit of a problem. There needs to be harmonization.


Petra Zandonella: Yeah, absolutely. If there is no harmonization, it will be a struggle. That’s also why we need these discussions beforehand and not afterwards, because then it’s too late.


Moritz Taylor: Okay. Well, thank you so much to the panelists and our keynote speaker for your contributions and participation in the panel, and thank you to the audience for their very active participation. Before we finish the session at half past, we’ll move on to the messages of this session. Minda looks less than delighted with the messages she’s put together, but well, we’ll get some preliminary messages and then we’ll move on, yeah? I think everyone can take their seats again.


Ana Brian Nougrères: There’s a short version of my speech here, if you are interested, okay?


Moritz Taylor: Thank you. And please don’t leave, we’ll take a photo afterwards. Later though.



Ana Brian Nougrères

Speech speed

115 words per minute

Speech length

3727 words

Speech time

1941 seconds

Neurotechnologies pose unprecedented risks to fundamental rights, particularly privacy, as they can access the most intimate space of human consciousness

Explanation

Ana argues that neurotechnologies represent a new frontier in privacy where the boundary between self and outside world becomes porous. She emphasizes that the brain is not just another organ but the seat of consciousness, identity, thoughts, emotions, memories, and intentions, making neurodata fundamentally different from other personal data.


Evidence

She explains that neurodata can reveal how we feel, what we fear, what we desire, or what we intend to do sometimes even before we are consciously aware of it ourselves. She cites recognition by the Global Privacy Assembly and UN Human Rights Council of neurodata as a special category requiring highest protection.


Major discussion point

Neurotechnology and Mental Privacy Regulation


Topics

Human rights | Privacy and data protection


Agreed with

– Damian Eke
– Petra Zandonella

Agreed on

Inadequacy of current regulatory frameworks for neurotechnology


Neurodata should be recognized as a special category of personal data requiring the highest level of protection, similar to genetic and biometric data

Explanation

Ana contends that neurodata constitutes windows into the cognitive, emotional, and psychological fabric of human beings, going far beyond traditional personally identifiable information. She argues they require enhanced legal safeguards including strict access controls, strong encryption, and explicit informed consent mechanisms.


Evidence

She notes that neurodata are recordings of the brain’s actual electrical and physiological activity, not just inferences from behavior. She references recommendations from the Ibero-American Data Protection Network, Global Privacy Assembly, and Berlin Group calling for special frameworks.


Major discussion point

Classification and Protection of Neurodata


Topics

Human rights | Privacy and data protection | Legal and regulatory


Agreed with

– Damian Eke
– Petra Zandonella

Agreed on

Need for special recognition and protection of neurodata


International cooperation through frameworks like Convention 108+ is crucial since neurodata crosses national borders and involves transnational actors

Explanation

Ana emphasizes that neurodata regulation cannot be fragmented or isolated but must be coordinated, comprehensive, and coherent. She argues Convention 108+ provides a critical foundation for global convergence as the only legally binding international treaty dedicated to personal data protection.


Evidence

She points out that brain-computer interfaces and neural wearables are being developed by transnational actors. She notes Convention 108+ is open to countries beyond Council of Europe and has already helped shape data protection standards in Latin America, Africa, and various international fora.


Major discussion point

Global Governance and Cultural Perspectives


Topics

Legal and regulatory | Data governance


Agreed with

– Damian Eke
– Doreen Bogdan-Martin

Agreed on

Need for international cooperation and harmonization


Disagreed with

– Damian Eke

Disagreed on

Rights-based versus value-based approaches to neurotechnology governance


Proactive regulatory action is needed now to avoid repeating mistakes made with artificial intelligence governance

Explanation

Ana argues that when AI changed the world, regulators failed to act proactively and could have considered putting ethics in place earlier. She emphasizes the importance of not letting law come last and being updated at the correct moment and place.


Evidence

She references how AI changed the world and continues changing it, noting that we don’t know when it will finish or what will happen. She warns that combining AI with neurotechnologies creates absolute risk scenarios.


Major discussion point

Immediate Risks vs Future Rights Debates


Topics

Legal and regulatory | Human rights principles


Agreed with

– Kleijssen Jan

Agreed on

Urgency of proactive action



Damian Eke

Speech speed

122 words per minute

Speech length

1612 words

Speech time

790 seconds

Current data protection frameworks like GDPR and Convention 108+ do not explicitly address neural data, creating regulatory gaps that need to be filled

Explanation

Damian explains that existing regulations recognize neural data only implicitly rather than explicitly, with no specific mention of brain data or neural data in legal provisions. While neural data may fall under health data in certain contexts, this is not explicitly clear in regulations.


Evidence

He notes that special category data in GDPR Article 9 includes health, biometric and genetic data, and while EEG, fMRI and neural datasets may fall under health data, this classification is unclear to both researchers and industry professionals.


Major discussion point

Neurotechnology and Mental Privacy Regulation


Topics

Legal and regulatory | Privacy and data protection | Data governance


Agreed with

– Ana Brian Nougrères
– Petra Zandonella

Agreed on

Inadequacy of current regulatory frameworks for neurotechnology


Neural data can function as “hidden biometrics” with brain prints that may be more unique than traditional biometric identifiers

Explanation

Damian argues that datasets like fMRI or MRI contain brain prints that are unique or potentially more unique than traditional biometrics or genetic data. He suggests neural data should be classified at the same level of sensitivity as genetic and biometric data.


Evidence

He explains that brain prints in neural datasets have uniqueness properties that may exceed those of conventional biometric identifiers, warranting similar regulatory treatment.


Major discussion point

Classification and Protection of Neurodata


Topics

Privacy and data protection | Legal and regulatory


Agreed with

– Ana Brian Nougrères
– Petra Zandonella

Agreed on

Need for special recognition and protection of neurodata


Value-based governance approaches may be more appropriate than rights-based approaches, considering cultural differences in privacy concepts between regions

Explanation

Damian suggests that technologies are value-laden and not neutral, questioning whose values are embedded in neurotechnologies. He argues that value-based governance might be better than rights-based approaches due to diverse interpretations of human rights across regions.


Evidence

He provides the example that in Europe privacy is understood as individual privacy, while in some African communities it’s understood as collective privacy, yet GDPR is informed by individual privacy concepts.


Major discussion point

Global Governance and Cultural Perspectives


Topics

Human rights principles | Cultural diversity | Interdisciplinary approaches


Disagreed with

– Ana Brian Nougrères

Disagreed on

Rights-based versus value-based approaches to neurotechnology governance


European regulatory discussions must consider that global majority populations may have different values and interpretations of privacy rights

Explanation

Damian emphasizes that discussions in Europe or the Global North often forget that values shaping technologies are not understood or interpreted the same way in all regions. He argues for considering culturally aligned issues that technologies raise.


Evidence

He mentions that neurotechnologies are developed mainly in the Global North with embedded values from that region, citing EEG devices as an example where usability within African populations wasn’t considered during development.


Major discussion point

Global Governance and Cultural Perspectives


Topics

Cultural diversity | Human rights principles | Development


Focus on neural rights as distinct human rights may distract from immediate tangible risks like ethics dumping and exploitation of vulnerable populations

Explanation

Damian argues that intense focus on neural rights as a new category of human rights, while capturing public imagination, could overshadow tangible and immediate risks. He highlights issues like ethics dumping, safety concerns, and bias in neural technology development.


Evidence

He explains ethics dumping as research that might be ethically questionable in one region being outsourced to areas with less stringent oversight, leading to exploitation of vulnerable populations. He also mentions exploitative practices in extracting lithium and rare earths for neurotechnology infrastructure.


Major discussion point

Immediate Risks vs Future Rights Debates


Topics

Human rights principles | Development | Capacity development


The convergence of neurotechnology with AI creates uniquely dangerous capabilities for predictive profiling and cognitive surveillance

Explanation

Damian explains that when neural data is combined with AI and big data, it creates uniquely dangerous capabilities enabling preemptive profiling, advanced neuromarketing, and cognitive surveillance. This convergence raises questions about whether it warrants a special category of data.


Evidence

He notes the challenge of harmonizing diverse regulations including the AI Act, GDPR, and other data regulations in the EU to address the specific problem of neural data and AI convergence.


Major discussion point

Technical and Enforcement Challenges


Topics

Privacy and data protection | Legal and regulatory


Harmonization of different international initiatives and definitions is essential to avoid regulatory fragmentation

Explanation

Damian points out that multiple organizations including UNESCO, WHO, OECD, and the United Nations are developing different definitions for neural data and neurotechnologies. He warns that this could create problems without proper harmonization.


Evidence

He mentions his involvement in a three-day UN workshop in Berlin and notes that these different initiatives are offering various definitions which might become problematic without coordination.


Major discussion point

Technical and Enforcement Challenges


Topics

Legal and regulatory | Data governance


Agreed with

– Ana Brian Nougrères
– Doreen Bogdan-Martin

Agreed on

Need for international cooperation and harmonization



Petra Zandonella

Speech speed

130 words per minute

Speech length

1670 words

Speech time

765 seconds

Existing privacy rights under Article 8 of the European Convention on Human Rights are already broad enough to cover mental privacy without creating new rights categories

Explanation

Petra argues against adding a new right to mental privacy, contending that the existing right to privacy in Article 8 is already broad and interpreted expansively. She questions what benefit would be gained by separating mental privacy from the established broad privacy right.


Evidence

She cites the 2024 KlimaSeniorinnen case against Switzerland, where the European Court of Human Rights ruled that failures to address climate change violated Article 8 privacy rights, demonstrating the broad interpretation already possible under existing frameworks.


Major discussion point

Neurotechnology and Mental Privacy Regulation


Topics

Human rights | Privacy and data protection | Legal and regulatory


Disagreed with

– Ana Brian Nougrères

Disagreed on

Whether to create a new right to mental privacy or rely on existing privacy frameworks


Adding neural data to existing special categories in GDPR Article 9 would be feasible, as was done previously with biometric data

Explanation

Petra suggests that neural data could be implemented within the scope of Article 9 of GDPR or Article 6 of Convention 108+, noting that biometric data was successfully added to these frameworks. She acknowledges this requires consent from member states but considers it achievable.


Evidence

She explains that biometric data wasn’t originally in Convention 108 but was added in 108+ and GDPR, demonstrating precedent for expanding special data categories.


Major discussion point

Classification and Protection of Neurodata


Topics

Legal and regulatory | Privacy and data protection


Agreed with

– Ana Brian Nougrères
– Damian Eke

Agreed on

Need for special recognition and protection of neurodata


Current neurotechnology applications already pose real challenges that existing legal frameworks struggle to address adequately

Explanation

Petra acknowledges that while existing frameworks provide a foundation, there are gaps particularly when neurotechnologies expand into non-medical domains like gaming and human enhancement. She emphasizes the need for action to tackle emerging challenges.


Evidence

She notes that when neurotechnologies are used for health purposes they fall under health data protection, but in non-medical contexts there’s no medical purpose, creating regulatory gaps.


Major discussion point

Immediate Risks vs Future Rights Debates


Topics

Legal and regulatory | Privacy and data protection


Agreed with

– Ana Brian Nougrères
– Damian Eke

Agreed on

Inadequacy of current regulatory frameworks for neurotechnology


Informed consent becomes problematic when people tend to believe neurotechnology promises without understanding limitations

Explanation

Petra explains that informed consent is already challenging with health data, and becomes more complex with neurodata because people tend to believe everything related to the brain. She argues that without understanding limitations, there cannot be truly informed consent.


Evidence

She references insights from neuropsychologists in her interdisciplinary group who noted that people tend to believe promises of neuromarketing or neuroenhancement without understanding the limitations involved.


Major discussion point

Technical and Enforcement Challenges


Topics

Privacy and data protection | Human rights principles



Doreen Bogdan-Martin

Speech speed

124 words per minute

Speech length

304 words

Speech time

146 seconds

Multi-stakeholder governance involving all voices is essential for balancing regulation and innovation in digital technologies

Explanation

Doreen emphasizes that safeguarding human rights while balancing regulation and innovation requires all voices at the governance table. She advocates for agile and adaptive governance that keeps everyone involved in designing and fine-tuning policy actions.


Evidence

She highlights the importance of open, multi-stakeholder forums like the World Summit on the Information Society and EuroDIG, noting upcoming WSIS Plus 20 High-Level Meeting and AI for Good Global Summit as examples of inclusive policy-making processes.


Major discussion point

Neurotechnology and Mental Privacy Regulation


Topics

Legal and regulatory | Human rights principles | Interdisciplinary approaches


Agreed with

– Ana Brian Nougrères
– Damian Eke

Agreed on

Need for international cooperation and harmonization



Online moderator

Speech speed

130 words per minute

Speech length

341 words

Speech time

156 seconds

Clear procedural rules are essential for effective hybrid participation in neurotechnology governance discussions

Explanation

The online moderator emphasizes the importance of establishing clear protocols for both online and on-site participants to ensure orderly and inclusive participation. This includes specific instructions for raising hands, managing audio/video settings, and coordinating between different participation modes.


Evidence

Instructions given include ‘please always raise your hand to speak, to ask for a speaking slot’ for online participants and ‘please always join with your microphone muted and your speaker from the device also disabled’ for on-site participants.


Major discussion point

Technical and Enforcement Challenges


Topics

Interdisciplinary approaches | Legal and regulatory



Kleijssen Jan

Speech speed

135 words per minute

Speech length

138 words

Speech time

61 seconds

The convergence of neurotechnology with AI systems and sentient computing requires proactive regulatory limits to prevent abominable outcomes

Explanation

Jan raises concerns about the intersection of neural rights with AI systems and sentient computing, questioning whether current research guidance and regulatory frameworks are adequate. He emphasizes the need for timely regulation to prevent potentially harmful developments in this converging field.


Evidence

He references the ongoing debate about whether sentient computing will remain science fiction or become reality, and asks about ‘the limits, perhaps, on the regulation that needs to be there in time if we do not want to find ourselves facing something quite abominable.’


Major discussion point

Immediate Risks vs Future Rights Debates


Topics

Legal and regulatory | Human rights principles


Agreed with

– Ana Brian Nougrères

Agreed on

Urgency of proactive action



Panelist

Speech speed

137 words per minute

Speech length

908 words

Speech time

395 seconds

Personal integrity as a legal concept provides a three-layered protection framework that could be extended to neurotechnology contexts

Explanation

A panelist from Germany introduces the concept of personal integrity implemented in the German Youth Protection Act, which adds a third layer of protection beyond physical and mental integrity. This digital layer recognizes that violating someone’s data in the digital environment constitutes violating the person themselves, as data represents our digital selves.


Evidence

The concept was implemented in Germany’s Youth Protection Act in 2021, establishing three layers: physical integrity (protection from physical harm), mental integrity (protection from verbal harm/bullying), and personal integrity (protection of data in digital environments).


Major discussion point

Classification and Protection of Neurodata


Topics

Legal and regulatory | Privacy and data protection | Human rights principles


Mental privacy rights must be protected against profit-driven exploitation by governments and tech companies

Explanation

A panelist raises concerns about the increasing value of brain data for governments and tech companies, questioning how to prevent the sacrifice of mental privacy rights for profit or control. They emphasize the need to establish clear boundaries between legitimate consent and surveillance in the neural age.


Evidence

The panelist asks ‘how do we avoid a future where the right to mental privacy is sacrificed for profit or control?’ and ‘where do we draw the line between consent and surveillance in a neural age?’


Major discussion point

Neurotechnology and Mental Privacy Regulation


Topics

Human rights | Privacy and data protection | Economic


Neurotechnology definitions should encompass broader physiological indicators beyond just brain activity

Explanation

A panelist argues that focusing solely on brain activity recordings may be too narrow for defining neurotechnologies. They suggest that other physiological indicators like muscle activity, skin conductance, and pupil dilation can also reveal mental states and should be considered in regulatory frameworks.


Evidence

Examples provided include finger muscle recordings showing nervousness, skin conductance revealing sweating, and pupil dilation indicating emotional states, demonstrating that ‘brain activity in the neural sense of intracellular and extracellular recordings will be insufficient to address the mental aspect.’


Major discussion point

Classification and Protection of Neurodata


Topics

Legal and regulatory | Privacy and data protection


Disagreed with

– Ana Brian Nougrères

Disagreed on

Scope and definition of neurotechnology and neurodata


Neurotechnology governance should focus on protecting human beings rather than regulating individual technologies

Explanation

A panelist emphasizes that regulatory approaches should maintain focus on human protection rather than attempting to regulate each technology separately. They advocate for broader approaches that recognize the interference between different technologies while keeping human rights at the center.


Evidence

The panelist states ‘it’s not about the technology, it should always be about the human being’ and notes that ‘there is an interference with other technologies, I guess that’s normal and that’s already existing.’


Major discussion point

Neurotechnology and Mental Privacy Regulation


Topics

Human rights principles | Legal and regulatory


Accountability mechanisms should be implemented across the entire lifecycle of neurodata processing with particular attention to critical stages

Explanation

A panelist from India emphasizes the need for comprehensive accountability mechanisms spanning from data collection to storage to analysis. They question whether certain stages should bear greater accountability than others and seek clarification on where regulatory focus should be concentrated.


Evidence

The panelist asks about ‘accountability mechanisms across the entire lifecycle of neurodata processing from collection to storage to analysis’ and whether ‘there should be any particular stage which should bear greater accountability than others.’


Major discussion point

Technical and Enforcement Challenges


Topics

Legal and regulatory | Privacy and data protection | Data governance


Freedom of thought and expression rights are equally important as privacy rights in neurotechnology governance

Explanation

A panelist from the University of Oslo argues that while privacy concerns are significant, other fundamental rights like freedom of expression and the right to freely form opinions are also at stake. They emphasize that freedom of thought is an absolute right that must be protected, especially given neurotechnology’s capacity to influence minds.


Evidence

The panelist specifically mentions ‘the freedom of expression, mainly the right to freely form opinions, and the freedom of thought, which is an absolute right’ and notes the importance of protecting these rights given that neurotechnologies ‘can influence the mind and not only extract data from the mind.’


Major discussion point

Neurotechnology and Mental Privacy Regulation


Topics

Human rights | Freedom of expression | Human rights principles


The relationship between genetic data and neurodata requires clarification in regulatory frameworks

Explanation

A panelist questions whether DNA data should be classified as neurodata and protected under similar legal frameworks. This raises important questions about the boundaries and definitions of different types of sensitive data in the context of emerging neurotechnologies.


Evidence

The panelist asks ‘if data from DNA can or is classified as neurodata and could be protected under the same kind of laws or not?’ drawing parallels to human genome mapping as another frontier of knowledge.


Major discussion point

Classification and Protection of Neurodata


Topics

Legal and regulatory | Privacy and data protection



Moritz Taylor

Speech speed

142 words per minute

Speech length

1623 words

Speech time

683 seconds

Global standards for neurotechnology governance must account for diverse regional approaches and values beyond European perspectives

Explanation

Moritz acknowledges the danger of European-centric thinking in developing global neurotechnology standards, emphasizing that the global majority is not Europe and approaches differ significantly even within close geographic proximity. He stresses the need to incorporate other regions’ standards and values for truly global frameworks.


Evidence

He states: ‘Sitting in Europe, with a generally regional and European approach in preparation for the IGF and so on, there’s always a danger of forgetting that the global majority is not Europe and that approaches are indeed very, very different, even not that far away from Europe.’


Major discussion point

Global Governance and Cultural Perspectives


Topics

Cultural diversity | Human rights principles | Legal and regulatory


The assumption that thoughts are private is being fundamentally challenged by rapidly evolving neurotechnology

Explanation

Moritz frames the session by highlighting how neurotechnology is challenging the long-held cornerstone of personal freedom – the privacy of thoughts. He emphasizes that current regulatory frameworks may not be fit for purpose in a world where minds can become data streams.


Evidence

He notes that neurotechnology ranges ‘from brain-computer interfaces to mood-tracking devices’ and questions ‘whether our current regulatory frameworks are fit for purpose, and whether we need to rethink privacy in a world where even our minds can become data streams.’


Major discussion point

Neurotechnology and Mental Privacy Regulation


Topics

Human rights | Privacy and data protection | Legal and regulatory


Agreements

Agreement points

Need for special recognition and protection of neurodata

Speakers

– Ana Brian Nougrères
– Damian Eke
– Petra Zandonella

Arguments

Neurodata should be recognized as a special category of personal data requiring the highest level of protection, similar to genetic and biometric data


Neural data can function as “hidden biometrics” with brain prints that may be more unique than traditional biometric identifiers


Adding neural data to existing special categories in GDPR Article 9 would be feasible, as was done previously with biometric data


Summary

All three main speakers agree that neurodata requires special legal recognition and enhanced protection, though they differ on implementation approaches. Ana advocates for treating it as a special category requiring highest protection, Damian suggests it should be classified at the same level as genetic and biometric data, and Petra proposes adding it to existing GDPR Article 9 categories.


Topics

Privacy and data protection | Legal and regulatory


Inadequacy of current regulatory frameworks for neurotechnology

Speakers

– Ana Brian Nougrères
– Damian Eke
– Petra Zandonella

Arguments

Current data protection frameworks like GDPR and Convention 108+ do not explicitly address neural data, creating regulatory gaps that need to be filled


Current neurotechnology applications already pose real challenges that existing legal frameworks struggle to address adequately


Neurotechnologies pose unprecedented risks to fundamental rights, particularly privacy, as they can access the most intimate space of human consciousness


Summary

There is strong consensus that existing legal frameworks are insufficient to address the unique challenges posed by neurotechnologies. All speakers acknowledge significant gaps in current regulations, though they propose different solutions for addressing these inadequacies.


Topics

Legal and regulatory | Privacy and data protection | Human rights


Need for international cooperation and harmonization

Speakers

– Ana Brian Nougrères
– Damian Eke
– Doreen Bogdan-Martin

Arguments

International cooperation through frameworks like Convention 108+ is crucial since neurodata crosses national borders and involves transnational actors


Harmonization of different international initiatives and definitions is essential to avoid regulatory fragmentation


Multi-stakeholder governance involving all voices is essential for balancing regulation and innovation in digital technologies


Summary

Speakers agree that neurotechnology governance requires coordinated international action and multi-stakeholder approaches. They emphasize the transnational nature of these technologies and the need for harmonized definitions and standards to avoid regulatory fragmentation.


Topics

Legal and regulatory | Data governance | Interdisciplinary approaches


Urgency of proactive action

Speakers

– Ana Brian Nougrères
– Kleijssen Jan

Arguments

Proactive regulatory action is needed now to avoid repeating mistakes made with artificial intelligence governance


The convergence of neurotechnology with AI systems and sentient computing requires proactive regulatory limits to prevent abominable outcomes


Summary

Both speakers emphasize the critical importance of acting now rather than waiting for problems to emerge. They draw lessons from AI governance failures and stress the need for timely intervention to prevent harmful outcomes.


Topics

Legal and regulatory | Human rights principles


Similar viewpoints

Both speakers support enhancing legal protection for neurodata within existing frameworks, though Ana emphasizes the unique nature requiring highest protection while Petra focuses on practical implementation through existing GDPR structures.

Speakers

– Ana Brian Nougrères
– Petra Zandonella

Arguments

Neurodata should be recognized as a special category of personal data requiring the highest level of protection, similar to genetic and biometric data


Adding neural data to existing special categories in GDPR Article 9 would be feasible, as was done previously with biometric data


Topics

Privacy and data protection | Legal and regulatory


Both emphasize the importance of considering diverse cultural perspectives and values in neurotechnology governance, warning against European-centric approaches that may not reflect global majority viewpoints.

Speakers

– Damian Eke
– Moritz Taylor

Arguments

Value-based governance approaches may be more appropriate than rights-based approaches, considering cultural differences in privacy concepts between regions


Global standards for neurotechnology governance must account for diverse regional approaches and values beyond European perspectives


Topics

Cultural diversity | Human rights principles | Legal and regulatory


Both speakers recognize the particular risks posed by the convergence of neurotechnology and AI, emphasizing the need for proactive governance to address these combined threats.

Speakers

– Ana Brian Nougrères
– Damian Eke

Arguments

The convergence of neurotechnology with AI creates uniquely dangerous capabilities for predictive profiling and cognitive surveillance


Proactive regulatory action is needed now to avoid repeating mistakes made with artificial intelligence governance


Topics

Privacy and data protection | Legal and regulatory | Human rights principles


Unexpected consensus

Rejection of new mental privacy rights in favor of existing frameworks

Speakers

– Petra Zandonella
– Ana Brian Nougrères

Arguments

Existing privacy rights under Article 8 of the European Convention on Human Rights are already broad enough to cover mental privacy without creating new rights categories


International cooperation through frameworks like Convention 108+ is crucial since neurodata crosses national borders and involves transnational actors


Explanation

Despite the revolutionary nature of neurotechnology, there is unexpected consensus that existing human rights frameworks are sufficiently broad and flexible to address new challenges. This is surprising given the unprecedented nature of brain data access, yet both legal experts prefer building on established foundations rather than creating new rights categories.


Topics

Human rights | Legal and regulatory | Privacy and data protection


Prioritizing immediate risks over futuristic neural rights debates

Speakers

– Damian Eke
– Petra Zandonella

Arguments

Focus on neural rights as distinct human rights may distract from immediate tangible risks like ethics dumping and exploitation of vulnerable populations


Current neurotechnology applications already pose real challenges that existing legal frameworks struggle to address adequately


Explanation

There is unexpected consensus that the focus should be on addressing current, tangible risks rather than debating abstract future rights. This pragmatic approach is surprising given the transformative potential of neurotechnology, but speakers agree that immediate harms like exploitation and ethics dumping require more urgent attention.


Topics

Human rights principles | Development | Legal and regulatory


Overall assessment

Summary

The discussion reveals strong consensus on the fundamental challenges posed by neurotechnology and the need for enhanced protection of neural data, international cooperation, and proactive governance. However, speakers differ on implementation approaches, with some favoring new categories while others prefer building on existing frameworks.


Consensus level

High level of consensus on problem identification and urgency, moderate consensus on solutions. The agreement on core issues (inadequate current frameworks, need for special protection, international cooperation) provides a solid foundation for policy development, though implementation details require further negotiation. The unexpected consensus on using existing rights frameworks rather than creating new ones suggests a pragmatic approach that could facilitate faster regulatory action.


Differences

Different viewpoints

Whether to create a new right to mental privacy or rely on existing privacy frameworks

Speakers

– Ana Brian Nougrères
– Petra Zandonella

Arguments

Mental privacy emerged as a necessary evolution of the right to privacy, emphasizing that thoughts and mental states, absent a compelling legal justification and strict safeguards, must remain off-limits to external surveillance or intrusion


Existing privacy rights under Article 8 of the European Convention on Human Rights are already broad enough to cover mental privacy without creating new rights categories


Summary

Ana advocates for mental privacy as a necessary evolution of privacy rights, while Petra argues that existing Article 8 privacy rights are already broad enough and warns against legal uncertainty from creating new rights categories


Topics

Human rights | Privacy and data protection | Legal and regulatory


Rights-based versus value-based approaches to neurotechnology governance

Speakers

– Ana Brian Nougrères
– Damian Eke

Arguments

International cooperation through frameworks like Convention 108+ is crucial since neurodata crosses national borders and involves transnational actors


Value-based governance approaches may be more appropriate than rights-based approaches, considering cultural differences in privacy concepts between regions


Summary

Ana emphasizes rights-based international frameworks like Convention 108+, while Damian questions whether rights-based approaches are always best, suggesting value-based governance might better account for cultural differences in privacy understanding


Topics

Human rights principles | Cultural diversity | Legal and regulatory


Scope and definition of neurotechnology and neurodata

Speakers

– Ana Brian Nougrères
– Panelist

Arguments

Neurotechnologies refer to the tools, systems, and devices capable of accessing, monitoring, interpreting, or altering brain activity and the nervous system


Neurotechnology definitions should encompass broader physiological indicators beyond just brain activity


Summary

Ana focuses on brain activity and nervous system as the core of neurotechnology definition, while a panelist argues this is too narrow and should include other physiological indicators like muscle activity, skin conductance, and pupil dilation


Topics

Legal and regulatory | Privacy and data protection


Unexpected differences

Whether genetic data should be classified as neurodata

Speakers

– Ana Brian Nougrères
– Damian Eke
– Panelist

Arguments

Lawmakers rely on many definitions to produce law, but it is too early to decide which definition of neurodata is correct and which is not


Technically, genetic data is not neural data; introducing the idea that it is would provoke strong resistance in the neuroscience research community


The relationship between genetic data and neurodata requires clarification in regulatory frameworks


Explanation

This disagreement emerged unexpectedly when a panelist asked about the classification of DNA data. Ana remained diplomatically non-committal, while Damian gave a technical ‘no’ and acknowledged likely resistance from the research community, revealing underlying tensions about expanding neurodata definitions and the practical challenges of implementation


Topics

Legal and regulatory | Privacy and data protection


Overall assessment

Summary

The main disagreements centered on regulatory approach (new rights vs. existing frameworks), governance philosophy (rights-based vs. value-based), and definitional scope of neurotechnology. Despite these differences, speakers showed substantial agreement on the need for special protection of neural data and proactive governance.


Disagreement level

Moderate disagreement with significant implications – while speakers agreed on core problems and urgency, their different approaches to solutions (new rights vs. existing frameworks, global vs. culturally-sensitive governance) could lead to fragmented regulatory responses and implementation challenges in the rapidly evolving neurotechnology field


Partial agreements

Similar viewpoints

Both speakers support enhancing legal protection for neurodata within existing frameworks, though Ana emphasizes its unique nature, which requires the highest level of protection, while Petra focuses on practical implementation through existing GDPR structures.

Speakers

– Ana Brian Nougrères
– Petra Zandonella

Arguments

Neurodata should be recognized as a special category of personal data requiring the highest level of protection, similar to genetic and biometric data


Adding neural data to existing special categories in GDPR Article 9 would be feasible, as was done previously with biometric data


Topics

Privacy and data protection | Legal and regulatory


Both emphasize the importance of considering diverse cultural perspectives and values in neurotechnology governance, warning against European-centric approaches that may not reflect global majority viewpoints.

Speakers

– Damian Eke
– Moritz Taylor

Arguments

Value-based governance approaches may be more appropriate than rights-based approaches, considering cultural differences in privacy concepts between regions


Global standards for neurotechnology governance must account for diverse regional approaches and values beyond European perspectives


Topics

Cultural diversity | Human rights principles | Legal and regulatory


Both speakers recognize the particular risks posed by the convergence of neurotechnology and AI, emphasizing the need for proactive governance to address these combined threats.

Speakers

– Ana Brian Nougrères
– Damian Eke

Arguments

The convergence of neurotechnology with AI creates uniquely dangerous capabilities for predictive profiling and cognitive surveillance


Proactive regulatory action is needed now to avoid repeating mistakes made with artificial intelligence governance


Topics

Privacy and data protection | Legal and regulatory | Human rights principles


Takeaways

Key takeaways

Neurotechnologies pose unprecedented risks to fundamental human rights, particularly privacy, as they can access and potentially manipulate the most intimate aspects of human consciousness including thoughts, emotions, and intentions


Neurodata should be classified as a special category of personal data requiring the highest level of protection, similar to genetic and biometric data, due to its unique ability to reveal existential information about individuals


Current data protection frameworks like GDPR and Convention 108+ have regulatory gaps regarding neural data, as they only implicitly rather than explicitly address brain-related information


A multi-stakeholder approach involving neuroscientists, lawyers, ethicists, and policymakers is essential for developing appropriate governance frameworks for neurotechnologies


Global governance coordination is crucial since neurotechnologies cross national borders, with Convention 108+ serving as a potential foundation for international alignment


Cultural differences in privacy concepts (individual vs. collective privacy) must be considered when developing global standards, as European-centric approaches may not be universally applicable


Proactive regulatory action is needed now to avoid repeating the delayed response that occurred with artificial intelligence governance


Resolutions and action items

UNESCO is developing a recommendation on the ethics of neurotechnology, with outcomes expected soon


The United Nations is assembling an interdisciplinary team for a three-day workshop in Berlin to discuss neurotechnology governance


Multiple international initiatives (WHO, OECD, UN) are working on developing definitions and frameworks for neurotechnology governance


Continued multi-stakeholder discussions and forums like EuroDIG and WSIS are needed to maintain dialogue on digital governance


Unresolved issues

Whether to create a new ‘mental privacy’ right or rely on existing broad privacy rights under Article 8 of the European Convention on Human Rights


How to define ‘neurodata’ precisely, as different organizations and initiatives are proposing varying definitions that need harmonization


Whether genetic data should be classified as neurodata and protected under the same frameworks


How to ensure informed consent when people tend to take neurotechnology’s promises at face value without understanding its limitations


How to address the convergence of neurotechnology with AI and its enhanced risks for cognitive surveillance and behavioral manipulation


How to prevent ‘ethics dumping’ where questionable research is outsourced to regions with less stringent oversight


Which governance approach (rights-based vs. value-based) is most appropriate for different cultural contexts


How to establish accountability mechanisms across the entire lifecycle of neurodata processing


Suggested compromises

Adding neural data as a special category within existing data protection frameworks (like GDPR Article 9) rather than creating entirely new legal structures


Using Convention 108+ as a flexible foundation that can be adapted for neurotechnology governance while encouraging broader international adoption


Focusing on immediate tangible risks and practical protections rather than getting distracted by theoretical neural rights debates


Adopting a precautionary principle where restraint is the default when risks to mental integrity are not fully understood


Implementing privacy by design and by default principles in neurotechnology development from the earliest stages


Thought provoking comments

The simple answer is that existing regulations or ethical and legal frameworks are not addressing the issues of neural data and neurotechnology as they should be. Most of the recognition of neural data in the GDPR and also the Convention 108 Plus is implied rather than explicit… the intense focus on neural rights as a distinct new category of human rights, while it captures public imagination, could indeed overshadow some very tangible and immediate risk associated with development and application of neural technology… like ethics dumping, like the safety issues and bias issues that are involved in neural technology.

Speaker

Damian Eke


Reason

This comment is particularly insightful because it challenges the dominant narrative focus on neural rights by highlighting overlooked practical issues like ‘ethics dumping’ – the practice of conducting ethically questionable research in regions with less stringent oversight. It also introduces the concept of exploitative resource extraction (lithium, rare earths) that underlies neurotechnology infrastructure, broadening the discussion beyond data protection to include global justice concerns.


Impact

This comment fundamentally shifted the discussion from a primarily European/legal perspective to a global justice framework. It introduced the concept of ‘coloniality’ in technology development and forced other panelists to acknowledge that neurotechnology governance cannot be divorced from broader issues of global inequality and resource exploitation.


We do not recommend to add a new right to mental privacy… The existing right to privacy, not to mental privacy, it’s the right to privacy as enshrined in Article 8 of the European Convention on Human Rights. It’s really a broad right to privacy… So what benefit will we gain by cutting off the right to mental privacy from the already broad existing right to privacy?

Speaker

Petra Zandonella


Reason

This comment is thought-provoking because it directly challenges the keynote speaker’s framework and the emerging consensus around ‘neural rights.’ By citing the climate change case (KlimaSeniorinnen) as an example of how broadly existing privacy rights can be interpreted, she demonstrates that current legal frameworks may be more adaptable than assumed.


Impact

This created a productive tension in the discussion between those advocating for new rights categories and those arguing for evolutionary interpretation of existing rights. It forced the conversation to become more nuanced about the relationship between legal innovation and legal stability, and led to deeper exploration of whether the problem is definitional or enforcement-based.


There are so many strategic paradigms of governance of technology… rights-based approach… value-based approach. Because technologies are value-laden. They’re not neutral… whose values are embedded in these technologies?… in Europe we’ll think about individual privacy. Maybe in some communities in Africa we’ll think about collective privacy. But the GDPR is informed by individual privacy, the concept of individual privacy.

Speaker

Damian Eke


Reason

This comment is profoundly insightful because it exposes the cultural assumptions embedded in supposedly universal frameworks. By contrasting individual versus collective privacy concepts, it reveals how European-centric approaches may be inadequate for global governance of neurotechnology.


Impact

This comment fundamentally challenged the universality assumptions underlying the entire discussion. It forced participants to acknowledge that their European-focused legal frameworks might not be globally applicable and introduced the need for decolonial approaches to technology governance. This led to Ana Brian-Nougrères acknowledging the importance of multi-stakeholder approaches that go beyond European perspectives.


Everything can be neurodata. Everything can be neurodata, when you combine it with other sets of data… having the limitations of what we refer to as neurodata is important to governing it, and is a critical discussion that we all continue to have.

Speaker

Damian Eke


Reason

This comment is particularly thought-provoking because it reveals the fundamental definitional challenge that undermines much of the regulatory discussion. If the boundaries of what constitutes ‘neurodata’ are infinitely expandable, then the entire framework for special protection becomes problematic.


Impact

This comment exposed a critical weakness in the regulatory approach being discussed and led to acknowledgment from other panelists that definitional harmonization is essential. It shifted the conversation from assuming we know what we’re regulating to recognizing that the definitional challenge itself may be the primary obstacle to effective governance.


If you would record my finger muscles, you would see that I’m a bit nervous. If you would record my skin conductance, you would see that I’m sweating a bit. And if you would look at the pupil of my eye, you probably would see that I’m also a bit nervous… brain activity in the neural sense of recording of intracellular and extracellular recordings will be insufficient to address the mental aspect.

Speaker

Lars Lundberger


Reason

This comment is insightful because it challenges the brain-centric definition of neurotechnology by demonstrating how mental states can be inferred from various physiological indicators. It reveals that focusing solely on brain activity creates artificial boundaries that may not reflect the reality of how mental states can be monitored and inferred.


Impact

This comment forced the discussion to confront the inadequacy of narrow definitions and contributed to the growing recognition that the definitional challenges are more complex than initially assumed. It supported the emerging theme that current approaches may be too limited in scope.


Overall assessment

These key comments fundamentally transformed what began as a relatively straightforward discussion about extending European data protection frameworks to neurotechnology into a much more complex examination of the cultural, definitional, and global justice dimensions of neurotechnology governance. Damian Eke’s interventions were particularly impactful in introducing decolonial perspectives and challenging European-centric assumptions, while Petra Zandonella’s legal analysis created productive tension about whether new rights categories are necessary. The definitional challenges raised by multiple speakers revealed that the regulatory community may be attempting to govern something they haven’t yet adequately defined. Overall, these comments elevated the discussion from technical legal questions to fundamental questions about whose values shape technology, how cultural differences affect privacy concepts, and whether current regulatory approaches are fit for purpose in a global context.


Follow-up questions

How should we define neurodata in legal terms and what are the boundaries of this definition?

Speaker

Damian Eke, Petra Zandonella, Ana Brian Nougrères


Explanation

Multiple speakers emphasized the difficulty and importance of establishing a clear, harmonized definition of neurodata, as everything could potentially be considered neurodata when combined with other datasets, and different organizations are offering different definitions


How do we apply informed consent mechanisms in cognitively vulnerable populations when dealing with neurotechnologies?

Speaker

Ana Brian Nougrères


Explanation

This was identified as a critical challenge in regulating cross-border flows of neural information and ensuring proper consent processes for vulnerable groups


How can we harmonize the diverse regulatory frameworks (AI Act, GDPR, Convention 108+) to address the convergence of neural data and AI?

Speaker

Damian Eke


Explanation

The convergence of neurotechnology and AI creates unique risks that require coordination across multiple existing regulatory frameworks


Should governance of neurotechnologies follow a rights-based approach or a value-based approach, and whose values should be embedded?

Speaker

Damian Eke


Explanation

This addresses concerns about cultural differences in interpreting rights like privacy (individual vs. collective) and the risk of embedding only Global North values in technologies used worldwide


How do we address ethics dumping in neurotechnology research and development?

Speaker

Damian Eke


Explanation

There’s a risk that ethically questionable research may be outsourced to regions with less stringent oversight, leading to exploitation of vulnerable populations


What accountability mechanisms should exist across the entire lifecycle of neurodata processing, and should certain stages bear greater accountability?

Speaker

Ankita (participant from India)


Explanation

This addresses the need for comprehensive accountability frameworks covering collection, storage, and analysis of neurodata


How do neurotechnologies impact other human rights beyond privacy, such as freedom of thought and freedom of expression?

Speaker

Kristin (University of Oslo)


Explanation

The discussion focused heavily on privacy but other fundamental rights may also be affected, particularly given that these technologies can influence the mind


Should genetic data be classified as neurodata and protected under the same legal frameworks?

Speaker

Participant 463


Explanation

This explores the boundaries between different types of sensitive data and whether existing classifications need to be expanded or redefined


How do we regulate the research and development of sentient computing and AI systems to prevent abominable outcomes?

Speaker

Jan Kleissen


Explanation

This addresses the intersection of neurotechnology with advanced AI systems and the need for proactive regulation before potentially dangerous developments occur


How can we ensure neurotechnologies are generalizable and usable across all global populations, not just those from the Global North?

Speaker

Damian Eke


Explanation

Current neurotechnologies may embed values and design assumptions from their developers, potentially making them less effective or appropriate for other populations


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.