Global Standards for a Sustainable Digital Future
8 Jul 2025 11:30h - 12:30h
Session at a glance
Summary
This discussion focused on how technical standards can drive progress in sustainability, digital connectivity, and accessibility within a global multi-stakeholder framework. The session featured expert speakers from IEEE’s Standards Association discussing different aspects of standards development for emerging technologies, particularly artificial intelligence and digital health applications.
Maike Luiken emphasized that standards serve as bridges between academic research and practical implementation, highlighting the evolution from purely technical standards to “enviro-sociotechnical standards” that incorporate sustainability and ethical considerations. She discussed how modern standards development includes environmental impact assessments, circular economy principles, and ethical guidelines, citing IEEE’s initiatives like Ethically Aligned Design and Planet Positive 2030. Luiken stressed that standards now address everything from green data centers to ethical AI considerations, moving beyond traditional technical integration requirements.
Dimitrios Kalogeropoulos focused on AI applications in healthcare, arguing that standards must be collaborative, inclusive, and dynamic to accommodate rapidly evolving technologies. He advocated for “evidence sandboxes” – controlled environments where stakeholders can test AI applications for compliance with ethical, security, and safety criteria before real-world deployment. Kalogeropoulos emphasized the importance of transparency in AI systems, particularly regarding data provenance and the growing problem of “synthetic truth contamination” where AI models are trained on AI-generated content, leading to degraded accuracy.
The discussion revealed significant challenges in standards development, including the need for broader global participation, particularly from the Global South, and addressing power imbalances where major technology companies effectively set de facto standards. Participants explored how to incorporate qualitative aspects like ethics and human rights into traditionally quantitative technical standards, with suggestions including threshold-based approaches and metadata standards that focus on behavioral and contextual information rather than just technical specifications.
The conversation concluded with recognition that standards development must evolve to keep pace with rapidly advancing technologies while ensuring inclusive participation and addressing real-world implementation challenges through collaborative, adaptive approaches.
Keypoints
## Major Discussion Points:
– **Sustainability and Ethics Integration in Technology Standards**: The discussion emphasized moving beyond traditional technical standards to include environmental stewardship, climate change considerations, and ethical frameworks. Speakers highlighted the need for “enviro-sociotechnical standards” that incorporate sustainability by design and address the intersection of technology, environment, and social impact.
– **AI Standards and Healthcare Applications**: Significant focus on developing standards for AI in healthcare, particularly addressing bias, transparency, and accountability in AI systems. The conversation covered challenges with AI model training data, synthetic truth contamination, and the need for metadata standards to ensure AI systems are trustworthy and equitable.
– **Global Participation and Inclusivity in Standards Development**: A central theme was the critical need for broader, more diverse participation in standards development, especially from the Global South and underrepresented communities. Speakers discussed challenges in achieving meaningful multi-stakeholder collaboration and strategies for making standards development more inclusive.
– **Regulatory Sandboxes and Evidence-Based Implementation**: Discussion of “evidence sandboxes” as controlled environments for testing AI applications and standards compliance, particularly in highly regulated sectors like healthcare. This included exploring how regulatory frameworks like the EU AI Act can work with standards development.
– **Bridging the Gap Between Innovation Speed and Standards Development**: Addressing the challenge that emerging technologies (like ChatGPT) often outpace standards development, leading to potential misuse and negative consequences. The conversation explored how to make standards development more agile and responsive to rapid technological change.
## Overall Purpose:
The discussion aimed to showcase how technology standards can drive progress on climate action, digital accessibility, and resilient infrastructure while exploring collaborative approaches to standards development in the digital economy era. The session was designed to facilitate knowledge sharing and encourage broader participation in IEEE’s standards development processes.
## Overall Tone:
The discussion maintained a collaborative and constructive tone throughout, with speakers demonstrating expertise while remaining open to questions and dialogue. The atmosphere was academic yet practical, with participants actively engaging in problem-solving discussions. The tone became increasingly interactive as the Q&A session progressed, with audience members contributing substantive questions and the speakers responding with detailed, thoughtful answers. There was a consistent emphasis on invitation and inclusion, with multiple speakers encouraging audience participation in standards development work.
Speakers
**Speakers from the provided list:**
– **Kathleen A. Kramer**: Opening speaker, affiliated with IEEE, a global community of 500,000 members in 190 countries engaged in standards development
– **Karen Mulberry**: Workshop moderator/facilitator, introduces speakers and manages Q&A sessions
– **Maike Luiken**: Chair of a standards working group addressing sustainability, environmental stewardship, and climate change in professional practice; vice chair of another working group; expert in sustainability and standards development
– **Dimitrios Kalogeropoulos**: Expert in AI in healthcare, focusing on building bridges for tomorrow’s population; works on applying technology to healthcare; has experience in standards development since 1992
– **Heather Flanagan**: Chair and participant in several standards organizations including IETF, W3C, and OpenID Foundation
– **Priyanka Dasgupta**: Representative from IEC (International Electrotechnical Commission), another standardization organization
– **Kiki Wicachali**: Senior Technology Advisor at UNICEF
– **Shamira Ahmed**: Researcher
– **Participant**: Multiple unidentified participants who asked questions during the session
**Additional speakers:**
– **Yuhan Zheng**: Young professional within IEEE, was supposed to speak about “building the future with collective and invigorating minds” from a university to initial career path perspective, but was not present during most of the session (mentioned as joining remotely but didn’t participate in the recorded portion)
– **Philip**: Psychologist interested in human dimension and community development in standards
– **Representative from World Digital Technology Academy**: Mentioned their organization is gathering experts for multi-stakeholder working groups and has published standards around generative AI and agentic AI
Full session report
# Comprehensive Report: Standards Development for Sustainability, Digital Connectivity, and Accessibility
## Executive Summary
This WSIS workshop discussion, moderated by Karen Mulberry, brought together experts from IEEE’s Standards Association to explore how technical standards can drive progress in sustainability, digital connectivity, and accessibility within a global multi-stakeholder framework. The session featured two primary speakers who examined different aspects of standards development for emerging technologies, with particular emphasis on artificial intelligence and digital health applications.
The conversation revealed an evolution in standards development philosophy, moving from purely technical specifications towards comprehensive standards that integrate environmental, social, and ethical considerations. Speakers emphasised the importance of inclusive global participation whilst acknowledging significant challenges in achieving meaningful multi-stakeholder collaboration. The discussion highlighted the tension between rapidly advancing technology and the pace of standards development, proposing innovative solutions such as “evidence sandboxes” and adaptive governance models.
## Opening Context and Framework
Kathleen A. Kramer opened the session by establishing IEEE’s global reach as a community of 500,000 members across 190 countries, emphasising the organisation’s commitment to multi-stakeholder standards development. Karen Mulberry, serving as workshop moderator, framed the discussion around how government, private sector, civil society, and technical communities can collaborate effectively in developing digital standards.
The session was originally planned to include three speakers, but Yuhan Zheng experienced technical difficulties and was unable to present, leaving Maike Luiken and Dimitrios Kalogeropoulos as the primary speakers.
## Sustainability and Environmental Integration in Standards
### Evolution Towards Enviro-Sociotechnical Standards
Maike Luiken, chair of the standards working group addressing sustainability, environmental stewardship, and climate change in professional practice, presented standards as “true bridges between research, academic research, new science, new findings, and the application, implementation, and use of this research output.” She argued that modern standards development must evolve beyond traditional technical specifications, stating “I like to actually to call them now enviro-sociotechnical standards.”
This evolution represents a shift in how standards are conceived and developed, integrating environmental and social considerations from the outset rather than treating them as add-ons to technical requirements. Luiken highlighted how standards can enable circular economy principles and sustainability by design, requiring consideration of the entire lifecycle of technologies from development through deployment to eventual decommissioning.
### Global Participation and Personal Outreach
Luiken emphasised a critical insight about achieving broad participation in standards development: “people want to be asked. They’re usually not just coming. They truly want to be asked.” She advocated for personal outreach and networking rather than relying solely on open calls for participation, noting that this approach has been more effective in engaging diverse stakeholders.
She also demonstrated practical engagement with AI technologies, sharing that she had tested ChatGPT with her own biography to understand how AI systems process information, illustrating the hands-on approach needed for effective standards development in emerging technologies.
## AI Standards and Healthcare Applications
### Dynamic Standards for Rapidly Evolving Technologies
Dimitrios Kalogeropoulos, an expert in AI applications in healthcare, argued that traditional static standards are inadequate for governing rapidly evolving technologies like artificial intelligence. He proposed “dynamic standards that accommodate uncertainty, support iterative learning, and evolve alongside the systems they govern.”
Kalogeropoulos reframed standards as “tools of trust” rather than mere compliance mechanisms, emphasising their role in embedding transparency, enabling accountability, and making complexity navigable across sectors and borders. He referenced the IEEE European Public Policy Committee’s “AI and Digital Health for Equity” position statement as an example of this approach.
### Evidence Sandboxes and Collaborative Testing
A key innovation proposed by Kalogeropoulos was the concept of “evidence sandboxes” – controlled environments where stakeholders can test AI applications for compliance with ethical, security, and safety criteria before real-world deployment. He deliberately avoided the term “regulatory sandboxes,” preferring “living labs” and “collaborative spaces” that bring together communities to test standardised solutions.
This approach addresses the challenge of ensuring AI systems are safe and effective whilst allowing for innovation and experimentation, providing a framework for collaborative standards development that includes all relevant stakeholders in the testing and validation process.
### Synthetic Truth Contamination
Kalogeropoulos introduced the concept of “synthetic truth contamination,” describing how AI models trained on AI-generated content experience degraded accuracy and increased hallucination. He noted that “we are already dealing with this reality” and characterised it as “not just a technical glitch” but “a structural vulnerability” that poses systemic risks to AI development.
He used the analogy of corner protectors to illustrate how standards should create safe environments: “You put corner protectors there. This is what we need to do about AI now.” This framing emphasises the protective and enabling role of standards rather than their purely regulatory function.
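The metadata-threshold idea behind this discussion can be pictured as a simple gate: a provenance record attached to a training dataset, checked against a disclosure requirement and a bound on the share of AI-generated content before the dataset is used. The sketch below is purely illustrative; the field names, the 20% threshold, and the gating logic are assumptions for this example, not drawn from any published IEEE standard.

```python
from dataclasses import dataclass

# Hypothetical provenance metadata for a training dataset.
# Field names and the default threshold are illustrative assumptions,
# not taken from any existing standard.
@dataclass
class DatasetProvenance:
    name: str
    records_total: int
    records_synthetic: int   # items known or suspected to be AI-generated
    sources_disclosed: bool  # were upstream sources documented?

def passes_threshold(meta: DatasetProvenance,
                     max_synthetic_fraction: float = 0.2) -> bool:
    """Gate a dataset before training: require disclosed sources and
    a bounded share of AI-generated content."""
    if not meta.sources_disclosed:
        return False
    fraction = meta.records_synthetic / meta.records_total
    return fraction <= max_synthetic_fraction

clean = DatasetProvenance("clinical-notes-v1", 10_000, 500, True)
murky = DatasetProvenance("scraped-web-mix", 10_000, 6_000, False)
print(passes_threshold(clean))  # True
print(passes_threshold(murky))  # False
```

A real standard would of course need far richer criteria (lineage, consent, bias audits), but even this toy gate shows how machine-readable metadata could turn an aspiration like “transparency” into a testable checkpoint.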
## Key Discussion Points and Q&A Insights
### Accessibility and Participation
Karen Mulberry clarified an important point about IEEE standards participation: “you don’t need to be an IEEE member to participate in standards development – you just have to show up and be interested.” This accessibility is crucial for achieving the multi-stakeholder participation that speakers emphasised throughout the session.
### Power Imbalances and Market Forces
Participants raised concerns about power imbalances in standards development, particularly the ability of major technology companies to impose de facto standards through market dominance. One participant noted that “the key players are AWS and Microsoft and Meta. And they basically set their own standards and they try to impose them on others because they have the power to do so.”
Luiken acknowledged that “market adoption sometimes overtakes formal standards development, creating de facto industry standards,” presenting this as part of the natural evolution where effective solutions gain adoption.
### Qualitative Standards Challenges
Kiki Wicachali from UNICEF raised important questions about incorporating qualitative aspects like ethics and child rights into technical standards, noting that “quantitative standards are easier to develop than qualitative aspects like ethics and child rights.” This challenge reflects the broader evolution towards standards that must address human values and social impacts alongside technical functionality.
### Data Quality and Crowdsourcing
Priyanka Dasgupta from the International Electrotechnical Commission raised questions about maintaining data quality when crowdsourcing inputs for AI dataset standards development, highlighting the challenge of balancing inclusive participation with quality assurance.
### Cross-Community Collaboration
Heather Flanagan, Chair and participant in several standards organisations including IETF, W3C, and OpenID Foundation, highlighted the challenge of bridging gaps between different standards communities that don’t typically interact with each other, noting the need for better coordination to create more comprehensive and interoperable standards.
## Certification and Implementation
Karen Mulberry highlighted the role of certification programs in attesting to compliance with ethical AI standards, noting that IEEE has certification programs that can demonstrate adherence to standards and build trust in AI systems. She mentioned specific opportunities for collaboration between different standards organisations on AI and certification work.
## Future Directions and Collaboration Opportunities
The discussion revealed interest in continued collaboration, with representatives from various organisations expressing willingness to work together on standards development. A representative from the World Digital Technology Academy expressed particular interest in collaboration opportunities.
The conversation demonstrated both the potential and limitations of current standards development processes. While there was consensus on fundamental principles like inclusivity, transparency, and ethical considerations, significant challenges remain in translating these principles into effective practice, particularly regarding the pace of technological change, global participation, and enforcement mechanisms.
## Conclusions
The discussion highlighted a standards development community adapting to govern emerging technologies in an inclusive and effective manner. The evolution towards what Luiken termed “enviro-sociotechnical standards” represents a significant expansion in thinking about the role and scope of technical standards.
Key themes included the critical importance of personal outreach for inclusive participation, the need for adaptive governance models like evidence sandboxes that can keep pace with technological change, and the ongoing challenge of integrating qualitative considerations like ethics and sustainability into technical specifications.
The proposed solutions offer promising directions for future development, but their effectiveness will depend on the standards community’s ability to address underlying challenges related to power imbalances, global participation, and the rapid pace of technological advancement while maintaining collaborative approaches and technical rigour.
Session transcript
Kathleen A. Kramer: We are a global community of 500,000 members in 190 countries and a multi-stakeholder model. We develop and support standards that reflect an open collaboration of expertise and experience, ensuring relevant and impact in global, real-world contexts. Today’s session will showcase how technical standards designed with sustainability, connectivity and accessibility in mind are helping to drive progress on climate action, expanded digital reach and resilient infrastructure. I encourage all of us to use this opportunity not only to learn from each other, but to reflect on how we can further collaborate and co-create because only through shared effort can we build a truly sustainable digital future. Thank you for being part of this conversation and for your commitment to advancing technology for the benefit of humanity.
Karen Mulberry: Thank you, Kathleen. I’d like to welcome you to our workshop, where I’ve got three experts who have been actively engaged in standards development and who can provide their perspectives on how you can build a standard and how you look at a sustainable future. We’re all entering the age of the digital economy, so there’s a lot that technology standards enable and a lot to consider as you develop a standard. Now, our three expert speakers today are Maike Luiken, an expert in sustainability, who will talk about building the path to sustainability with technology standards. Yuhan Zheng, who is joining us remotely, on building the future with collective and invigorating minds. She’s one of the young professionals within IEEE who are looking at this from a lens from, you know, the university to their initial career path on building a standard and what that means for them. And then Dimitrios, who has a very long last Greek name that I’m sure I’m going to butcher, Kalogeropoulos, close maybe? That was very good. Thank you. He’s going to be talking about building bridges for tomorrow’s population, looking at applying technology to healthcare and the ramifications and opportunities that are there. So I’d like to turn this over to Maike to start us off.
Maike Luiken: Well, thank you very much for the lovely introduction, and welcome everybody to our session. It’s an honor to be here and to have the opportunity to talk about one of my favorite subjects. So, as Karen indicated, I do work on standards development. I’m chair of one standards working group; it’s addressing sustainability, environmental stewardship and climate change in professional practice, and we call that a recommended practice. I’m vice chair of another and work with a couple more. So why standards, and what are the emerging trends? Standards, as far as I’m concerned, are a true bridge between research, academic research, new science, new findings, and the application, implementation, and use of this research output. And so standards give us essentially a common language. We are no longer talking about just making technology work in technology standards; we are looking much further, in terms of including the impact of the use of this standard on the environment and on people. Hence, as we talked earlier, ethical or socio-technical standards. We talked about ethical standards for AI. So we are really looking at the intersection between standards development for technology and including sustainability considerations in that development. So topics are green data centers, like modular data centers. We’re looking at blockchain, AI and automation. We’re looking at efficiency, of course, green digital transformation and the global energy systems transformation, all of it moving from linear to circular resource models now. We didn’t do that 20 years ago. What plays into the space are, of course, regulation and compliance in different countries, different jurisdictions. I mentioned here a few which are European-centered: the AI Act, the CSRD and the digital product passport.
As I mentioned already, one of the outputs or impacts of standards is harmonized language and commonly agreed terminology. Once a number of companies use a particular standard, or it’s part of a regulation, we now speak a common language. So they help to ensure that digital and other solutions are designed with interoperability, security, safety, and now sustainability in mind, and in that sense help reduce waste, enhance efficiency, and have a positive social impact. I like to actually call them now enviro-sociotechnical standards. It’s a mouthful, but at least it speaks to the facts that we’re dealing with. I already mentioned the impact on market and supply chain stakeholders. Important is, of course, scalability: with standards we can scale, and this greatly reduces integration issues and risks, based on standardized protocols. We are really looking for implementation of a circular economy and sustainability principles by design. Actually, by design is part of the definition of a circular economy, and otherwise we design for de-manufacturability and no waste from the outset, not as an afterthought. And ultimately, of course, standards lead to technical implementation of regulation. A couple of examples out of IEEE, and I’m focusing here on Ethically Aligned Design, which speaks to AI, and Strong Sustainability by Design, which of course speaks to sustainability, as opposed to, say, the energy-related standards. And so we have formed initiatives like Ethically Aligned Design and Planet Positive 2030 to ultimately come up with recommendations and then build standards based on those recommendations.
And the initiatives are all from the bottom up, with 100, 200, 300 people from around the globe and different backgrounds. A couple of examples of standards. Out of Ethically Aligned Design comes the 7000 series of IEEE standards. And here’s one: it’s the standard 7014, which came out in 2024. It’s a standard for ethical considerations in emulated empathy in autonomous and intelligent systems. Another standard that relates to WSIS is the standard for an age-appropriate digital services framework. And another one is a standard for online age verification. So standards have gone far beyond looking at integration, say, of microgrids into the grid, which is 1547, all the way to ethical considerations in the design of services and platforms. So let me wrap up with a conclusion. The global impact, then, is, on the one hand, acceleration of innovation. On the other hand, we are looking at building bridges, and I should have mentioned that last, to link to you. So, building bridges between policymakers, government and institutions. If we are including ethics and sustainability in standards development, it leads to change in human values and ethical considerations. Of course, we promote technical governance. We protect children’s rights. And we truly work on making the planet, in other words our biosphere, more sustainable. And with that, I thank you very much and turn it back.
Karen Mulberry: Thank you very much, Maike. I also want to let everyone know that after our experts have spoken, there’ll be some opportunities to ask them questions on their presentations and their experiences. With that said, I’d like to move on to Yuhan, who is joining us remotely. Yuhan, are you there? No, she’s not there. Okay, thank you. Dimitrios, why don’t we move on to you and your perspective?
Dimitrios Kalogeropoulos: Yeah, hello, everyone. Forgive me, but I will read. So the title for me today is Building Bridges for Tomorrow’s Population. I’m going to sort of delve into AI in healthcare specifically. But I hope that my subject will cover domains outside healthcare more broadly. So as the world becomes more interconnected, our approach to population health, along the foundation of productivity and sustainability, faces an unprecedented opportunity to evolve. But treating illness alone is no longer enough. We must design systems that actively promote well-being, equity and resilience. And to do that, we must fundamentally rethink how we govern, collaborate and design through technology. Global standards are essential. They ensure that digital innovations, especially in healthcare, are scalable, secure and accessible. But beyond the technical layer lies a deeper challenge. Digital health and artificial intelligence offer the promise of better care access and coordination. But left unchecked, they risk reinforcing inequities. Systems meant to close care gaps may instead deepen them by reproducing bias, excluding communities or simply being designed without them. Global leaders have raised this concern. The UN, World Health Organization and OECD have all called for stronger digital cooperation to bridge divides and accelerate the sustainable development goals. But declarations alone cannot solve implementation challenges. Healthcare illustrates this clearly. It’s where digital cooperation is most difficult and most urgently needed. The question is this. In a domain as regulated and standardized as healthcare, how is it that we so often fail to meaningfully include digital technologies in the very frameworks meant to support multi-stakeholder collaboration? And this has been the case for many, many years. The answer in part lies in how we develop standards.
For them to be effective, they must be collaborative, shaped through open, inclusive processes that reflect diverse needs, values and jurisdictions. That’s how we ensure that standards are not only technically sound, but socially relevant. The other part lies in the kind of standards we pursue. Too often, standard setting codifies what is already known. Subtle technical criteria that are easy to measure but slow to evolve. In fast-moving domains like AI and digital health, we need a different approach, dynamic standards that accommodate uncertainty, support iterative learning, and evolve alongside the systems they govern. And we have two very important projects going on within IEEESA right now. These are not just tools of compliance. They are tools of trust, designed to embed transparency, enable accountability, and make complexity navigable across sectors and borders. So to illustrate what I’m trying to get to, let me use public health as an example. If we build the right digital infrastructure, and there’s a lot of talk within WHO in delivering just that in healthcare, we can create an AI superhighway for health, a blueprint for systems that are equitable, resilient, and globally connected. This ambition aligns directly with the UN SDGs and the World Summit on the Information Society. In Europe, legislative efforts such as the EU AI Act aim to enable that vision based on a clear principle. Laws define essential requirements, standards interpret them. But how can we standardize the path to digital inclusion on which AI models so heavily rely on? What would these standards look like? Today, many AI ethics frameworks advocate for fairness, transparency, and accountability. But most lack testable thresholds, a way to move from aspiration to application. First, standards must make uncertainty visible and shared. Transparency must become a design feature, not a compliance task. 
Second, they must support governance of future data ecosystems, ecosystems capable of disclosing their strengths, limitations, and risks. And we have a long way to cover in that respect. In this light, disclosure becomes a primary goal of standard setting. Alongside disclosure, we need to make sure that the data we produce is accessible to everyone. Accountability must be built into systems through standards that clarify obligations, not dilute them. Standards that everyone can use. That’s where threshold checkpoints matter. Data sets used to train or fine-tune models must be embedded with metadata. That means transparency criteria. And this is how we also tackle the deeper risk, synthetic truth contamination. As more models are trained on AI-generated content, originality and factual accuracy degrade. The knowledge space becomes recursively polluted and models hallucinate. Distorted information is recycled. They loop in on themselves. Accuracy declines and costs rise. And we are already dealing with this reality, albeit we have had LLMs in our lives for less than two years. This is not just a technical glitch. It’s a structural vulnerability. And some models are already masking this behavior, which is a lot more concerning. They hide it, and they’re very good at it. So these systems are not sustainable.
Karen Mulberry: Thank you very much, Dimitrios. Let me check one more time. Sorry, I turned it off, I turned it on, I turned it off. I’d like to open up the floor to see if anyone has any questions or any responses to what you’ve heard so far today. And I think what I’d like to do is start us off with a question of my own. So, Dimitrios, how can the government, the private sector, civil society, and the technical community, which are the foundations of the WSIS discussions, collaborate in developing digital standards? I mean, you touched upon it from your expertise, but do you have any other suggestions we should consider as we move forward through this multi-stakeholder process?
Dimitrios Kalogeropoulos: Thank you, Karen. That’s an excellent question. Some of it is immediate, some of it is longer term. I will try to answer the question in the European context, because I think that’s closer to the reality right now. So we have, first of all, the EU AI Act, which, as we all know, is in the process of staged implementation. The GPAI code of practice, the General-Purpose Artificial Intelligence code of practice, is on the way as well. And one of the provisions in that is to set up evidence sandbox facilities in different member states. So these are entities that are part of the regulatory system, and I’m talking about domains where systemic risk is more pronounced. So healthcare is one of those, which is my domain. And in those environments, solutions can be tested for compliance with various ethical, security and other criteria before they can be released in the open world, in reality, in the real world. So we have two standards development projects that are supposed to support these facilities, and these facilities have been designed and are being designed. For example, the MHRA in the UK is in the process of designing the AI Airlock, with specific mechanisms that will eventually lead to evidence pathways and adoption of artificial intelligence tools in healthcare. Different countries are working on different projects. But the general idea is that you bring together all of these stakeholders that have an impact on the adoption of the tools to discuss among them how to best navigate this regulatory complexity. So to come back to your question, we have the mechanisms. Also, the General-Purpose Artificial Intelligence Code of Practice, which is implementing the AI Act, has provisions for standard setting in consensus with industry stakeholders for various aspects related to copyright protection, for example, and model training, model tuning and refining. But how are standards relevant here?
If we somehow manage to develop a set of metadata, a set of metastandards that govern the space of implementation, that provide guardrails for what we consider ethical and safe, and then we bring all those stakeholders within the aforementioned mechanisms, then we have two benefits. One of them is we come out with standards for everyone. And the other one is that we do that in a controlled environment. So everybody learns from each other within those controlled environments.
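The “meta-standard” idea described above can, in its simplest reading, be sketched as a required set of metadata fields that any compliant dataset description must declare, rather than prescribed values. A minimal, purely illustrative Python sketch (the field names are assumptions for illustration, not taken from any IEEE or EU document):

```python
# Hypothetical meta-standard: it does not prescribe values, only which
# metadata a compliant dataset description must declare (invented names).
META_STANDARD_FIELDS = {"provenance", "consent_basis", "ai_assisted", "intended_use"}

def missing_fields(description: dict) -> list[str]:
    """Return the meta-standard fields absent from a dataset description."""
    return sorted(META_STANDARD_FIELDS - description.keys())

# A description that declares only two of the four required fields:
print(missing_fields({"provenance": "registry X", "ai_assisted": False}))
# → ['consent_basis', 'intended_use']
```

The point of the sketch is that the guardrail is a checklist over metadata, so every stakeholder can test their own datasets against it inside the controlled environment.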
Karen Mulberry: Thank you very much, Dimitrios. Maike, any thoughts on how various communities can get together and develop digital standards, especially from your work with the Sustainability Committee?
Maike Luiken: Actually, I wanted to add one point, and that is, to my understanding, the UK put together a sandbox for actually testing applications, the AI Airlock. I love that. That’s right. And Canada is copying this now, starting a sandbox as well. So that is actually helping, not necessarily the standards development, but certainly the developers, to see whether, A, things work, and B, whether whatever is being developed performs to standard. So I think that’s a necessity. We need the standards, but we also need the testing. And that’s one of the issues that I don’t think we have figured out very well: how to actually keep standards development up with technology development.
Dimitrios Kalogeropoulos: No, I totally agree with you. But look, it’s the very beginning. We have to come to terms with the fact that we are in the beginning.
Maike Luiken: Okay, agreed.
Participant: Yes, I have a question for you. Yes, I have a question for Dimitrios. I mean, you talked a lot about Europe and the European standards, but I mean, since we spoke earlier about standardization being a process on a global scale, then with local implications, perhaps there is a chance you could also talk briefly about the Global South and standardization among stakeholders in the Global South?
Dimitrios Kalogeropoulos: Yes, thank you. That’s an excellent question as well. Look, let me try to approach that in a different way. First of all, standards development, and what I talked about today, needs to be participatory, and that means that everyone participates. If not everyone participates, the standards are not going to be fair for everyone. And as the standards guide AI adoption, they will eventually also guide the truth that goes into the models of the artificial intelligence; we’re going to just propagate existing standards. … leading the development of the different writing groups within the standard. That’s not a bad thing. What we are planning to do once we have… So, we want to create a container of knowledge where knowledge transfer can take place, but we also need to be very careful about everyone participating in the process. So, after long and careful deliberation, we have come up with a model where we think we can create an initial container of know-how, a first set of this standard, get it out there, and then have a process to disseminate it as broadly as we can globally, so that everyone participates in defining the different metadata and parameters that go into this standard, which basically is about preparing datasets for AI around the world. So we still have a lot of work. It is most certainly one of our key priorities. We are convening, as IEEE SA, a global public health forum in London on November the 6th, and this is going to be very much about generative AI and fast prototyping to develop knowledge based on standards for data. And there’s a lot of work ongoing within IEEE in that direction. It’s only the beginning. We still have a lot of work to do.
So, yeah, I mean, if anyone wants to join us or support or in any way, make this a reality, you’re very welcome to reach out and talk to me. And thank you for posing that very important issue. Thank you.
Karen Mulberry: We have some other questions, so.
Heather Flanagan: My name is Heather Flanagan. I’m a chair and participant in several standards organizations, including the IETF, the W3C, the OpenID Foundation, and a few others. And I absolutely agree that the need for broad participation is critical to making a standard work. One of the reasons I’m here is because, of course, the people in those communities are over there, and they’re not here, and there’s very, very little crossover. And I don’t know how to fix it. That’s the whole reason I’ve come to this event: to try to understand more clearly how to bridge that gap. I know as chair that I want broader participation, but I don’t know how to get it. So I’d really love any suggestions or feedback on how to make what we all agree needs to happen actually happen.
Maike Luiken: I’d love to take this question. It’s a formidable question. I face the same issue, and I’m pretty sure most people who have been chairing a standards working group have run into this one from time to time. People want to be asked. They’re usually not just coming; they truly want to be asked. We had someone at our booth who asked about a couple of things, and we got talking. We talked for an hour, and yes, now he is interested in joining a standards working group. So what it really takes is: use the network of the folks that are already in your working group, look for suggestions on who else could join, and do a personal ask as far as you can get. That’s the best way I have learned how to do it, and usually at events like this, or other conferences I go to, I issue the invitation, like Dimitrios did: if you want to join us, talk to people in more detail. I haven’t found a more effective way. But of course you can do the other, and that is talk about the work in public places like LinkedIn and others as much as you can, and you might get other folks who are interested. But never forget that people want to be asked.
Dimitrios Kalogeropoulos: Thank you. Can I also add perhaps a little bit from my experience. My first standards development project was in 1992, for artificial intelligence and agentic AI in healthcare. And then I went into 20 years of international development, also developing standards in different jurisdictions, different parts of the world. One thing I learned is that standards evolve with the people that develop them. If you try to push a standard onto the people who are supposed to develop it, they’re not going to join your groups. You really need to make the standards development process an expression of their own understanding of reality. And this is where standards become magical. People start expressing their own experiences, their own ways of doing things through the standards, and somehow those come together into final versions. And this is what I mentioned earlier: once you have that prototype from the people who can do it faster, you can release it to the world and then fine-tune it. Look, standards are like LLMs. If you observe how LLMs learn, the same thing should happen with standards. They’re just different layers of a very similar process. Thank you.
Karen Mulberry: One of the reasons why we like to have these workshops is to expose the work within IEEE, in hopes of attracting more people to participate, to add more voices and views to the work. So I have a question here and two along the back. And then our remote speaker has joined us, so I’d like to turn it over to her at that point.
Priyanka Dasgupta: Hello, everyone. I am Priyanka Dasgupta, again from another standardization organization, the IEC. So as we’re talking about standards, I quite liked the intervention you made, Dimitrios, and I was curious about the project you were speaking about, building on the fact that when we are talking about standards, they’re not just built for the global majority, they’re built with them. You mentioned the project where you will be opening it up to the public for them to be able to crowdsource the data. And I was genuinely curious about how you then ensure data quality within such work. Because one of the critical things when developing learning models or learning data sets for AI is that if you ask for input from everywhere, there’s always the question of bias, and of course also the question of how to maintain data quality. I was just curious to know how you intend to do that at such a broad level, although the inclusivity is definitely a step I completely support. So thank you.
Dimitrios Kalogeropoulos: I’m not sure I can answer your question in the time we have here, but one response would be: join our project and find out. We would like to have all the help we can get. But I think I mentioned it in my talk. We have been focusing on the data of the standard; we have been focusing on being prescriptive. If we focus on the metadata instead, on being descriptive, if we focus on telling the people who participate in standards development what kind of data we need, they will deliver it for you. So I think the broader idea is that instead of developing standards that tell you what to do, develop standards that tell you how to develop your standards. This is what we’re trying to do, and this is the whole concept. Now, when it comes to bias in data, this particular project I mentioned a minute ago has been ongoing for more than 20 years. So there’s a lot of work behind it, a lot more than I can go through right now. But rest assured, it relies on existing standards and takes into consideration work that has been going on for a very long time to make sure that it doesn’t introduce bias. It essentially builds mechanisms to fight bias.
Priyanka Dasgupta: Yeah, no, thank you. My question was more around sourcing the inputs for the metadata part; it wasn’t questioning the development of the standard itself or how to mitigate bias in that sense. But thank you, I think you answered my question. Yeah.
Dimitrios Kalogeropoulos: Transparency is an issue. For example, if you develop a data set, you need to be able to ask your data set whether an AI was used to produce it. It’s a piece of metadata, and nobody builds that into their standards, do they? Please.
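The provenance flag described above could be carried as a small metadata record attached to every dataset. A hedged sketch, assuming invented field names (none of these come from a published standard):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DatasetProvenance:
    """Illustrative provenance metadata for a training dataset."""
    name: str
    source: str                             # where the raw data came from
    ai_assisted: bool = False               # was an AI used to produce or label it?
    generating_model: Optional[str] = None  # which model, if ai_assisted

def needs_contamination_review(record: DatasetProvenance) -> bool:
    # AI-produced data gets extra scrutiny to limit "synthetic truth
    # contamination": models being trained on other models' output.
    return record.ai_assisted

notes = DatasetProvenance(name="clinical-notes-v1", source="EHR export",
                          ai_assisted=True, generating_model="unspecified LLM")
print(needs_contamination_review(notes))  # True
```

The design choice is that the question “was an AI used to produce this?” becomes a mandatory, machine-readable field rather than something reconstructed after the fact.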
Kiki Wicachali: Greetings all. Allow me to borrow a microphone. Kiki Wicachali, Senior Technology Advisor in UNICEF. My question is around quantitative and qualitative. Well, a little bit of that. As technical people and what goes into standards, it is usually, let’s call it easier, for lack of a better word, to build quantitative. How should we build qualitative aspects, ethics, child rights into standards, number one. Number two, we’re talking about new standards or developing standards. Second part of my question is, we already have so many ethics by design, digital by design. How do we enforce compliance?
Dimitrios Kalogeropoulos: These are all very good and very valid questions. I mean, my immediate response is: why not form a working group among us and, you know, unpack these issues and address them together? We need to find a way to do that. But this is my sustainable-process response. In terms of an immediate answer, it’s thresholds that you need to think about. So when it comes to issues of ethics, you need to consider at what point you need to define what unpacks your ethics issues. You need to make sure that, for example, you have age-group participation in child locks; you can have an automatic child lock for specific digital access above, I don’t know, 10 or 12, depending on the content. These are all thresholds, and you need to adjust your thresholds based on different parameters. What is the content? How is it accessible? I’m just giving a very pedantic example to explain that there are ways in which you can actually standardise ethics, both qualitatively and quantitatively. What I can say, though, is that I totally agree with you that quantitative we have had enough of. It’s the qualitative bit which is really difficult, and this is why we need to start talking about metadata. We don’t have enough of that. Behaviour is defined in metadata sets, not in data sets. Do I drink coffee or do I prefer tea? That is metadata. I will drink something in the end, and the purpose is to have some caffeine or whatever, but the preference is what actually defines us as people. If you look at our behaviour, you look at our metadata, at the way that we actually navigate our lives. And this is where broad participation is very important, because it’s about discovery. Open up the process to many people. To do that, you need to have the right instruments. How are you going to do that?
The way that we approach standards development doesn’t work right now. There needs to be a new way, and that’s why I mentioned the regulatory side; that’s why I mentioned sandboxes. There needs to be a closed loop, an agile development process, where we develop the standards together with the communities. For example, in the Global South, access to care is very difficult. We need mobile health applications, telehealth applications, that will enable access to high-quality health care and preventive health care. We’re not using that enough. How can we allow those applications into a very highly regulated environment? Well, we need sandboxes. But so what? What are the sandboxes? Nobody understands what sandboxes are. I would recommend we let go of the term regulatory sandbox and introduce the term evidence sandbox. Why can’t we finally build a sandbox environment where we allow common evidence to be developed and used within domains, and regulate that? This is where the opportunity for standards comes in, because the regulatory sandbox is about the law, and we have covered the law. Now we need to cover the application of the law. We need to go practical. We need to bring communities into play. We need to equip them with the tools and standardise those tools in an adaptive manner. Let them use them, see what comes out of it, monitor very closely, start with an initial data set, and develop the evidence and the regulatory paths for them. Make those global, eventually. We have published a policy on this, if you’re interested in reading it, and I think this is the shortest way to read about these views. All you need to do is go to the IEEE European Public Policy Committee and look for our Artificial Intelligence and Digital Health for Equity public position statement, and you will see all this written in a short policy statement.
Kiki Wicachali: Or I’ll take you up on your offer and get in touch with you, and then we can have a more…
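The threshold idea from the answer above can be made concrete with a toy child-lock check. The age cut-offs below are invented placeholders; in practice, as the speaker notes, they would be negotiated per content type and context by the standard’s participants:

```python
# Invented, illustrative age thresholds per content category; a real
# standard would negotiate these values and their adjustment parameters.
MINIMUM_AGE = {"general": 0, "teen": 13, "mature": 18}

def child_lock_engaged(user_age: int, content_category: str) -> bool:
    """Return True when access should be blocked pending parental consent."""
    # Unknown categories default to the strictest threshold.
    return user_age < MINIMUM_AGE.get(content_category, 18)

print(child_lock_engaged(11, "teen"))  # True: below the 13-year threshold
print(child_lock_engaged(15, "teen"))  # False
```

The qualitative ethics question (“should a child see this?”) is thereby reduced to quantitative, adjustable parameters, which is exactly the standardisation move being described.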
Karen Mulberry: I’d like to add to that, too. At the Standards Association we have some guiding frameworks. Ethically Aligned Design was developed in 2017 to look at bias and ethics in AI development in particular, providing some frameworks for consideration as you look at technical standards along that pathway. To add to that, we’ve also looked at certification programs on top of applying Ethically Aligned Design, such that if you comply with a standard in the 7000 series, which is where AI standards reside, we can go back and certify that you comply with the standard, that you followed all of the requirements within it, so you can attest that you have a safe, trustworthy system that is ethically sound and used the appropriate data set in the way that it was defined. So, I mean, there is a body of work out there to try to move into the ethical approach and applications of technology. But again, technology resides in your hands and how you use it. So, to participate in the process would be wonderful, because we need more voices at the table to consider all of the various aspects. So, this is your invitation to come join us at IEEE and participate in our standards development. And you don’t have to be a member, you just have to show up and be interested. Maike, you got some comments?
Maike Luiken: Yeah, just one. An expression of ethics and values are the guardrails that we put in place. So the standards will talk to guardrails that protect against misuse of a particular technology. That’s one of the expressions that implements essentially our value system, or the value system of those who participated in the development of the standard. That’s just one way to look at it. And of course, those guardrails then become part of a certification program if a standard leads to that.
Participant: Yeah. So, you mentioned that one way to get people to talk together and to join and design new standards based on their shared experiences is to basically go talk to them and invite them, as you did yesterday with me, and thank you very much. But how do you deal with the power imbalance in the arena of standard definitions? The topic that I study myself as a researcher is environmental standards regarding data centers. And the key players are AWS and Microsoft and Meta. They basically set their own standards and try to impose them on others because they have the power to do so. Once they have these standards, the other players have to use them as well if they want to compete. And obviously, they design those standards to fit the vision of sustainability that they designed themselves. Thank you.
Maike Luiken: You have to ask the difficult question, don’t you? Thank you. So actually, what we often have is representatives of these various companies in the standards development groups, but representing themselves. So that is one side of the coin. The other is that when you talk about a company setting a standard, it’s a de facto industry-type standard, but it’s not a global standard. MPLS from Cisco, for example, was one of the protocols that de facto became the standard, because multi-protocol label switching was pushed forward faster than the actual standards development was happening. Sometimes the market overtakes the actual standards development, and there’s nothing we can do other than ultimately see which standard wins, and that is market adoption. We had a failure around standards development for asynchronous transfer mode, for example. So thank you.
Dimitrios Kalogeropoulos: Can I just add one short bit there? Never forget the policy of standards in your work. So policy is about adding layers. In policy, you can never cancel other people’s policies. You have to add another policy that builds on the other policies and that has to be an integral part of standards. So if you find enemies, it means your policy is not right. You need to adjust your policy to make them your friends. And that will eventually happen. It just takes a lot of patience. I hope that was of value.
Participant: Oh, hi. I’m from a new organization called the World Digital Technology Academy. Right now, we’re gathering experts from industry and academia to form multi-stakeholder working groups. We have published four standards around generative AI and agentic AI, and in the future we’ll do more about data governance. I’m very glad to hear IEEE is doing certification in addition to standards. We have also put together a certification program. So in the future, I would love to collaborate with IEEE to work on more new standards on AI and certification, and probably after this meeting we can talk more about this.
Karen Mulberry: I’m sure we would be very interested in collaboration and in hearing of your work and your approach. So I welcome that, and I will talk to you afterwards. And I believe we had another question then. Oh.
Shamira Ahmed: Thank you. My name is Shamira Ahmed. I am a researcher, and my question is for Dimitrios. You mentioned regulatory sandboxes, or, as you said, they should be evidence sandboxes, as a way to create guardrails and operationalize the standards and ethics that we have on AI. Regulatory sandboxes are not a new thing when it comes to managing the implementation of technologies. And you also mentioned, correct me if I’m wrong, that…
Dimitrios Kalogeropoulos: …gives you an ability, through standards, not through a common resource, not through a central database that belongs to a government; this is a purely democratic process. If there was a way, through standards, to create a global data space, look at the European Health Data Space. Think of a global health data space. Then what I would actually be working toward is to start with mobile health applications working for civic engagement, getting people to participate in their healthcare, and then create evidence pathways that go all the way up to global health policy. Okay, so is this possible? The question here is the regulatory sandbox. First of all, I don’t like the idea of a regulatory sandbox. The regulatory sandbox is a term which is being used to tell me it’s okay to test an application within this specific environment. In fintech and in other domains, I have no idea whether this succeeded or failed. But you have to consider that the European Medicines Agency, EMA, for example, works in a very similar way. It guides local authorities that regulate pharmaceuticals, it creates meta-policy, and it assists those authorities. So there is a structure you can build with which you can govern the process of evidence generation, and there’s no reason why it shouldn’t work. What is difficult there is addressing digital determinants.
Because if you want to have scale, and you haven’t addressed the digital determinants of health, if you haven’t made technology accessible to entire populations, this clearly is not going to work. I mean, this is a failure from the onset; we know this is going to be a non-starter. But for that, both the UN and the European Union and different organizations are actually funding infrastructure development. And we can’t wait for that infrastructure to be delivered and then think about what we’re going to do with it. What we’re trying to do is develop policies that will create those living labs. Does that resonate better with you, if we avoid the term regulatory sandbox? Living labs, collaborative spaces, where we can bring together communities and test out standardized solutions that deliver specific functions in healthcare. This is about population health. So the idea is that you mobilize, you engage the society to participate in improving their health through placemaking digital ecosystems. Now we have the digital; we need to go the other way around and create communities around it. We have been so set on using digital technologies to develop new pathways for healthcare, anticipatory healthcare, that we have gotten our heads too deep into this idea that it’s all digital. Well, it can’t be; it has to be real as well. So I completely agree with you. But again, I would urge you to read the policy document that we have set out and see if it answers your questions, and if it doesn’t, I welcome your participation in any dialogue. And if you find any weak points, we would love to look at them. We want to make this successful. We’re not trying to just enforce a solution that we believe is successful.
Participant: Thank you. Super interesting. I’m a psychologist. I’m interested in exactly what you’ve been discussing, which is how you bring the human dimension into play effectively and how you go beyond efficiency and effectiveness and ethics into the evolution of communities and individuals and leadership. I’d love to hear any of the experiences you have briefly of what you described as the requirement for standards development to go hand in hand with people development and where you’ve seen that happen, whether it’s people collectively, teams and communities, organizations, or whether it’s individual leaders and how that’s made a big impact. That’s my area of work. My name is Philip. Thank you.
Dimitrios Kalogeropoulos: Thank you for your question, Philip. Well, we are building exactly that now within WHO’s Strategic Partners Initiative for Data and Digital Health, at WHO Europe. I urge you to reach out and find out what it’s about. It’s about mental health specifically, but I cannot discuss the program here, so if you need further information, please reach out to WHO Europe; I’m sure the participation is open and you can still join the program or next year’s cohort. I can very briefly tell you that, no, unfortunately we don’t yet have concrete evidence as to whether it works successfully. For now, we know what the pushback is, as you probably know is expected, and we are finding ways to work around that pushback. And this is fairly common. So I think that standards development is the very idea that we can progressively develop new layers of knowledge and participation without people thinking that we overreach into their privacy.
Karen Mulberry: And we have one concluding question.
Participant: Okay, thank you for your excellent presentations. We can see that many frontier technologies, especially AI, are developing at a fast speed. Some new innovative technologies are completely groundbreaking, like ChatGPT in 2023, and they can have a huge impact on the industry immediately. But related standards often have not yet been established, which may lead to some negative consequences. Therefore, my question is: what can we do to better tackle this situation? Thank you.
Karen Mulberry: Well, Maike, much like your example in terms of MPLS, where something was set prior to the standard. I mean, how do you resolve that issue?
Maike Luiken: Well, we do continue to develop standards around AI. And of course, we talked here a lot about AI based on large language models and so on. There’s an awful lot more to AI than ChatGPT, right? So let’s not forget that, and there’s a lot of standards development around that piece. But in this particular case, we’re looking at the ethical use. And we are also looking at the issue that users were using, say, ChatGPT-3, when it came out, for functions it was not designed for. I personally went in and said, OK, I’m going to test this, and I had it write my biography. My name, first name plus last name: if I do a Google search, it’s always me; there’s no two people with the same name combination. And I tried this eight times. … So, I’m not sure how standards development will change misuse of a tool, right? The path forward here is either ChatGPT changing, which it has, like it gives you references today and so on, which it didn’t when it started out, or to educate people about what the functions of the different AI platforms are, always realizing that some level of hallucination may actually happen with all of them, okay? People have brought it up with the robo-taxi, with phantom braking. In other words, the autonomous system is misinterpreting signals and slowing down rapidly because it perceives a danger. So that is a fact, and a standard is not going to change that, right? But what we need to do is, and I go back to this, educate and communicate the actual capability of a technology, what it can do and what it cannot do. And that brings me back to our sandboxes, because we are not only testing and bringing new knowledge, we are also testing the guardrails.
Dimitrios Kalogeropoulos: Yeah, look, think of this. In a room, you have a child growing up. You don’t go to the child and say, look, don’t do that, this is a corner. You put corner protectors there. This is what we need to do about AI now; that’s what the standards do. You need to make the environment safe for the AI to grow up in, the way you make a room safe for a child, and those are the guardrails. You know, those little plastic sticky things that you put wherever you see a corner? That’s what we need to build. Thank you.
Karen Mulberry: Yes, and I think you’ve raised an issue that is always going to be a conundrum. You know, which comes first? The idea and thought of an application and the standard behind it to make it safe and trustworthy and globally applicable. Well, I’d like to conclude my remarks because we’re way over time and thank everyone very much for participating in our workshop, for your contributions to the dialogue we had today and supporting the multi-stakeholder process. So thank you very much. Thank you.
Maike Luiken
Speech speed
110 words per minute
Speech length
1739 words
Speech time
941 seconds
Standards serve as bridges between academic research and practical implementation, providing common language across disciplines
Explanation
Standards act as a crucial connection between new scientific research and findings from academia and their real-world application and implementation. They provide a harmonized language and commonly agreed terminology that enables different disciplines and stakeholders to communicate effectively.
Evidence
Once a number of companies use a particular standard or it’s part of a regulation, we now speak a common language. Standards help to ensure that digital and other solutions are designed with interoperability, security, safety, and now sustainability in mind.
Major discussion point
Standards Development and Sustainability
Topics
Digital standards
Disagreed with
– Dimitrios Kalogeropoulos
Disagreed on
Approach to standards development – prescriptive vs descriptive
Modern standards must include environmental and social impact considerations, moving toward “enviro-sociotechnical standards”
Explanation
Contemporary standards development has evolved beyond just making technology work to include the impact of using standards on the environment and people. This represents a shift toward what Luiken calls “enviro-sociotechnical standards” that encompass environmental, social, and technical considerations.
Evidence
We are looking at the intersection between standards development for technology, but including sustainability considerations in that development. Topics include green data centers, blockchain, AI and automation, efficiency, green digital transformation and global energy systems transformation.
Major discussion point
Standards Development and Sustainability
Topics
Digital standards | Sustainable development
Standards enable circular economy principles and sustainability by design rather than as an afterthought
Explanation
Standards facilitate the implementation of circular economy and sustainability principles from the initial design phase rather than adding them later. This approach ensures that sustainability considerations are built into systems from the outset.
Evidence
We are really looking for implementation of a circular economy and sustainability principles by design. By design is part of the definition of a circular economy, and otherwise we design for de-manufacturability and no waste from the outset, not as an afterthought.
Major discussion point
Standards Development and Sustainability
Topics
Digital standards | Sustainable development
Broad global participation is essential for fair standards, requiring personal outreach and invitation rather than waiting for volunteers
Explanation
Effective standards development requires diverse global participation, but people typically need to be personally invited rather than volunteering spontaneously. This requires active networking and outreach efforts from existing working group members.
Evidence
People want to be asked. They’re usually not just coming. They truly want to be asked. Use the network of the folks that are already in your working group and look for suggestions who else could join and do a personal ask as far as you can get.
Major discussion point
Inclusive Participation in Standards Development
Topics
Digital standards
Agreed with
– Dimitrios Kalogeropoulos
– Heather Flanagan
– Priyanka Dasgupta
– Participant
Agreed on
Broad participation is essential for effective standards development
Market adoption sometimes overtakes formal standards development, creating de facto industry standards
Explanation
In some cases, market forces and industry adoption move faster than formal standards development processes, resulting in de facto standards that become widely adopted before official standards are established. This can lead to situations where the market determines which standard wins through adoption.
Evidence
MPLS from Cisco was one of the protocols that de facto became standard because multi-protocol label switching was pushed forward faster than the actual standards development was happening. We had a failure around standards development for asynchronous transfer mode.
Major discussion point
Implementation and Compliance Challenges
Topics
Digital standards
Disagreed with
– Participant
Disagreed on
Market-driven vs formal standards development
Education about technology capabilities and limitations is crucial alongside standards development
Explanation
Standards alone cannot prevent misuse of technology; there must also be education and communication about what technologies can and cannot do. This is particularly important for AI systems where users may apply tools for purposes they weren’t designed for.
Evidence
I personally went in and said, OK, I’m going to test this [ChatGPT]. And I had it write my biography… I tried this eight times and it got it wrong eight times. People are using ChatGPT for functions that it was not designed for.
Major discussion point
Implementation and Compliance Challenges
Topics
Digital standards | Online education
Standards must address both technical functionality and ethical considerations through guardrails
Explanation
Modern standards development must incorporate both technical specifications and ethical values by establishing guardrails that protect against misuse of technology. These guardrails represent the implementation of value systems from those who participate in standards development.
Evidence
An expression of ethics and values are the guardrails that we put in place. So, the standards will talk to guardrails to protect against misuse of a particular technology. Those guardrails then become part of a certification program if a standard leads to that.
Major discussion point
Quality and Trust in Standards
Topics
Digital standards | Human rights principles
Standards cannot prevent misuse of tools but can provide guardrails for safe development environments
Explanation
While standards cannot completely eliminate the misuse of technology tools, they can establish protective measures and safe environments for development and testing. The focus should be on creating appropriate safeguards rather than trying to control every possible use case.
Evidence
I’m not sure how standard development will change misuse of a tool. The path forward here is to educate people about what the function of different platforms in AI are, and always realizing that some level of hallucination may actually happen with all of them.
Major discussion point
Technology Development Speed vs Standards
Topics
Digital standards
Dimitrios Kalogeropoulos
Speech speed
144 words per minute
Speech length
3743 words
Speech time
1553 seconds
Standards development should focus on metadata and descriptive rather than prescriptive approaches
Explanation
Instead of developing standards that dictate specific actions, the focus should be on creating descriptive standards that provide guidance on how to develop standards and emphasize metadata. This approach allows for more flexibility and adaptability in rapidly evolving fields like AI.
Evidence
If we focus on the metadata, instead of being prescriptive, being descriptive, if we focus on giving to people who participate in science development what kind of data we need, they will deliver it for you. Develop standards that tell you how to develop your standards.
Major discussion point
Standards Development and Sustainability
Topics
Digital standards | Data governance
Agreed with
– Maike Luiken
Agreed on
Transparency and metadata are crucial for AI systems governance
AI systems risk reinforcing inequities if not properly governed through inclusive standards development
Explanation
Digital health and AI technologies have the potential to improve care access and coordination, but without proper governance and inclusive development, they may actually worsen existing inequalities by reproducing bias or excluding certain communities from the design process.
Evidence
Digital health and artificial intelligence offer the promise of better care access and coordination. But left unchecked, they risk reinforcing inequities. Systems meant to close care gaps may instead deepen them by reproducing bias, excluding communities or simply being designed without them.
Major discussion point
AI Ethics and Healthcare Standards
Topics
Digital standards | Human rights principles | Digital access
Agreed with
– Maike Luiken
– Karen Mulberry
– Kiki Wicachali
Agreed on
Standards must address ethical considerations and human values, not just technical specifications
Standards must make uncertainty visible and support governance of future data ecosystems with transparency as a design feature
Explanation
Effective standards should acknowledge and make visible the uncertainties inherent in AI systems while supporting the governance of data ecosystems. Transparency should be built into systems as a fundamental design element rather than treated as a compliance requirement.
Evidence
Standards must make uncertainty visible and shared. Transparency must become a design feature, not a compliance task. They must support governance of future data ecosystems, ecosystems capable of disclosing the strengths, limitations, and risks.
Major discussion point
AI Ethics and Healthcare Standards
Topics
Digital standards | Data governance | Privacy and data protection
Agreed with
– Maike Luiken
Agreed on
Transparency and metadata are crucial for AI systems governance
Synthetic truth contamination poses structural vulnerability as AI models trained on AI-generated content degrade in accuracy
Explanation
As more AI models are trained on content generated by other AI systems, there is a recursive pollution of the knowledge space that leads to degraded accuracy, increased hallucinations, and higher costs. This represents a fundamental structural vulnerability in AI systems.
Evidence
As more models are trained on AI-generated content, originality and factual accuracy degrade. The knowledge space becomes recursively polluted and models hallucinate. Distorted information is recycled. They loop in on themselves. Accuracy declines and costs rise.
Major discussion point
AI Ethics and Healthcare Standards
Topics
Digital standards | Data governance
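The recursive degradation described here can be illustrated with a toy model. All parameters below (the share of synthetic content per training generation, the fraction of factual accuracy a model pass retains) are purely hypothetical choices for illustration, not measurements from any study:

```python
# Toy model of "synthetic truth contamination": each generation's training
# corpus mixes fresh human content (accuracy 1.0) with output from a model
# trained on the previous corpus, which retains only a fraction of that
# corpus's factual accuracy. Parameters are illustrative, not empirical.

def corpus_accuracy(synthetic_share: float, retention: float,
                    generations: int) -> list[float]:
    """Return corpus accuracy per generation under the mixing recursion:
    a_{n+1} = (1 - s) * 1.0 + s * retention * a_n
    """
    acc = 1.0
    history = [acc]
    for _ in range(generations):
        acc = (1 - synthetic_share) * 1.0 + synthetic_share * retention * acc
        history.append(acc)
    return history

trend = corpus_accuracy(synthetic_share=0.5, retention=0.8, generations=10)
# Accuracy declines monotonically toward a floor below 1.0 -- the knowledge
# space "loops in on itself" rather than collapsing all at once.
```

The recursion converges to (1 − s) / (1 − s·r), which makes the speaker's point concrete: even a steady inflow of fresh human content only bounds the decline, it does not undo it.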
Evidence sandboxes are needed to test AI applications in controlled environments before real-world deployment
Explanation
Rather than traditional regulatory sandboxes, there should be evidence sandboxes that allow for testing AI applications in controlled environments where all stakeholders can collaborate to navigate regulatory complexity and develop evidence-based pathways for adoption.
Evidence
We have two standards development projects that are supposed to support these facilities. The MHRA in the UK is in the process of designing the AI airlock with specific mechanisms that will eventually lead to evidence pathways and adoption of artificial intelligence tools in healthcare.
Major discussion point
AI Ethics and Healthcare Standards
Topics
Digital standards | Legal and regulatory
Standards evolve with the people who develop them and must reflect participants’ own understanding of reality
Explanation
Successful standards development requires allowing participants to express their own experiences and ways of doing things through the standards process. Standards become effective when they reflect the collective understanding and reality of the people who develop them.
Evidence
Standards evolve with the people that develop them. If you try to push people to standardize, they’re not going to join your groups. You really need to make the standard development process an expression of their own understanding of reality.
Major discussion point
Inclusive Participation in Standards Development
Topics
Digital standards
Agreed with
– Maike Luiken
– Heather Flanagan
– Priyanka Dasgupta
– Participant
Agreed on
Broad participation is essential for effective standards development
Threshold checkpoints and metadata embedding are necessary for transparency in AI systems
Explanation
AI systems require specific threshold checkpoints and embedded metadata to ensure transparency and address ethical concerns. This includes requiring disclosure of whether AI was used to produce datasets and implementing automatic safeguards based on defined parameters.
Evidence
Data sets used to train or fine-tune models must be embedded with metadata. If you develop a data set, you need to ask your data set if an AI was used to produce it. You can have a child lock, an automatic child lock for specific digital access above, I don’t know, 10 or 12, depending on the content.
Major discussion point
Quality and Trust in Standards
Topics
Digital standards | Privacy and data protection | Children rights
Agreed with
– Maike Luiken
Agreed on
Transparency and metadata are crucial for AI systems governance
Corner protector analogy: standards should create safe environments for AI development rather than just restrictions
Explanation
Standards should function like corner protectors in a child’s room – creating safe environments for AI development and growth rather than simply imposing restrictions. The focus should be on building protective guardrails that enable safe exploration and development.
Evidence
In a room, you have a child growing up. You don’t go to the child and say, look, don’t do that. This is a corner. You put corner protectors there. This is what we need to do about AI now. You need to make safe for a child the environment for the AI to grow up and those are the guardrails.
Major discussion point
Technology Development Speed vs Standards
Topics
Digital standards
Living labs and collaborative spaces are needed to test standardized solutions with communities
Explanation
Instead of traditional regulatory sandboxes, there should be living labs and collaborative spaces where communities can be brought together to test standardized solutions that deliver specific functions. This approach emphasizes community engagement and real-world testing of digital health solutions.
Evidence
We avoid the term regulatory sandboxes, living labs, the collaborative spaces, where we can bring together communities and test out standardized solutions that deliver specific functions in healthcare. This is about population health.
Major discussion point
Technology Development Speed vs Standards
Topics
Digital standards | Capacity development
Heather Flanagan
Speech speed
158 words per minute
Speech length
138 words
Speech time
52 seconds
There is significant challenge in bridging gaps between different standards communities that don’t interact
Explanation
Despite agreement on the need for broad participation in standards development, there are significant barriers between different standards communities that prevent crossover and collaboration. This creates isolated silos that don’t communicate effectively with each other.
Evidence
I’m a chair and participant in several standards organizations, including the IETF, the W3C, the OpenID Foundation, a few others. In those communities, they’re them and they’re not here. And there’s very, very little crossover.
Major discussion point
Inclusive Participation in Standards Development
Topics
Digital standards
Agreed with
– Maike Luiken
– Dimitrios Kalogeropoulos
– Priyanka Dasgupta
– Participant
Agreed on
Broad participation is essential for effective standards development
Priyanka Dasgupta
Speech speed
160 words per minute
Speech length
240 words
Speech time
89 seconds
Data quality maintenance is challenging when crowdsourcing inputs for AI standards development
Explanation
When opening up standards development to broader public participation for AI dataset creation, there are significant challenges in maintaining data quality. This includes concerns about bias and ensuring the integrity of inputs when soliciting contributions from diverse global sources.
Evidence
When we are talking about standards, they’re not just built for the global majority, they’re built with them. One of the critical things when developing learning models for AI or learning data sets for AI is if you ask for input from everywhere, there’s always that question of bias, and of course, also questions of how to maintain that data quality.
Major discussion point
Quality and Trust in Standards
Topics
Digital standards | Data governance
Agreed with
– Maike Luiken
– Dimitrios Kalogeropoulos
– Heather Flanagan
– Participant
Agreed on
Broad participation is essential for effective standards development
Karen Mulberry
Speech speed
128 words per minute
Speech length
897 words
Speech time
418 seconds
Standards development requires collaboration between government, private sector, civil society, and technical communities
Explanation
Effective standards development necessitates a multi-stakeholder approach that brings together diverse groups including government entities, private sector organizations, civil society groups, and technical communities. This collaborative approach is fundamental to the WSIS process and ensures comprehensive input.
Evidence
How can the government, the private sector, civil society, the technical community, which are the foundations of the WSIS discussions, collaborate in developing digital standards?
Major discussion point
Global Collaboration and Multi-stakeholder Approach
Topics
Digital standards
Certification programs can attest to compliance with ethical AI standards in the 7000 series
Explanation
IEEE has developed certification programs that work alongside technical standards to verify compliance with ethical AI requirements. These programs can certify that systems comply with standards in the 7000 series, ensuring they are safe, trustworthy, and ethically sound.
Evidence
We’ve also looked at certification programs on top of applying ethically aligned design such that if you comply with a standard in the 7000 series, which is where AI standards reside, we can go back and then certify that you comply with the standard, that you followed all of the requirements within the standard so you can attest to that, that you have a safe, trustworthy system and it’s ethically sound.
Major discussion point
Implementation and Compliance Challenges
Topics
Digital standards | Human rights principles
Agreed with
– Maike Luiken
– Dimitrios Kalogeropoulos
– Kiki Wicachali
Agreed on
Standards must address ethical considerations and human values, not just technical specifications
Kathleen A. Kramer
Speech speed
137 words per minute
Speech length
126 words
Speech time
55 seconds
IEEE operates as a global community of 500,000 members across 190 countries using multi-stakeholder model
Explanation
IEEE functions as a large-scale global organization that develops and supports standards through an open collaboration model involving diverse expertise and experience. This multi-stakeholder approach ensures that standards are relevant and impactful in real-world global contexts.
Evidence
We are a global community of 500,000 members in 190 countries and a multi-stakeholder model. We develop and support standards that reflect an open collaboration of expertise and experience, ensuring relevance and impact in global, real-world contexts.
Major discussion point
Global Collaboration and Multi-stakeholder Approach
Topics
Digital standards
Kiki Wicachali
Speech speed
129 words per minute
Speech length
120 words
Speech time
55 seconds
Quantitative standards are easier to develop than qualitative aspects like ethics and child rights
Explanation
There is a significant challenge in incorporating qualitative aspects such as ethics and child rights into technical standards, as these are more complex to standardize compared to quantitative technical specifications. The question remains how to effectively build these human-centered considerations into standards.
Evidence
As technical people and what goes into standards, it is usually, let’s call it easier, for lack of a better word, to build quantitative. How should we build qualitative aspects, ethics, child rights into standards?
Major discussion point
Quality and Trust in Standards
Topics
Digital standards | Children rights | Human rights principles
Agreed with
– Maike Luiken
– Dimitrios Kalogeropoulos
– Karen Mulberry
Agreed on
Standards must address ethical considerations and human values, not just technical specifications
Participant
Speech speed
135 words per minute
Speech length
514 words
Speech time
226 seconds
Power imbalances exist where major tech companies can impose de facto standards on others
Explanation
Large technology companies like AWS, Microsoft, and Meta have significant power to set their own standards and impose them on other market participants. These companies design standards that fit their own vision of sustainability and business models, forcing competitors to adopt them to remain competitive.
Evidence
The key players are AWS and Microsoft and Meta. And they basically set their own standards and they try to impose them on others because they have the power to do so. Once they have these standards, then the other players have to use them as well if they want to compete with those.
Major discussion point
Inclusive Participation in Standards Development
Topics
Digital standards
Disagreed with
– Maike Luiken
Disagreed on
Market-driven vs formal standards development
Global South participation is essential to prevent standards from propagating existing inequalities
Explanation
Standards development must include meaningful participation from Global South stakeholders to ensure fairness and prevent the perpetuation of existing inequalities. Without inclusive participation, standards may not serve the needs of all global communities and could reinforce existing disparities.
Evidence
Since we spoke earlier about standardization being a process on a global scale, then with local implications, perhaps there is a chance you could also talk briefly about the Global South and standardization among stakeholders in the Global South?
Major discussion point
Global Collaboration and Multi-stakeholder Approach
Topics
Digital standards | Digital access
Agreed with
– Maike Luiken
– Dimitrios Kalogeropoulos
– Heather Flanagan
– Priyanka Dasgupta
Agreed on
Broad participation is essential for effective standards development
Collaboration opportunities exist between different standards organizations for AI and certification work
Explanation
There are opportunities for collaboration between different standards organizations working on AI and certification programs. Organizations like the World Digital Technology Academy are developing standards around generative AI and seeking partnerships with established organizations like IEEE.
Evidence
I’m from a new organization called the World Digital Technology Academy. Right now, we’re gathering experts from industry and academia to form multi-stakeholder working group. We have published four standards around generative AI and agentic AI. In the future, we’ll do more about data governance.
Major discussion point
Global Collaboration and Multi-stakeholder Approach
Topics
Digital standards
Rapid technology development often outpaces standards creation, leading to potential negative consequences
Explanation
The fast pace of technological innovation, particularly in frontier technologies like AI, often results in groundbreaking technologies being deployed before appropriate standards are established. This timing mismatch can lead to negative consequences and regulatory gaps.
Evidence
Many frontier technologies, especially AI, are developing at a fast speed. And some new innovative technologies are completely groundbreaking, like ChatGPT in 2023. And they can have a huge impact on the industry immediately. But related standards often have not yet been established, which may lead to some negative consequences.
Major discussion point
Technology Development Speed vs Standards
Topics
Digital standards
Shamira Ahmed
Speech speed
114 words per minute
Speech length
106 words
Speech time
55 seconds
Regulatory sandboxes are not new for managing technology implementation but may have limitations in addressing digital determinants of health
Explanation
Shamira Ahmed points out that regulatory sandboxes have been used before for managing technology implementation and questions their effectiveness. She raises concerns about whether these mechanisms can adequately address digital determinants of health and achieve the scale needed for meaningful impact.
Evidence
Regulatory sandboxes are not a new thing when it comes to managing implementation of technologies. If you want to have scale, and you haven’t addressed the digital determinants of health, if you haven’t made technology accessible to entire populations, this clearly is not going to work.
Major discussion point
AI Ethics and Healthcare Standards
Topics
Digital standards | Legal and regulatory | Digital access
Agreements
Agreement points
Broad participation is essential for effective standards development
Speakers
– Maike Luiken
– Dimitrios Kalogeropoulos
– Heather Flanagan
– Priyanka Dasgupta
– Participant
Arguments
Broad global participation is essential for fair standards, requiring personal outreach and invitation rather than waiting for volunteers
Standards evolve with the people who develop them and must reflect participants’ own understanding of reality
There is significant challenge in bridging gaps between different standards communities that don’t interact
Data quality maintenance is challenging when crowdsourcing inputs for AI standards development
Global South participation is essential to prevent standards from propagating existing inequalities
Summary
All speakers agree that inclusive, diverse participation is crucial for developing fair and effective standards, though they acknowledge significant challenges in achieving this participation across different communities and regions.
Topics
Digital standards
Standards must address ethical considerations and human values, not just technical specifications
Speakers
– Maike Luiken
– Dimitrios Kalogeropoulos
– Karen Mulberry
– Kiki Wicachali
Arguments
Modern standards must include environmental and social impact considerations, moving toward ‘enviro-sociotechnical standards’
AI systems risk reinforcing inequities if not properly governed through inclusive standards development
Certification programs can attest to compliance with ethical AI standards in the 7000 series
Quantitative standards are easier to develop than qualitative aspects like ethics and child rights
Summary
Speakers consistently emphasize that modern standards development must go beyond technical functionality to incorporate ethical, social, and environmental considerations, though they acknowledge the complexity of standardizing qualitative aspects.
Topics
Digital standards | Human rights principles | Children rights
Transparency and metadata are crucial for AI systems governance
Speakers
– Dimitrios Kalogeropoulos
– Maike Luiken
Arguments
Standards development should focus on metadata and descriptive rather than prescriptive approaches
Standards must make uncertainty visible and support governance of future data ecosystems with transparency as a design feature
Threshold checkpoints and metadata embedding are necessary for transparency in AI systems
Summary
Both speakers agree that transparency should be built into AI systems as a design feature, with emphasis on metadata and descriptive approaches rather than prescriptive rules.
Topics
Digital standards | Data governance | Privacy and data protection
Similar viewpoints
Both speakers view standards as protective frameworks that bridge research and practice while creating safe environments for technology development through guardrails rather than restrictions.
Speakers
– Maike Luiken
– Dimitrios Kalogeropoulos
Arguments
Standards serve as bridges between academic research and practical implementation, providing common language across disciplines
Standards must address both technical functionality and ethical considerations through guardrails
Corner protector analogy: standards should create safe environments for AI development rather than just restrictions
Topics
Digital standards
Both recognize that market forces and powerful industry players can establish de facto standards that bypass formal standards development processes, creating challenges for inclusive standardization.
Speakers
– Maike Luiken
– Participant
Arguments
Market adoption sometimes overtakes formal standards development, creating de facto industry standards
Power imbalances exist where major tech companies can impose de facto standards on others
Topics
Digital standards
Both speakers discuss the concept of regulatory/evidence sandboxes as mechanisms for testing technology implementations, though they have different perspectives on their effectiveness and scope.
Speakers
– Dimitrios Kalogeropoulos
– Shamira Ahmed
Arguments
Evidence sandboxes are needed to test AI applications in controlled environments before real-world deployment
Regulatory sandboxes are not new for managing technology implementation but may have limitations in addressing digital determinants of health
Topics
Digital standards | Legal and regulatory | Digital access
Unexpected consensus
Personal invitation is more effective than open calls for standards participation
Speakers
– Maike Luiken
– Dimitrios Kalogeropoulos
Arguments
Broad global participation is essential for fair standards, requiring personal outreach and invitation rather than waiting for volunteers
Standards evolve with the people who develop them and must reflect participants’ own understanding of reality
Explanation
It’s unexpected that experienced standards developers would agree that personal outreach is more effective than traditional open participation models, suggesting that current standards development processes may be inherently exclusive despite intentions for openness.
Topics
Digital standards
Standards development should be more like AI learning processes
Speakers
– Dimitrios Kalogeropoulos
Arguments
Standards are like LLMs. If you observe how LLMs learn, then the same thing should happen with standards. They’re just different layers of a very similar process
Explanation
The comparison between standards development and AI learning processes represents an unexpected conceptual framework that suggests iterative, adaptive approaches to standards creation rather than traditional linear development models.
Topics
Digital standards
Overall assessment
Summary
The speakers demonstrate strong consensus on the need for inclusive, ethical standards development that goes beyond technical specifications to address social and environmental impacts. They agree on the importance of transparency, metadata, and protective guardrails for AI systems, while acknowledging significant challenges in achieving broad participation and keeping pace with rapid technological development.
Consensus level
High level of consensus on fundamental principles with shared recognition of implementation challenges. The agreement spans technical, ethical, and procedural aspects of standards development, suggesting a mature understanding of the complexities involved. However, the consensus also reveals systemic issues in current standards development processes that may require fundamental changes to achieve stated goals of inclusivity and global participation.
Differences
Different viewpoints
Approach to standards development – prescriptive vs descriptive
Speakers
– Dimitrios Kalogeropoulos
– Maike Luiken
Arguments
If we focus on the metadata, instead of being prescriptive, being descriptive, if we focus on giving to people who participate in science development what kind of data we need, they will deliver it for you. Develop standards that tell you how to develop your standards.
Standards serve as bridges between academic research and practical implementation, providing common language across disciplines
Summary
Dimitrios advocates for descriptive, metadata-focused standards that guide how to develop standards, while Maike presents a more traditional view of standards as bridges providing common language and direct implementation guidance.
Topics
Digital standards | Data governance
Terminology and approach to regulatory frameworks
Speakers
– Dimitrios Kalogeropoulos
– Shamira Ahmed
Arguments
We avoid the term regulatory sandboxes, living labs, the collaborative spaces, where we can bring together communities and test out standardized solutions that deliver specific functions in healthcare.
Regulatory sandboxes are not a new thing when it comes to managing implementation of technologies
Summary
Dimitrios rejects the term ‘regulatory sandboxes’ in favor of ‘evidence sandboxes’ or ‘living labs’, while Shamira points out that regulatory sandboxes are established mechanisms, questioning their novelty and effectiveness.
Topics
Digital standards | Legal and regulatory | Digital access
Market-driven vs formal standards development
Speakers
– Maike Luiken
– Participant
Arguments
Market adoption sometimes overtakes formal standards development, creating de facto industry standards
Power imbalances exist where major tech companies can impose de facto standards on others
Summary
Maike presents market-driven standards as a natural process where the best solution wins, while the participant views this as problematic power imbalances where large companies impose their standards on others.
Topics
Digital standards
Unexpected differences
Effectiveness of established regulatory mechanisms
Speakers
– Dimitrios Kalogeropoulos
– Shamira Ahmed
Arguments
Evidence sandboxes are needed to test AI applications in controlled environments before real-world deployment
Regulatory sandboxes are not a new thing when it comes to managing implementation of technologies but may have limitations in addressing digital determinants of health
Explanation
This disagreement is unexpected because both speakers are discussing similar regulatory mechanisms, but Dimitrios presents his approach as innovative while Shamira points out these are established practices, questioning their effectiveness for achieving scale in healthcare.
Topics
Digital standards | Legal and regulatory | Digital access
Role of market forces in standards development
Speakers
– Maike Luiken
– Participant
Arguments
Market adoption sometimes overtakes formal standards development, creating de facto industry standards
Power imbalances exist where major tech companies can impose de facto standards on others
Explanation
This disagreement is unexpected because they’re discussing the same phenomenon (market-driven standards) but with completely different value judgments: Maike sees it as natural market selection, while the participant views it as problematic power concentration.
Topics
Digital standards
Overall assessment
Summary
The main areas of disagreement center around methodological approaches to standards development (prescriptive vs descriptive), the role of market forces versus formal processes, and the effectiveness of regulatory mechanisms. Most disagreements are about means rather than ends.
Disagreement level
The level of disagreement is moderate and primarily methodological rather than fundamental. Speakers generally agree on core goals (inclusive participation, ethical considerations, transparency) but differ on implementation approaches. This suggests that while there are different perspectives on how to achieve effective standards development, there is substantial common ground that could facilitate collaboration and consensus-building in the standards development community.
Partial agreements
Similar viewpoints
Both speakers view standards as protective frameworks that bridge research and practice while creating safe environments for technology development through guardrails rather than restrictions.
Speakers
– Maike Luiken
– Dimitrios Kalogeropoulos
Arguments
Standards serve as bridges between academic research and practical implementation, providing common language across disciplines
Standards must address both technical functionality and ethical considerations through guardrails
Corner protector analogy: standards should create safe environments for AI development rather than just restrictions
Topics
Digital standards
Both recognize that market forces and powerful industry players can establish de facto standards that bypass formal standards development processes, creating challenges for inclusive standardization.
Speakers
– Maike Luiken
– Participant
Arguments
Market adoption sometimes overtakes formal standards development, creating de facto industry standards
Power imbalances exist where major tech companies can impose de facto standards on others
Topics
Digital standards
Both speakers discuss the concept of regulatory/evidence sandboxes as mechanisms for testing technology implementations, though they have different perspectives on their effectiveness and scope.
Speakers
– Dimitrios Kalogeropoulos
– Shamira Ahmed
Arguments
Evidence sandboxes are needed to test AI applications in controlled environments before real-world deployment
Regulatory sandboxes are not new for managing technology implementation but may have limitations in addressing digital determinants of health
Topics
Digital standards | Legal and regulatory | Digital access
Takeaways
Key takeaways
Standards must evolve beyond technical specifications to include environmental, social, and ethical considerations, becoming ‘enviro-sociotechnical standards’
Broad global participation is essential for fair standards development, requiring active outreach and personal invitations rather than waiting for volunteers
AI systems pose risks of reinforcing inequities and creating synthetic truth contamination if not properly governed through inclusive standards
Evidence sandboxes (controlled testing environments) are needed to test AI applications before real-world deployment
Standards serve as bridges between academic research and practical implementation, providing common language across disciplines
Market forces sometimes create de facto standards that overtake formal standards development processes
Transparency must be built into AI systems as a design feature, not just a compliance requirement
Standards development should focus on metadata and descriptive approaches rather than prescriptive ones
Education about technology capabilities and limitations is as important as the standards themselves
Resolutions and action items
Dimitrios invited participants to join IEEE’s AI and healthcare standards development projects
A global public health forum on generative AI will be convened in London on November 6th
Participants were encouraged to read the IEEE European Public Policy Committee’s AI and Digital Health for Equity policy statement
Karen Mulberry invited collaboration with the World Digital Technology Academy on AI standards and certification
Multiple speakers offered to continue discussions with interested participants after the session
Participants were invited to join IEEE standards working groups (membership not required, just interest and participation)
Unresolved issues
How to effectively bridge gaps between different standards communities that don’t interact with each other
How to address power imbalances where major tech companies can impose de facto standards on smaller players
How to maintain data quality when crowdsourcing inputs for AI standards development
How to standardize qualitative aspects like ethics and child rights, not just quantitative measures
How to ensure Global South participation in standards development to prevent perpetuating existing inequalities
How to keep standards development pace aligned with rapid technology advancement
How to prevent misuse of AI tools beyond their intended design parameters
How to create effective enforcement mechanisms for standards compliance
Suggested compromises
Start with prototype standards developed by available experts, then release globally for broader input and refinement
Use evidence sandboxes as controlled environments where all stakeholders can collaborate on standards testing
Focus on creating guardrails and safety measures rather than restrictive regulations
Develop adaptive standards that can evolve alongside rapidly changing technology
Build policy layers that complement rather than cancel existing standards and policies
Create living labs and collaborative spaces where communities can test standardized solutions together
Combine certification programs with standards to provide attestation of compliance and trustworthiness
Thought provoking comments
Standards, as far as I’m concerned, are a true bridge between research, academic research, new science, new findings, and the application, implementation, and use of this research output… we are really looking at the intersection between standards development for technology, but including sustainability considerations in that development.
Speaker
Maike Luiken
Reason
This comment reframes standards from mere technical specifications to dynamic bridges that connect research with real-world application while incorporating sustainability and ethics. It introduces the concept of ‘enviro-sociotechnical standards’ which expands the traditional scope of technical standards.
Impact
This foundational comment set the tone for the entire discussion by establishing that modern standards must go beyond technical functionality to include ethical, environmental, and social considerations. It influenced subsequent speakers to address the human and societal dimensions of their work.
In fast-moving domains like AI and digital health, we need a different approach, dynamic standards that accommodate uncertainty, support iterative learning, and evolve alongside the systems they govern… These are not just tools of compliance. They are tools of trust, designed to embed transparency, enable accountability, and make complexity navigable across sectors and borders.
Speaker
Dimitrios Kalogeropoulos
Reason
This comment challenges the traditional static nature of standards development and proposes a paradigm shift toward dynamic, evolving standards. It reframes standards as ‘tools of trust’ rather than mere compliance mechanisms, which is particularly profound for emerging technologies.
Impact
This comment fundamentally shifted the discussion from traditional standards development to adaptive governance models. It sparked multiple follow-up questions about implementation and led to discussions about regulatory sandboxes and evidence-based approaches.
As more models are trained on AI-generated content, originality and factual accuracy degrade. The knowledge space becomes recursively polluted and models hallucinate… This is not just a technical glitch. It’s a structural vulnerability.
Speaker
Dimitrios Kalogeropoulos
Reason
This comment introduces the critical concept of ‘synthetic truth contamination’ – a systemic risk that emerges when AI systems train on their own outputs. It elevates the discussion from technical implementation to existential risks in AI development.
Impact
This observation deepened the conversation by highlighting long-term sustainability issues in AI development. It connected technical standards to broader questions of information integrity and influenced later discussions about metadata requirements and transparency standards.
People want to be asked. They’re usually not just coming. They truly want to be asked… use the network of the folks that are already in your working group and look for suggestions who else could join and do a personal ask as far as you can get.
Speaker
Maike Luiken
Reason
This seemingly simple observation about human psychology in standards participation reveals a fundamental barrier to inclusive standards development. It shifts focus from technical processes to human relationship-building as essential for effective standards.
Impact
This comment directly addressed Heather Flanagan’s concern about achieving broader participation and provided practical guidance. It humanized the standards development process and influenced subsequent discussions about community engagement and global participation.
Standards are like LLMs. If you observe how LLMs learn, then the same thing should happen with standards. They’re just different layers of a very similar process… standards evolve with the people that develop them.
Speaker
Dimitrios Kalogeropoulos
Reason
This analogy between standards development and machine learning is intellectually provocative, suggesting that standards should learn and adapt iteratively like AI systems. It implies a fundamental reconceptualization of how standards should be developed and maintained.
Impact
This metaphor provided a new framework for thinking about standards development that resonated throughout the remaining discussion. It influenced conversations about adaptive processes and community-driven development approaches.
How can the government, the private sector, civil society, the technical community… collaborate in developing digital standards? … how can we allow those applications to go in a very highly regulated environment? Well, we need sandboxes… We need to go practical. We need to bring communities into play.
Speaker
Dimitrios Kalogeropoulos (in response to Karen Mulberry’s question)
Reason
This response bridges theoretical multi-stakeholder collaboration with practical implementation through ‘evidence sandboxes.’ It proposes a concrete mechanism for bringing diverse stakeholders together in controlled environments to develop and test standards collaboratively.
Impact
This comment introduced a practical solution to the multi-stakeholder challenge and became a recurring theme. It influenced multiple follow-up questions about implementation, global participation, and the balance between innovation and regulation.
How do you deal with the power imbalance in the arena of standard definitions? … the key players are AWS and Microsoft and Meta. And they basically set their own standards and they try to impose them on others because they have the power to do so.
Speaker
Unnamed participant
Reason
This question cuts to the heart of democratic participation in standards development by highlighting how market power can override inclusive processes. It challenges the idealistic view of collaborative standards development with economic and political realities.
Impact
This question forced speakers to confront the tension between inclusive ideals and market realities. It led to discussions about de facto versus formal standards and the role of policy in addressing power imbalances, adding a critical dimension to the conversation.
Overall assessment
These key comments fundamentally transformed what began as a technical discussion about standards development into a sophisticated exploration of adaptive governance, democratic participation, and systemic risks in emerging technologies. The most impactful contributions challenged traditional assumptions about standards as static technical documents, instead proposing dynamic, community-driven approaches that evolve with technology and society. The discussion progressed from theoretical frameworks to practical implementation challenges, with participants grappling with power imbalances, global participation, and the need for new institutional mechanisms like ‘evidence sandboxes.’ The conversation’s evolution from technical specifications to questions of trust, democracy, and social impact demonstrates how thoughtful interventions can elevate a discussion to address fundamental questions about technology governance in a rapidly changing world.
Follow-up questions
How can we keep up with standards development alongside rapidly advancing technology development?
Speaker
Maike Luiken
Explanation
This addresses a fundamental challenge in standards development where technology advances faster than the standards can be created to govern them safely and effectively.
How can we ensure broad global participation, especially from the Global South, in standards development processes?
Speaker
Participant
Explanation
This is critical for ensuring standards are fair and inclusive for all populations, not just those from developed countries who typically dominate standards development.
How can we bridge the gap between different standards communities that don’t typically interact with each other?
Speaker
Heather Flanagan
Explanation
This addresses the siloed nature of standards organizations and the need for better cross-community collaboration to create more comprehensive and interoperable standards.
How can we ensure data quality when crowdsourcing inputs for AI dataset standards development?
Speaker
Priyanka Dasgupta
Explanation
This is crucial for maintaining the integrity and reliability of AI systems while still enabling inclusive participation in standards development.
How can we build qualitative aspects like ethics and child rights into technical standards, and how do we enforce compliance with existing ethics-by-design standards?
Speaker
Kiki Wicachali
Explanation
This addresses the challenge of translating abstract ethical concepts into measurable technical requirements and ensuring they are actually implemented.
How do we deal with power imbalances in standards definition, particularly when large tech companies set de facto standards?
Speaker
Participant
Explanation
This highlights the challenge of ensuring democratic and fair standards development when powerful market players can impose their own standards through market dominance.
What can we do to better tackle the situation where groundbreaking technologies develop faster than related standards can be established?
Speaker
Participant
Explanation
This addresses the ongoing challenge of standards lagging behind technological innovation, potentially leading to negative consequences from unregulated technology deployment.
How can evidence sandboxes be effectively implemented to create collaborative spaces for testing standardized solutions in healthcare?
Speaker
Shamira Ahmed
Explanation
This explores the practical implementation of regulatory frameworks that can keep pace with AI development while ensuring safety and efficacy.
What are the concrete experiences of standards development going hand in hand with people development and community evolution?
Speaker
Philip
Explanation
This seeks practical examples of how human-centered approaches to standards development have been successfully implemented and their impact on communities and leadership.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.