International multistakeholder cooperation for AI standards | IGF 2023 WS #465

12 Oct 2023 00:30h - 01:00h UTC


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Matilda Road

The AI Standards Hub is a collaboration between the Alan Turing Institute, the British Standards Institution, the National Physical Laboratory, and the UK government’s Department for Science, Innovation, and Technology. It aims to promote the responsible use of artificial intelligence (AI) and engage stakeholders in international AI standardization.

One of the key missions of the AI Standards Hub is to advance the use of responsible AI by encouraging the development and adoption of international standards. This ensures that AI systems are developed, deployed, and used in a responsible and ethical manner, fostering public trust and mitigating potential risks.

The involvement of stakeholders is crucial in the international AI standardization landscape. The AI Standards Hub empowers stakeholders and encourages their active participation in the standardization process. This ensures that the resulting standards are comprehensive, inclusive, and representative of diverse interests.

Standards are voluntary codes of best practice that companies adhere to. They assure quality, safety, environmental targets, ethical development, and promote interoperability between products. Adhering to standards helps build trust between organizations and consumers.

Standards also facilitate market access and link to other government mechanisms. Aligning with standards allows companies to enter new markets and enhance competitiveness. Interoperability ensures seamless collaboration between different systems, promoting knowledge sharing and technology transfer.

The adoption of standards provides benefits such as quality assurance, safety, and interoperability. Compliance ensures that products and services meet defined norms and requirements, instilling confidence in their reliability and performance. Interoperability allows for the exchange of information and collaboration, fostering innovation and advancements.

In conclusion, the AI Standards Hub promotes responsible AI use and engages stakeholders in international AI standardization. It fosters the development and adoption of international standards to ensure ethical AI use. Standards offer benefits like quality assurance, safety, and interoperability, building trust between organizations and consumers, enhancing market access, and linking to government mechanisms. The adoption of standards is crucial for responsible consumption, sustainable production, and industry innovation.

Ashley Casovan

Standards play a crucial role in the field of artificial intelligence (AI), ensuring consistency, reliability, and safety. However, the lack of standardisation in this area can lead to confusion and hinder the advancement of AI technologies. The complexity of the topic itself adds to the challenge of developing universally accepted standards.

To address this issue, the Canadian government has taken proactive steps by establishing the Data and AI Standards Collaborative. Led by Ashley, representing civil society, this initiative aims to comprehensively understand the implications of AI systems. One of the primary goals of the collaborative is to identify specific use cases and develop context-specific standards throughout the entire value chain of AI systems. This proactive approach not only helps in ensuring the effectiveness and ethical use of AI but also supports SDG 9: Industry, Innovation, and Infrastructure.

Within the AI ecosystem, various types of standards are required at different levels. This includes certifications and standards for both evaluating the quality management systems and ensuring product-level standards. Furthermore, there is a growing interest in understanding the individual training requirements for AI. This multifaceted approach to standards highlights the complexity and diversity within the field.

The establishment of multi-stakeholder forums is recognised as a positive step towards developing AI standards. These forums play a vital role in establishing common definitions and understanding of AI system life cycles. North American markets have embraced such initiatives, including the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF), demonstrating their effectiveness in shaping AI standards. This collaborative effort aligns with SDG 17: Partnerships for the Goals.

Inclusion of all relevant stakeholders is seen as crucial for effective AI standards. The inclusivity of diverse perspectives is paramount for ensuring that the standards address the needs and challenges of different communities. Effective data analysis and processing within the context of AI standards necessitate inclusivity. This aligns with SDG 10: Reduced Inequalities as it promotes fairness and equal representation in the development of AI standards.

Engaging Indigenous groups and considering their perspectives is critical in developing AI system standards. Efforts are being made in Canada to include the voices of the most impacted populations. By understanding the potential harms of AI systems to these groups, measures can be taken to mitigate them. This highlights the significance of reducing inequalities (SDG 10) and fostering inclusivity.

Given the global nature of AI, collaboration on an international scale is essential. An international exercise through organisations such as the Organisation for Economic Co-operation and Development (OECD) or the Internet Governance Forum (IGF) is proposed for mapping AI standards. Collaboration between countries and regions will help avoid duplication of efforts, foster harmonisation, and promote the implementation of effective AI standards globally.

It is important to recognise that AI is not a monolithic entity but rather varies in its types of uses and associated harms. Different AI systems have different applications and potential risks. Therefore, it is crucial to engage the right stakeholders to discuss and address these specific uses and potential harms. This aligns with the importance of SDG 3: Good Health and Well-being and SDG 16: Peace, Justice, and Strong Institutions.

In conclusion, the development of AI standards is a complex and vital undertaking. The Canadian government’s Data and AI Standards Collaborative, the involvement of multi-stakeholder forums, the importance of inclusivity and engagement with Indigenous groups, and the need for international collaboration are all prominent factors in shaping effective AI standards. Recognising the diversity and potential impact of AI systems, it is essential to have comprehensive discussions and involve all relevant stakeholders to ensure the development and implementation of robust and ethical AI standards.

Audience

The analysis reveals that various bodies create AI standards, but governments do not accept them consistently. In particular, standards from institutions formally recognised by governments carry more weight than technical community-led standards, such as those from the IETF or IEEE, which are often excluded from government policies. This highlights a discrepancy between the standards created by technical communities and those embraced by governments.

Nevertheless, the analysis suggests reaching out to the technical community for AI standards. The technical community is seen as a valuable resource for developing and refining AI standards. Furthermore, the analysis encourages the creation of a declaration or main message from the AI track at the IGF (Internet Governance Forum). This indicates the importance of consolidating the efforts of the AI track at IGF to provide a unified message and promote collaboration in the field of AI standards.

Consumer organizations are recognized as playing a critical role in the design of ethical and responsible AI standards. They represent actual user interests and can provide valuable insights and data for evidence-based standards. Additionally, consumer organizations can drive the adoption of standards by advocating for consumer-friendly solutions. The analysis also identifies the AI Standards Hub as a valuable initiative from a consumer organization’s perspective. The Hub acknowledges and welcomes consumer organizations, breaking the norm of industry dominance in standardization spaces. It also helps bridge the capacity gap by enabling consumer organizations to understand and contribute effectively to complex AI discussions.

The analysis suggests that AI standardization processes should be made accessible to consumers. Traditionally, standardization spaces have been dominated by industry experts, but involving consumers early in the process can help ensure that standards are compliant and sustainable from the start. User-friendly tools and resources can aid consumers in understanding AI and AI standards, empowering them to participate effectively in the standardization process.

Furthermore, the involvement of consumer organizations can diversify the AI standardization process. They represent a diverse range of views and interests, bringing significant diversity into the standardization process. Consumer International, as a global organization, is specifically mentioned as having the potential to facilitate this diversity in the standardization process.

In conclusion, the analysis highlights the importance of collaboration and inclusivity in the development of AI standards. It underscores the need to bridge the gap between technical community-led standards and government policies. The involvement of consumer organizations is crucial in ensuring the ethical and responsible development of AI standards. Making AI standardization processes accessible to consumers and diversifying the standardization process are essential steps towards creating inclusive and effective AI standards.

Wansi Lee

International cooperation is crucial for the standardization of AI regulation, and Singapore actively participates in this process. The country closely collaborates with other nations and engages in multilateral processes to align its AI practices and contribute to global standards. Singapore has initiated a mapping project with the National Institute of Standards and Technology (NIST) to ensure the alignment of its AI practices.

In addition, multi-stakeholder engagement is considered essential for the technical development and sharing of AI knowledge. Singapore leads in this area by creating the AI Verify Testing Framework and Toolkit, which provides comprehensive tests for fairness, explainability, and robustness of AI systems. This initiative is open-source, allowing global community contribution and engagement. The AI Verify Toolkit supports responsible AI implementation.
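To make the idea of a fairness test concrete, below is a minimal illustrative sketch in Python of a demographic-parity check, one of the fairness notions that toolkits such as AI Verify typically measure. The function names and data are hypothetical for illustration; this is not the AI Verify API, whose interfaces are not described in the session.

```python
# Illustrative sketch only: a minimal demographic-parity check, one of the
# fairness notions that toolkits such as AI Verify typically measure.
# The function names and data here are hypothetical, not AI Verify's API.

def positive_rate(preds):
    """Fraction of binary predictions that are positive (1)."""
    return sum(preds) / len(preds)

def demographic_parity_diff(preds_by_group):
    """Largest gap in positive-prediction rates across groups.

    preds_by_group maps a group label to a list of 0/1 predictions.
    A value near 0 suggests the model flags all groups at similar rates.
    """
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Toy example: a model's binary decisions split by a protected attribute.
decisions = {
    "group_a": [1, 0, 1, 1],  # 75% positive
    "group_b": [1, 0, 0, 1],  # 50% positive
}
print(demographic_parity_diff(decisions))  # 0.25
```

Real toolkits report several such metrics side by side, since different fairness definitions can conflict; a single number like this is only one lens on a system’s behaviour.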

Adherence to AI guidelines is important, and the Singapore government plays an active role in setting guidelines for organizations. Implementing these guidelines ensures responsible AI implementation. The government also utilizes the AI Verify Testing Framework and Toolkit to validate the implementation of responsible AI requirements.

Given Singapore’s limited resources, the country strategically focuses its efforts on specific areas where it can contribute to global AI conversations. Singapore adopts existing international efforts where possible and fills gaps to make a valuable contribution. Despite being a small country, Singapore recognizes the significance of its role in standard setting and strives to make a meaningful impact.

The Singapore government actively engages with industry members to incorporate a broad perspective in AI development. Input from these companies is valued to create a comprehensive and inclusive framework for responsible AI implementation.

The establishment of the AI Verify Foundation provides a platform for all interested organizations to contribute to AI standards. The open-source platform is not limited by organization size or location, welcoming diverse perspectives. Work done on the AI Verify Foundation platform is rationalized at the national level in Singapore and supported globally through various platforms, such as OECD, GPA, or ISO.

In conclusion, Singapore recognizes the importance of international cooperation, multi-stakeholder engagement, adherence to guidelines, strategic resource management, and industry partnerships in standardizing AI regulation. The country’s active involvement in initiatives such as the AI Verify Testing Framework and Toolkit and the AI Verify Foundation demonstrates its commitment to responsible AI development and global AI conversations. The emphasis on harmonized or aligned standards by Wansi Lee further highlights the need for a unified approach to AI regulation.

Florian Ostmann

During the session, the role of AI standards in the responsible use and development of AI was thoroughly explored. The focus was placed on the importance of multi-stakeholder participation and international cooperation in developing these standards. It was recognized that standards provide a specific governance tool for ensuring the responsible adoption and implementation of AI technology.

In line with this, the UK launched the AI Standards Hub, a collaborative initiative involving the Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory. The aim of this initiative is to increase awareness and participation in AI standardization efforts. The partnership is working closely with the UK government to ensure a coordinated approach and effective implementation of AI standards.

Florian Ostmann, the head of AI Governance and Regulatory Innovation at the Alan Turing Institute, stressed the significance of international cooperation and multi-stakeholder participation in making AI standards a success. He emphasized the need for a collective effort involving various stakeholders to establish effective frameworks and guidelines for AI development and use. The discussion highlighted the recognition of AI standards as a key factor in ensuring responsible AI practices.

The UK government’s commitment to AI standards was reiterated as the National AI Strategy published in September 2021 highlighted the AI Standards Hub as a key deliverable. Additionally, the AI Regulation White Paper emphasized the role of standards in implementing a context-specific, risk-based, and decentralized regulatory approach. This further demonstrates the UK government’s understanding of the importance of AI standards in governing AI technology.

The AI Standards Hub actively contributes to the field of AI standardization. It undertakes research to provide strategic direction and analysis, offers e-learning materials and in-person training events to engage stakeholders, and organizes events to gather input on AI standards. By conducting these activities, the AI Standards Hub aims to ensure a comprehensive approach to addressing the needs and requirements of AI standardization.

The discussion also highlighted the significance of considering a wider landscape of AI standards. While the AI Standards Hub focuses on formally developed standards, it was acknowledged that other organizations, like the IETF, also contribute to the development of AI standards. This wider perspective helps in gaining a holistic understanding of AI standards and their implications in various contexts.

Florian Ostmann expressed a desire to continue the discussion on standards and AI, indicating that the session had only scratched the surface of this vast topic. He welcomed ideas for collaboration from around the world, underscoring the importance of international cooperation in shaping AI standards and governance.

In conclusion, the session emphasized the role of AI standards in the responsible use and development of AI technology. It highlighted the significance of multi-stakeholder participation, international cooperation, and the need to consider a wider landscape of AI standards. The UK’s AI Standards Hub, in collaboration with the government, is actively working towards increasing awareness and participation in AI standardization. Florian Ostmann’s insights further emphasized the importance of international collaboration and the need for ongoing discussions on AI standards and governance.

Aurelie Jacquet

The analysis examines multiple viewpoints on the significance of AI standardisation in the context of international governance. Aurelie Jacquet asserts that AI standardisation can serve as an agile tool for effective international governance, highlighting its potential benefits. On the other hand, another viewpoint stresses the indispensability of standards in regulating and ensuring the reliability of AI systems for industry purposes. Australia is cited as an active participant in shaping international AI standards since 2018, with a 2020 roadmap focusing on ISO/IEC 42001. The adoption of AI standards by the government aligns with the NSW AI Assurance Framework, strengthening the use of standards in AI systems.

Education and awareness regarding standards emerge as important factors in promoting the understanding and implementation of AI standards. Australia has taken steps to develop education programs on standards and build tools in collaboration with CSIRO and Data61, leveraging their expertise in the field. These initiatives aim to enhance knowledge and facilitate the adoption of standards across various sectors.

Despite having a small delegation, Australia has made significant contributions to standards development and has played an influential role in shaping international mechanisms. Through collaboration with other countries, Australia strives to tailor mechanisms to accommodate delegations of different sizes. However, it is noted that limited resources and time pose challenges to participation in standards development. In this regard, Australia has received support from nonprofit organisations and their own government, which enables experts to voluntarily participate and contribute to the development of standards.

Context is highlighted as a crucial element for effective engagement in standards development. Australia’s experts have been actively involved in developing white papers that provide the necessary background and context for standards documents. This ensures that stakeholders have a comprehensive understanding of the issues at hand, fostering informed discussions and decision-making processes.

The analysis also highlights the challenges faced by SMEs in the uptake of standards. Larger organisations tend to adopt standards more readily, leaving SMEs at a disadvantage. Efforts are underway to address these challenges and make standards more accessible and fit for purpose for SMEs. This ongoing discussion aims to create a more inclusive environment for all stakeholders, regardless of their size or resources.

The significance of stakeholder inclusion is emphasised throughout the analysis. Regardless of delegation size, stakeholder engagement is seen as critical in effective standards development. Australia has actively collaborated with other countries to ensure that mechanisms and processes are tailored to their respective sizes, highlighting the importance of inclusiveness in shaping international standards.

Standards are seen as enablers of interoperability, promoting harmonisation of varied perspectives in AI regulations. Different regulatory initiatives and practices in AI are deemed beneficial, and standards play a key role in facilitating interoperability and bridging gaps between different approaches.

Moreover, the adoption of AI standards is advocated as a means to learn from international best practices. Experts from diverse backgrounds can engage in discussions, enabling nations to develop policies and grow in a responsible manner. The focus lies on using AI responsibly and scaling its application through the use of interoperability standards.

In conclusion, the analysis underscores the importance of AI standardisation in international governance. It highlights various viewpoints on the subject, including the agile nature of AI standardisation, the need for industry-informed regulation, the significance of education and awareness, the role of context, the challenges faced by SMEs, the importance of stakeholder inclusion, and the benefits of interoperability and learning from international best practices. The analysis provides valuable insights for policymakers, industry professionals, and stakeholders involved in AI standardisation and governance.

Nikita Bhangu

The UK government recognizes the importance of AI standards in the regulatory framework for AI, as highlighted in the recent AI White Paper. They emphasize the significance of standards and other tools in AI governance. Digital standards are crucial for effectively implementing the government’s AI policy.

To ensure effective standardization, the UK government has consulted stakeholders to identify challenges in the UK. This aims to provide practical tools for stakeholders to engage in the standardization ecosystem, promoting participation, collaboration, and innovation in AI standards.

The establishment of the AI Standards Hub demonstrates the UK government’s commitment to reducing barriers to AI standards. The hub, established a year ago, has made significant contributions to the understanding of AI standards in the UK. Independent evaluation acknowledges the positive impact of the hub in overcoming obstacles and promoting AI standards adoption.

The UK government plans to expand the AI Standards Hub and foster international collaborations. This growth and increased collaboration will enhance global efforts towards achieving AI standards, benefiting industries and infrastructure. Collaboration with international partners aims to create synergies between AI governance and standards.

Representation of all stakeholder groups, including small to medium businesses and civil society, is crucial in standard development organizations. However, small to medium digital technology companies and civil society face challenges in participating effectively due to resource and expertise limitations. Even the government, as a key stakeholder, lacks technical expertise and resources.

The UK government is actively working to improve representation and diversity in standard development organizations. Initiatives include developing a talent pipeline to increase diversity and collaborating with international partners and organizations such as the Internet Governance Forum’s Multistakeholder Advisory Group. Existing organizations like BSI and the IEC contribute to efforts for diverse and inclusive standards development organizations.

In conclusion, the UK government recognizes the importance of AI standards in the regulatory framework for AI and actively works towards their implementation. Consultation with stakeholders, establishment of the AI Standards Hub, and efforts to increase international collaborations reduce barriers and promote a thriving standardization ecosystem. Initiatives aim to ensure representation of all stakeholder groups, fostering diversity and inclusion. These actions contribute to advancements in the field of AI and promote sustainable development across sectors.

Sonny

The AI Act introduced by the European Union aims to govern and assess AI systems, particularly high-risk ones. It sets out five principles and establishes seven essential requirements for these systems. The act underscores the need for collaboration and global standards to ensure fair and consistent AI governance. By adhering to shared standards, stakeholders can operate on a level playing field.

The AI Standards Hub is a valuable resource that promotes global cooperation. It offers a comprehensive database of AI standards and policies, accessible worldwide. The hub facilitates collaboration among stakeholders, enabling them to align efforts and work towards common goals. Additionally, it provides e-learning materials to enhance understanding of AI standards.

Moreover, the AI Standards Hub strives to promote inclusive access to AI standards and policies. It encourages stakeholders from diverse backgrounds and industries to contribute and participate in standard development and implementation. This inclusive approach ensures comprehensive and effective AI governance.

The partnership between the AI Standards Hub and international organizations, such as the OECD, further demonstrates the significance of global cooperation in this field. By leveraging expertise and resources from like-minded institutions, the hub fosters a collective effort to tackle AI-related challenges and opportunities.

In summary, the EU AI Act and the AI Standards Hub emphasize the importance of collaboration, global standards, and inclusive access to AI standards and policies. By working together, stakeholders can establish a harmonized approach to AI governance, promoting ethical and responsible use of AI technologies across industries and regions.

Session transcript

Florian Ostmann:
Good morning, everyone. I think we’re going to start. I realize it’s an early start. And thank you very much for those of you who are in the room for making it so early to this session to start today with us. My name is Florian Ostmann. I’m the head of AI Governance and Regulatory Innovation at the Alan Turing Institute, which is the UK’s National Institute for Data Science and AI. And it’s a real pleasure to welcome you to this session today, which will be dedicated to thinking about AI standardization and the role that multi-stakeholder participation and international cooperation have to play to make AI standards a success. There’s been quite a lot of discussion, of course, across many different sessions around AI over the last few days, including on AI governance in many different ways. And standards has come up in quite a few different contexts. But I don’t believe there has been a full session dedicated to standards in the sense that we will be looking at today, which is standards developed by formally recognized standards development organizations. We’ll tell you a bit more about what we mean by that in a moment. And so we’re really excited about the opportunity to dive deeper into this particular topic, into the role that standards as a specific governance tool can play to ensure the responsible use and development of AI. And to do so in particular in relation to the principles that are at the core of IGF in terms of multi-stakeholder participation and international cooperation. I’ll say a few words about the structure of the session. We will begin with a presentation about an initiative that we launched in the UK just about a year ago. That initiative is called the AI Standards Hub. Some of you may have heard about it before. It’s a partnership between the Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory in the UK, working very closely with the UK government.
And it’s an initiative dedicated to awareness raising, increasing participation, and capacity building around AI standardization. So we’ll tell you a bit about how we set up the initiative, what the mission is, and also our plans and sort of interest to collaborate internationally with like-minded partners around these topics. And we’ll then move on to a panel discussion. We’ve got four terrific speakers with us today from different regions of the world to join us and reflect on these themes of multi-stakeholder participation and international cooperation in AI standards. And then we’ll make sure to reserve some time at the end for your participation, your thoughts, and questions that you may have. We will be using Mentimeter later on as an interactive exercise, but we will get to that later on. We’ll share the link for that when we get to it. And please do feel free, you know, throughout the session to use the chat function or the Q&A function to share any questions. We will monitor the chat and we will try our best to work any questions into the session as we move along. So with all of that said, we will start with the presentation, and for that I’m joined by two colleagues: Mathilda Road, who is the AI and cybersecurity sector lead at the British Standards Institution, which is the UK’s national standards body, and Sandeep Bhandari, head of digital innovation at the National Physical Laboratory, which is the UK’s National Measurement Institute, or Metrology Institute. So I’ll pass over to them and then I’ll come back later.

Matilda Road:
Mathilda, over to you. Thank you, Florian. Good morning everyone. It’s great to see so many of you here, and thank you to those of you who are joining us online as well. So the AI Standards Hub, as Florian has already introduced, has got two key missions. And the first is advancing the use of responsible AI, and that’s by unlocking some of the particular benefits of standards as a governance mechanism. As Florian mentioned, this week we’ve heard a lot about regulation for AI, guidelines and frameworks, but in this session we’re specifically focusing on standards, which are distinct from these other regulatory mechanisms in the sense that standards are voluntary codes of conduct representing best practice. And the second mission of the Standards Hub is to empower stakeholders to become actively involved in the international AI standardization landscape, including participation in the development of standards and the informed use of published standards. If you’ve attended any other sessions this week on how we can look at responsible AI practices, you might find the landscape slightly overwhelming, and the AI Standards Hub can hopefully be a tool to help navigate that space. Is anyone in the room involved in the development of standards in any way? Just out of interest? No, okay, great. So there are several organizations behind the AI Standards Hub. We’ve heard again this week, and I’m sure you’ve been to other sessions, that the use of responsible AI calls for tracing the data that’s used in models, finding weaknesses, making sure that they’re reliable and not giving us untrustworthy results. Many of these questions are actually still open research problems, and that’s one of the reasons that the Hub brings together several organizations with different strengths. So the three that we’re here representing today that make up the Hub are the Alan Turing Institute, which is the national institute for Data Science and AI, so it’s an academic research organization.
BSI, the British Standards Institution, which is the national standards body that represents the UK at ISO, the International Organization for Standardization. And the National Physical Laboratory, NPL, which is the National Metrology or Measurement, not Weather, Institute, which produces technical measurement standards. And these feed into the overall standards themselves. And this initiative has been supported by the UK government’s Department for Science, Innovation, and Technology. So international standards are governance tools which are developed by various standards organizations, some of which we’ve listed on the slide here. And if you aren’t aware of the standards development landscape in AI, which hopefully by the end of this session you will be more informed on, you might have come across some of the most famous ISO standards, such as the 27001 series on cybersecurity and 9001 on quality management. And there is now a rapidly growing landscape of standards for AI. So we’re anticipating the first ISO standard on AI to be published at the end of this year or perhaps the beginning of next year. And there are many others in development, including on the use of sustainable AI, mitigation of bias in AI, and a very interesting standard to be published, hopefully next spring or summer, 42006 on audit practices for AI, which will also be very interesting for compliance with the EU AI Act. So why standardization for AI versus, for example, regulation or frameworks? Well, regulation is obviously something that’s supported by a legal framework, and organizations are required to comply, whereas standards are voluntary codes of best practice. But why would companies bother to adhere to these voluntary codes? Well, as you might have heard from some of the large organizations developing AI models this week, they’ve been developing their own internal codes of best practice, but each one of these is slightly different.
If we can develop a standardized way of doing this, we can provide quality and safety assurance, and build in other goals like environmental targets or the UN Sustainable Development Goals. They can be used for ethical development, knowledge and technology transfer, and to provide interoperability between products. Ultimately, standards can help build trust between organizations and their consumers, and also along the supply chain, both in the supply chain that an organization is feeding into and the organizations that are feeding into your own supply chain. They can also provide market access by complying with certain trade requirements. They link into other government mechanisms, and can also be used as a kind of pathway towards regulation, as they are indeed in certain sectors, particularly for things like medical devices, for example. So, since we had a response in the room on standards development, I hope this is relevant information: standards are voluntary for organizations to comply with. They’re developed by committees, so they’re not developed by the standards bodies themselves, and unlike regulation, which is developed by regulators, they’re developed by experts in the area, who are volunteers on a standards committee, and they’re developed by consensus, two-thirds consensus, in case you’re interested. There are also quite a lot of standards: roughly 3,000 standards are produced every year by BSI, the British Standards Institute, alone, and again, I hope that the AI Standards Hub will be a useful tool for those of you who are looking to navigate this space with regards to AI. So not just the horizontal AI standards, which are general standards related to AI, but also the ones that are sector-specific, because we have specific requirements in certain sectors. Because it’s early in the morning, I thought it would be fun to do a quiz. And I wondered if these things on the board mean anything to anyone in the room. Don’t be shy. Okay, good.
Yeah. This again is a kind of indicator of the fact that there can be quite an overwhelming number of acronyms and numbers in the standards landscape, which, once you become familiar with them, you find yourself using all the time, but which can make it quite impenetrable in the first place. So 42001 is the standard that we’re expecting to be published at the end of the year. It’s currently at FDIS stage, which is final draft international standard. It means it’s only out for editorial comments, and as long as there aren’t too many of them, we’re expecting it to be published in December or January. So this will be the first international standard published on AI, and it’s an AI management system standard. I already mentioned 42006 on audit, which we expect next spring. JTC 1 is Joint Technical Committee 1, which is the parent committee of Subcommittee 42, the committee that actually developed 42001. I could keep going with these numbers. And then, just to show how this maps down to the national level, ART/1 is the relevant AI standards development committee within BSI. And in case you’re interested, on the ISO website there’s a lot of information about how many and which countries are involved in each committee in the development of the standards, so you can dig into that data. And with that, I’m going to hand back to Florian to tell you more about the Hub.

Florian Ostmann:
Thank you, Matilda. So with that set out in terms of what kinds of standards we are focused on and why we think those standards are important, let me tell you a bit more about the relationship between the Standards Hub and the UK’s policy thinking on AI, and then go into more detail on how we developed our strategy and the kinds of challenges that we’re trying to address with the Hub. So in terms of the policy context, and Nikita, who’s joining us on the panel discussion, will go into more detail later on, the main thing to mention is that the UK government has, over the last few years, gone through a process of thinking about the regulation of AI, but also more broadly the regulation of digital technologies in general, and throughout different pieces of policy work, policy papers, and policy statements, there’s been a recognition of the role of standards as a governance tool for the reasons that Matilda mentioned: the way in which standards are developed, the fact that they are open to input from all relevant stakeholder groups, and the fact that they can be useful to support regulation in various ways, or to fill regulatory gaps where regulation doesn’t exist. So Nikita will say more about this, but essentially the Hub is a deliverable that was highlighted in the National AI Strategy that the UK government published in September 2021, and it also plays an important role in the context of the recently published AI Regulation White Paper, which came out about half a year ago, in March this year. Now, just a few words about the AI Regulation White Paper at a very high level: it sets out a context-specific, risk-based, and decentralized regulatory approach. What that means in practice is that it’s based on the view that existing regulatory bodies are best placed to think about the implications of AI in the relevant regulatory remits.
And in order to encourage and enable regulators to think about the implications of AI, the White Paper sets out five principles. These principles will be fairly familiar to anyone familiar with AI governance; they resonate very closely with the OECD AI principles, for example. So the White Paper sets out these five principles and then essentially puts the task to regulators to think about the implementation of these principles in their remits, and emphasizes the role of the regulators in implementing them. So, in a sense, there’s an important link between the objectives of the regulatory approach and the role of standards, in the sense that standards are seen as facilitating the implementation of principles, providing the detail that is needed to make those principles meaningful in a given context, in a given regulatory remit. Let me turn now to the stakeholders that we’re trying to address with the Hub. As Matilda mentioned, standards in the organizations that we are focused on are developed through a process that is open to all stakeholder groups, and we know that in the AI space there are lots of different stakeholder groups whose interests are affected or whose views are relevant to the development of standards. That includes, of course, different actors in industry, but also participants outside of industry, and it includes, importantly, civil society and consumer perspectives, and it also includes regulators and academic researchers. And while the standards development process is open to all of those groups, we know historically that not all of these groups are equally strongly represented in those processes.
And so to give some examples, civil society voices we know are less strongly represented compared to other voices, compared to industry, for example. And within industry, SMEs and startups, for example, are less strongly represented compared to larger companies. So at the core of the mission of the AI Standards Hub and the reason for setting it up isn’t just the recognition of the importance of AI standards, but it’s also the recognition that in order for AI standards to be effective and fit for purpose, it’s really important that all of those stakeholder perspectives are included in the development of standards. And what we’re trying to do with the Hub is to help all of those stakeholder groups, and especially those who have less experience in the space, to develop the knowledge, develop the skills and the understanding, and also perhaps the coordination that’s needed to achieve that involvement. In terms of what sort of the key groups are, I think I mentioned them already. So in the private sector, it includes larger and smaller companies, includes civil society, consumer groups, regulatory bodies, academia, and then, of course, there are people who are already actively involved in standards committees. Those are also key because they can, of course, play an important role in guiding others and sharing information about that work. We did a fair amount of stakeholder engagement leading up to the launch of the Hub. So we were very mindful of making sure that we develop an initiative and develop a shape for the initiative that meets actual needs, rather than just developing something in the abstract for which there isn’t a need. And so we did several engagement roundtables and surveys. 
with each relevant stakeholder group. One of the things we tested at the outset, of course, was whether there is a recognition of the importance of standards, and what the current level of awareness and engagement in the space is. And as this slide shows, there’s more detailed data, but just at a high level to give you a sense: across all groups there was a strong recognition that standards are going to be key for AI governance, and there is significant thinking in each group about AI standardization, but there is a clear gap, as you can see, between the perceived importance of the topic and the extent of current thinking. That awareness gap, and to some extent also capability gap, in thinking about standards is what we’re trying to address. We then tried to dig a bit deeper and explored with stakeholder groups what the challenges are: what’s holding you back, what explains that gap in engaging with AI standardization? And at a high level there were four key areas that came out of that part of the engagement. The first one is a perceived lack of easily accessible information around AI standards. That includes keeping track of which standards are being developed and published, but then also identifying those standards that are most relevant to a given user or stakeholder. Secondly, the skills needed to contribute to standards development or engage with standards once they are published. There’s a strong sense that the process of developing standards can be quite complex, and you need knowledge and skills to navigate the process, but then of course also knowledge about what best practice for AI looks like: what does a good standard look like, and what should I be contributing if I am on a committee and contributing to drafting a standard? So skills are the second area.
Thirdly, securing organizational buy-in for engagement. We know that engagement with standards development can be time-consuming. How do I convince my organization that that’s a worthwhile thing to do, given that there are competing resource priorities? And that’s relevant, of course, especially for those types of organizations who are historically less involved in this space. And then fourthly, a need for analysis and strategic direction. Given the fact, and I’ll say more about this in a moment, that there is such a vast number of AI-related standards already being developed: which are the areas that are most important? Are there gaps that need to be addressed, standards that are missing? So there is a need for strategic direction in shaping AI standardization. Those were the challenges. In shaping the strategy, we then essentially translated those challenges into four different pillars of activity that the Hub is pursuing. The first pillar is what we’re calling the observatory. That can be found on our website, and it consists of two databases: one is a database on AI standards, and the other is a database on AI-related policies from around the world. Community collaboration is around organizing events, virtual and in-person, to engage the community and bring stakeholders together around conversations, to gather input into standards that are under development, to identify priorities and needs, and so on. Knowledge and training is where we’ve developed a suite of e-learning materials that can be found on our website, but we’re also offering training events, both virtual and in-person. And then, fourthly, research and analysis. That’s a more traditional research function, where the Hub pursues research to develop insights to address these needs for strategic direction and analysis.
I would like to say just a bit more about each pillar, and in particular the observatory, and within the observatory the AI standards database, because that’s in a way the resource that took the most thinking in terms of how we developed it and how it should be designed. So the observatory for AI standards is a database on our website that tracks both standards under development and standards that have already been published for AI. You can see a breakdown on the slide of how these standards are distributed across different categories. The key thing here is that it’s really a large number already, over 300 relevant standards captured in the database, including a large number of standards that are already published. What was key in designing the database is to make it easier to navigate that vast number of standards. So we’ve developed a range of different filter categories, a search function, and so on. We also have interactive features: it’s possible to follow a standard, in which case you get updates when the standard moves from one development phase to the next, for example. You can let other community members know if you have been involved in the development of a standard, so they can reach out to you and find out more. And then there is a discussion thread, and also the opportunity to leave reviews for a standard that you may have used or that you may have been involved with. In terms of the other pillars, I’ll keep this very brief, but you will find more information on all of this on our website. On the community collaboration pillar, over the last 12 months we had a series of events. Those are to a large extent recorded, and you will be able to find the recordings on our website if you’re interested. Some of that was focused on transparent and explainable AI as a specific topic; other events were more generally focused on trustworthy AI.
There was targeted engagement with certification bodies, and then we also have a standing forum for UK regulators, where regulators have a space to come together among themselves as a single stakeholder group to exchange knowledge around the role that standards can play in AI regulation. For knowledge and training, as I mentioned, that includes various e-learning materials; there’s a snapshot of some examples on this slide. If you’re interested, we’d like to invite you to take a look at that on the website, and the same is the case for research and analysis. This is just a snapshot of some of the most recent pieces, but more of that, and more details, you’ll be able to find on the website. That concludes the summary of what we have been up to so far and why we set up the AI Standards Hub, and I’ll now pass on to Sonny to tell us more about our objectives and our interest in collaborating internationally.

Sonny:
Brilliant, thank you Florian, and good morning to everybody in the room, and good afternoon and good evening to those online as well. So my name is Sonny and I’m from the National Physical Laboratory, part of this amazing collaboration that we’ve set up, and I’ll talk a bit more about what the collaboration is, what our aspirations are, and why we have those objectives and aspirations. So we’ve heard a lot over the last few days about the growing need for standards to help with governance and with assessment, and we’ve heard about many different challenges. On the screen you can see several different initiatives, development of policies and strategies, but actually even just yesterday I heard about some work going on in Africa, where across the continent there are at least 25 initiatives and around 466 different policies in development. So we’ve got this environment out there in the world where lots of countries and lots of regional organizations are working to do all of this work, and we’ve really seen that the world recognizes the importance of standards. If we draw out just one of those examples, the recently published EU AI Act, we can see the support for the development and creation of, and conformity to, standards. To do that, most nations have something called a quality infrastructure, which tends to be built up of a few different organizations: organizations such as my own, which produces technical measurement standards, the national standards body, such as the British Standards Institute, and then other organizations that actually check conformity and compliance, as well as accredit organizations. So our Hub is an example of how bringing these organizations together can be a valuable exercise in itself, because it helps with a diverse set of skills and capacity building, as well as looking at the entire ecosystem, the whole value chain, together at the same time.
But how do we lift that from a national paradigm to an international paradigm? These standards have to be worked on by consensus. And we all have shared challenges, and globally we’re all at various stages of our domestic journeys. So how do we bring everyone around the world to the same level and work on our shared global challenges, to truly realize the benefits of AI, as well as give us as normal people, as the public, the confidence to have in this technology and really benefit from it? Here on the screen you see some of the role of standards within the EU AI Act, where they’ve taken five principles and then set out seven essential requirements for high-risk systems. And so the European Commission has requested CEN and CENELEC to now develop standards around 10 issues, to really try and harmonize these standards, which then gives this presumption of legal conformity. Now, as I said, these standards are generally voluntary, so how can we work on these together such that everyone is on that same level platform? So we really are trying to do this, and on the screen you see three small examples of some of the things that we have in train. In addition, we have had much international interest, and the outreach we’ve had from north, south, east, and west has been really pleasing. We’re partners with the OECD, and we cross-reference with their tools and metrics for trustworthy AI, and they also cross-reference with the Hub. We’re also doing a lot of work with NIST and other like-minded organizations. To put that into a bit of context, NIST is the American equivalent of NPL, and there are around 100 such organizations around the world, which are signatories to the 1875 Metre Convention. So there are already certain platforms to do this work. Now, assessment, for example, is expressly a measurement activity. How can we all understand and make those measurements?
How would you actually measure the trustworthiness of something quantitatively, also appreciating that AI is very context-specific, so there is now a new paradigm where we also have to think about qualitative assessment and measurements? And then another example we have here is some of the work we’re doing with other national standards bodies around the world; in this case we’ve pulled out the bilateral work going on with Canada at the moment, and again, it’s not limited to just Canada, we’re working with many other countries. Next slide please, Florian. And so broadly, these are the kinds of things that people are asking us to think about and do. How do we build these international stakeholder networks? There is a big challenge out there in the world that every region is lacking the skills, the resources, the people, and the knowledge in these things, so how do we bring the right people together to share and to address these shared challenges? And so, as talked about several times already, it’s about bringing the national standards bodies together with the national measurement institutes, bringing the right academic prowess into the room, and most importantly, asking why we are doing this and who we are doing it for. We’ve been asked to help and work with others on collaborative research, and also on developing these shared resources, so lifting up from the national paradigm to the international paradigm. Florian has already shown some screenshots of the platform, and what I’d really like to finish this part with is: this is not just a UK resource. Anybody can access it, so please come have a look, and if there’s anything there of interest and you would really like to work on shared challenges, then please get in touch. Thank you.

Florian Ostmann:
Great. Thank you, Sonny. So that brings us to the end of the presentation part of our session. As I mentioned earlier, we do have an interactive exercise that we’ve prepared and that we’d like to come back to towards the end. We’ll do that using Mentimeter, and so before we move on to the panel discussion, I’d like to invite you all, both those of you in the room and those of you joining online, to take a moment to go to Mentimeter and then, in your own time, complete the questions that come up on your screen. It’s not a big exercise, so don’t worry. It’s also worth mentioning that it’s completely anonymous, but I think there will be some interesting results that we can look at when we get to the discussion later on. To get to Mentimeter, you can either go to menti.com and enter this code, my colleague Anna will also put the link for Mentimeter into the chat, so you can just click there, or you can try to scan the QR code if that works for you. So we’ll just take a moment, I’ll leave the slide on for a short moment, and the link is now in the chat as well, before we move on. Great, I think I’ll stop sharing the slide, but the link for the Mentimeter is in the chat, so I hope everyone will be able to access it there. Let me move on to introducing our panel. As I mentioned, we’re very excited to be joined by a great panel of experts today, with a vast amount of experience in the AI standards space from across different regions of the world, and also Nikita Bhangu, our colleague from the UK government, who will tell us a bit more about the context in the UK policy field. So I will stop sharing the slide, and I’d like to invite our panellists to turn on their cameras. Fantastic. Nikita is joining us here on the stage, so it’d be great if you could move the camera such that we are both visible. Nikita is sitting to the right of me. And I’ll just briefly introduce our panellists, starting with Nikita.
Nikita Bhangu is the Head of Digital Standards Policy in the UK government’s Department for Science, Innovation and Technology. She works in the Digital Standards team in the department, which brings together the UK government’s global engagement with key internet governance and digital standards bodies, and she works on the digital standards policy portfolio, which includes standards policy on new and emerging technologies such as AI and other areas such as quantum technology. So welcome, Nikita, and thank you for joining us. Next on the panel is Ashley Casovan, Executive Director of the Responsible AI Institute, which is a multi-stakeholder non-profit dedicated to mitigating harm and unintended consequences of AI systems. Ashley has been at the forefront of building tools and policy interventions to support the responsible use and adoption of AI and other technologies. She’ll tell us more about that, including her and the Institute’s important work on certification. And previously, Ashley led the development of the first major AI-related government policy instrument in Canada, which is the Directive on Automated Decision-Making. Welcome, Ashley, and thank you for taking the time to join us. Wan Sie Lee is the Director of Data-Driven Tech at Singapore’s Infocomm Media Development Authority. In the area of AI, Wan Sie’s responsibilities include driving Singapore’s approach to AI governance, growing the trustworthy AI ecosystem in Singapore, and collaborating with governments around the world to further the development of responsible AI. She is also responsible for encouraging greater use of emerging data technologies, such as privacy-enhancing tech, to enable more trusted data sharing in Singapore. Welcome, Wan Sie, and thank you for joining. And then, last but not least, we have Aurelie Jacquet, who is an independent consultant advising ASX 20 companies on the responsible implementation of AI.
Aurelie also works as a principal research consultant at CSIRO’s Data61, which is part of Australia’s national science agency, and she leads global initiatives on the implementation of responsible AI in various areas. And one piece that’s particularly worth highlighting, which, again, we’ll hear more about, is Aurelie’s role in chairing Australia’s national committee for AI standardization, which represents Australian views in the development of AI standards at ISO. Welcome, Aurelie, and thank you. Great, so with those introductions done, let’s move on to the first round of questions. And I would like to start with you, Nikita, from a UK perspective. I mentioned earlier, at a very high level, what the relationship between the Hub and the wider policy thinking in government has been and is, but it’d be great to hear from you a bit more about how policy thinking in DSIT relates to the Hub. What are the ideas that led to the creation of the Hub? And why does the UK government think that this is an important initiative?

Nikita Bhangu:
Sure, thank you, Florian, and good morning to all of those in the room, and good afternoon and evening to those online as well. As Florian mentioned, I’m Nikita Bhangu and I’m the UK government representative on the panel today. Florian, Matilda and Sonny provided a great overview of what the AI Standards Hub does. I guess just to provide a bit more context from the UK government perspective and our policy thinking, I’ll run you through our approach to standards and how we’ve embedded that into our AI policy and governance as well. So, to start with, it’s just to note that the UK government sees many benefits in AI standards and in engaging in the standardization landscape. In our recent AI White Paper, which sets out our approach to regulating AI more broadly, we noted the important role that standards and other tools, such as assurance techniques, can play within the wider AI governance framework, and how they can help implement some of the approaches from the UK government’s AI policy as well. The paper recognizes that digital standards are not an end in themselves: they are a means of making the technology work, and it’s really important to consider the wider toolkit that we have within our regulation and governance approach to AI as well. Under the UK presidency of the G7, we also looked into digital standards with our G7 and like-minded colleagues, and set up the collaboration on digital technical standards, to note the importance of working together within this space and recognizing the benefits that standards have within the wider AI policy and regulatory framework.
Having said that, in terms of the benefits of AI standards, we also recognize that it is a very complex space. From speaking to our stakeholders and through our research and collaboration with international partners, we recognize that there are many barriers to participating in the AI standardization ecosystem. So as the UK government we were really keen to work with our stakeholders and our international partners to reduce these barriers, so that standards can be for all: from knowing what standards are and how to adopt them into your business, to encouraging that multi-stakeholder global approach to developing standards and providing all groups with the opportunities and toolkits they need to participate in this ecosystem as well. You will have heard a lot at the IGF today about the importance of collaboration and a multi-stakeholder approach to digital technologies. That’s exactly the same for standards, which is quite difficult to do, because, as I mentioned, it is quite complex; many of the people who develop standards have been playing in that game for many years, so there is a need to support our stakeholders, to help get them into those organizations and really understand what standards are. So, through consulting with our stakeholders, we identified the key challenges in the UK, which Florian went through in the presentation just now, and we thought about how we can intervene in that market to support our stakeholders in reducing those barriers, to enable the benefits of AI standards to seep through.
Some of our key aims in setting up the AI Standards Hub and doing that work were to increase the adoption and awareness of standards, to create clear synergies between AI governance and standards, hence our work with the AI White Paper and setting out the role of AI standards as a tool for trustworthy AI, and also to provide practical tools for stakeholders to understand and engage in the standards ecosystem. So that really was our thinking behind setting up the AI Standards Hub and working with our key experts in the field, bringing together parts of the UK national quality infrastructure, the British Standards Institute, the National Physical Laboratory, and our national AI institute, to bring the minds together so that we can reach a wide user base in the UK and beyond, and help facilitate the reduction of the barriers we’ve seen in this space. The AI Standards Hub has been running for a year now. I think next week is the first birthday of the AI Standards Hub, which is great, and we’ve seen lots of success in this space over the past year. We’re looking to increase our international collaboration with the AI Standards Hub in the coming years, and I’m really keen to follow up on this conference and participate with you more in that space as well. I think the last thing I would just note is that the UK government commissioned an independent evaluation of the pilot phase of the Hub, which was the first six months, just to understand what’s worked well and how we can continue growing. We will be publishing that evaluation on our gov.uk website, so it will be accessible for all to look at. But some early findings really indicate that the Hub has helped support the UK community in understanding what AI standards are.
We conducted a survey and found that 70% of respondents noted that the Hub is really helping in bridging that knowledge gap, and inspiring and motivating them to get more involved in standards development organizations, which is great to see. I’ll stop there.

Florian Ostmann:
Great, thank you very much, Nikita, for adding that context, and yes, it’s exciting that we’re approaching the one-year anniversary. Thanks for mentioning that. So we’ll now move on to the international perspectives, and I didn’t make it explicit earlier, but the great thing about the panel is that we’ll have perspectives from Canada and the US, so North America, which is Ashley’s focus, and then Wan Sie from Singapore, and Aurelie’s experience in Australia. It’ll be great to hear how some of the themes that we shared resonate with your experiences in those countries. So, as a first round, I’d essentially like to ask each of you roughly the same question, which is: how does what we’ve presented so far, and of course you’ve heard about the AI Standards Hub previously, the challenges that we’re trying to address, the kind of initiative that we’ve built, resonate with what you see in terms of AI standardization priorities and challenges in your countries? I know that in some cases there are initiatives that are quite similar, or comparable in nature, at least overlapping, and perhaps we can start with you, Ashley. One such initiative is the Data and AI Standards Collaborative that you are heavily involved in. So it’d be great to hear a bit more about that, and also, more generally, your reflections on this space.

Ashley Casovan:
Yes, thank you so much, and thanks for having us here to present about the work that we’re doing, and also, I think, for establishing this really important conversation related to AI standards. As you’ve mentioned, it’s becoming a more important discussion, or at least one that more people are reflecting on, given the connection to different types of regulations. However, it still seems to be a very confusing topic, because standards can mean so many different things. Ironically, standards are not standard, and so there are a lot of different entry points into that conversation, and understanding why they’re being established and for what purpose is something that we’re trying to reflect on in the Canadian context. As Florian mentioned, I am heavily involved in an initiative that’s been established by the Canadian government called the Data and AI Standards Collaborative, and I am the co-chair of that, representing civil society. In this capacity, we’ve been trying to understand the implications of AI systems and the data that feeds into them, and to bring together civil society, academia, and government agencies to reflect on what types of standards are needed, really similar in nature to what you’ve already heard from the Standards Hub. And one of the things that we’re quite interested in doing as part of this initiative is trying to identify different types of specific use cases, again aligned to the pillars that Florian presented on previously, and to understand the context-specific standards that are required within the whole value chain of an AI system.
And I guess what I’ll say in addition to that, maybe because I’m here to represent the North American piece, though I do not speak on behalf of NIST, but because it was mentioned earlier: we’re starting to see a lot of uptake of tools in the North American markets related to some of the work happening in these national government activities. So Florian earlier spoke about NIST’s AI RMF, the AI Risk Management Framework. And what we’re starting to see through this initiative, the OECD, et cetera, is work through these multi-stakeholder forums to establish good baseline initiatives for standards to be developed from. That could be things like even just what the life cycle of an AI system looks like, or what types of definitions we should be using for these systems, so that we have some commonality amongst those. Because what we’d like to get into more deeply from a Canadian Data and AI Standards Collaborative perspective is, as I said, back to those use cases: understanding what types of certifications, standards and mechanisms are required, both for the evaluation of a quality management system that Nikita spoke to earlier, which we’re seeing with 42001, and then what is needed at a product level, which is work that we’re doing at the Responsible AI Institute, which I’m sure I’ll speak to after. And then looking also to individual certification. This is something that Aurelie, I’m sure, will address, as it’s something she’s been quite interested in for a while in terms of what individual training looks like. So when I mention these different types of standards that are needed, there’s really a breadth that we need to look at. I’ll leave it there, and I’m just really happy to be here and have this discussion at an international forum like this.

Florian Ostmann:
Great, thank you very much, Ashley, and we’ll come back to some of those points later. Moving on to you, Wansi, a similar question for you. How do the points around the importance of standards, but also the challenges, resonate with your work and your experience in Singapore? And I believe there’s an initiative that’s quite relevant from your perspective, the AI Verify initiative. It’d be great to hear a bit more about that and then your views on standardization more broadly.

Wansi Lee:
Thanks, Florian. Hi everyone, I’m Wansi from Singapore. I’m from the Singapore government. Thanks for having me on the panel this morning. It’s really interesting to be able to talk about standards with like-minded folks from around the world. One of the things that we recognize as very important for us in Singapore is the need for international cooperation. So what Sunny talked about just now really resonated as well. International cooperation can be done in various ways. Of course, Singapore is an active member in the ISO process, so we participate and we contribute and we vote and so on. But at the same time, we don’t stop at the ISO level of cooperation. We also work quite closely with other countries, and we participate actively in the multilateral process. Maybe just as an example, since NIST was brought up in the earlier presentation: the NIST AI RMF coming from the U.S. is something that many organizations are looking at. What’s important for us, then, is how our own work in Singapore maps to, or works together with, what NIST has already published. So we very actively started a mapping project with NIST. We developed a crosswalk where we looked at what we’ve done in Singapore in terms of our guidelines for AI Verify and the Model AI Governance Framework that we published a couple of years ago, and we did a mapping exercise to see where we are aligned and where we’re different. We’ve gone to some level of detail, and even at that level of detail, there are many similarities and quite a lot of alignment.
We find that this work is very helpful for organizations or companies that are operating internationally, because they want to make sure that what they’re doing in terms of implementing the right practices for responsible AI is aligned both to Singapore’s requirements and to some of the standards work that’s happening in the U.S. So that’s why we started that process with NIST. And extending that, we’re looking at other standards that are being developed through ISO, CEN and CENELEC and so on, to see how we can align as well. So that’s one example of how we can cooperate internationally and how we can make sure that there’s at least some kind of alignment or interoperability amongst the guidelines and standards that have been developed. The other area that resonated is the need for multi-stakeholder engagement. Of course, there are platforms to do that. ISO is one platform; our own Singapore Standards work is another. But I thought, as Florian mentioned, I’d highlight one of the things that we’re doing that’s a little bit different, just to show that there are many alternatives out there. So besides guidelines and requirements that the Singapore government sets out, we also wanted to make sure that organizations are able to demonstrate adherence or compliance to some of these guidelines. So we developed the AI Verify Testing Framework and Toolkit. It’s a set of detailed requirements for how you validate responsible AI practices, or the implementation of responsible AI requirements, when organizations implement AI systems. So quite a lot of detailed process checks, for example, aligned again to international principles. We looked at requirements from around the world, we looked at principles from the OECD and so on, and then we defined that into a set of testing requirements. At the same time, we also identified how to test, right?
It’s not just about process checks, but also how we actually test the system. So we developed a toolkit, looking at some of the work that’s already been done by academics around the world, as well as some of the work that’s been done by companies, and put together a toolkit to test for fairness, explainability, and robustness, because those are things that we think we can test at this stage. But we also recognize that testing capability continues to evolve and there are many gaps, and people around the world are working on different aspects. That’s why we decided to open-source the AI Verify Testing Framework and Toolkit, not only so that people could contribute, but we also created an open-source foundation to support the contribution and engagement of organizations, developers and individuals around the world in building up the AI Verify toolkit and framework. Even as we look at generative AI, for example, that’s something that then needs to be extended in AI Verify, and that’s why we feel it’s important to work with the global community. The open-source foundation is one way in which you can get multi-stakeholder involvement in technical development, as well as sharing of knowledge and experiences in the space of AI governance testing. So that’s one slightly different take on how we approach multi-stakeholder engagement. Thanks. I’ll just pause here for now.
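[Editorial note: as a concrete illustration of the kind of quantitative fairness test described above, a demographic parity comparison can be sketched in a few lines. This is not AI Verify’s actual API; the metric, function name and example data are assumptions for the sketch.]

```python
# Illustrative only: a minimal fairness check of the kind an AI governance
# testing toolkit might automate. The metric here is demographic parity
# difference: the gap in positive-outcome rates between demographic groups.

def demographic_parity_difference(outcomes, groups):
    """Max difference in positive-outcome rates across groups.

    outcomes: list of 0/1 model decisions
    groups:   list of group labels, same length as outcomes
    """
    rates = {}
    for y, g in zip(outcomes, groups):
        n, k = rates.get(g, (0, 0))
        rates[g] = (n + 1, k + y)          # count of cases, count of positives
    positive_rates = [k / n for n, k in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Example: decisions for two demographic groups
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"demographic parity gap: {gap:.2f}")  # group A: 0.75, group B: 0.25 -> gap 0.50
```

In practice, a governance toolkit would pair quantitative checks like this with the process checks the panel describes, since no single metric captures fairness on its own.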

Florian Ostmann:
Thanks. That’s great. Thank you, Wansi. And it would be great to come back later in the next round and go into a bit more detail, both on priorities for international collaboration and on multi-stakeholder involvement. But before going into more depth, let’s move on to you, Aurelie, for your general take on this topic. You, of course, have a lot of hands-on experience, probably the most hands-on experience in relation to standards, given your role as the Australian committee chair. So it would simply be great to hear, from your vantage point, your take on AI standardisation, both in terms of importance and challenges and in terms of the international cooperation and multi-stakeholder angle.

Aurelie Jacquet:
Thank you, Florian. And again, delighted to be here in this forum and talking about standards and certifications. This is my favourite topic. To your point, I’d like to go back in time and remind everyone that back in 2017 there were already a few papers published at the UN, academic papers on standards, explaining how they can be used as an agile tool for international governance. And now that the standards are mature and we see a lot more published, there’s increased interest. From my perspective, I actually led Australia’s active participation in the standards. My motivation was this: I come from global markets, financial services, and I saw the mini crash that had happened and the onset of regulation that came after the GFC. At the time, from a compliance perspective, the thought was that we really need a set of best practice that we can provide to industry, in order to ensure that the onset of regulation is industry-informed. So that was a strong motivation for us to make the submission to Standards Australia and ISO for Australia to actively participate in and shape the international standards on AI. That was our entry into that world back in 2018. And as I said, the core business case is that Australia is a small country and it really needs to actively participate in the development of best practice for AI, which effectively has an international remit. We had a roadmap for AI standards already in 2020 that focused on 42001. You’ll hear that number a lot from me: that’s the AI management system standard, and this is what we described as the crown jewel of the standards journey, because it provides for the certification of AI systems. So this was one key part of our roadmap.
Obviously, also as part of the work that I do with CSIRO, Data61 and the National AI Centre in Australia, one challenge is that standards are embedded everywhere in our lives, but they’re not visible, and often organisations are not aware of them. So we also started in Australia, through the National AI Centre and through the Responsible AI Network, which is a community of best practice with a community of experts and has seven pillars, including standards, to develop education programs that cover best practice, including AI standards. The initial courses that we developed were on what standards are, how they’re part of our daily life and how they’re relevant for AI, and of course the AI management system standard, what’s coming, and what’s likely to become the standard that will enable audits of those AI systems. With CSIRO, Data61, we’re also building a set of tools that leverage the standards work. And in terms of day-to-day adoption of standards in Australia, we’ve already got the NSW AI Assurance Framework, which is leveraging standards to provide assurance for AI systems used by the government. This has been made mandatory for all public services in New South Wales: if they’re using AI, they have to go through that AI assurance framework. And from a business perspective, we see increased appeal in looking at standards that have effectively over 60 countries involved in developing them, and in which, let’s say, CEN-CENELEC and the EU Commission have been interested from the beginning. We had the EU Commission coming to our ISO meetings from 2018 onwards. And we see our governments taking this up already: even at the federal level, we had some guidance provided about ChatGPT and generative AI that referenced some of our standards on bias and others.
So there’s been a good uptake from that perspective in Australia. And I’ll finish with international initiatives that we have initiated. With Standards Australia, we developed a workshop that we delivered at the APEC SOM, explaining how AI standards can help, really how standards can help scale AI, and what the benefits are for organizations in different economies. But to your point, Florian, there’s still this challenge of getting the standards well known, so that they’re much more visible and participation increases. But really, standards have so far proven, as I said from the beginning, to be a very good agile tool for international governance.

Florian Ostmann:
Great. Thank you very much for that overview, Aurelie. And I think that was a really good segue to the next round of questions. We ended on the challenges and the work that remains to be done, and I’d like to do two rounds, one on each of the topics for the session: the first on multi-stakeholder engagement, and the second on international collaboration. Let’s start with multi-stakeholder involvement. Of course, standardization is already, compared to other governance mechanisms, a very inclusive mechanism; it contrasts with regulatory rulemaking, for example, in that the process is in principle open to all stakeholder groups. But we are aware, as we mentioned at the beginning, that not all groups are equally represented. So it would be great to hear from all of you, and we’ll start with Nikita. What do you see as the main challenges? What are the main obstacles to achieving equitable involvement from all groups? And also, what are the most promising strategies for addressing those challenges? What can be done, including collaboratively at the international level, to ensure and increase stakeholder inclusion?

Nikita Bhangu:
Sure, thanks, Florian. So I’ll start with some of the main challenges. We covered previously what the UK sees as some of the challenges, but just to point to a few of the key ones. For the UK in particular, it’s ensuring we have the right representation at the relevant standards development organizations. We’re seeing quite a few large companies, for example, representing industry at standards development organizations, which is, of course, great in terms of providing that view. However, most of our technology companies in the UK are small to medium enterprises, which are often quite small companies that may not have a large regulatory team or standards experts with the skill set needed to engage effectively in the standards development organizations. So we recognize that as a key challenge for our small to medium enterprises as well. Another key stakeholder group is civil society, for which it has always been quite challenging to get the resourcing and the expertise into standards development organizations. Florian just mentioned the key point that standards are for everyone, and standards from the outset provide the building blocks for how technology will be developed, so it’s crucial that all stakeholder groups are kept in mind when developing these standards. Another key challenge, for the UK government particularly, is that government is itself a key stakeholder, and getting that expertise into the standards development organizations is hard. We have a very, very small technical team within our digital department back in London, whose resources can obviously only stretch so far. So getting those viewpoints and that coordination with such constrained resourcing is another challenge. One thing we’re doing in the UK government at the moment is thinking about the talent pipeline as well.
You know, trying to increase diversity now but also in the future, working with standards development organizations and other international partners to create, I guess you can call it, the next generation of standards developers. I know there’s a lot of work going on in this space already. I think BSI, our national standards body, do quite a lot in that space, and the IEC has a young professionals programme as well, to provide that career route and a continuation of skill sets into the standards organizations. One thing particularly relevant for the IGF is that we’re also working with the MAG, the Multistakeholder Advisory Group, to embed digital standards within that thinking, so again using international fora to promote that multi-stakeholder view and the tools that we can develop together to get different voices into standards development organizations as well.

Florian Ostmann:
Great, thank you Nikita. Ashley, over to you. How does that resonate with you in terms of, you know, your views on obstacles and also solutions for ensuring inclusion and participation of stakeholders?

Ashley Casovan:
Yeah, I think all of that resonates here as well. One of the challenges that we’re having with the Data and AI Standards Collaborative is that we’re trying to be incredibly inclusive, and so, to some of the points that Nikita was just making, the bandwidth of the teams within government that are trying to process and analyze all of that information does become constrained. It’s, I think, why I spent so much time in my previous discussion talking about the need for us to really understand what types of standards we are talking about, because then we can identify who needs to be at the table for which types of conversations. Having broad-based discussions about all types of AI in all types of contexts makes it really difficult to get the right stakeholders there. One very significant effort that we’re trying to make is to ensure inclusion across all aspects of civil society. Something that’s been missing from a lot of our conversations is Indigenous groups in the Canadian context, and so we’re making a concerted effort to ensure that the voices of the most impacted populations in Canada are not only brought in, but that we really understand the harms that can come from these AI systems, to try and find appropriate ways that standards can help mitigate those.

Florian Ostmann:
Great, thank you Ashley. And over to you, Wansi, for your views on stakeholder participation.

Wansi Lee:
Yeah, it’s definitely a very complex space. Singapore is a small country, the smallest here, I think, amongst everybody on the panel, and we also have limited resources. One of the things that we need to do is make sure that we focus our resources in areas where we can contribute to the global conversation, because there’s lots going on in the standards space. So we want to make sure that what we do makes sense in the grand scheme of things. That’s why we are very targeted in terms of where we want to develop and spend effort: a lot of what’s already happening internationally we could adopt, and where we think there are gaps, we want to make sure that we help to fill them. That’s why we look at tooling and testing as an emphasis in terms of where we want to put our resources. That’s not to say that other areas are not important; it’s just where we think, oh, there’s a gap and this is where Singapore can help. And that’s how we started AI Verify. In terms of getting more involvement, we definitely are very active in making sure that what we do is not just a government perspective. We are very active in engaging industry, companies large and small, that operate globally or domestically in Singapore, to make sure that their voices or their input can be incorporated. All organizations can participate in, let’s say, the AI Verify Foundation, which is open source anyway. We’re trying to make that a mechanism so that any organization that’s interested, even if you’re very small or not from Singapore, has a platform that you can contribute on.
And then from there, we take some of the work that’s being done at the AI Verify Foundation, rationalize it at the national level in Singapore, and then see how we can support it more globally across other platforms, whether it’s the OECD, GPAI, ISO, or other multilateral platforms. Thanks.

Florian Ostmann:
Great, thank you, Wansi. And over to you, Aurelie, for your take on stakeholder inclusion.

Aurelie Jacquet:
Thank you, Florian. So, Australia has a small yet powerful delegation. If you have a small delegation, that should not stop you from being involved in the standards. Most of our experts were new to standards, so it took a little bit of adaptation when we got started back in 2018. One thing I’d like to highlight is that we worked with other small countries to ensure that the mechanisms in place are actually fitting for our size. When you have few experts, you cannot have them in all the different meetings at all times, so we’ve worked very closely with others to make this process manageable. Australia has actually found a great way to look at the key elements that we have in Australia and how we want to lead on them overseas. Of course, we have the resource challenge and the time challenge. From a resource perspective, we’re very lucky: for experts with an organisation that is a not-for-profit or a smaller business, we have help from the government that allows us, as volunteers, to participate and travel and attend the ISO plenary that’s coming up in Vienna next week. One challenge that we’ve been working on very closely with CSIRO and the National AI Centre is that if you have not participated in the development of those standards, it’s sometimes hard to get the context around those documents when they’re written. Our experts have worked really hard to start developing white papers giving the background behind 42001, some of the bias standards and the sustainability standards that we are developing, and how they build into practice. One challenge remains, obviously, for SMEs: standards are often taken up by larger organisations, so how do we make them more fit for purpose for SMEs? How do we make them more easily accessible for SMEs? Those are conversations that are ongoing and on which we are working very closely.

Florian Ostmann:
Great. Thank you, Aurelie. Now, we’re already getting close to the hour, so we don’t have much time left. There’s lots more that I’d like to ask, but I also would like to make sure that we get a chance to hear from the audience. So I think we’ll briefly pause the panel and see who might like to come in. I think there’s one contribution in the back and also Holly in the front, so if the two of you would like to come in; and then, if anyone online would like to come in, you will be able to actually speak, so please do raise your hand if you’d like to contribute. But, yeah, please go ahead.

Audience:
Thank you. My name is Wout de Natris. I’m the coordinator of the Internet Standards, Security and Safety Coalition, a Dynamic Coalition here at the IGF. I’d like to make two comments. What I notice is that what we’re talking about here are all more or less government-accepted standards institutions like ISO, CENELEC, et cetera. What I’ve noticed in the research that we’ve been doing on internet standards is that in the technical community, quite often, all sorts of standards are made as well, and we found that they’re almost 100% not accepted in government policies. I don’t know, but if that is the case with AI as well, then you have two separate bodies creating standards: one may be official at some point, while the other ones, who make the internet run, and AI runs on the internet, are not addressed in any way. So my suggestion would be to reach out to the technical community and see what is being done in the IETF or IEEE, et cetera. My second comment is a little more strategic. I hear these fantastic initiatives that you’re presenting, and we have probably had 19 other AI sessions here at the IGF. So what is going to come out of this session? Ideally, it would have been some sort of, we can’t call it a declaration in the IGF context, I know, but what you’re doing should be the main message coming out of the AI track here at the IGF, and probably now we all go home with a little report stuck somewhere on a fairly obscure website. So when you talk about the MAG, perhaps, if you want to influence it, push for some sort of a declaration on this next year. Because what you’re presenting here is the future, and it’s a shame if we go home without the world hearing about it. So thank you.

Florian Ostmann:
Thank you very much for that. Thanks for the encouraging words in your second comment. To the first comment, just to briefly say, I think the point you raised is a really important one. We focused in the presentation on the organizations that we mentioned, but we are very much aware of the wider landscape, including standards developed in the IETF and elsewhere. It’s really part of the mission of what we’re trying to achieve to make those connections and provide the full picture. So thanks for bringing that in.

Florian Ostmann:
Holly, please.

Audience:
Hi. I’m Holly Hamblett with Consumers International. We’re a membership organization of consumer groups around the world. I want to start by saying that I think this is a really great initiative. I think it’s going to be really helpful to have that multi-stakeholder approach, and it’s really vital to get consumer organizations and larger civil society involved in these processes. But I wanted to briefly comment on the value of consumer organizations joining the AI Standards Hub, what we can bring, and then comment on what the AI Standards Hub will give to us and how it will be helpful. The value of consumer groups, and Consumers International especially, is that we can play a role in ensuring that AI is developed ethically and responsibly, because we represent the interests of consumers, who are the end users of the products and services. And we bring a unique perspective: a lot of consumer organizations are complaint mechanisms for consumers, so they have direct insight into how consumers are using the products and services and how they are being impacted. They do a lot of product and service testing themselves, so they have information on whether something is compliant with existing consumer protection regulations, and whether it needs to be enhanced in some way with standards. What I’m saying here is that consumer organizations have a lot of data that can help standards be supported with evidence and make sure they are reflective of consumer interests. In terms of what consumer organizations can bring to this space, we have those insights to make sure that standards are grounded in ground-level realities, to reflect how the technology will impact consumers. We can bring a global perspective, not just Consumers International, but our whole membership base. We have around 200 consumer organizations in around 100 countries. This is very global, very diverse and representative, and bringing in these voices is absolutely vital.
We can help ensure that standards are designed to protect consumer interests from the outset. It’s a huge problem with regulation, standards and policies that consumer interests are brought in at the end, as an afterthought a lot of the time, which leads to further harm for consumers. Bringing them into the discussions from the beginning is a really great way to make sure that not only is everything compliant with existing regulation, but that it is sustainable in the long run, because we can consider what impact it will have on consumers, mitigate the risks, and ensure that everyone enjoys the benefits. We can provide feedback on draft standards to make sure they’re clear, concise, and easy to implement, not just for businesses, governments and anyone else they apply to, but for consumers themselves. If consumers are aware of the standards, they’re able to exercise their consumer rights and engage with technology a lot better. So it’s really good to make sure that standards are translated into very consumer-friendly language, and that’s something that consumer organizations can absolutely help with. And the final way that we can help is to promote the adoption of standards by consumer organizations, businesses and governments. We are fairly connected in who we work with, and it’s a big benefit of working with consumer organizations that we’re able to say, this is consumer-friendly, we support this, which can help push a standard forward. The AI Standards Hub for us is going to be incredibly helpful. Florian mentioned in the PowerPoint that there are two very sizable challenges that consumer organizations, or civil society generally, face. One is that we are not often welcome in these spaces; it’s very difficult to get into the standardization process. This is largely due to the fact that the process is dominated by industry experts or technical representatives, and civil society isn’t generally there.
Which then leads to the consumer interest being the afterthought, which is something that absolutely needs to be avoided. And secondly, there is capacity building. Some of our member organizations are wonderful in the digital space; they’re very clued up on it. Other members are experts in consumer protection and consumer protection only. It’s very difficult for consumer organizations, traditionally underfunded, not very well resourced, and not experts in everything, to try and cover the vast scope of all digital issues, particularly complex emerging technologies like AI. So something like the community and capacity building of the hub is going to be beyond helpful. This isn’t something that we offer our members, so it’s going to be helpful to us as an organization, and to our members through that as well, to make sure that they can contribute not only to our work but to work globally and internationally, and that there’s the space and the capacity there to be able to do that. And I’ll end on one final note, because I know I’m taking up a lot of time here, but it’s very important to consider that consumer organizations are not a monolithic group. We represent a diverse range of views and interests, and it’s important to ensure that there’s broad representation of all consumer voices in AI standardization. One way to make this easier is for consumers themselves to understand the process, to contribute to it, and to know what is going on and how they can be a part of it. So we need to develop user-friendly tools, and we need to have the resources to help consumers learn about AI and AI standards and provide their feedback consistently. Thanks.

Florian Ostmann:
Great, thank you very much for that. We’ll be very interested to explore with you how we can work together to address those challenges, and it’s particularly great to hear about and consider your role as an international organization that brings together consumer organizations from around the world. Now, we’ve almost run out of time. I’d like to use the last couple of minutes, if we can, for a very quick round across the panel and invite each of you to share your final reflections, perhaps in particular your top three priorities for international collaboration, bringing it back to that theme, and also going back to the earlier comment encouraging us to think about tangible outcomes.

Nikita Bhangu:
Tangible outcomes following the discussions and collaborations in this space, really emphasizing research on standards and UK research, but also working with international partners to understand the broad issues that we’ve discussed today as well. Thanks.

Florian Ostmann:
Thank you, Nikita. Ashley, over to you.

Ashley Casovan:
Thanks. I’ll keep it short. I think understanding what’s already happening in the space, so that we’re not reinventing the wheel in any country, is really important. So an international exercise, whether through the OECD or another forum like the IGF, to map which standards efforts are taking place where, so that we can understand not only what’s being done but also what types of harmonization efforts are required, is something I’d really love to see. And then, I can’t stress this enough, AI is not one monolithic thing. So really starting to break down the different types of uses, and therefore the harms attributed to these systems in those specific uses, and then getting the right people, the right stakeholders, around the table to have those dialogues, recognizing that AI crosses or transcends borders, is going to be an important conversation for us to have in the years to come.

Florian Ostmann:
Great. Thank you, Ashley. And, Wansi, over to you.

Wansi Lee:
Thanks. I’ll also keep it short. For us, it’s really important that there’s no fragmentation of AI standards and AI regulations. So we have been working very hard over the last few years, and we continue to do so, to partner with countries and to be active on multilateral platforms to try to drive towards, or at least work together towards, some kind of harmonized, aligned, or interoperable standards for AI. We’re now starting to see a lot of countries coming up with their own requirements. In Singapore we’re doing it both within our region, where in ASEAN we support the development of a consistent ASEAN guide for responsible AI implementation, and at the same time, beyond ASEAN, we’re also active globally. Thanks.

Florian Ostmann:
Great, thank you. Aurelie.

Aurelie Jacquet:
Thank you. Following on from Wansi’s point, I think what’s important to note is that it’s actually good to see different practices and different regulatory initiatives. What standards provide is interoperability; that’s why we are doing standards and why we are involved in standards. It’s not about unification, it’s about harmonization. The key point we made in some of our workshops at the APEC SOM is that standards allow for diverse views while making sense of each of them; standards are the thread that brings all those views and perspectives together. From an Australian perspective, what we focus on is making sure we use AI responsibly and that we can scale it. To do that we need interoperability, and that’s why we use standards, not only as a way to check against international best practice but also to learn from it, because when you have 100 experts from government, academia, and industry in a room discussing best practice for responsible AI, that is a great resource to inform local policy, develop our experts, and grow the industry.

Florian Ostmann:
Thank you, great. In many ways we’ve only scratched the surface during the last 90 minutes; we could easily spend another hour or two discussing, but I’m glad we got as far as we did. I do hope that what we were able to cover sparked your interest, for those of you who might be entering the standards space without a background, and, for those of you who are already involved, that it offered those different perspectives from around the world. For all of you, going back to the motivation for the session and the discussion around international collaboration, we’d be really interested if you have ideas on collaborating and joining up initiatives across the fields you’re working in, and we’d love to hear from you, so please do reach out to us if you have ideas for working together. I think that’s the main message to end on. Other than that, all that is left to do is to thank everyone: thank you to our esteemed panelists for joining online across different time zones, thank you, Nikita, for being in the room, and thank you to my colleagues Matilda and Sonny for being on the stage. So thank you everyone, and let’s hope that there’ll be a continuation of these discussions, and that we’ll see many of you again in one way or another. Thank you.

Speaker            Speech speed           Speech length   Speech time
Ashley Casovan     168 words per minute   1097 words      392 secs
Audience           172 words per minute   1396 words      487 secs
Aurelie Jacquet    133 words per minute   1389 words      625 secs
Florian Ostmann    171 words per minute   5342 words      1874 secs
Matilda Road       160 words per minute   1346 words      506 secs
Nikita Bhangu      164 words per minute   1599 words      586 secs
Sonny              189 words per minute   1057 words      335 secs
Wansi Lee          173 words per minute   1503 words      523 secs