Importance of Professional standards for AI development and testing
11 Jul 2025 09:30h - 10:00h
Session at a glance
Summary
This discussion focused on the importance of professional standards for AI development and testing, with particular emphasis on generative and agentic AI applications. The conversation was moderated by Moira De Roche and featured participants from various countries sharing their experiences and perspectives on AI ethics and professional responsibility.
Jimson Olufuye from Nigeria shared his experience using generative AI for government-citizen services, highlighting both the benefits and risks, particularly noting how AI was misused for disinformation during Nigeria’s 2023 political period. A key debate emerged between panelists Don Gotterbarn and Margaret Havey regarding whether AI requires separate ethical frameworks or if existing professional ethics standards are sufficient. Gotterbarn argued against creating specific “AI ethics,” advocating instead for applying traditional professional values and practices to AI development contexts. Havey countered that AI presents unique challenges, particularly for non-developers who must work with AI systems trained by others and deal with issues like bias and multiple AI models.
The discussion extensively examined the UK Post Office Horizon scandal as a cautionary example of what can happen when technology fails and human oversight is inadequate. Participants debated whether this represented a technological failure or a failure of human judgment and professional responsibility. The conversation addressed the challenge of balancing innovation with professional responsibility, particularly as AI technology advances rapidly. Questions were raised about how to establish global standards when different regions have varying cultural and regulatory contexts.
The panel concluded that organizations like IFIP can serve as catalysts for developing ethical AI frameworks, with emphasis on training, accountability, and ensuring that professional standards extend from developers to CEOs and board members.
Key points
## Major Discussion Points:
– **Professional Standards and Ethics for AI Development**: The core debate centered on whether AI requires new ethical frameworks or if existing professional ethics standards are sufficient. Panelists discussed the need for developers and organizations to maintain professional responsibility while working with cutting-edge AI technology.
– **Real-world Implementation Challenges**: Participants shared experiences using generative and agentic AI in business contexts, highlighting both successes (like government-citizen services automation) and concerns about misinformation, bias, and the need for proper data quality and testing.
– **The UK Post Office Horizon Scandal as a Cautionary Tale**: This case study dominated much of the discussion, illustrating how technological failures combined with human negligence led to devastating consequences. Panelists used this as an example of what could happen with AI systems if proper professional standards aren’t maintained.
– **Organizational Integration and Responsibility**: The conversation explored how to properly embed AI tools throughout organizations rather than having isolated AI teams, emphasizing the need for training at all levels from CEOs to end users, and establishing clear accountability structures.
– **Global Standards vs. Cultural Differences**: Participants grappled with the challenge of creating universal professional standards for AI while acknowledging that different regions have varying cultural values, regulations, and ethical considerations that affect AI development and deployment.
## Overall Purpose:
The discussion aimed to explore how IT professionals can maintain ethical standards and professional responsibility while developing and implementing AI systems, with a focus on preventing disasters like the UK Post Office scandal and establishing frameworks for responsible AI adoption across organizations globally.
## Overall Tone:
The discussion maintained a serious, professional tone throughout, reflecting the gravity of the subject matter. While collaborative and constructive, there was an underlying urgency driven by real-world examples of technological failures causing human harm. The tone became particularly somber when discussing the Post Office scandal and its tragic consequences, but remained forward-looking as participants worked toward practical solutions and frameworks for professional AI development standards.
Speakers
**Speakers from the provided list:**
– **Moira De Roche**: Discussion moderator, uses generative AI daily for creating learning content
– **Jimson Olufuye**: Principal Consultant of Contemporary Consulting (Abuja, Nigeria), Chair of AfICTA, the Africa ICT Alliance (Abuja, Nigeria), involved in digitalization and G2C (Government-to-Citizen) services
– **Don Gotterbarn**: Retiree, expert in computing ethics and software development standards
– **Margaret Havey**: Works in an organization providing networks and communications for government departments
– **Anthony Wong**: Session facilitator/co-moderator
– **Stephen Ibaraki**: Participating remotely, moderator and conference participant, travels extensively for conferences
– **Damith Hettihewa**: Discussed data quality, the emerging data scientist profession, and IFIP’s role as a neutral facilitator
– **Liz Eastwood**: Participating remotely, asked questions about the British Post Office scandal
– **Audience**: Identified as “Ian” – Reviews AI abstracts, based in Asia
**Additional speakers:**
– None identified beyond those in the provided speaker list
Full session report
# Discussion Report: Professional Standards for AI Development and Testing
## Executive Summary
This international discussion, moderated by Moira De Roche, brought together experts from multiple countries to examine professional standards in AI development and testing. The conversation featured participants including Jimson Olufuye from Nigeria, Don Gotterbarn (a retiree with extensive experience in computing ethics), Margaret Havey (whose organisation provides networks and communications for government departments), Anthony Wong (session facilitator), Stephen Ibaraki (participating remotely), Damith Hettihewa, Elizabeth Eastwood (remote participant), and an audience member identified as Ian from Asia.
The discussion explored whether AI requires new ethical frameworks or if existing professional standards are sufficient, how to ensure accountability across organisations, and lessons from technological failures such as the UK Post Office Horizon scandal. Participants shared experiences with AI implementation while debating the balance between innovation and professional responsibility.
## The Post Office Horizon Scandal: A Central Warning
The UK Post Office Horizon scandal served as a recurring reference point throughout the discussion, illustrating the consequences of inadequate professional standards and oversight. Elizabeth Eastwood raised critical questions about how ICT professionals can convince management to implement proper testing and accept responsibility for decisions.
Different perspectives emerged on the scandal’s primary causes. Margaret Havey focused on implementation failures, arguing that problems stemmed from inadequate testing and poor organisational processes. She emphasised making leadership legally liable for IT system failures.
Moira De Roche offered a different view: “It was a failure in my opinion, more of human relations than of technology. So the technology let them down, but it was the actions taken by people using the output of their technology… Just relying on the data was a failure in human resource management.” She argued that management should have recognised patterns when multiple long-term employees suddenly received poor performance reviews.
Don Gotterbarn noted how IFIP’s code of ethics can serve as legal evidence when people claim no computer standards exist, highlighting the practical importance of established professional frameworks.
## Approaches to AI Ethics and Professional Standards
A significant discussion emerged between different approaches to AI ethics. Don Gotterbarn argued against creating separate “AI ethics,” stating: “I think that it’s basically a mistake to invent something called AI ethics. What happens is ethical decisions in general have to respond to contexts and situations… When you make that list and you start to make details, it fits a very narrow context.”
Gotterbarn advocated for applying traditional professional values through IFIP’s international code of ethics, contending that fundamental professional responsibilities remain universal despite cultural differences.
Margaret Havey presented a different perspective, arguing that AI presents unique challenges: “So most, I’d say the vast majority of people out there working with these products are not developers… And we have to be concerned about the multiple agents, the multiple types of AI that are in use and all the different models and all the data and regulations. So it becomes a different problem.” She emphasised that AI systems are taking on human personas and replacing human workers, requiring new considerations.
## Real-World Implementation Experiences
Jimson Olufuye shared insights from implementing generative AI for government-citizen services in Nigeria, highlighting both benefits and risks. He noted how AI was misused for disinformation during Nigeria’s 2023 political period, emphasising that AI output quality depends entirely on input data quality and that professional responsibility must focus on accountability regardless of technological advancement.
Moira De Roche, who uses generative AI daily for creating learning content, stressed the importance of human oversight. She advocated for embedding AI throughout organisational processes rather than having isolated AI teams, with comprehensive training needed at all levels. She also noted specific limitations, such as Microsoft’s image generation tools getting spelling wrong, reinforcing the “garbage in, garbage out” principle.
## Responsibility for Testing and Validation
A notable disagreement emerged regarding who bears primary responsibility for testing AI outputs. Don Gotterbarn argued that developers bear primary responsibility, using a pacemaker analogy: just as patients shouldn’t be expected to test medical devices, customers shouldn’t bear primary responsibility for validating AI systems. He criticised Microsoft’s assertion that customers should test software.
Moira De Roche, drawing from daily AI use, emphasised that users must carefully review AI-generated output, distinguishing this from traditional product testing since AI generates content dynamically. This reflected broader tensions between idealistic professional standards and practical implementation realities.
## Global Standards and Cultural Considerations
Ian from Asia raised fundamental questions about global AI standardisation: “As a world, as a global cooperation, how do we come up that the world, when we use a certain AI, we would be able to agree on what we’ll be using, when in fact we have so many different world views. The African might feel certain parts of the world might feel discriminated, the other parts of the world might be discriminated because of our standards.”
Ian specifically highlighted concerns about AI models trained on different regional data (European, American, Asian models) and resulting discrimination issues. This prompted discussion about establishing universal standards while respecting cultural differences.
Stephen Ibaraki suggested that international cooperation through bodies like IFIP could help coordinate responsible AI development globally, mentioning the Singapore AI Foundation as an example. Damith Hettihewa supported this view, noting the emergence of data scientists as a new profession and the importance of data privacy regulations.
## Organisational Integration and New Challenges
The discussion explored integrating AI tools throughout organisations rather than treating them as isolated technical solutions. Anthony Wong, reading from a BCS CEO statement, emphasised that professional standards must extend to CEOs and boards, not just IT professionals.
As Margaret Havey had noted earlier, the complexity of modern AI systems, with “multiple agents, multiple types of AI that are in use and all the different models and all the data and regulations,” creates new challenges for traditional professional standards frameworks.
Moira De Roche emphasised the distinction between generative AI and AI in general, advocating for what she called “artificial intelligence with human intelligence” – ensuring human oversight and validation of AI outputs.
## Ongoing Challenges and Unresolved Questions
Several critical issues remained unresolved:
**Innovation vs. Responsibility**: How to balance rapid AI advancement with thorough testing and validation requirements continues to challenge organisations.
**Enforcement Mechanisms**: Questions about ensuring legal liability and enforcement mechanisms have sufficient impact to hold organisations accountable for AI system failures were raised but not resolved.
**Cultural Fragmentation**: Addressing fragmentation of AI standards across different cultural and regulatory contexts while maintaining global interoperability remains complex.
**Practical Implementation**: How ICT professionals can effectively convince management to implement proper testing and responsible deployment practices continues to be challenging.
## Areas of Agreement
Despite disagreements on approaches, participants generally agreed on several points:
– Professional accountability remains fundamental regardless of technological advancement
– The Post Office scandal and similar failures result from human decision-making and organisational failures rather than inherent technology problems
– Data quality is crucial for AI system effectiveness
– International cooperation through bodies like IFIP is valuable for coordinating global standards
– Proper oversight and validation processes are essential
## Future Directions
The discussion concluded with general commitments to continue dialogue through platforms like WSIS and AI for Good. Moira De Roche mentioned developing frameworks for generative AI skills and training, while IFIP participants discussed exploring standards accreditation and serving as neutral facilitators for interoperability standards.
## Conclusion
This discussion revealed the complexity of establishing professional standards for AI development and testing. While participants disagreed on whether AI requires new ethical frameworks or can rely on existing professional standards, they recognised the urgency of addressing accountability, testing, and implementation challenges.
The Post Office Horizon scandal served as a powerful reminder of the consequences when technology fails and professional oversight is inadequate. As AI systems become increasingly integrated into critical functions, the discussion highlighted that the greatest challenges lie in human and organisational factors: ensuring proper training, establishing clear accountability, maintaining cultural sensitivity while pursuing global standards, and balancing innovation with responsibility.
The conversation demonstrated that while technical solutions are important, success in AI governance depends heavily on addressing human and organisational challenges. The role of international bodies like IFIP in facilitating ongoing discussions and developing practical frameworks emerged as important for responsible AI development, though many questions remain unresolved and require continued attention.
Session transcript
Moira De Roche: Good morning, everybody. Thank you for joining us for our discussion on the importance of professional standards for AI development and testing. I said in the description that I want this to be a very interactive session. So, rather than us talking at you, I’d really rather hear your issues with AI and particularly agentic AI. And where you think that the professionals, in other words, people who write the code and create the products, where you think the responsibility is. Sorry about that. Let’s put that on silent. And people have finished throwing things at me. Okay, so, who has had experience with using particularly generative AI in their business and what has been the outcome? Somebody, please. Jimson, you look like you should have an answer for me, please.
Jimson Olufuye: Yes. Good morning, everybody. I’m the Principal Consultant of Contemporary Consulting, based in Abuja, Nigeria, and the Chair of AfICTA, the Africa ICT Alliance, also based in Abuja, Nigeria. Yes, I’ve used generative AI and agentic AI, with very useful results. At this moment, we are developing some agents for clients. We are involved in digitalization, G2C, Government-to-Citizen services, which have been successful normally, but we are automating them more readily so that they can serve more citizens. Of course, the issue of ethics is important, and we have seen other ones that aim at disinformation and misinformation. So the issue of ethics matters a lot, even in the use of the algorithm and also in terms of data. So the data has to be good. If it’s not good, it’s not going to give the right response. And also, as I mentioned, it should be for good. We’re in the conference of AI for Good. So that should be the focus of every developer, everyone working in this field. I belong to a number of platforms where I’ve seen generative AI deployed, and it was really misused during the political period in Nigeria in 2023. It was a massive deception that we saw, and it was very worrisome. So the issue is how do we ensure that those that are developing comply with the rules, follow the rules professionally? That’s why this session is very important. That’s why I think it’s very, very important.
Moira De Roche: Sorry, thank you, Anthony. Do you think, do you or anyone here think the ethical considerations are different for AI than they were in writing any program? So AI is just taking it that one step further, because we’re in a way putting it in AI’s hands rather than doing it ourselves, but do you think the ethical considerations are the same or different? Anybody like to answer that? Is the panelist entitled to weigh in on anything?
Don Gotterbarn: I think that it’s basically a mistake to invent something called AI ethics. What happens is ethical decisions in general have to respond to contexts and situations, and it is a mistake, I believe, when you set up ethics laws and ethics compliance organizations. When you go into an organization and set up a list and say, this is what the compliance officer does, and you have to comply with these ethics, people think they are doing ethics when they take their pen or their cursor and check the box and say, I did a test on this system, so I complied with ethics. When you make that list and you start to make details, it fits a very narrow context. One of the wonderful things I love about computing is the context is always changing, and so you have to have a certain kind of flexibility. And those judgments don’t come from the top down as enforcement laws, unfortunately. They come from the practitioner who says, when I’m developing this piece of software, I have to follow certain values or practices. And I think there’s a problem in the way we phrase the question frequently, because it’s how to punish the evildoer who’s unethical, the salesman who sells you AI telling you it can do things it can’t. What we need to focus on as developers is not worrying about these risks, but what are the positive ethical things we can do, so when I deliver a product, it helps you and it’s directed toward you. The point seems to be that we ought to think about not doing technical AI and following, well, I’m going to use this large language model and I’m going to use this sanitized test, and once I’ve done that, everything is okay. It’s most likely not okay, because the context you’re working in is a little bit different. And we have to get the developers and the programmers to focus on values, and when they do things, think about those issues. And this is not top down. I think there’s, I don’t necessarily
Margaret Havey: agree with you. So most, I’d say the vast majority of people out there working with these products are not developers. There are people like in my organization; we provide networks and communications for all the government departments. And we are not developers. And we have to be very careful, we and the other people, with ethics on the way it’s being used and on whether or not there is bias in the models that have been trained by somebody else, whether or not the data is anything useful to us. And we have to be concerned about the multiple agents, the multiple types of AI that are in use and all the different models and all the data and regulations. So it becomes a different problem. And I think the ethics involved in AI are quite different, because AI is taking on personas and their ethics and using likenesses of actors, for example. In movies, they age people and they de-age them. And there used to be three actors and now there’s just one, and then they do the rest with AI. So it’s a whole different situation in the real world, as opposed to the world of developers, who are, by the way, going to be replaced by AI.
Don Gotterbarn: I have to be careful talking as a retiree. That is, I’m talking about the development side, where the ethics seems to be the same. There’s AI development, there’s AI applications that are out there, there’s AI hardware that has it embedded in it. And the piece that I’m talking about is, let’s do the development of the AI systems and have those ethical standards there. But I also think that those same values apply to the applications areas. I agree with you about all of these different things you have to deal with. We had difficulty dealing with something called email, and we had to worry about the ethics of email and who did it, and people invented email ethics. And as I utter that now, I would think you ought to have a smile on your face at the absurdity of such a concept. And that’s the approach.
Moira De Roche: While we’re asking the questions I also want us to think about how we make sure that and our AI, not AI generally, but how we embed generative AI in our organizations and how it’s becomes part of the process in our organizations so that it’s not something other than. We don’t have a team of people using generative AI to do something and the rest of the organization carries on as before. So that’s a very important consideration with using generative AI or indeed agentic AI and so know that it won’t be perfect but if you set your prompts correctly and then review the output carefully it’s an excellent tool. I use it every day in my work to create learning content and for the most part it’s very very good. I’m not saying it’s not fantastic. Microsoft’s image generation tools for instance always get the spelling wrong once they put the words in the images. So it’s a very good tool and like all tools it must be used responsibly and understanding the power of it is so important. Anthony, are you going to read Elizabeth’s question?
Anthony Wong: Yes, we’ve got Elizabeth online and she’s got a question about the recent and Minister of Justice report from the UK and in support of our member society the BCS, the British Computer Society, about the scandal around the Post Office Horizon project. So Elizabeth, open to you to post the questions for the panel. Please. Do you mean me, Anthony? Elizabeth, yes, you are. I don’t have the question here for the panel, but it is indeed an issue. So please discuss, because I cannot say much about it. Sorry, not Lizbeth, Elizabeth Eastwood, who is online. Yeah. Thank you, Lizbeth. Are you online? No, I don’t see her. Do you want to read Elizabeth’s question for the panel in relation to what are we going to do with such an instance as the Fujitsu post office scandal, which is not so much about AI, but when AI really takes on board. It could have even more deviations. So what should we think about this panel on the professional standards for AI development testing? Because that’s the topic for today’s conversation. So I’d like to open that to the panel for discussion. When Elizabeth comes online, I’m sure she would further elaborate on the question. Thank you.
Moira De Roche: It went off by itself, it doesn’t like me. It was a failure in my opinion, more of human relations than of technology. So the technology let them down, but it was the actions taken by people using the output of their technology. And it wasn’t just AI, it was technology written to measure people’s productivity. And in the UK, people can set up very small post offices. So some of them put their life savings into these little post offices. And because of incorrect output from the system, they were fired and actually lost their livelihood. And there has been a movie about it. Some people even committed suicide. But my question there is, was that AI’s fault, or was it for not checking the data? If a whole lot of people who worked with a post office, in most cases for several years, suddenly get bad reviews, surely you should say there’s something wrong here. Just relying on the data was a failure in human resource management, not so much a failure of the system, but a failure of what came out of the system. Back in my day, when I was a programmer, we used to talk about garbage in, garbage out. So that was garbage out. And was it a failure of the system, or a failure of AI?
Margaret Havey: In my opinion, I think it’s a failure of the way it was implemented, of the implementation. And so that does go back to standards. So how people are, how well they’re doing their implementation, how well they’re organizing whatever product it is, and the lack of testing, etc. That’s my view. And I may add to that, taking further on garbage in, garbage out.
Damith Hettihewa: So I think the fundamental shift is that the output of generative AI or agentic AI is only as good as the input, the data set, the data that is being used for training the algorithms. In that context, I agree with Don that nothing needs to be reinvented, but there are new disciplines coming out, particularly the training of algorithms based on data and the new profession of data scientist. For the new profession of data scientists, along the guidelines of data privacy and protection regulations, are there any new attributes to be added? As you said, the HR department, so the output was impacted by the input data. So management of data, and using the data securely and without compromising the privacy of the individuals, the data scientists may need a few new attributes in the ethical standards. So that’s what I thought we need to probably consider at this point. Thank you.
Moira De Roche: who couldn’t be with us today, but I think it’s an important question. How should RCT professionals balance AI with their responsibility to standards and to maintaining public trust, especially when working with cutting edge AI projects? So the technology is coming so fast. How can IT professionals make sure that they’re innovative, that they use the cutting edge technology, but don’t lose their professional responsibility? Because a professional is all about ethics, accountability, responsibility. Anybody want to answer that one?
Jimson Olufuye: Yes. Within the context of WSIS, if you look at it closely, Action Line 10 talks seriously about the ethical dimensions of the information society: the common good, preventing abusive use of ICT, and values. As professionals, this should be what guides us all the time. In fact, however the technology develops, it must be accountable to us. We need to have that in mind. Even as we develop, as we program and use data, there has to be some form of kill switch, I believe that. A kill switch, so that however it is, it should still be accountable. Accountability is very important. If we don’t want to be taken over completely, accountability is important. And that guides me too, and I tell our personnel that this is very important, even as we provide products for the local consumption of our people. And even as a professional organization, even in NCS, we have a code of ethics, you know, and people that violate it, we bring them to some panel. If there are challenges necessarily with the Post Office product, it’s a serious issue. People have died. People have died. There’s a gap somewhere there. And then even in some of our products, we have some glitches, actually. So we should be committed to thorough testing. It’s part of our responsibility as professionals to review our data regularly. And then, as you said, yes, the human side is also very important, but we are the primary responsible people, because we are the creators of it, no matter what we created. So as professionals, the public really puts a lot of confidence in us. So that is the basis for all professionals in terms of work, whatever we do.
Moira De Roche: Thank you, Jimson. And part of our responsibility of trust is to accept that we will get output, but then to check that output to make sure that it actually is what it should be. It’s very easy to use generative AI, get something fantastic, and then it’s wrong or it’s off the mark. So it’s very important to have that. One of my colleagues calls it artificial intelligence with human intelligence. So you use artificial intelligence and then you use human intelligence to… to check the outputs. I think we have some questions online. Thank you so much. Can I ask my question? Ian, can we hear your question please? Thank you so much. I’d like to say a comment and then ask a question. Will that be okay? Welcome. Okay. Tell us who you are and where you’re from please.
Audience: My name is Ian. I do review AI abstracts. I’m here in Asia. And your question? I’d like to comment first, for all of us to realize that when we use these generative AI models, they are trained on certain data. And it is vital that when we train this AI, we have to declare which data they have been trained on. For example, here in Asia, is this AI that we’re using trained on European models, American models or whatever models? Because the standards, the profiles, would be different. Here in Asia, some of the models are very particular that we can never mention anything related to religion. What I’m trying to say is that these days, we haven’t got any AI which does not have a degree of discrimination. And as you can see, the world is so diverse that when we just say, professionalize, make standards. This is where my question comes in. As a world, as a global cooperation, how do we come up that the world, when we use a certain AI, we would be able to agree on what we’ll be using, when in fact we have so many different world views. The African might feel certain parts of the world might feel discriminated, the other parts of the world might be discriminated because of our standards. The Asian standard is different from the African, the African is different from the European, the European is different from the American. So how do we intend, from your view, to come up with a standard that would be more or less acceptable? Sorry, can we just wait? I’d just like to ask Stephen Ibaraki to ask his question and then perhaps to help answer your question. Ibaraki here, and I’m attending remotely. I just want to bring up a comment.
Moira De Roche: Moira, you were talking earlier about the question about AI innovating or innovation occurring in that. How do IT professionals keep up with these things? And then this relates to Ian’s question as well. Again, I think it’s the ideal sort of body because, for example, Singapore has the AI
Stephen Ibaraki: Foundation. And what they’re trying to do is create open source information, so you can test some of these generative AI models, and you can look at the data as well. And because IFIP is an international, multinational alliance, they’re a perfect sort of vehicle for taking input from the UN but also from these different government bodies like the AI Foundation out of Singapore. The reason I mention this is because I was there and I was moderating with the gentleman who actually founded the AI Foundation. In terms of your data, there is work on data commons by the ITU. In other words, if you have different repositories of data around the world, how can you manage that? How can you ensure some commonality? And they’ve tried to address this with AI for Good, with the focus group on AI for health, as well as being part of that conversation. And then recently I had a conversation with Yann LeCun, who won the Turing Award from the ACM in 2018 and the Queen Elizabeth Prize for Engineering this year. He’s working on a world foundation model. So, through the open source repositories worldwide, he’s suggesting that these will become amalgamated into a world foundation model. So, I guess it’s a sort of a comment and maybe more of some ideas or answers to some of the discussion that’s happened here.
Don Gotterbarn: Thank you, Stephen. The assertion previous to Stephen’s says, essentially, that because there are differences in the world, there’s no commonality. One of the things that IFIP has done, in its representation of many different countries, is adopt an international code of ethics, where they find that yes, even if you’re in China and you’re not allowed to mention religion, you do think that it’s your professional responsibility to test it. You do think that if you release software, it should not at least unintentionally harm people. You do think that you should review your software so that when it does things, you make sure that any collateral damage may be minimized. And I’m not going to repeat the whole code of ethics here. It’s available. But this is the common thing with professionalism and responsibility to your community. So to say that you have this Asian model that says don’t mention religion, well, there’s some atheists in the United States who don’t mention religion. That doesn’t change the way in which they develop software. That doesn’t change the way in which they develop my hearing aids so that I can hear what’s going on here. In any country you’re in, if you make it so that it would randomly buzz and make noises so nobody in the audience could hear, we would all agree, whatever country you’re from, that’s an abysmal failure. And if you didn’t pay attention to that, we’d also say it’s not a technological failure. It’s a moral failure. So we have to be very careful. We all admit there are differences. And there’s these sets of responsibilities. One of the things that’s scary is we’re starting to repeat a certain kind of failure that went on early in computing, where we took certain responsibility and gave it to other people. In the U.S., there’s a company called Microsoft, and I participated in some hearings. Microsoft was asserting that the customer is responsible for testing the software. Now, the head of that committee had just had a pacemaker installed, and I asked him if he thought he was responsible for testing Microsoft software in that pacemaker when it didn’t work. And when you deliver a software product, there’s a presumption that it will deliver accurate material, and you will provide some evidence and say what it’s trained on, so you know what to be worried about. But we should not be expected to have to be the people who test the results, so that when we get results from generative AI, we shouldn’t have to review the data and look things up to see if it got it right; the presumption, when it gets integrated in industry, is that it will have gotten it right. The responsibility is on the developers and the testing to make sure we understand what the errors are and what those problems are. I am also there. But that’s a different thing. Keeps going off. This doesn’t like me, I’m telling you. When you develop a product, you then test it properly. Generative AI is a different story. You’re asking generative AI a question and getting some output. You have to not
Moira De Roche: test but review the output to make sure it’s relevant, to make sure that it hasn’t gone off on its own little voyage of discovery, that it’s relevant to your topic. So it’s not testing what generative AI outputs; it’s reviewing what generative AI gives you as output. So it’s different than a product that somebody needs to test. It’s more a case of the generative AI tool giving you some information, or you ask it a question and then make sure that what it gives you as output does in fact answer the question you asked, not in detail, because the generative AI gives you that detail. So it is a little different to normal product testing, in that you’re not going to check every single fact in the output. You might check all the links to make sure that they’re valid, but it’s not the same as developing a product. It’s generating output on the fly. There’s a concept called the responsibility gap. So I’m not responsible for the accuracy of the
Anthony Wong: information; you have to test it. Thank you for that intervention. I’d like to read a statement from the new CEO of BCS, the Chartered Institute for IT. BCS is a member of IFIP, and I’d just like to read her statement, just released recently, about the UK Post Office Horizon report. And she said, quote, unless everyone responsible for the development, leadership and governance of technology is held accountable to robust professional standards with genuine authority, another tragedy like Horizon is inevitable. That accountability, she said, must extend to CEOs and boards, not just IT professionals, as they are often without the technical backgrounds to understand the complex ethical challenges inherent in IT implementation. And she continued, Horizon is not self-aware AI acting at the moment, but can you imagine the devastation that could compound with AI agents running in many installations and many places. It could devastate lives because of a failure in professional behaviour and a lack of multidisciplinary understanding, spanning technology and the law. And I’d like this panel to ponder that statement and tell the world what IP3, which stands for professionalism, should do to address some of these challenges coming up. So Chair, Moderator Moira, if you can lead that discussion and come up with some concrete actions that we should start contemplating in IFIP and IP3, to work with the BCS and our member societies in the world and with the UN agencies, not just talking about principles and standards, but how do we actually start the journey to look at human rights. Thank you for that question. It does switch itself off, trust me.
Moira De Roche: IP3, which stands for the International Professional Practice Partnership, is all about trying to make sure that IT practitioners and other IT professionals adhere to a level of standards and professionalism, which includes accountability, responsibility, ethical behavior, competence, etc. And we assess against all those features. We also are moving towards doing some ISO accreditation around software engineering and software programming, where we will make sure that people adhere to those ISO standards as well as the IP3 standards. What I plan to do in the coming weeks and months, as a result of several of the conversations I’ve had this week, is to look at developing a framework, for generative AI in particular, of skills and training across the board, because we don’t only need to train users on generative AI; we need to train right from the CEO right down to the bottom of the organization. And that is how we will embed it in the processes of the organization. It’s a little like putting a new mechanical process in, where there are checks and balances everywhere and where we make sure everybody is trained at every point. So I want to develop a framework to say, okay, for our people at this level, what do they need to know, and I’m talking specifically about generative AI, because AI is a big subject and AI is not new. We’ve had it on our phones, smartphones, since we’ve had smartphones. Everything on there is run by AI. But I’m talking about the generative AI, which is the tools that we have at our disposal. I want to make sure that, aligned to our professional standards, we have a generative AI framework, and we can test it and make sure that people are adhering to the standards and the framework, and even to a standard body of knowledge around generative AI. It’s such an easy tool. It’s like, hey-ho, someone has developed a pencil and paper and therefore we think that they can write because they’ve got a pencil and paper, but we actually have to teach them how to write, and how to write properly and sensibly. So we need to almost go back to ground level and say yes, this is a very, very powerful tool that users can use, and make sure that they use it for good.
Anthony Wong: Moderator Moira, we’ve got a question from Elizabeth Eastwood who’s now online. Technicians, can you put her on the screen please to post the question?
Liz Eastwood: Thank you. Hello, everybody. Unfortunately, I can’t actually show my face at the moment for complicated reasons, so I do apologise. But yes, I have a question relating specifically to the British Post Office scandal. The scenario where clearly these problems evolved over time, but nobody was willing to put their hands up and say, well, actually, there might be something wrong with the software. So, you know, 26 years after this scandal started, the report has come out and, as Anthony has pointed out, the British Computer Society have in a nutshell said that what this report does is expose the deep deficiencies in professionalism in really every area, from technology and law to executive management. So it is the legal system, and it is the CEOs, the executives, who are not stopping to consider the implications of what they’re doing and what they’re saying and what their responsibilities are. The worst part about this whole debacle was that the legal system relied upon computer evidence as trusted proof of the postmasters’ guilt. It’s shocking, it’s appalling. Quite clearly, the computer evidence should not be trusted. Both the software company, who wrote the software or inherited it from previous incarnations of that particular company, and the British Post Office knew this. You should not have been trusting that software. So what could happen if a similar scenario arises when software has actually evolved using AI? How can ICT professionals, even if they have been trained and are highly competent, persuade top management to do the right thing, such as pilot the software adequately and pilot it in stages? I mean, in the UK, we were talking about something like many, many thousands of users scattered right across the country, all of whom are on different communication links, distances and communication types, which would not have helped. But still, they should have piloted the software in stages, prior to having a national rollout. How does an ICT professional convince CEO management, top-level management, to do the right thing and to accept responsibility for their decisions? So, we know that we must have qualified IT professionals. We know that organisations need to employ these people and make sure that they’re properly qualified. It’s not an easy task, so they need to be highly competent.
Margaret Havey: What is the best way to ensure that large companies like the Post Office do actually insist on employing fully qualified professionals in the IT sector within their organisation? That’s my question to the panel. It’s Margaret here. I think one of the best ways to do that is through a code of ethics, which we do have, and that includes the management and all the responsibilities, to make sure that they know that they are liable. And that’s when they will pay attention, when they know they are liable and when there’s teeth behind that. And that, I believe, is the way to do that.
Liz Eastwood: And how do we make sure there’s teeth behind it?
Margaret Havey: Teeth, how do we get teeth behind it? Well, that’s a matter of, let’s see, your leadership has to agree to work with it, and we do have that, and they do know that they’re responsible, because ours is a government organisation, so you’re headed by a minister, and they are responsible to their superiors, and it’s a very dire, dire consequence. And then there’s the question of what the consequences are if they do something wrong. They do know about it, not so much through our code of ethics, but through their list of responsibilities: their heads will fall. And that does seem to work for us in the government. Organizations at board level now know that they have a responsibility for what happens in IT. And it’s a responsibility under company law; certainly in my country, South Africa, and in countries that adhere to the King Code, the board is ultimately responsible for what happens in IT.
Moira De Roche: And it’s up to them to actually know what’s going on. So hopefully we’re moving away from that. Before I get you Damith, can I ask Stephen, if he has any closing remarks? Stephen. Yeah, excuse me.
Stephen Ibaraki: My sound has gone a little bit askew, so hopefully you can hear me. So yeah, I think IFIP is really a perfect convener for this, and this is also being addressed through corporations like Microsoft; they have an AI for Good program and a responsible AI program. And because IFIP works with both industry and with countries and with UN agencies, we can act as that sort of central hub to address these things and to coordinate amongst all of these bodies. So I believe that we’re in a much better position than before, because these kinds of issues are now being looked at very seriously, especially as we progress into generative AI. As I already mentioned, Singapore, I believe, is one of the leaders in this area, as are some of the corporations. And these concepts are infused throughout this AI for Good conference, but also through the WSIS conference here.
Moira De Roche: Thank you, Stephen, and thank you so much for being with us, to share your wisdom. We appreciate your being with us at what must be very early in the morning for you. Stephen, I believe, never sleeps. He’s either sitting somewhere joined to a conference early in the morning, or he’s traveling. He could be an airline in his own right. Damith, was yours a question? Well, because of the time, I will not go back to Anthony’s question; rather, I will probably make some closing remarks instead. To add to what Stephen said, IFIP can probably be the catalyst and the nucleus of this whole ethics around AI amongst IT professionals, along with IP3, the International Professional Practice Partnership. Firstly, I think Ms. Moira De Roche mentioned the
Damith Hettihewa: guidelines or the framework development. So I would like to also mention that IFIP can and will act as a neutral facilitator in this subject amongst the stakeholders. Of course, Ms. Moira De Roche mentioned the capacity building and the training, but I would like to also bring up another attribute, which is interoperability. IFIP can advocate for interoperability by collaborating with bodies like IEEE and BCS and all the 40 professional bodies and agencies, to ensure the emerging standards are not fragmented and are compatible across frontiers and borders. And finally, continue the ongoing dialogue using platforms like WSIS and AI for Good, as well as with partners like UNESCO, etc., keeping the guidelines or the framework continually improved, looking at new risks, techniques and challenges through living documents and regular international forums like WSIS and AI for Good. In short, IFIP can be the bridge builder. Thank you very much.
Moira De Roche: Thank you for your comments, and thank you all for attending and for participating, and to the panellists for giving some good insights. Thank you to Stephen, Lisbeth and Elizabeth for joining remotely. And I do have some business cards with me if anybody wants to discuss this more. As I say, I use generative AI every single day in my day job to create learning content. So it works for everybody, and the saving in time is phenomenal. And the fact that IFIP has a code of ethics that is adopted by multiple countries has been used, I can testify, in legal cases when people have said there are no computer standards. You show them the code of ethics and the number of people who have adopted it, and that’s one of the ways you can convince people to move positively, or use it as a club, if you will, to threaten lawsuits.
Don Gotterbarn
Speech speed
131 words per minute
Speech length
1082 words
Speech time
492 seconds
AI ethics should not be treated as a separate discipline but as contextual application of existing ethical principles
Explanation
Gotterbarn argues that creating separate ‘AI ethics’ is a mistake and that ethical decisions should respond to contexts and situations rather than following rigid compliance checklists. He believes practitioners should apply flexible ethical judgment based on established values rather than top-down enforcement laws.
Evidence
He mentions the problem with compliance officers who think they’re doing ethics by checking boxes, and emphasizes that computing contexts are always changing requiring flexibility
Major discussion point
Ethics and Professional Standards in AI Development
Topics
Human rights principles | Digital standards
Disagreed with
– Margaret Havey
Disagreed on
Whether AI ethics should be treated as a separate discipline or as application of existing ethical principles
IFIP’s international code of ethics provides common professional standards across different countries and contexts
Explanation
Despite cultural differences worldwide, Gotterbarn argues that IFIP has successfully adopted an international code of ethics that establishes common professional responsibilities. He contends that fundamental professional duties like testing software and minimizing harm are universal regardless of local restrictions.
Evidence
He gives examples of common responsibilities across cultures: testing software, ensuring products don’t unintentionally harm people, and minimizing collateral damage. He notes that even with different restrictions (like not mentioning religion in some countries), the core development responsibilities remain the same
Major discussion point
Global Standards and Cultural Considerations
Topics
Digital standards | Human rights principles
Agreed with
– Stephen Ibaraki
– Damith Hettihewa
Agreed on
International cooperation and standardization are essential for responsible AI development
Developers bear primary responsibility for proper testing and ensuring systems work correctly before deployment
Explanation
Gotterbarn argues that developers should not shift responsibility to customers for testing software products. He emphasizes that when delivering software, there should be a presumption of accuracy and proper testing by the developers themselves.
Evidence
He references Microsoft hearings where the company asserted customers were responsible for testing software, and uses the example of a pacemaker to illustrate why customers shouldn’t be expected to test critical software
Major discussion point
Implementation and Testing of AI Systems
Topics
Digital standards | Consumer protection
Disagreed with
– Moira De Roche
Disagreed on
Who bears primary responsibility for testing and validating AI outputs
Margaret Havey
Speech speed
136 words per minute
Speech length
573 words
Speech time
252 seconds
Ethics in AI requires different considerations due to AI taking on personas, using likenesses, and affecting employment
Explanation
Havey argues that AI ethics presents unique challenges because AI systems are taking on human personas, using actor likenesses in movies, and replacing human workers. She contends that most people working with AI products are not developers but users who must deal with bias, data quality, and regulatory compliance.
Evidence
She provides examples of AI aging and de-aging actors in movies, reducing the need for multiple actors, and mentions that developers themselves will be replaced by AI
Major discussion point
Ethics and Professional Standards in AI Development
Topics
Human rights principles | Future of work | Intellectual property rights
Disagreed with
– Don Gotterbarn
Disagreed on
Whether AI ethics should be treated as a separate discipline or as application of existing ethical principles
Implementation failures stem from inadequate testing and poor organizational processes rather than technology itself
Explanation
Havey believes that failures like the Post Office scandal result from poor implementation practices, inadequate testing, and lack of proper organizational standards. She emphasizes that the problem lies in how systems are implemented and organized rather than the technology itself.
Evidence
She references the Post Office Horizon scandal as an example of implementation failure
Major discussion point
Implementation and Testing of AI Systems
Topics
Digital standards | Consumer protection
Agreed with
– Moira De Roche
– Liz Eastwood
Agreed on
Implementation failures stem from human and organizational issues rather than pure technology problems
Disagreed with
– Moira De Roche
Disagreed on
Primary cause of the Post Office scandal – technology vs. implementation vs. human management failure
Government organizations have clear accountability structures where leadership knows consequences of IT failures
Explanation
Havey explains that in government organizations, there are clear lines of accountability where ministers and leadership understand they are responsible for IT system failures. She suggests this accountability structure, where ‘heads will fall’ for failures, provides effective motivation for proper oversight.
Evidence
She mentions that in government, ministers are responsible to their superiors and face dire consequences for failures, and notes that boards now know they have responsibility for IT under company law
Major discussion point
Organizational Integration and Training
Topics
Digital standards | Consumer protection
Professional accountability requires making leadership legally liable for IT system failures
Explanation
Havey argues that the best way to ensure proper IT practices is through codes of ethics that include management responsibilities and make leaders legally liable for failures. She believes accountability with ‘teeth’ behind it is essential for getting leadership attention.
Evidence
She references company law in South Africa and countries that adhere to similar legal frameworks where boards are ultimately responsible for IT outcomes
Major discussion point
Case Study Analysis: UK Post Office Scandal
Topics
Legal and regulatory | Consumer protection
Agreed with
– Don Gotterbarn
– Jimson Olufuye
– Anthony Wong
– Moira De Roche
Agreed on
Professional accountability and responsibility are fundamental regardless of technology advancement
Jimson Olufuye
Speech speed
125 words per minute
Speech length
552 words
Speech time
264 seconds
Quality of AI output depends entirely on the quality of input data and training datasets
Explanation
Olufuye emphasizes that AI systems can only be as good as the data they are trained on, following the principle of ‘garbage in, garbage out.’ He stresses that developers must ensure data quality and ethical use of algorithms to achieve proper AI responses.
Evidence
He mentions his experience developing AI agents for government-to-citizen services and references seeing generative AI misused during Nigeria’s 2023 political period for massive deception and misinformation
Major discussion point
Implementation and Testing of AI Systems
Topics
Data governance | Digital standards
Agreed with
– Damith Hettihewa
– Audience
Agreed on
Data quality is fundamental to AI system performance and ethical outcomes
Professional responsibility must focus on accountability and following established codes of ethics regardless of technology advancement
Explanation
Olufuye argues that professionals must maintain accountability and follow ethical guidelines regardless of how advanced technology becomes. He believes there should be ‘kill switches’ to ensure human accountability and that professionals, as the creators of the technology, bear primary responsibility.
Evidence
He references WSIS Action Line 10 on ethical dimensions and mentions that his organization (NCS) has a code of ethics with panels to address violations. He also notes that people died in the Post Office scandal, emphasizing the serious consequences of professional failures
Major discussion point
Ethics and Professional Standards in AI Development
Topics
Human rights principles | Digital standards
Agreed with
– Don Gotterbarn
– Margaret Havey
– Anthony Wong
– Moira De Roche
Agreed on
Professional accountability and responsibility are fundamental regardless of technology advancement
Anthony Wong
Speech speed
108 words per minute
Speech length
517 words
Speech time
285 seconds
Professional standards must extend to CEOs and boards, not just IT professionals, with genuine accountability
Explanation
Wong presents the BCS CEO’s statement that accountability for technology failures must extend beyond IT professionals to include CEOs and boards who often lack technical backgrounds. He emphasizes that without robust professional standards with genuine authority, tragedies like the Horizon scandal are inevitable.
Evidence
He quotes the BCS CEO’s statement about the Post Office Horizon scandal and warns that AI agents running in multiple installations could cause even more devastation due to failures in professional behavior and lack of multidisciplinary understanding
Major discussion point
Ethics and Professional Standards in AI Development
Topics
Digital standards | Consumer protection | Human rights principles
Agreed with
– Don Gotterbarn
– Margaret Havey
– Jimson Olufuye
– Moira De Roche
Agreed on
Professional accountability and responsibility are fundamental regardless of technology advancement
Stephen Ibaraki
Speech speed
130 words per minute
Speech length
390 words
Speech time
179 seconds
A world foundation model could help address fragmentation by amalgamating open source repositories globally
Explanation
Ibaraki discusses how different global repositories of data could be managed through initiatives like ITU’s work on data commons and mentions a world foundation model being developed that would amalgamate open source repositories worldwide. He sees this as a solution to data fragmentation issues.
Evidence
He references Singapore’s AI Foundation creating open source information for testing generative AI models, ITU’s work on data commons, and mentions Yann LeCun (2018 Turing Award winner) working on a world foundation model
Major discussion point
Global Standards and Cultural Considerations
Topics
Digital standards | Data governance
International cooperation through bodies like IFIP can coordinate responsible AI development globally
Explanation
Ibaraki argues that IFIP is ideally positioned to coordinate responsible AI development because it works with industry, countries, and UN agencies. He believes IFIP can act as a central hub to address AI challenges and coordinate among various stakeholders.
Evidence
He mentions Microsoft’s AI for Good program and responsible AI initiatives, and notes that these concepts are being addressed through AI for Good conferences and WSIS conferences
Major discussion point
Future Directions and Solutions
Topics
Digital standards | Capacity development
Agreed with
– Damith Hettihewa
– Don Gotterbarn
Agreed on
International cooperation and standardization are essential for responsible AI development
Audience
Speech speed
123 words per minute
Speech length
309 words
Speech time
149 seconds
Different regions have varying cultural and regulatory requirements that affect AI training and deployment
Explanation
The audience member (Ian) points out that AI systems are trained on different datasets reflecting regional biases and cultural standards. He argues that Asian, African, European, and American standards differ significantly, making it challenging to create universally acceptable AI standards.
Evidence
He mentions that in Asia, some AI models are restricted from mentioning anything related to religion, and notes that different regions may feel discriminated against based on varying world views and standards
Major discussion point
Global Standards and Cultural Considerations
Topics
Cultural diversity | Digital standards | Human rights principles
Agreed with
– Jimson Olufuye
– Damith Hettihewa
Agreed on
Data quality is fundamental to AI system performance and ethical outcomes
Damith Hettihewa
Speech speed
101 words per minute
Speech length
346 words
Speech time
204 seconds
Data scientists need new ethical attributes aligned with data privacy and protection regulations
Explanation
Hettihewa argues that the emergence of data scientists as a new profession requires additional ethical attributes beyond traditional programming ethics. He emphasizes that these professionals need guidelines for managing data securely while protecting individual privacy.
Evidence
He mentions the fundamental shift where AI output quality depends on input data and training datasets, and references data privacy and protection regulations
Major discussion point
Implementation and Testing of AI Systems
Topics
Data governance | Privacy and data protection
Agreed with
– Jimson Olufuye
– Audience
Agreed on
Data quality is fundamental to AI system performance and ethical outcomes
IFIP can serve as a neutral facilitator and advocate for interoperability standards across borders
Explanation
Hettihewa proposes that IFIP can act as a neutral facilitator among stakeholders and advocate for interoperability by collaborating with professional bodies like IEEE and BCS. He envisions IFIP ensuring that AI standards remain compatible across borders rather than fragmenting.
Evidence
He mentions collaboration with 40 professional bodies and agencies, and references ongoing dialogue through platforms like WSIS and AI for Good with partners like UNESCO
Major discussion point
Global Standards and Cultural Considerations
Topics
Digital standards | Capacity development
Agreed with
– Stephen Ibaraki
– Don Gotterbarn
Agreed on
International cooperation and standardization are essential for responsible AI development
Ongoing dialogue through platforms like WSIS and AI for Good is essential for continuous improvement
Explanation
Hettihewa emphasizes the importance of maintaining continuous dialogue through international platforms to keep AI guidelines and frameworks updated. He advocates for treating these as living documents that are regularly improved through international forums.
Evidence
He specifically mentions WSIS and AI for Good conferences as platforms for ongoing dialogue, and partnerships with UNESCO for framework development
Major discussion point
Future Directions and Solutions
Topics
Digital standards | Capacity development
Moira De Roche
Speech speed
127 words per minute
Speech length
1815 words
Speech time
856 seconds
AI must be embedded throughout organizational processes rather than isolated to specific teams
Explanation
De Roche argues that organizations should integrate generative AI throughout their processes rather than having separate teams using AI while the rest of the organization continues as before. She emphasizes the importance of making AI part of the overall organizational workflow.
Evidence
She mentions using generative AI daily in her work to create learning content and notes that it’s an excellent tool when prompts are set correctly and output is reviewed carefully
Major discussion point
Organizational Integration and Training
Topics
Digital business models | Capacity development
Users must review AI output carefully to ensure relevance and accuracy, applying human intelligence to artificial intelligence
Explanation
De Roche emphasizes that users must carefully review AI-generated output to ensure it’s relevant and accurate, describing this as ‘artificial intelligence with human intelligence.’ She distinguishes this from traditional product testing, noting that users need to verify that AI output actually answers the questions asked.
Evidence
She mentions that Microsoft’s image generation tools always get spelling wrong in images, and notes that while AI is a fantastic tool, it’s not perfect and requires human oversight
Major discussion point
Implementation and Testing of AI Systems
Topics
Digital standards | Consumer protection
Disagreed with
– Don Gotterbarn
Disagreed on
Who bears primary responsibility for testing and validating AI outputs
The scandal represents failure of human relations and management rather than pure technology failure
Explanation
De Roche argues that the Post Office scandal was primarily a failure of human resource management and decision-making rather than a technology failure. She contends that when multiple long-term employees suddenly receive bad reviews, management should recognize something is wrong rather than blindly trusting system output.
Evidence
She mentions that post office operators put their life savings into small post offices and were fired due to incorrect system output; some even committed suicide. She references the ‘garbage in, garbage out’ principle from her programming days
Major discussion point
Case Study Analysis: UK Post Office Scandal
Topics
Consumer protection | Human rights principles
Agreed with
– Margaret Havey
– Liz Eastwood
Agreed on
Implementation failures stem from human and organizational issues rather than pure technology problems
Disagreed with
– Margaret Havey
Disagreed on
Primary cause of the Post Office scandal – technology vs. implementation vs. human management failure
Comprehensive training frameworks are needed from CEO level down to all organizational levels
Explanation
De Roche proposes developing comprehensive training frameworks for generative AI that span from CEO level to all organizational levels. She emphasizes that everyone in an organization needs appropriate training on AI, not just technical users.
Evidence
She compares this to implementing a new mechanical process with checks and balances everywhere, and uses the analogy that simply having a pencil and paper does not mean a person can write properly
Major discussion point
Organizational Integration and Training
Topics
Capacity development | Digital standards
IFIP should develop frameworks for generative AI skills and training across organizational levels
Explanation
De Roche outlines plans to develop frameworks aligned with professional standards for generative AI implementation, including skills training, testing standards, and a standard body of knowledge. She emphasizes the need to ensure people adhere to professional standards when using these powerful tools.
Evidence
She mentions plans to look at ISO accreditation around software engineering and software programming, and references IP3 (International Professional Practice Partnership) standards
Major discussion point
Future Directions and Solutions
Topics
Digital standards | Capacity development
IFIP’s code of ethics can be used as legal evidence when people claim no computer standards exist
Explanation
De Roche attests that IFIP’s code of ethics, adopted by multiple countries, has been successfully used in legal cases when parties claim there are no computer standards. She suggests it can be used both positively, to guide behavior, and as a legal tool when lawsuits are threatened.
Evidence
She personally attests to having used the code of ethics in legal cases and cites the number of people who have adopted it as evidence of its legitimacy
Major discussion point
Case Study Analysis: UK Post Office Scandal
Topics
Legal and regulatory | Digital standards
Agreed with
– Don Gotterbarn
– Margaret Havey
– Jimson Olufuye
– Anthony Wong
Agreed on
Professional accountability and responsibility are fundamental regardless of technology advancement
Liz Eastwood
Speech speed
119 words per minute
Speech length
405 words
Speech time
202 seconds
Legal systems inappropriately relied on computer evidence as trusted proof without proper validation
Explanation
Eastwood highlights that the legal system relied on computer evidence as trusted proof of postmasters’ guilt in the Post Office scandal, a reliance she describes as shocking and appalling. She emphasizes that both the software company and the British Post Office knew the computer evidence should not be trusted.
Evidence
She mentions that the scandal unfolded over 26 years with nobody willing to admit the software problems, and that the BCS report exposed deep deficiencies in professionalism across technology, law, and executive management
Major discussion point
Case Study Analysis: UK Post Office Scandal
Topics
Legal and regulatory | Consumer protection | Human rights principles
Agreed with
– Margaret Havey
– Moira De Roche
Agreed on
Implementation failures stem from human and organizational issues rather than pure technology problems
Agreements
Agreement points
Professional accountability and responsibility are fundamental regardless of technology advancement
Speakers
– Don Gotterbarn
– Margaret Havey
– Jimson Olufuye
– Anthony Wong
– Moira De Roche
Arguments
IFIP’s international code of ethics provides common professional standards across different countries and contexts
Professional accountability requires making leadership legally liable for IT system failures
Professional responsibility must focus on accountability and following established codes of ethics regardless of technology advancement
Professional standards must extend to CEOs and boards, not just IT professionals, with genuine accountability
IFIP’s code of ethics can be used as legal evidence when people claim no computer standards exist
Summary
All speakers agree that professional accountability and adherence to ethical standards are essential, with responsibility extending from developers to executive leadership. They support using established codes of ethics and legal frameworks to ensure accountability.
Topics
Digital standards | Human rights principles | Consumer protection
Implementation failures stem from human and organizational issues rather than pure technology problems
Speakers
– Margaret Havey
– Moira De Roche
– Liz Eastwood
Arguments
Implementation failures stem from inadequate testing and poor organizational processes rather than technology itself
The scandal represents failure of human relations and management rather than pure technology failure
Legal systems inappropriately relied on computer evidence as trusted proof without proper validation
Summary
Speakers agree that the Post Office scandal and similar failures result from poor human decision-making, inadequate testing, and organizational failures rather than inherent technology problems. They emphasize that proper oversight and validation processes are crucial.
Topics
Consumer protection | Digital standards | Legal and regulatory
Data quality is fundamental to AI system performance and ethical outcomes
Speakers
– Jimson Olufuye
– Damith Hettihewa
– Audience
Arguments
Quality of AI output depends entirely on the quality of input data and training datasets
Data scientists need new ethical attributes aligned with data privacy and protection regulations
Different regions have varying cultural and regulatory requirements that affect AI training and deployment
Summary
Speakers agree that the quality of AI systems depends fundamentally on the quality of input data and training datasets. They recognize that data governance, privacy protection, and cultural considerations are essential for ethical AI development.
Topics
Data governance | Privacy and data protection | Digital standards
International cooperation and standardization are essential for responsible AI development
Speakers
– Stephen Ibaraki
– Damith Hettihewa
– Don Gotterbarn
Arguments
International cooperation through bodies like IFIP can coordinate responsible AI development globally
IFIP can serve as a neutral facilitator and advocate for interoperability standards across borders
IFIP’s international code of ethics provides common professional standards across different countries and contexts
Summary
Speakers agree that international bodies like IFIP are crucial for coordinating global AI standards and facilitating cooperation across borders. They see IFIP as uniquely positioned to bridge different stakeholders and maintain ongoing dialogue.
Topics
Digital standards | Capacity development
Similar viewpoints
Both speakers emphasize that proper testing and implementation processes are the responsibility of developers and organizations, not end users. They reject shifting responsibility to customers or users for validating system accuracy.
Speakers
– Don Gotterbarn
– Margaret Havey
Arguments
Developers bear primary responsibility for proper testing and ensuring systems work correctly before deployment
Implementation failures stem from inadequate testing and poor organizational processes rather than technology itself
Topics
Digital standards | Consumer protection
Both speakers emphasize that AI governance and professional standards must involve all organizational levels, particularly executive leadership, rather than being limited to technical staff.
Speakers
– Moira De Roche
– Anthony Wong
Arguments
Comprehensive training frameworks are needed from CEO level down to all organizational levels
Professional standards must extend to CEOs and boards, not just IT professionals, with genuine accountability
Topics
Capacity development | Digital standards
Both speakers advocate for global coordination and continuous dialogue through international platforms to address AI challenges and prevent fragmentation of standards.
Speakers
– Stephen Ibaraki
– Damith Hettihewa
Arguments
A world foundation model could help address fragmentation by amalgamating open source repositories globally
Ongoing dialogue through platforms like WSIS and AI for Good is essential for continuous improvement
Topics
Digital standards | Capacity development
Unexpected consensus
Ethics should be contextual rather than rigid compliance
Speakers
– Don Gotterbarn
– Jimson Olufuye
Arguments
AI ethics should not be treated as a separate discipline but as contextual application of existing ethical principles
Professional responsibility must focus on accountability and following established codes of ethics regardless of technology advancement
Explanation
Despite coming from different perspectives, both speakers agree that ethics should be flexible and contextual rather than rigid compliance checklists, while still maintaining accountability to established professional standards.
Topics
Human rights principles | Digital standards
Human oversight remains essential even with advanced AI
Speakers
– Moira De Roche
– Don Gotterbarn
– Jimson Olufuye
Arguments
Users must review AI output carefully to ensure relevance and accuracy, applying human intelligence to artificial intelligence
Developers bear primary responsibility for proper testing and ensuring systems work correctly before deployment
Professional responsibility must focus on accountability and following established codes of ethics regardless of technology advancement
Explanation
Despite different roles and perspectives, all speakers agree that human oversight and responsibility cannot be abdicated to AI systems, whether at the development, deployment, or usage stages.
Topics
Digital standards | Human rights principles | Consumer protection
Overall assessment
Summary
The speakers demonstrate strong consensus on fundamental principles of professional accountability, the importance of proper testing and implementation processes, the need for comprehensive organizational training, and the value of international cooperation through bodies like IFIP. They agree that failures like the Post Office scandal stem from human and organizational issues rather than technology problems, and that data quality is fundamental to ethical AI outcomes.
Consensus level
High level of consensus on core principles with constructive dialogue on implementation approaches. The agreement spans technical, ethical, and organizational dimensions, suggesting a mature understanding of AI governance challenges. This consensus provides a strong foundation for developing practical frameworks and standards for responsible AI development and deployment through international cooperation.
Differences
Different viewpoints
Whether AI ethics should be treated as a separate discipline or as application of existing ethical principles
Speakers
– Don Gotterbarn
– Margaret Havey
Arguments
AI ethics should not be treated as a separate discipline but as contextual application of existing ethical principles
Ethics in AI requires different considerations due to AI taking on personas, using likenesses, and affecting employment
Summary
Gotterbarn argues that creating separate ‘AI ethics’ is a mistake and that practitioners should apply flexible ethical judgment based on established values. Havey contends that AI ethics presents unique challenges because AI systems are taking on human personas, using actor likenesses, and replacing human workers, requiring different considerations.
Topics
Human rights principles | Digital standards | Future of work
Who bears primary responsibility for testing and validating AI outputs
Speakers
– Don Gotterbarn
– Moira De Roche
Arguments
Developers bear primary responsibility for proper testing and ensuring systems work correctly before deployment
Users must review AI output carefully to ensure relevance and accuracy, applying human intelligence to artificial intelligence
Summary
Gotterbarn argues that developers should not shift responsibility to customers and that there should be a presumption of accuracy from developers. De Roche emphasizes that users must carefully review AI-generated output, distinguishing this from traditional product testing as AI generates output on the fly.
Topics
Digital standards | Consumer protection
Primary cause of the Post Office scandal – technology vs. implementation vs. human management failure
Speakers
– Margaret Havey
– Moira De Roche
Arguments
Implementation failures stem from inadequate testing and poor organizational processes rather than technology itself
The scandal represents failure of human relations and management rather than pure technology failure
Summary
Havey focuses on implementation failures, inadequate testing, and lack of proper organizational standards as the root cause. De Roche emphasizes it was primarily a failure of human resource management and decision-making, arguing that management should have recognized patterns when multiple long-term employees suddenly received bad reviews.
Topics
Digital standards | Consumer protection | Human rights principles
Unexpected differences
Scope of user responsibility in AI output validation
Speakers
– Don Gotterbarn
– Moira De Roche
Arguments
Developers bear primary responsibility for proper testing and ensuring systems work correctly before deployment
Users must review AI output carefully to ensure relevance and accuracy, applying human intelligence to artificial intelligence
Explanation
This disagreement is unexpected because both speakers are advocating for professional standards, yet they have fundamentally different views on where responsibility lies. Gotterbarn strongly opposes shifting responsibility to users, while De Roche, who uses AI daily, accepts user responsibility for output validation as a practical necessity. This reflects a tension between idealistic professional standards and practical AI implementation realities.
Topics
Digital standards | Consumer protection
Overall assessment
Summary
The main areas of disagreement center on: (1) whether AI requires new ethical frameworks or can use existing ones, (2) the distribution of responsibility between developers and users for AI output validation, and (3) the primary causes of system failures like the Post Office scandal. Additionally, there are nuanced differences on implementation approaches for professional standards and international coordination.
Disagreement level
The disagreement level is moderate but significant for practical implementation. While speakers generally agree on the need for professional standards, accountability, and international cooperation, their different approaches to achieving these goals could lead to conflicting policies and practices. The disagreements reflect deeper tensions between idealistic professional standards and practical implementation realities, which have important implications for how AI governance frameworks will be developed and enforced globally.
Takeaways
Key takeaways
AI ethics should be treated as contextual application of existing ethical principles rather than a separate discipline, with professional responsibility extending to all stakeholders including CEOs and boards
The quality of AI systems depends entirely on input data quality and proper implementation processes, with developers bearing primary responsibility for testing and validation
Professional standards must be globally coordinated while respecting cultural differences, with IFIP’s international code of ethics providing a foundation for common standards
AI must be integrated throughout organizational processes with comprehensive training from executive level down, rather than being isolated to specific teams
The UK Post Office scandal demonstrates the critical need for accountability and proper validation of computer evidence, showing how implementation failures can have devastating human consequences
Users must apply human intelligence to review AI outputs for relevance and accuracy, understanding that AI tools require responsible use and careful validation
Resolutions and action items
Moira De Roche committed to developing a framework for generative AI skills and training across organizational levels in the coming weeks and months
IFIP to explore ISO accreditation around software engineering and software programming to ensure adherence to standards
IFIP to act as a neutral facilitator and advocate for interoperability standards across borders through collaboration with IEEE, BCS, and other professional bodies
Continue ongoing dialogue through platforms like WSIS and AI for Good to keep guidelines and frameworks continuously improved as living documents
Develop a standard body of knowledge around generative AI aligned with professional standards
Unresolved issues
How to effectively balance innovation with professional responsibility when working with cutting-edge AI technology that evolves rapidly
How to ensure legal liability and enforcement mechanisms have sufficient ‘teeth’ to hold organizations and leadership accountable for AI system failures
How ICT professionals can effectively convince top management to implement proper testing, piloting, and responsible deployment practices
How to address fragmentation of AI standards across different cultural, regulatory, and regional contexts while maintaining global interoperability
How to ensure large organizations employ fully qualified IT professionals and maintain proper professional standards in AI implementation
Suggested compromises
Recognition that while cultural and regional differences exist in AI implementation (such as restrictions on religious content), common professional responsibilities like testing and harm minimization apply universally
Acknowledgment that both developer responsibility for proper testing and user responsibility for output validation are necessary, with clear delineation of roles
Acceptance that AI ethics requires both existing ethical principles and new considerations for emerging challenges like AI personas and employment impacts
Thought provoking comments
I think that it’s basically a mistake to invent something called AI ethics. What happens is ethical decisions in general have to respond to contexts and situations… When you make that list and you start to make details, it fits a very narrow context. One of the wonderful things I love about computing is the context is always changing, and so you have to have a certain kind of flexibility.
Speaker
Don Gotterbarn
Reason
This comment fundamentally challenged the premise that AI requires separate ethical frameworks, arguing instead that existing ethical principles should adapt to new contexts. It introduced a contrarian perspective that questioned the entire foundation of ‘AI ethics’ as a distinct discipline.
Impact
This comment immediately sparked disagreement from Margaret Havey, creating the first major debate in the discussion. It shifted the conversation from practical AI implementation issues to fundamental philosophical questions about whether AI ethics is categorically different from traditional computing ethics. This tension between top-down compliance and practitioner-driven ethical decision-making became a recurring theme throughout the discussion.
So most, I’d say the vast majority of people out there working with these products are not developers… And we have to be concerned about the multiple agents, the multiple types of AI that are in use and all the different models and all the data and regulations. So it becomes a different problem.
Speaker
Margaret Havey
Reason
This comment provided a crucial reality check by highlighting that most AI users are not developers, introducing the complexity of real-world implementation across diverse organizational contexts. It challenged the developer-centric view and emphasized the multifaceted nature of AI deployment.
Impact
This response directly countered Don’s developer-focused perspective and broadened the discussion to include end-users, organizational implementation, and regulatory compliance. It introduced the concept that AI ethics must address multiple stakeholder perspectives, not just those of developers, fundamentally expanding the scope of the conversation.
As a world, as a global cooperation, how do we come up that the world, when we use a certain AI, we would be able to agree on what we’ll be using, when in fact we have so many different world views. The African might feel certain parts of the world might feel discriminated, the other parts of the world might be discriminated because of our standards.
Speaker
Ian (Audience member)
Reason
This comment introduced the critical dimension of cultural relativism and global diversity in AI standards, challenging the assumption that universal standards are achievable or desirable. It highlighted the inherent bias in AI training data and the impossibility of creating culturally neutral AI systems.
Impact
This question fundamentally shifted the discussion from technical and organizational issues to global governance and cultural sensitivity. It prompted responses about international cooperation through IFIP and led to discussions about world foundation models and data commons, elevating the conversation to address systemic global challenges in AI standardization.
How can ICT professionals, even if they have been trained, highly competent, how can they persuade top management to do the right thing such as pilot the software adequately and pilot in stages?… how does an ICT professional convince CEO management, top level management, to do the right thing and to accept responsibility for their decisions?
Speaker
Liz Eastwood
Reason
This comment cut to the heart of professional responsibility and power dynamics within organizations. Using the Post Office scandal as a concrete example, it highlighted the gap between technical competence and organizational authority, addressing the fundamental challenge of how technical professionals can influence executive decision-making.
Impact
This question brought the discussion full circle to practical governance issues and accountability. It prompted responses about legal liability, board responsibility, and the need for ‘teeth’ in professional standards. The comment grounded the theoretical discussion in real-world consequences and shifted focus to implementation strategies and enforcement mechanisms.
It was a failure in my opinion, more of human relations than of technology. So the technology let them down, but it was the actions taken by people using the output of their technology… Just relying on the data was a failure in human resource management, not so much a failure of the system, but a failure of what came out of the system.
Speaker
Moira De Roche
Reason
This comment reframed the Post Office scandal from a technical failure to a human judgment failure, introducing the crucial distinction between system output and human interpretation/action. It challenged the tendency to blame technology while highlighting human accountability in decision-making processes.
Impact
This perspective shifted the discussion toward the human element in AI implementation and the importance of human oversight. It reinforced the theme that emerged throughout the discussion about the need for human intelligence to complement artificial intelligence, and emphasized that professional standards must address human judgment and organizational culture, not just technical competence.
Overall assessment
These key comments shaped the discussion by creating productive tensions between different perspectives – developer-centric versus user-centric views, universal versus culturally-relative standards, technical versus human responsibility, and theoretical versus practical implementation challenges. The conversation evolved from a relatively straightforward discussion about AI professional standards into a nuanced exploration of global governance, cultural sensitivity, organizational power dynamics, and the complex interplay between human and artificial intelligence. The Post Office scandal served as a concrete case study that grounded abstract ethical discussions in real-world consequences, while the cultural diversity question elevated the conversation to address systemic global challenges. Together, these comments transformed what could have been a technical discussion into a comprehensive examination of the multifaceted challenges facing AI governance and professional responsibility in a globally connected world.
Follow-up questions
How do we ensure that those developing AI comply with the rules and follow professional standards?
Speaker
Jimson Olufuye
Explanation
This addresses the core challenge of enforcing professional standards and ethical compliance in AI development, particularly given the widespread misuse observed during political periods.
How do we embed generative AI in organizations so it becomes part of the process rather than something separate?
Speaker
Moira De Roche
Explanation
This is crucial for organizational integration and ensuring AI tools are used effectively and responsibly across all levels of an organization.
How should ICT professionals balance innovation with responsibility to standards and maintaining public trust when working with cutting-edge AI projects?
Speaker
Stephen Ibaraki (via Moira De Roche)
Explanation
This addresses the tension between rapid technological advancement and professional responsibility, which is critical as AI technology evolves quickly.
As a global cooperation, how do we come up with AI standards that would be acceptable across different world views and cultural contexts?
Speaker
Ian (audience member)
Explanation
This highlights the challenge of creating universal AI standards when different regions have varying cultural, religious, and ethical perspectives that influence AI training and deployment.
How can ICT professionals convince top management to do the right thing, such as piloting software adequately and accepting responsibility for their decisions?
Speaker
Liz Eastwood
Explanation
This addresses the critical issue of getting executive leadership to prioritize proper testing and implementation procedures, especially in light of the Post Office Horizon scandal.
What is the best way to ensure that large companies actually insist on employing fully qualified IT professionals?
Speaker
Liz Eastwood
Explanation
This focuses on the practical challenge of ensuring organizations hire competent professionals rather than cutting costs with unqualified personnel.
How do we make sure there are ‘teeth’ behind codes of ethics and professional standards?
Speaker
Liz Eastwood
Explanation
This addresses the enforcement challenge – how to ensure that ethical codes and professional standards have real consequences and are not just paper exercises.
What new attributes should be added to ethical standards for data scientists working with AI algorithms?
Speaker
Damith Hettihewa
Explanation
This recognizes that new AI professions may require additional ethical considerations beyond traditional IT ethics, particularly around data management and privacy.
How can IFIP develop a comprehensive framework for generative AI skills and training across organizational levels?
Speaker
Moira De Roche
Explanation
This addresses the need for structured training programs that cover everyone from CEOs to end users, ensuring proper understanding and use of generative AI tools.
How can IFIP advocate for interoperability standards that are compatible across borders and not fragmented?
Speaker
Damith Hettihewa
Explanation
This addresses the technical challenge of ensuring AI systems can work together globally while maintaining consistent ethical and professional standards.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.