AI and Global Challenges: Ethical Development and Responsible Deployment

29 May 2024 11:00h - 11:45h


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

International Forum on AI Governance Calls for Ethical Frameworks and Inclusive Policies

An international forum on artificial intelligence (AI) governance and ethics convened a diverse group of experts, including government representatives, private sector leaders, academics, technical community members, and civil society organizations (CSOs). The forum’s discussions emphasized the need for collaborative efforts in AI governance, highlighting the importance of inclusive representation in AI policy development to protect the rights and interests of marginalized and vulnerable communities.

Donny Utoyo underscored the vital role of CSOs, particularly in the global south, in shaping AI governance. He pointed out that CSOs bring a unique perspective to the AI discourse, representing voices often left unheard while advocating for their rights. With comprehensive contextual knowledge and grassroots expertise, CSOs can effectively address the specific AI challenges and opportunities within their regions.

Marlyn Tadros offered a cautionary perspective, focusing on AI from a human rights standpoint. She expressed concerns about AI’s potential to increase the efficiency of oppression and repression, the difficulty in opting out of AI systems, and the commodification of individuals as data sources for big corporations. Tadros criticized the reliance on big tech companies for ethical AI, highlighting the risks of privacy invasion and lack of transparency, and called for AI to be regulated based on human rights standards and international law.

Dr. Anuja Shukla discussed the integration of ethical AI in education, advocating for a regulatory framework that ensures AI tools are private, ethical, transparent, sustainable, and secure. She argued for the accessibility of AI tools for everyone, regardless of socio-economic status or location, and for ethical guidelines to be embedded in the development and usage of AI. Shukla called for transparency and accountability in AI usage, suggesting that users should declare their use of AI tools responsibly.

Dr. Martin Benjamin addressed the impact of AI on language diversity, particularly in Africa, where many languages are spoken. He criticized the disproportionate focus on AI as a solution for language preservation when it lacks the necessary data and investment, suggesting that this focus detracts from the real issues and needs of language diversity and preservation.

Waley Wang presented a business perspective on building responsible AI for enterprises, discussing the stages of AI integration in business and the ethical and safety challenges that accompany it. He advocated for technology openness, cooperation, and consensus governance to address issues of imbalance, fairness, and safety in AI.

Alfredo Ronchi provided a synthesis of the discussions, emphasizing the need for a balanced approach to AI that maximizes its benefits while mitigating risks. He stressed the importance of keeping humans at the center of AI development and ensuring that AI serves humanity without exacerbating existing inequalities or infringing on human rights.

The forum concluded with a consensus on the need for responsible and ethical AI governance that includes diverse stakeholders and considers the rights and interests of marginalized communities. There was a recognition of the challenges AI poses to cultural and linguistic diversity, and of the need for more localized, context-specific AI solutions. The discussions reflected a call for a balanced approach that ensures AI serves humanity without exacerbating existing inequalities or infringing on human rights.

Key observations from the forum included the potential for AI to be misused in conflict zones and for surveillance, the need for strategic frameworks for the deployment of AI in education, and the importance of addressing the ethical challenges of AI in enterprise settings. The forum also highlighted the importance of global consensus and cooperation in AI governance to protect vulnerable groups, especially women and children.

Session transcript

Donny Utoyo:
and online safety vulnerability, especially for women and children. As AI rapidly transforms our lives, digitally and physically, we must harness its power for good while mitigating social risks. We believe that it requires a collaborative effort from all stakeholders, including governments, the private sector, academia, the technical community, and of course civil society organizations. AI governance, in my belief, cannot and may not be determined by only one or several stakeholders alone. So CSOs, the civil society organizations, I mean in the global south, play a particularly vital role in this journey. They bring a unique perspective to the AI discourse, representing the voices of often marginalized and vulnerable communities while advocating for their rights and interests. CSOs also have comprehensive contextual knowledge and grassroots expertise, enabling them to effectively address the specific challenges and opportunities delivered by AI in their respective regions and countries. As a modest example, as a civil society organization in Indonesia, we are actively engaged with other stakeholders on several occasions to develop AI governance. For example, we follow and contribute to the IGF, the Internet Governance Forum's draft recommendation of the Policy Network on Artificial Intelligence. We contributed by submitting suggestions for the draft of the IGF Policy Network on AI. And domestically, we are proactively involved and submitted recommendations for a circular letter of the Indonesian government, the Indonesian MCIT, concerning artificial intelligence ethics. So in Indonesia we have the circular letter, not yet the proper regulation, but the government has already released the circular letter about artificial intelligence ethics. And we informed on the submission of the draft. The submission was based on our four series of multi-stakeholder FGDs beforehand, with around 380 participants in Indonesia.
So we created four different focus group discussions on AI, namely on youth development, child online protection, society engagement, and women empowerment. The documents are already translated into English and you can find them at our booth downstairs. And with over 220 million internet users, collaborating with UNESCO, Indonesia is preparing to implement the AI Readiness Assessment Methodology. A couple of days ago, the Indonesian MCIT had a kickoff in Jakarta, inviting multi-stakeholders, including SDWAT and several other prominent civil society organizations. Of course, many other examples of how civil society can engage with multi-stakeholders are already there, more solid, practical, and maybe more meaningful by working collaboratively. I believe that CSOs can share knowledge, resources, and best practices, amplifying their voices and coordinating their advocacy. Collaboration is not only between multi-stakeholders, but also within their respective countries, in the global south especially. Multi-stakeholderism is not easy. We have done the IGF since 2010. Yeah, and multi-stakeholderism is not easy, but it is not impossible to be done. It's a spirit always voiced anytime, anywhere, such as at the WSIS Forum and the IGF. And my last comment is that we have to continue to warm up the spirit by initiating and facilitating several multi-stakeholder discussions. Indonesia is only an example. We have the Indonesian Child Online Protection, a multi-stakeholder effort focused on child safety online. We also have the Indonesian Internet Governance Forum. And the latest is IDChange, the Indonesia Climate Change Preparedness and Disaster Emergency Response Group, with other civil society organizations like Common Room, Port Gessmas, Indonesia City Volunteer, and Airport Information.
In conclusion, CSOs in the global south have a critical role in ensuring ethical and accountable AI governance by developing their capacity, the civil society's capacity, strengthening collaboration among the south, and actively engaging with global stakeholders. CSOs can help shape the future of AI. Civil society is not the technical body, but we understand the impact of the technology, even of AI. Maybe we don't know it deeply, because AI is something sophisticated, something very, quote-unquote, expensive. I'm sorry for that. Maybe it's expensive and sophisticated, and CSOs maybe cannot afford it. But we know, because we come from the grassroots, so we can ensure that AI brings benefit to our community and that no one is left behind, especially vulnerable people and marginalized communities. Thank you.

Alfredo Ronchi:
Welcome. Thank you. Now, Marlyn, I just ask you to intervene. You have seven, eight minutes, please. Thank you. She is the Executive Director of Virtual Activism in the USA.

Marlyn Tadros:
I'm also Egyptian-American, so I'll speak for both. I chose my title to be Prometheus Bound. And this is because Prometheus, if you know Greek mythology, was actually given fire. He had the gift of fire and he gave that gift of fire to humankind, and he was punished for it. Because it's a great gift, but the gods did not want humankind to have it yet. So he was punished for it. And I say this because I am thinking of AI as a gift that we were given by developers, the developer gods. And it depends on how we use it. And frankly, my talk today is not very optimistic, so I have to warn you about that. Because first and foremost, I am a human rights defender as well. So from a human rights perspective, I'm not very thrilled with all that's happening with AI. So first, I have to acknowledge that AI is increasing and will increase efficiency, of course, in all fields; we all have to acknowledge that. But it will also increase the efficiency of oppression and repression. Some of us are benefiting, but the majority of us are also losing. And it's the losses that I want to speak about, because I keep hearing all these optimistic points of view. And it's great and it's fantastic, but we have to also look at what is under the table. So I will mention a few things. Six points very quickly. I will not talk about the bias, the racial bias, the gender bias. I will leave this to other people if they want to talk about that. But here are my concerns. I have six concerns. The first is that we cannot opt out of anything. We are being pushed, literally pushed, into connectivity. We are being pushed into AI and we are not able to opt out of it. Anybody who is not connected now is left out, and WSIS has been doing that, and I've been doing that for the past 20 years, which is pushing for connectivity. But the problem is a lot of people cannot opt out. And this to me is a problem because that's an issue of choice and freedom of choice.
We are being commodified, as we know. We are all a database, and that's the value of what we are. We are data for these big corporations. We are not really looked at as the human beings that we should be looked at as. The second issue is big corporations and big tech companies. We keep saying ethical AI, ethical connectivity, and all of that, but ethics should not be left to the promises of big tech companies and big corporations. We cannot say, yes, we have to trust these corporations. When Google first started, it started with "do no evil". That was Google's motto. Today, I cannot do anything without Google asking me to connect to all my contacts, to connect to all my photos, to connect. There is a massive invasion of privacy and a lack of transparency, so we don't know what is happening with this data. Trust should not be in our vocabulary at all when we're dealing with technology. We should trust its reliability, but we should not trust it with our data and with our privacy and with our security. The third point is the lack of privacy and surveillance: how authoritarian governments, including my own government in Egypt, and including also these days the United States, and probably everybody feels it everywhere, how governments are using it for censorship. Any connectivity has been used for censorship and for surveillance, but AI is going to facilitate more massive surveillance. We saw that with Pegasus; we see that everywhere, actually. Think of its impact on freedom of speech as well. The fourth point is that I will consider everything I said before relatively benign compared to what's happening in conflict and war zones. But before I get to that, I want to give you an example of what I'm talking about when I talk about free speech. For example, Instagram recently replaced the translation of the term Palestinian: it translated it to "terrorist".
So every time anybody wrote the word Palestinian, it translated it to "terrorist". And then, when people started noticing, they apologized. There is a point of view of these big corporations and of what they want you to think. In war and conflict, well, we know that Google is cooperating in the Gaza war. It is cooperating with Israel within a project called Nimbus. We don't know anything about Nimbus because it is not transparent. It gives them information about people. We also have... how many have heard of Lavender AI? Please read about Lavender AI, which is being used in the Gaza war. The Gospel. Where's Daddy? A project called Where's Daddy, which targets Palestinians in their homes. So who's cooperating on that? Meta is cooperating, and they're providing WhatsApp data to Israel. Another point, point number five, very quickly. We are also now at the frontier of ChatGPT-4o. The O stands for Omni, and this will launch in six months, and it will analyze voice and facial recognition and facial expressions. And I find that extremely disturbing at this point, without the existence of regulations. Okay, what are the solutions? Very quickly, maybe I should mention a few solutions, but I don't have any. I only have questions that we need to think about. For any technology that we launch and that we accept in the Global South, we need to think of these issues; actually, there are five. It should be private, ethical, transparent, sustainable, and secure. And if it does not satisfy all these five rules, then it should not be launched. It should be based on human rights standards and international law. And if it's not, it should not be deployed, whether in the Global South or even in the global north; it doesn't matter where.
Now the question is how do we regulate it and how do we implement this without stifling innovation and that’s a question I will leave for you to answer. Thank you.

Alfredo Ronchi:
Thank you very much, Marlyn. So now we are better focusing the problems on different territories, different locations and situations, and now we have the first remote speaker, Anuja Shukla. Is Anuja connected? Yes. Am I audible, guys?

Dr. Anuja Shukla:
Am I audible? Can you hear me?

Alfredo Ronchi:
The floor is yours for seven, eight minutes then we’ll probably add some more minutes at the end of the session for question and answer and remarks.

Dr. Anuja Shukla:
So a very good afternoon to everyone. I'm Dr. Anuja Shukla and I'm speaking on behalf of Jaipur Institute of Management, Noida, India, and being an educationist I would like to talk on... First folder, and that's Anuja. Yeah, so am I audible now? Okay, so I'll be representing Jaipur Institute of Management at the WSIS Forum, and I wanted to talk about ethics in AI. As my co-panelists rightly said, we can't just rely on the makers of the GPUs or the makers of the AIs to be ethical enough. Okay. So today I'll be talking on two or three pointers. My first pointer is about what ethical AI is. For ethical AI, we are looking at a very broad perspective. It refers to the… Okay.

Alfredo Ronchi:
So I hope we are still connected now.

Dr. Anuja Shukla:
Yeah, I’m talking, I think every time the host puts me on mute, sorry.

Alfredo Ronchi:
Sure, how do you?

Dr. Anuja Shukla:
Yeah, so may I? Can you please let the host know I’m a panelist? They’re kind of putting me on mute.

Alfredo Ronchi:
Audio is creating big trouble here. I don't know what is happening; is it your mic or another device? I can't hear you. Can you hear?

Dr. Anuja Shukla:
Yeah, I can hear you. All right, so let me continue. So in the evolving landscape of education, the integration of ethical AI is very much pivotal, because once we are in an education system, the students' mindset needs to be developed in a certain way. So ethical AI can lead…

Alfredo Ronchi:
Possible to solve the problem and…

Dr. Anuja Shukla:
The principles of equity transfer…

Alfredo Ronchi:
Benjamin, can you put your hand on the remote after we come back? Okay, okay. Hello?

Dr. Anuja Shukla:
Yeah, hi.

Alfredo Ronchi:
Hello? Am I clearly audible? Am I audible? We'll proceed and come back to you later. So sorry, I think…

Dr. Anuja Shukla:
I think now it's working; now I'm not getting muted. May I continue? Yeah. So I'm getting on my screen that the host muted me. Sorry for these technical problems. And that's why we are here: we are talking about ICT development and we are talking about this. That's right. Why do they keep muting her? For this session, just to let you understand which are the main difficulties in this field. Okay, so I have just three points to talk on: first, that AI tools should be available to everyone despite their socio-economic background, caste, creed, colour or religion, or any geographical location. I also want to say that we should embed the ethical guidelines in the development of AI, not just the usage but also the development part of AI, which might include a rigorous auditing process scrutinizing the algorithms for fairness and the implementation thereof. You need the pointer and you need the presentation. In addition to that, we will also be requesting the system to develop a policy whereby a person who is using AI should be held accountable for it. For example, when we or the students are writing research papers, we request them to put a note down below stating how they have used AI, so that they feel accountable: yes, I'm using AI, but responsibly. So that is something we would look forward to integrating. Once we are able to integrate ethical AI through transparency and responsibility, the educators and students will know which AI tools are operative and what the reasons are behind their decisions. Transparency allows users to engage more effectively with the technology. Also, developers should strive to create user-friendly interfaces that simplify AI functionalities, making them comprehensible for non-technical users.
Are we in the proper strategic framework to govern the deployment of AI tools in education? This framework should be enforced so that all educational institutions are in compliance with the ethical standards.

Audience:
Please, Anuja, can you stop so that the people who are talking in the background will know? We want to follow your presentation, but there are two men who are talking behind you.

Dr. Martin Benjamin:
That’s a quote, if you’re not familiar, it’s one of the more fascistic quotes from Donald Trump.

Audience:
Whoever said fascistic quotes from Donald Trump, please use your mic.

Dr. Martin Benjamin:
He said, "I alone can fix it." Okay. And if you want, you can download the presentation later; all the links and the images are there. Here's one of the images. In Africa, there are about 2,000 languages spoken by about a billion and a half people across 55 countries, the 55 member nations of the African Union. And of these languages, more than 100 are spoken by more than a million people. So these are growing languages. Many of these languages have doubled in number of speakers in the past 25 years.

Audience:
Who is the chair of this session?

Dr. Martin Benjamin:
Oops, we've got nothing on that. So, all of the languages have one thing in common. One thing in common is that from the time of the conquest of Africa... It has nothing to do with this session. Especially with the invasion of Africa in the colonial era, there was this thing called the mission civilisatrice.

Dr. Anuja Shukla:
May I close by giving my closing argument?

Dr. Martin Benjamin:
Their mission was to civilize Africa, and European languages were integrally involved in the civilizing mission. These were the languages that needed to be used for trade, for governance, for education. After independence, the African Union was set up with a mission of unity.

Dr. Anuja Shukla:
Once we have an AI regulatory framework in the teaching and education process oriented towards responsible AI, number one, we'll be able to get improved teaching pedagogies. And number two, we'll also be able to get personalized learning experiences, because through AI we can see at what part of the module the student is, and we can automate the system to give them a good education. And after integrating AI in the education system, probably we'll be able to develop the leadership of the students who are in school. So yeah, that was everything from my side, what I wanted to propose. Thank you.

Dr. Martin Benjamin:
This thing was starting to emerge. And so they had their first meeting, then another meeting, a series of meetings. And over 20 years they have developed this thing in the SAAB called ADAMA, the Platform for African Language Empowerment. So it's a pretty comprehensive platform. And the organization that I run, BUSI, is going to be taking the technical lead in implementing this, it seems. Why are we listening to this? If you go on to that site, kamuda.si.com, it explains all the things that are being planned for this forum. I'm Izanu Jain. Also, everybody on the Zoom, I think that there is a mistake happening there on site. So now Dr. Martin is presenting on site from Geneva. I think that after this, Ms. Anuja, you can try to present your presentation again. Yeah, sure. I'll be relying on real knowledge coming from real people's brains, not artificial intelligence. Are we cancelling this session because we cannot follow it? What's going on? Why? This is the Africa session we are listening to. We didn't opt for that. Okay. Why is that important? Well, one of the reasons it's important is that artificial intelligence can't work for African languages. Why can't it work for African languages? Because, first of all, there's almost no data. The data is in people's brains. You have to get the data from people's brains to the computer before you can actually do anything with AI, right? You can't just sit there and suck it out of the air. There's very little money being invested in it, but whenever anybody talks to me, here or anywhere else: so what are you doing? Oh, you must be doing AI, right? As if that's the only thing that anybody in places like this could find useful and impactful for us. Should we all leave the session and come in again and maybe we'll get the correct session? What should we do? There's no evidence of this, right? There's no evidence that AI can do anything.
It must be something on their end, right? Yeah, something on their end, but at the beginning I heard Alfredo in the chair giving the floor to speakers. Where has he gone? Another generation of students who have to go to school in languages that they don't understand. Another generation sacrificed on the altar of "we know better". I don't know. There was a similar issue in a previous session, and at that time we just could not reach the host. Again, the solution given is: what about AI? Well, what can AI do for an endangered language? Well, maybe it can transcribe some audio. Maybe it can find some grammatical patterns. But it's not going to generate useful information. It's not going to teach future generations. And so here you can see Dr. Manu Barfo doing actual field research in Ghana on a language that now has only three speakers left. When those three speakers pass on, there will be no more Dompo language. But we've said, okay, well, we won't send anyone out there; somehow AI is going to maybe do something for that language. This obsession means that we're actually making language extinction more rapid. When it comes to a larger language like Bambara, spoken by 10 million people, there are ridiculous projects to use AI to generate children's books. There's a much more intelligent project that somebody here has been showing, where you actually use humans to write and illustrate books for children. But the focus, the money, is on AI. Where there might be some money: here's an initiative that Microsoft has announced. They're putting in a billion dollars, right? Well, if you read closely, within this billion dollars they've got a little bit of money for a project for Swahili that has not been touched; the computer science department at the University of Nairobi knows nothing about it.
Working on AI — the head of the ACALAN Cyberspace Committee knows nothing about it. You can also read about why it's impactful for farming. But really what they're talking about is infrastructure. They want to invest a billion dollars in a data center because there are a billion and a half consumers in Africa. So what they're doing for languages is just window dressing; there's not really much there, actually. An even less effective announcement came from the UK, saying they're going to spend $100 million. If you divide it down, it comes to a little over a penny per person per year over five years, so there's not much that can be done. But if you look at the bottom there, they're going to help Sub-Saharan Africa have a bigger voice in influencing how AI is used. All this other stuff, please read on your own what they say they're going to do. They're going to help have a voice, okay. This is my last slide. Last week, at a conference, I heard somebody from Google saying: we have the answers, we just need to sell them, right? The Swiss government has said to ACALAN, basically: no, our financial support to AU initiatives is based on the policies and objectives of our programs. So language equity is not on the agenda of any funder, but AI is, even though the budgets are small and profit-driven, and maybe the ICT ministries are greatly in favor of them, but they're not talking with the farming ministries and the language ministries, right? So things that are grown in Africa, the frameworks and answers that are grown in Africa that are not AI, have no voice. It's the artificial intelligentsia who says: okay, we know what's good for Africa, it's AI. We don't actually know what AI is, but we know it's good for them. So, Africans, please drink up.

Alfredo Ronchi:
Thank you, Martin. So, let's try to come back to Anuja if possible. Now, is the audio problem solved? Hello, Anuja? Not yet solved, or not yet connected? In the meantime, I'll try to tell you some stories to entertain you. Sorry. I'm really scared because we have another session where all the speakers are online. That means it will last three hours, more or less, with 40 minutes for each speaker to connect. Is this the revenge of air flights? It takes less time to come here by plane and go back. Hello, Anuja? Are you connected? Then we can switch to the next, that is Nick. Nick Hajli is connected. Let's switch to the presentation. It seems like we need more technical intelligence. No, not this one; it's written Nick in the name of the presentation. If we can connect... it's already connected, this one. Now it's Nick's PowerPoint, yes, this one. Nick, hello, the connection is on. Very good. Okay, I think now it's your turn, let

Nick Hajli:
me touch it; he's physically present, yes, okay, or he will disappear, the hologram, yes. One, yes, it's the one you showed before. The professor, CCAT member, correct? Yes. Okay. All right, take four. Incredible effort and discipline. Back online, Dr. Nick? I need an appointment. Thanks. Let me just find something. You know how to use it? I don't know. Let me share the screen.

Alfredo Ronchi:
This is the second session in this room, and in this session we have a large audience. However, we have met these technical issues. That's the thing, and that's why we are here: we are talking about ICT development, right? Hopefully, we will have a wonderful session from now on, and I hope we can solve this problem as soon as possible. Thanks, everyone, for coming here and attending this session, and I hope you can also join the other sessions throughout the event. We apologize for the technical issues. Again, sorry. It's not a problem. Can you do that? Okay. Yes. Thank you. Okay. The floor is yours. Yes.

Waley Wang:
Ladies and gentlemen, dear friends, good afternoon. My name is Willy. As a member of CCIT, it's my honor to discuss this important topic about responsible AI with you. I worked in AI research for nearly 17 years and I have led an AI company for seven years. Today, I want to share my thoughts on building responsible AI for enterprise and humanity. As we all know, AI has made remarkable progress recently. Innovations like ChatGPT and numerous large language models have transformed our work and lives. A report from A16Z and IDC shows that average global enterprise investment in AI surged from $7 million to $18 million per year. In China, LLMs grew from 16 to more than 300 between 2018 and last year, with over 18% focused on industry-specific applications. It's clear that we are entering a new era of artificial intelligence. Enterprise AI has proven valuable in fields like government operations, ESG, supply chains, and defense intelligence, excelling in analysis, forecasting, decision-making, optimization, and risk monitoring. People often ask me how to leverage AI, especially generative AI, to boost productivity. Based on experience, we foresee a promising future for enterprise AI in three phases. The first is the model-centralized stage: companies integrate LLMs into use cases directly, building copilots on base models, offering APIs, and integrating enterprise data for RAG purposes. However, static models often fail to address actual business scenarios effectively; most companies haven't progressed beyond stage one. The second is the business-first stage: LLMs focus on business scenarios, continuously pre-training and fine-tuning models on industry-specific data and knowledge. This enhances AI capabilities, allowing models to understand scenarios and support the business, marking the start of AI-driven productivity. The third is the decision intelligence stage.
Future AI will break down complex problems into smaller tasks, each resolved by a different model. AI agents and multi-agent collaboration frameworks will optimize decision-making and action planning, integrating AI into workflows, data streams, and decision processes. We propose a three-step methodology for successful enterprise AI transformation: step one is model engineering, step two is data engineering, and step three is domain engineering. These steps can drive AI transformation for businesses and governments. Our team has trained a model named Yayi from scratch and contributed it to the open-source community. Based on Yayi, we have served over 100 government clients and more than 1,000 industry clients. In healthcare, AI brings advanced technology to enhance human well-being; in education, AI all-in-one machines improve educational safety. AI is a general-purpose technology, like printing and steam power, augmenting our capabilities and making us smarter and more productive. However, AI faces ethical and safety challenges. The first issue is imbalance: core AI technologies are developed by big companies in a few countries, leading to regional disparities and widening economic gaps, and many industries have not fully leveraged AI's potential. The second issue is fairness: various factors limit access to and effective use of AI, and regional restrictions worsen inequality. The third issue is safety: an imbalanced dataset can introduce bias, leading to incorrect decisions, and AI can be misused to create deepfakes, manipulate public opinion, and commit fraud, impacting our lives. To build responsible AI, we must address these challenges. First, promote technology openness: reduce regional and industrial imbalance by making AI models open source and accessible, especially in developing regions. Second, foster cooperation: mitigate unfair usage restrictions through cooperation.
AI is a costly technology that benefits from collaborative efforts, bridging the gap between those with and without access, driving efficiency, enhancing satisfaction, and reducing costs. Lastly, establish consensus governance: enhance AI safety, explore safe boundaries, and develop robust governance mechanisms to minimize bias and discrimination. It is crucial to reach a global consensus to protect vulnerable groups, especially women and children. In conclusion, united by a spirit of openness, cooperation, and a consistent governance strategy, we believe we can build responsible AI to achieve greater human well-being and societal advancement. AI is our greatest challenge and our greatest opportunity; together, we must make this journey a success. Thank you for listening. Thank you very much.
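The "decision intelligence" stage described above, breaking a complex problem into smaller tasks, routing each to a specialized model, and merging the results, can be sketched abstractly. Everything in this sketch (the function names, the toy "specialist" agents, the hard-coded plan) is hypothetical illustration, not the speaker's actual framework; in practice the planner and the agents would themselves be LLM calls.

```python
# Minimal sketch of multi-agent task decomposition: a planner splits a
# problem into subtasks, a router dispatches each subtask to a
# specialised agent, and the partial answers are collected.
from typing import Callable, Dict, List, Tuple

# Hypothetical "specialist models" - real systems would call LLMs here.
def forecaster(task: str) -> str:
    return f"forecast({task})"

def risk_monitor(task: str) -> str:
    return f"risk({task})"

AGENTS: Dict[str, Callable[[str], str]] = {
    "forecast": forecaster,
    "risk": risk_monitor,
}

def plan(problem: str) -> List[Tuple[str, str]]:
    # A real planner would be an LLM; here the decomposition is hard-coded.
    return [("forecast", f"demand for {problem}"),
            ("risk", f"supply chain of {problem}")]

def solve(problem: str) -> List[str]:
    # Route each subtask to its agent and collect the partial answers.
    return [AGENTS[kind](task) for kind, task in plan(problem)]

print(solve("widgets"))
```

The design point is the separation of concerns: the planner only decides *what* the subtasks are, while the router and agents decide *how* each one is answered, which is what lets such frameworks be integrated into existing workflows and decision processes.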

Alfredo Ronchi:
A most interesting presentation from the standpoint of China; thanks a lot for that. And now we will try again the challenge of connecting with Ellucian. Let's try. I sent two email messages to reconnect, but he is not available either. So that leaves me. I will just present some brief notes. I left my contribution for last to give more time to the speakers, so some of these points may echo yesterday's discussion. The AI sector currently seems to have the most significant impact on a large part of society, involving privacy, freedom, labor, security, lifestyle, and more. And of course there are different approaches to this: some consider it a kind of imminent disaster, while others think it will solve all of humanity's problems. Basically, the extensive use of artificial intelligence, machine learning, and big data, apart from raising several ethical issues, could lead to some significant roadblocks, looking at the empty part of the glass. While AI will benefit citizens, businesses, and the public interest, it creates risks to fundamental rights due to potential biases, privacy infringements, or, and this is a typical case, the AI-proxied resolution of serious ethical dilemmas, releasing citizens from personal ethical analysis and the related responsibilities. We were just talking about that on the third floor at the UNESCO session: the risk of mixing up our responsibilities with something provided from the top, from AI. We feed machine learning systems mainly with big data coming from Western countries. This can lead, as has happened and still happens, to the marginalization of other languages. We heard about ACALAN, and you mentioned a name that is very famous in that sector, Adama Samasekou, former minister of Mali and a good friend.
So this may lead to the disappearance of other intelligences, those not based on our big data. How do we remove biases in machine learning models that could discriminate against underrepresented groups and cultures? There was a suggestion this morning, again in the UNESCO session, to create local bots, that is, to feed local AIs or machine learning systems with local context. But there is still a big gap between the amount of big data coming from, let's say, Western countries and what is produced in small countries, minoritized countries, and so on. Citizens are increasingly using AI bots to carry out different activities, ranging from writing a poem to creating a deepfake. How can we distinguish a human product from a machine product? Local content will soon be generated by local bots. Consider the typical example of the car crash and the ethical decision to save the baby or the grandfather; I think you are familiar with this classic case of moral and ethical responsibility being transferred to AI. A crash is imminent, and the AI system has to choose whether to reroute the car and kill its own occupant, or to crash into one of two pedestrians: a grandfather on one side, a baby on the other. The outcome could differ depending on the cultural model: in Eastern countries like China, elderly people rank very high in society, while in Western countries babies rank highest in terms of protection. This means we need different approaches. Currently, publishers and event organizers are asking contributors to sign a declaration about whether AI-generated content, text, images, or video, was used. Is this simply a recognition of dual paternity, human plus cyber, or is it driven by the risk of infringing IPRs with AI-produced content?
It has never happened that OpenAI claims rights over the products of its own, let's say, cyber-humans. Some researchers suggest issuing a regulation to impose the insertion of an invisible watermark into each AI-generated output, in order to automatically determine whether an image or a text is AI-generated or human-generated. This is another idea. What about AI for good versus AI for bad? A friend of ours, I don't know if he is in the room, a professor from the UK, asked: why not map "AI for bad" as well, connecting together all the bad ideas, that is, the malicious uses of AI, which are still active and in some operations probably far more evolved than AI for good? There are quite a lot of problems related to this, such as AI-generated deepfakes and AI created to identify deepfakes, but that is a limited use; far more such malicious uses can lead to the erosion of human rights, and to surveillance systems that connect different big data sources in order to track far more than Google does today: what happens, what people are doing, and so on. Opinion formation is a complex dynamic process mediated by interaction among individuals, both offline and online. Social media has drastically changed the way opinion dynamics evolve; it has become a battlefield on which opinions are exchanged, often violently, and progress in AI has allowed the development of much more powerful mechanisms thanks to the effectiveness of statistical and inferential AI systems. Post-reality is changing the value system with a new normality; the new ethics calls into question personal free will and freedom of choice. Traditional cultural regulators of social relationships and processes are being replaced by automated social algorithms, with quite a lot of examples, even meeting systems and similar things.
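The invisible-watermarking proposal mentioned above can be sketched in a few lines for the text case. This is a deliberately naive toy, hiding a provenance tag in zero-width Unicode characters; real proposals (such as statistical token-level watermarks for LLM output) are far more robust, and the tag name and function names here are purely illustrative.

```python
# Toy sketch of an "invisible watermark" for AI-generated text: encode a
# provenance tag as zero-width characters appended to the text, so a
# detector can flag the output automatically without visible changes.
ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner
MARK = "AI"                      # hypothetical provenance tag

def embed(text: str, tag: str = MARK) -> str:
    # Encode each tag character as 8 bits, then map bits to zero-width chars.
    bits = "".join(f"{ord(c):08b}" for c in tag)
    hidden = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + hidden

def detect(text: str, tag: str = MARK) -> bool:
    # Recover the hidden bits and check whether the tag decodes from them.
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    decoded = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))
    return tag in decoded

stamped = embed("This paragraph was machine-written.")
print(detect(stamped), detect("Plain human text"))  # -> True False
```

The obvious weakness, and the reason regulation alone cannot settle the question, is that such a mark survives only as long as nobody strips the invisible characters; robust watermarking has to be tied to the generation process itself.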
Public perception is shaped more by appeals to predetermined feelings and opinions than by facts. Furthermore, a massive decrease in the level of critical thinking and the emergence of waves of information epidemics are observable nationally and globally. The challenge for the coming years is to sustain the human role and the invaluable rights to freedom and personal privacy in an era of unlimited collection and reuse of information. Once again, the need to find the proper balance between the humanities and technologies is omnipresent: the social sciences and humanities must establish tight cooperation in the design, or co-creation, of cyber-technologies, always keeping humans in focus. Thank you for your attention. So, I am here in reality, so no major connection problems. I am very sorry for the technical problems we faced today, but this is part of our, let's say, field research, so it happens. Thanks again for being here. We will have another session in the afternoon on the digital transition and its impact on society, starting at three o'clock, I think, in another room. Shall we take a picture? Yes, sure, let's take a picture. Thanks to those who survived the session.

Speech statistics (speed, length, time):

Alfredo Ronchi (AR): 131 words per minute, 1897 words, 867 secs

Audience (A): 194 words per minute, 55 words, 17 secs

Donny Utoyo (DU): 135 words per minute, 784 words, 348 secs

Dr. Anuja Shukla (DA): 160 words per minute, 777 words, 291 secs

Dr. Martin Benjamin (DM): 166 words per minute, 1381 words, 500 secs

Marlyn Tadros (MT): 116 words per minute, 1147 words, 591 secs

Nick Hajli (NH): 91 words per minute, 81 words, 53 secs

Waley Wang (WW): 102 words per minute, 821 words, 484 secs