Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies

13 May 2025 12:30h - 13:30h


Session at a glance

Summary

This workshop focused on the perception and implementation of AI tools in business operations, with an emphasis on building trustworthy and rights-respecting technologies. The discussion began with a presentation of research showing a significant increase in AI use by businesses, from 35% to 75% within a year, with mixed employee reactions. Key challenges identified included the need for upskilling, improved governance, and ethical policies.


Speakers emphasized the importance of transparency, monitoring, and evaluation in AI implementation, as well as the necessity of human rights impact assessments. The discussion highlighted existing frameworks such as the UN Guiding Principles on Business and Human Rights and the Council of Europe’s recommendations as foundations for responsible AI use. Experts stressed the need for interdisciplinary approaches and collaboration between businesses, governments, and civil society to address AI-related challenges.


The telecom sector was presented as a case study, demonstrating both benefits and risks of AI implementation in areas such as network optimization and customer engagement. Speakers also discussed the potential of AI-powered human rights tracking tools to support businesses and governments in monitoring rights compliance.


Throughout the discussion, participants underscored the importance of education, stakeholder engagement, and clear regulatory frameworks to ensure AI’s responsible development and use. The session concluded by acknowledging the ongoing nature of AI governance challenges and the need for continued dialogue between public and private sectors to address emerging issues.


Key points

Major discussion points:


– The rapid adoption of AI in businesses, increasing from 35% to 75% usage in one year


– The need for education, upskilling, and governance frameworks around AI implementation


– The importance of human rights considerations and impact assessments when deploying AI


– The potential for AI to both help and hinder human rights monitoring and compliance


– The specific applications and challenges of AI in the telecommunications sector


The overall purpose of the discussion was to explore how AI is being perceived and implemented in business operations, with a focus on building trustworthy and rights-respecting AI technologies. The speakers aimed to highlight both the opportunities and risks of AI adoption from business, human rights, and regulatory perspectives.


The tone of the discussion was generally informative and cautiously optimistic. Speakers acknowledged the rapid growth and potential benefits of AI, while emphasizing the need for responsible implementation and human oversight. The tone became slightly more urgent when discussing the need for education and governance frameworks to keep pace with AI adoption. Overall, the conversation maintained a balanced approach, recognizing both the promise and challenges of AI in business contexts.


Speakers

– Tigran Karapetyan: Head of Transversal Challenges and Multilateral Projects Division at the Council of Europe


– Domenico Zipoli: Senior Research Fellow and Project Coordinator at the Geneva Human Rights Platform


– Moderator: Alice Marns, participant in YouthDig


– Angela Coriz: Public Policy Officer at Connect Europe


– Katarzyna Ellis: Partner and leader of the People Consulting Team at EY Poland


– Lyra Jakulevičienė: Member of the UN Working Group on Business and Human Rights


– Jörn Erbguth:


Additional speakers:


– Monika Stachon: Cybersecurity Strategic Analysis Expert from NASK, National Research Institute of Poland


– Biljana Nikolic: Colleague of Tigran Karapetyan at the Council of Europe


– Pablo Arrenovo: Representative from Telefonica


Full session report

AI Adoption in Business: Opportunities, Challenges, and Human Rights Considerations


This workshop explored the perception and implementation of AI tools in business operations, with a focus on building trustworthy and rights-respecting technologies. The discussion brought together experts from various fields to address the rapid growth of AI adoption, its impact on employees, and the need for responsible implementation frameworks.


Current State of AI Adoption


Katarzyna Ellis from EY Poland presented findings from the global EY Work Reimagined survey, which included responses from over 17,000 employees across 22 countries and 26 industries. The survey revealed that 75% of employees reported using generative AI for work, up from 35% a year earlier. This rapid adoption has led to mixed reactions among employees: while just under half view AI positively, more than 50% of employees report feeling scared or otherwise negative about AI in the workplace. This split highlights the urgent need for education and change management strategies as AI becomes increasingly prevalent in business operations.


The telecommunications sector was presented as a case study by Angela Coriz from Connect Europe, demonstrating both the benefits and challenges of AI implementation. Specific examples of AI use in telecom include network optimization, predictive maintenance, customer service chatbots, and personalized recommendations. Coriz also noted the regulatory uncertainty and cybersecurity risks that businesses face when implementing AI technologies.


Challenges in Responsible AI Implementation


A key theme that emerged was the lack of preparedness in many companies for AI implementation. Ellis highlighted significant gaps in upskilling, governance structures, and ethical policies. This sentiment was echoed by other speakers, who emphasised the need for transparency, monitoring, and evaluation of AI systems.


Lyra Jakulevičienė, a member of the UN Working Group on Business and Human Rights, stressed the importance of human rights impact assessments when deploying AI systems that pose potential risks. She noted that all human rights, not just privacy, can be impacted by AI technologies. Jakulevičienė also mentioned an upcoming UN Working Group report on AI, business, and human rights, which will provide further guidance for companies.


The regulatory landscape for AI and human rights was described as complex and fragmented. Jakulevičienė mentioned the identification of around 1,000 various standards worldwide dealing with AI technologies and their relationship with human rights. This complexity underscores the challenges businesses face in navigating diverse regulations and the need for harmonised standards.


Frameworks and Tools for Responsible AI


Several existing frameworks and tools were discussed as foundations for responsible AI use. Jakulevičienė highlighted the UN Guiding Principles on Business and Human Rights as an important guide for companies. Tigran Karapetyan from the Council of Europe mentioned the Council’s 2016 Recommendation on Human Rights and Business, which builds upon the UN Guiding Principles, and ongoing efforts by the Council and the EU to develop AI-specific regulations.


Domenico Zipoli from the Geneva Human Rights Platform introduced innovative AI-powered human rights tracking tools, such as CIMORE and OHCHR’s National Recommendations Tracking Database. He explained the ABC model (Alerts, Benchmarking, Coordination) used in these tools to support businesses and governments in monitoring rights compliance by bringing different stakeholders into a shared workflow. Zipoli also mentioned the upcoming AI for Good Global Summit in Geneva and the Geneva Human Rights Platform’s expert roundtables on digital human rights tracking tools.


Building Trust and Addressing Challenges


The speakers unanimously agreed on the importance of education and stakeholder engagement in building trust in AI systems. Ellis emphasised the crucial role of AI education for employees and stakeholders. Zipoli stressed the need for explainability and human oversight of AI systems to ensure accountability and trust.


The discussion also touched on the environmental impact of AI, with Coriz noting that while AI raises challenges in terms of energy consumption, particularly in data centres, it also has the potential to reduce emissions in telecom networks. This dual nature of AI’s impact on sustainability demonstrates the complex trade-offs involved in AI adoption.


Recommendations for Responsible AI Implementation


Jakulevičienė provided specific recommendations for businesses implementing AI:


1. Conduct human rights impact assessments


2. Ensure transparency in AI decision-making processes


3. Implement effective grievance mechanisms


4. Provide remedies for AI-related harms


5. Engage in multi-stakeholder dialogue and collaboration


Unresolved Issues and Future Directions


Several unresolved issues emerged from the discussion, including:


1. Balancing the potential benefits and challenges of AI adoption


2. Implementing effective human oversight in AI systems


3. Addressing regulatory uncertainty, especially in classifying high-risk AI scenarios


4. Managing potential increases in data traffic and associated investment needs in telecom networks


5. Assessing the long-term impacts of the EU AI Act and Council of Europe Framework Convention on AI


The speakers agreed on the need for continued dialogue between lawmakers, the public sector, and the private sector on AI governance. They also suggested potential compromises, such as using AI to both optimise business operations and support human rights due diligence efforts, and leveraging open-source public sector tools to support private sector AI implementation.


Conclusion


The workshop highlighted the rapid and complex nature of AI adoption in business, emphasising the need for responsible implementation that considers human rights, employee concerns, and regulatory compliance. As AI continues to transform business operations, ongoing collaboration between diverse stakeholders will be crucial to address emerging challenges and harness the technology’s potential for positive impact.


Session transcript

Tigran Karapetyan: Good afternoon everyone, good afternoon to all the people present here in Palais de l’Europe of the Council of Europe and all those who are joining us online. Very nice to see you all and I hope that you had interesting sessions before this one and they will be followed by others afterwards. And I pass the word to the organiser right now to give us the technical details on how this is going to all go. Alice, please.


Moderator: Hello, this is mostly for the online participants. Hello everyone and welcome to workshop six on the perception of AI tools in business operations, building trustworthy and rights respecting technologies. My name is Alice Marns, I’m a participant in this year’s YouthDig, the youth segment of EuroDig, and I will be remote moderating this session. We will briefly go over the session rules now. So the first one, please enter with your full name. To ask a question, raise your hand using the Zoom function. You will be unmuted when the floor is given to you. And when speaking, switch on the video, state your name and affiliation. And please do not share links to the Zoom meetings, even with your colleagues. Thank you.


Tigran Karapetyan: Thank you very much, Alice. With these few simple rules, we can now start this session. So a warm welcome to everyone once again to this workshop on the perception of AI tools in business operations, where we will speak about building trustworthy and rights-respecting technology. My name is Tigran Karapetyan, I’m Head of the Transversal Challenges and Multilateral Projects Division at the Council of Europe. And before we start, I would like to say my words of thanks to the co-organizers and the speakers that we’re going to be hearing later on. Monika Stachon, Cybersecurity Strategic Analysis Expert from NASK, National Research Institute of Poland, who unfortunately could not be with us in person but will be joining us online. Katarzyna Ellis, partner and leader of the People Consulting Team at EY Poland, who is also joining us virtually. And Angela Coriz, a Public Policy Officer at Connect Europe, who is joining us here in person. Today’s workshop is the result of strong collaboration and great coordination among all the partners involved, so I’d like to thank you once again for that. Furthermore, I’d like to also extend my gratitude to our distinguished panelists that will be speaking today: Professor Lyra Jakulevičienė, member of the UN Working Group on Business and Human Rights, and Domenico Zipoli, Senior Research Fellow and Project Coordinator at the Geneva Human Rights Platform. This workshop has been organized in the framework of the Council of Europe’s pilot project on human rights and environmentally responsible business practices. It’s a project that is run in my division, and the Council of Europe’s initiative reinforces the protection of human rights and environmental sustainability within business operations in line with the existing international frameworks and standards. Through cooperation, the project supports the member states and businesses in aligning with human rights standards, addresses gaps, and encourages cooperation among governments, businesses and civil society. 
As a result of the collective efforts under the project and collaboration with Monika, Katarzyna and Angela, we are pleased to be here today. This interactive workshop is designed to explore how AI is perceived within companies, the challenges involved in its implementation, including human rights challenges, and the vital role of ensuring compliance with human rights. Hopefully, we can also have a word on whether AI can help companies in fact comply with human rights, along with increasing productivity. So, without further ado, I would like to now invite Katarzyna Ellis from EY to present a recently published report on how Polish companies implement AI. Katarzyna, please be mindful of the time. We’ve got one hour for the entire session, so you’ve got your chance now. Please, go ahead.


Katarzyna Ellis: Fabulous. Thank you, Jörn, and thank you for such a warm welcome, really. It’s such a pleasure to be here. If you don’t mind, I will share a presentation to give you some insights from our research. I’ll share now. Three, two, one. Do we all see? I’ll go into the presentation mode. Can everybody see the presentation? I can see now that you can. Yes. What I want to share with you today is that at EY we have been doing a global report on what the future of work will look like for the next 5, 10, 15 and 20 years. We call it EY Work Reimagined, and that survey has been done across the globe: 15,000 employees, over 1,500 different entities globally that we have spoken to. The results are quite exciting, and that’s why I will share with you today not only the Polish insights, which we have done this year, but also what we see globally, which might be a better representation of what you see in your respective countries. Firstly, we have been asking the question about work technology and generative AI for the past two years, and we see a massive, significant impact of the last 12 months on the ways that we work using genAI. At the moment, from what we see, the use of genAI for work is around 75%. That’s what the employees are reporting in our global survey. Just to give you a comparison, last year it was 35%. When you see what potential for extra growth there is in genAI technology across different businesses, it is quite incredible. 90% of the organizations that we have spoken to use genAI technology already or have plans to use it very shortly. And 60% use it in government and the public sector. We thought that was going to be much lower, but it isn’t. So surprisingly, the public sector is not far behind. When we look at the adoption of GenAI, employee and employer sentiments are still net positive. 
We’re seeing that employees believe GenAI tools will help employee productivity, that they will help the ways of working in specific sectors, and that they enhance the ability to focus on high-value work, while handing off the low-value, administrative, repetitive tasks to be managed through either automation or with the help of GenAI. Furthermore, what we see is that GenAI technology and the investments that businesses are making in GenAI go hand in hand with the need to enable upskilling and reskilling across organizations. So what we see is that there is a massive gap between what organizations have and what organizations should have in order to be able to utilize GenAI effectively, but also ethically. We see that employees and employers are more aligned on the need to learn the skills, but they are less aligned on whether they have the opportunities at work to learn those skills. And then what we also see is that what we at EY call talent health, so the way that the organization is able to deliver business value through high-potential talent, is directly connected with the amount of skills and the usage of GenAI across organizations. So as I said, at the moment 75% of employees are reporting that they’re using GenAI for work. So not only at work, but for work. But that is also connected with different parts of business operations. Because I look at the people function, and I believe that people are the greatest asset of every organization, and without people, organizations cannot perform and bring business value, I looked quite deeply into how HR or people functions are utilizing those Gen-AI tools, and I think that will be very deeply connected with the conversations that we’ll be having here today, so with ethics and human rights. So how do we use that within the people function, specifically within recruitment, talent acquisition, performance, and employee engagement? 
And what we see is that quite a small number of HR departments, because less than a quarter, are effectively using Gen-AI tools currently. But more and more HR or people function leaders are thinking of deploying those. And this is the question, how are we going to do it in order to impact the organizations in the most positive manner? And I believe we do have a Menti here. So Monika, I think you might have to help me here. I will share the Menti. So what I wanted to ask you is how do you think the employees feel about AI at work? And then I’ll show you what we found out, surprisingly, within the Polish market. Do you think employees have a positive or negative attitude towards the AI tools at work? Please scan the QR code, yes, or you can join by using the code in the upper corner. I think we’ll give you a minute or so. Curious, yeah. Cautious, yeah. Most will like it because it makes their work easier. Positive, mixed feelings, depends on the function. For HR purposes perhaps not positive, for automation of tasks positive. Very good. So, I think that, let me go back to the presentation then. So, you are very, very much spot on to what we’ve discovered. So, in Poland, we wanted to deepen the global research and just focus on the organisations within our geography. And what we have discovered is that, let me just move this a little bit here, that just a little bit less than 50% have a positive attitude towards using Gen AI at work. But 40% were really negative. And they were negative because they really worried that, firstly, Gen AI will take away their jobs, or will change their jobs beyond what they actually are. Also, what was very interesting is that 4% of all the people that we’ve asked said that they are already tired of AI discussions because they hear about it everywhere and continuously and constantly, and it doesn’t really bring much value to them. 
But yet, 36% believe that the use of AI is inevitable, and they’re trying to acquire as much knowledge and skills in this area. So why is this a very important message? It is the fact that more than 50% of employees of the organizations on average are scared of using AI or have a negative affiliation when it comes to AI


Katarzyna Ellis: at work. And as we know, AI is here to stay. It’s not going to go away. It’s there already. So it’s not the question of how or if, it’s the question of when every organization will use it effectively. So what do we do with that over 50% of all the workforce that actually does not believe in or does not have positive feelings about using those tools? It’s a big problem. So what I talk about is education, education, education. And there is one additional point, that employees and employers have a very different understanding of the actual utilization of the Gen AI tools. So employers do overestimate how effectively and how much the employees are using the Gen AI tools that they invested in. So that’s another part, education, again, education, education. So what do we need to do? How can we actually, within the private and public sector, when we work with the organizations, empower people to bring us the return on investment that we’re all committing to while purchasing the AI solutions? Firstly, it’s building trust. So companies should really clearly define the allowed areas for experimenting with AI. And we need to ensure that employees see the value of increased productivity, but that it will not lead to layoffs of the staff. So we need to create a sense of psychological safety, that is very crucial. Rewarding innovation. So the organizations should really focus on bringing innovation to the forefront of every work that we do, and allow that innovation and also reward it. So significant rewards should be introduced for revealing and sharing the ways to utilize AI. So for example, I’ve worked with multiple clients where we do AI hackathons, or we do prompt competitions, who writes the best prompt within each department. And then there are some prizes at the end of it. So using gamification to ensure that that reward for innovation is there. Driving by example. 
So it’s very often that we, at the management level, say you should be using it, but we don’t stand and do it ourselves. So the management should be driving, leading by example. Creating spaces for knowledge sharing, communities of practice, or hackathons as well, just to ensure that people know where to go when they want to get better. Ensuring access to tools and training, and that’s very important. So driving the upskilling and reskilling agenda across the organizations. Mentoring programs and individual support for the individuals. So creating a network of internal mentors, or inviting external experts who can talk to the employees and answer their questions. Again, building that trust in the organization, building that psychological safety. And also, last but not least, the regular evaluation and feedback loops. We need to know what’s going right and what’s going wrong, and we need to address what’s going wrong very quickly and reward what’s going right. So these are the elements that we really should focus on when driving the GenAI agenda across the organizations. And just as some last food for thought for you, what we’ve discovered within the Polish study is that recently, within the last year, I think 18 months, 32% of the companies that we’ve spoken to established new teams dedicated to AI. However, only 16% of those companies have invested in hiring new employees with the necessary skills to implement the AI processes. So where do they get people to fill those vacancies that they created by creating the teams? They take them from the organization. So the companies are very likely redirecting existing employees to the new AI teams. And we do see that there is a significant shortage of AI specialists in the job market, and that’s why organizations might have difficulty in finding and attracting qualified experts. So they’re looking from within. So how to minimize that gap? Again, it’s education, education, education. 
So re-skilling, up-skilling. That’s very important


Katarzyna Ellis: and changing the cultural aspects of the organization. And in addition, very recently, I read one of the Polish employment studies. And there, for every 1,100 people in Poland that go into retirement, only 435 people enter the workforce. So if you think about it in that manner, what a huge gap of talent we actually have, and how are we going to address that? So that’s a question for you, and some food for thought. Hopefully, I have given you an overview, a very brief one. And now I’ll pass on the stage to the next speaker. Thank you.


Tigran Karapetyan: Thank you very much, Katarzyna. This was very, very interesting. Thank you for the report. And I think what we’ll do is we’ll open the floor for one minute, one, two minutes for quick reactions. Anyone in the audience, maybe from the panelists or online would like to react? Do you have any of the panelists? You would like to? Yeah, please go ahead.


Lyra Jakulevičienė: We can start with you, please. Thank you very much. Thank you for this opportunity to participate here. And there are quite a number of UN people here. But the reason why I’m here is also to find synergies in our common work on common issues. Now, on the report, a very interesting report, I only had the possibility to look at it briefly, but a few observations. Firstly, what has been mentioned about the lack of expertise within the business sector and the gap in knowledge concerning the technological aspects. Now, I’m coming from business and human rights topics, so I can only echo that not only on the technological aspects, but also on the human rights aspects. So it’s another burden, let’s say, another challenge for businesses to address these aspects. And as we are also going to speak here about business and human rights while using AI, I think this is very relevant. So just to echo what was said. Secondly, the report emphasizes the growing importance of regulatory compliance. And just also to illustrate that, at the moment we have identified around 1,000 various standards that exist everywhere in the world
that deal with AI technologies and their relationship with human rights. And of course, needless to say, it’s extremely important because these are the first two initiatives that have materialized into mandatory standards: the Council of Europe Framework Convention on AI, Human Rights, Democracy and Rule of Law, and of course the EU AI Act that was already figuring in the discussions today. So I think there will be more of this pressure that we see, that it’s going more towards mandatory regulations, so businesses will have to actually embed these, besides the business case and the drive for sustainability. What surprised me a bit, though it is probably down to the methodology that was used, is that in one place it is mentioned that AI in the workplace has to be used responsibly, but then I only found reference to foundations of sustainable growth of companies through system security and some other aspects. So there is no real mention of compliance with human rights and human rights due diligence, which is the topic for today, and I hope that we can dwell a little bit more on that. And the last point, which is important, and that goes back a little bit to capacities and the gap in knowledge, is the interdisciplinary approach that will have to be applied in this field, because clearly businesses will not become the tech people, unless these are tech companies, and also will not become the human rights specialists. So clearly there will be a need for interdisciplinary teams, and on this I would really like to echo what was also mentioned in the report. So very briefly on the report. Thank you, thank you very much.


Tigran Karapetyan: Please, Mr. Zipoli.


Domenico Zipoli: Thank you, thank you very much. And as this is the first time that I’m taking the floor, just if… if you can allow me to briefly introduce our work. I represent the Geneva Human Rights Platform of the Geneva Academy, where we lead a global initiative on digital human rights tracking tools and databases. These are essentially digital systems that help governments, UN and regional human rights bodies, civil society, national human rights institutions and equality bodies track how human rights recommendations and decisions, as well as the SDGs, are implemented. Increasingly, AI is being used to manage complexity, clustering data, detecting gaps and generating alerts, and against this backdrop the EY report is highly relevant. I was in fact surprised that only 90% of companies report readiness to scale AI and that most have a formal governance framework in place, and this is of course encouraging. In our field, in the public sector, we’ve learned that readiness is not just technical, it’s institutional in fact, and success I’d say depends on governance, transparency and ethical safeguards, so of course the emphasis on trustworthiness is key. I think the report also showed 60% of companies experienced efficiency gains, if I’m not mistaken, yes, so we’ve seen similar trends in the public sector, where AI-supported digital tracking tools can now analyze hundreds of recommendations in seconds, a task that beforehand took weeks of course. But again, with these gains comes responsibility, and without fairness and inclusivity in design, AI risks amplifying the very inequalities that we’re trying to fix. So in a sense, whether in civic tech or corporate systems, human oversight and bias audits must be built in from the start. I think if we want AI in business to be rights respecting, we don’t need to start from scratch. 
The public sector indeed has a blueprint, a little bit following up on what was just said, with tested frameworks that could be adapted for business use. But I’ll talk a little bit more about the use of AI in public sector digital tools in a bit. Thank you.


Tigran Karapetyan: Thank you very much, Mr. Zipoli. Please, Dr. Erbguth.


Jörn Erbguth: Thank you for the presentation. I have a little question. As we have the AI Act now in force, does the AI Act answer those questions that have been raised? For example, the AI Act requires education of people using AI. This is mandatory and already in force. Do we see that the EU AI Act already has consequences in Poland? And how does this play into this research? Does it go in the right direction? Do we see that it will support this process, or do we see things missing or going in the wrong direction?


Katarzyna Ellis: I think we’ll have the answers to all those deeply valid questions with, probably, the next iteration of the report, because this is only just the beginning of what we’re seeing. We are already seeing that the education and skills gap is a massive issue, not only in Poland but across the globe, in enabling the AI Act to be enforced properly. So most likely we will repeat this survey within the next six to 12 months, and then we’ll see the impact compared to what we see at the moment.


Tigran Karapetyan: Thank you. Thank you. Thank you very much. I think given the time constraints, we have to now move on. Thank you very much, Katarzyna, for this wonderful presentation, it is very interesting. And thank you to Mr. Zipoli and Ms. Jakulevičienė and Mr. Erbguth for their interventions as well. So as we continue, we move on to the existing frameworks and international standards, which have already been mentioned a few times by some of the speakers, and which play a crucial role in guiding the responsible use of AI in business operations. In this context, I’d like to draw particular attention to the Council of Europe’s 2016 Recommendation on Human Rights and Business, which reinforces the UN Guiding Principles on Business and Human Rights. Together, these instruments offer a solid foundation for ensuring that AI solutions used in business operations are developed and implemented in alignment with human rights standards. That’s on top of the AI-specific regulations that were mentioned. Many of you also heard yesterday from colleagues about the Framework Convention on Artificial Intelligence and HUDERIA, the guidance on the risk and impact assessment of AI systems on human rights, democracy and rule of law. Today’s workshop will hopefully build on those discussions and offer a complementary perspective, with a particular emphasis on how these frameworks intersect with business practices. With that, I’m pleased to pass the word back to our next speaker, Professor Lyra Jakulevičienė, to share her insights on the link between international standards such as the UN Guiding Principles and the implementation of AI tools.


Lyra Jakulevičienė: Thank you. Indeed, I should have done this at the beginning, but it’s never too late. With your permission, Chair, I just want to say a few words on what the UN Working Group on Business and Human Rights does, not just for formality, but because there are ways you could use the work of the Working Group and create synergies with the Council of Europe’s work. So, first of all, we are independent experts in the Working Group, working on a voluntary basis, and we are mandated by the Human Rights Council. The mandate covers several functions. The most important is that we are mandated to disseminate and support the implementation of the UN Guiding Principles on Business and Human Rights, which is at the moment the only global standard in this area. Secondly, we also prepare thematic reports for the UN General Assembly and the Human Rights Council. And just to announce, because it is relevant to the topic we are discussing today: in June we will be presenting a report on AI, business and human rights, and the use of artificial intelligence by states and by businesses in procurement and deployment. So, we are not really looking into the technological aspects or the development of AI, but rather into something that has been less explored: procurement and deployment. This report will be out soon and might contribute to the discussion we are having today. Then, we also have the communication procedure. We are not a judicial or quasi-judicial body, but we have the opportunity, and we are mandated, to examine complaints against companies, sometimes states, and we engage in dialogue.
So, we don’t issue, let’s say, mandatory decisions, but this is also quite a quick way if you compare it with the judicial bodies. We examine around 100 complaints every year, because our capacities are small, but it’s always possible to address us through the communication procedure, which is also confidential, so there is nothing to be feared. We also hold country visits, which allow us to discuss issues of business and human rights both with businesses and with states. So that is, briefly, what the Working Group does, and I really hope that we can engage with some of you. Now, going back to today’s topic, how businesses use AI: we have heard a lot about the report, and the main conclusion is, of course, that companies are increasingly taking up the use of AI in various ways, and it would be difficult to enumerate all the possible uses of AI. But when we talk to businesses, an interesting conclusion is sometimes drawn: even companies are saying, well, we don’t even know which AI tools we’re using within the company, what tools our employees are using. So this demonstrates that we first have to start from there, from knowledge: what exactly do we use? Do we use generative AI? Do we use narrow AI? Do we use other systems? And of course, with the open AI systems that are available, you may have uses of AI within the company that you, as management, may not even be aware of. That’s why it’s so important to have certain policies, to establish certain rules within companies for the use of AI. Now, with regard to the various ways AI is being used, here I have just put on the slide the example of use in the workplace. But of course, the use of AI in the workplace is not only about human resources, workforce management and the management of people.
It’s also about people in the workplace using, for example, big data: people who need to collect data, who need to process it and work with it. Then marketing and customer relations: a lot of AI-driven personalization, targeted advertising, pricing algorithms, a lot of possibilities of use. Regulatory compliance: AI is being used for human rights due diligence, not always positively, but it is used quite a lot, in particular in value chain assessments. Then in decision making: algorithmic or automated decision making is not really anything new. If you talk with businesses from healthcare, the finance sector, insurance or retail operations, they have used automated business decisions for quite some time. But what is more complicated now with AI is that it introduces new levels and a new scale of complexity. That’s why we have to really not only talk about it, but also educate ourselves, as has also been mentioned by our colleague from EY. Now, what is also essential, as we increasingly use those systems both in the private sector and in the public sector, is that these systems are used in a transparent, explainable and understandable way, and that stakeholders are also involved, both before deployment, in discussing and maybe even auditing some of the systems, in order to prevent discriminatory or other harmful uses. So it is extremely important that there are policies, there are practices, and there are also people behind them, in the companies but also in the state institutions that use those systems. Now, indeed, a lot could be said about the benefits and risks of using AI. I have just tried to exemplify this in the slide in front of you. And I want to emphasize, and this is what I try to show in the slide, that if we look at certain benefits, certain uses of AI have two sides; they cut both ways.
So, for instance, we see that the use of AI has played a really important role in monitoring, for example, air quality, fatigue levels in the workplace, and certain workplace risks, in particular in sectors such as mining, where it is extremely important to observe that people are not tired, because that could create health and safety risks for the workers themselves. So this is, let’s say, the benefit side. But on the other side, we see that AI is being used to monitor productivity, to calculate how people work in the workplace, how doctors or lawyers or even judges perform, how many cases they process and how quickly, and so on, to have all kinds of indicators. So, on the one hand, that is meant to boost productivity. But on the other hand, it creates a lot of stress; I think something about mental health was already said at the beginning of the panel. So it can also work negatively, because it creates a lot of pressure, a lot of stress, and this sometimes pushes workers to ignore certain safety standards. So it can go both ways, and I think it is important to emphasize that we always have to look at both benefits and risks. And in reality, what we see when we speak about the use of AI is that quite frequently the benefits are emphasized, in order to promote the use of AI in businesses but also in the public sector. Now, going back to the standards: indeed, the UN Guiding Principles rest on three pillars. The first pillar is what states have to do in order to make sure that companies do not engage in violations, to prevent violations, and, if they do happen, to provide opportunities for remedy. So there are obligations for states, and indeed people sometimes say that the UN Guiding Principles are soft law. But if we look at the first pillar, where the obligations for states are embodied, it relies on mandatory international obligations.
So it rests on all the UN treaties, which I don’t need to present here; it is not that much soft law, so to say. Then we have the pillar for businesses, where the Guiding Principles place a lot of emphasis on human rights due diligence, which should help businesses to identify, prevent, mitigate and address potential negative impacts on human rights. And then we have the pillar on effective remedy. Here, what is particular with AI, and why it is so important to be transparent and to disclose that you use AI, whether in the workplace or elsewhere, is that if something happens, you cannot have a remedy without disclosure, without transparency. Because as a person, as a worker or even as a partner, you sometimes cannot know that AI has been used. And if you don’t know, how can you apply for a remedy to a certain oversight institution, court, or any kind of commission, and so on? That’s why the remedy issue is very important. Now, I am aware of the time, but let me just summarize some of the steps that could be useful to bear in mind for companies, but which I think are equally important for public institutions, because we have seen a number of challenges for certain governments around Europe, as a result of which those governments also started to do human rights due diligence. So, several steps. Firstly, start with knowledge mapping: what exactly are you using in the company or in the state institution? Then work on the identification of impacts by doing a human rights impact assessment. Now, the impact assessment does not mean that you will have to address everything; in the anti-money-laundering field, if you identify certain risks, you must address them, because you cannot just leave them for the future. Here, with human rights due diligence, we emphasize that it is important to prioritize, because not everything can be done at the same time.
So, of course, the recommendation is to look first at the crucial risks for businesses, with regard to the severity of the impacts: something that has to be addressed immediately and something that can be addressed later. The AI Act, for example, also works through risk-based analysis, distinguishing high risk from lower risk, and depending on that there are different obligations. Then, once risks are identified and prioritized, it’s important to address those impacts, be it with preventive or actual measures. And here, what is extremely important is to talk to stakeholders, because talking to stakeholders may also help you understand the severity or the importance of certain risks that the use of AI involves. And when we talk about stakeholders, it’s not only trade unions and workers, as we speak about the workplace, but also broader stakeholders, civil society organizations. Disclosure and ensuring transparency are extremely important: if AI is being used, it has to be disclosed, be it to employees or to other stakeholders. Collaboration among businesses is extremely helpful, in particular when we talk about SMEs, small and medium-sized businesses, because they have even less capacity to address issues, and increasingly they use AI because it helps them with productivity and other aspects. So if there is collaboration between businesses, in particular in the value chain, then certain issues can be addressed much more easily. Sometimes state support is also needed, in particular for SMEs. But this collaboration can help to address those challenges more effectively and in a more optimal way.
And it is also extremely important to ensure effective and timely communication, because in this process, where a lot is unknown, both about using AI itself and about how AI is impacting different stakeholders, it is extremely important to communicate, because that can build trust, strengthen relationships and also dispel certain myths that we have seen as part of the process. And this is where I stop, even though I have many more things to say, but I am aware of the time and I don’t want to be rude to my colleagues.


Tigran Karapetyan: Thank you very, very much. This is very interesting, and I think this session is way too short to actually discuss all the things that need discussing. So let’s take this only as an inspiration for further reading and further exploration. On this, I would also like to mention the Council of Europe materials as sources of standards, and your reference was very apt: it is a soft standard, but not really, because it is based on hard standards, the positive obligations of the state being a very specific one. This is where the European Court of Human Rights case law comes in, and where monitoring reports by various Council of Europe monitoring bodies can become helpful for businesses doing their due diligence. Given the time, I am now moving on to the next panelist. I’d like to invite back Mr. Domenico Zipoli to discuss how digital human rights tracking tools can support not only public institutions but also businesses in conducting their human rights due diligence. So let’s speak about how to use AI for good.


Domenico Zipoli: Thank you, thank you very much, Chair. Yes, as I said, I come from Geneva, a city that, like Strasbourg after all, has long championed the idea of human rights by design. Today, as AI becomes embedded in business operations and government workflows alike, that principle is more urgent than ever. Our contribution to this discussion builds on our work on digital tracking tools, and as I mentioned earlier, these platforms have transformed how states monitor their human rights obligations. The point I’d like to make today is that, increasingly, their architecture and logic may be relevant to business actors as well, especially those navigating ESG risks, regulatory pressure and impact investment frameworks. So, essentially, over the last decade we’ve seen a rise of human rights software. We divide it into different categories. Digital human rights tracking tools, such as CIMORE+, which is present in the Latin American region; CIMORE stands for Sistema de Monitoreo de Recomendaciones. IMPACT, open-source software that is more present in the Pacific region. Or indeed the Office of the High Commissioner for Human Rights’ National Recommendations Tracking Database. We then have information management systems; you might know the Office of the High Commissioner for Human Rights’ Universal Human Rights Index, but also the Council of Europe’s very own European Court of Human Rights Knowledge Sharing Platform. All these systems help governments track progress on human rights recommendations, be they from treaty bodies, special procedures or regional courts. And what these platforms create is a holistic and organized way to understand what’s happening on the ground. What Biljana, thank you so much, is now sharing on the screen is a directory that we hold on our website. We don’t have a fancy QR code as you do, but we’re definitely taking that idea back home. If you want, you can check the directory yourself.
Just type “digital human rights tracking tools directory” into your search engine. Within this directory you can see a selection of what is now more than 20 of these digital tracking tools and databases, with a description of their functions and users, and links to the tools themselves. I think we can describe the value of these tools according to what we call an ABC model: alerts, benchmarking and coordination. And I always have businesses in mind when I go through this framework. When it comes to alerts, AI-powered platforms can act as early warning systems. They automatically scan social media, news and reports for red flags, such as spikes in hate speech or disinformation ahead of elections. And in the business world, this same logic could be applied to supply chain grievance monitoring, for instance, or reputational risk detection, allowing companies to intervene before a situation escalates. B stands for benchmarking. AI-powered databases allow clear benchmarking of human rights performance. Take the European Court of Human Rights Knowledge Sharing Platform; it will be interesting to discuss with your colleagues how you intend to leverage AI for its use. But what does the Knowledge Sharing Platform do? It essentially organizes and visualizes case law by thematic area and legal principles, helping national authorities understand how rights are being implemented. And for businesses operating in multiple jurisdictions, such a resource can be invaluable. Oftentimes, these resources are only used by us, you know, in the human rights space, in the public sector space. But for businesses, it would allow legal teams and compliance officers to benchmark corporate policy and conduct against emerging human rights standards, for instance in areas like data privacy, freedom of expression and anti-discrimination, so not based on, you know, assumptions or surveys, but on actual jurisprudence.
And this kind of structured legal insight can meaningfully support ESG alignment, risk mitigation and innovation. And finally, coordination. I won’t go into much detail about this, but it is something I’m particularly fond of. Digital tracking tools like CIMORE or OHCHR’s National Recommendations Tracking Database bring different ministries, courts and civil society into one shared workflow, where everyone sees the same data and tracks the progress that is shared. So let’s talk about IMPACT OSS; you can go and take a look at the software itself. As it’s an open-source system, it allows diverse actors, including the public, to follow implementation efforts. You can see SADATA, the tool that the Ministry of Foreign Affairs of Samoa, for instance, is using. Today, we can all see how Samoa is faring when it comes to UN human rights recommendations. So indeed, for businesses, this is what I believe could be the next frontier, something that one could co-create. Just quickly on how these public sector tools relate to businesses: indirectly, but also increasingly with more of a direct use. They establish shared benchmarks, they clarify state commitments, and they illuminate risks. And for companies that wish to align with international norms, the data and logic embedded in the tools you see in the directory offer a ready-made structure that I think is worth considering for due diligence, ESG monitoring and the like. And the last point I want to make here is that there can also be a compelling investment case, because many digital human rights tracking tools are open-source public goods. They benefit states and, as I said, international organizations. But by supporting these platforms through private funding, businesses can help build infrastructure that they themselves would benefit from. And this aligns with the principles of impact investment.
So this is a space that we, the Geneva Human Rights Platform, would like to create. We have expert roundtables every year where we invite representatives from different sectors around the table to discuss the emergence of these tools and how they can be supported. You were mentioning AI for Good: in July, the AI for Good Global Summit will take place in Geneva, and there will be a dedicated workshop on AI for human rights monitoring. So I am, of course, inviting you all, if you’re in Geneva, to attend; it’s on the 8th of July. And yes, indeed, there has to be more engagement between the private sector and the public sector, specifically when it relates to human rights monitoring, a space that hopefully will get more attention in the future.


Tigran Karapetyan: Thank you. Thank you very much. This is very interesting. And I just realized that I inadvertently plugged the next summit that is going to be held, but please feel encouraged to take part, absolutely. I think it’s also interesting, and something worth looking into perhaps in another session, that once the data on AI, sorry, on human rights performance is tracked, it can actually be assigned a certain value. So that’s another area that needs exploration and might become an investment case or a business case. Tracking of human rights data, I think, is extremely important, and that means such data can eventually even turn, de facto, into a commodity. So now we move to the next speaker, Angela Coriz from Connect Europe, who will share positive case examples from the telecom sector. Please, Angela.


Angela Coriz: Thank you. I will try to be quick. I work at Connect Europe, a trade association that represents the leading providers of telecoms in Europe. So today, what I wanted to do was to quickly show a snapshot of the business side, and specifically the telecom side, of uses of AI that are already happening. As we saw in the presentations of the reports already, it’s not a question of whether this is happening; it’s already happening. So it’s more a question of how it will happen. In the telecom sector too, our members are still exploring some of the potential benefits and solutions that AI can offer, while also looking at the drawbacks and risks. And in the meantime, we’re operating within the AI Act. Since our members are European, we are entering the implementation phase of the AI Act, and there’s a lot to be considered there, and a lot of questions still remaining for businesses that will need to be answered along the way. So, just to share a few examples of how AI is being used in telecoms now. One thing that’s very specific to the sector, where AI is helpful now, is within the network. It can be very useful, for example, to optimize network investment choices, from finding the best location to place a network antenna, to improving network capacity planning and optimizing the traffic flow through the networks. A lot of benefits can also be found with predictive maintenance: helping technicians to actually fix issues within the network, summarizing trouble tickets, and basically speeding up that process. There are also benefits within the radio access network, or RAN. This is an area where a lot of operators are already using AI solutions: 25% of operators have already deployed some functionality in this area, and over 80% have AI activity of some kind. This can be in commercial trials, tests or the R&D phase, but it has already started on quite a big scale.
Then, if we look at greener connectivity networks: as we’ve said before, AI offers benefits and risks, and it’s really a two-sided coin. While AI has raised challenges in terms of energy consumption, specifically with data centers, it can also potentially be beneficial for reducing emissions in telecom networks. One of our members, Orange, uses AI systems to monitor the energy consumption of their routers, and that has resulted in a 12% decrease in consumption. A couple of our members, Telenor and Ericsson, have also collaborated on a project that saves energy in the RAN, and it has resulted in a 4% reduction of energy usage for radio units. So it’s about balancing these potential rewards and the challenges that AI can bring to sustainability. There are also functions for customer engagement and customer service, and internal solutions that help employees find the right people they need within their own companies. That’s also a potential benefit. But as we said, plenty of challenges come along with these. One of them in Europe is regulatory uncertainty. That is, we will need some clarification along the way, and some of it is coming in the form of guidelines from the Commission. But in order to follow, especially, the push from the Commission to embrace AI and become an AI continent, and in order to develop these solutions, clear definitions on how to classify high-risk scenarios, for example, are important. And for telecoms, classifying AI-based safety components is really essential. There are also cybersecurity risks. AI can increase these threats, and there can be modified content; for example, we see impersonation fraud within the telecom sector, and not only the telecom sector, also on platforms. And on the other hand, you also have solutions that are being built with AI.
So again, there is this kind of counterbalance: for example, our member Telefonica has a solution called TU-Verify, which can detect content generated or modified by AI. So you have these counterbalances as well. The other thing to be considered is that AI will likely cause a spike in data traffic. If data traffic on telecom networks spikes, this raises a big investment need on the telecom side. And it’s also a bit difficult to say exactly how much more data will be used, because it depends on what kind of AI will be used. So this is an area that should also be kept in mind. And, as already mentioned in the report, we see skills development as very important, especially for employees, as several people have already said. So training both citizens and people working in companies is really crucial. We have some members who have taken steps in this direction with training programs, both internal and external, but this is certainly a crucial area, and, as we’ve seen from the statistics in the report, there is still a lot of work to be done there. So I hope I haven’t gone too fast; I was trying to make that really quick. But just to conclude: it’s an ongoing, developing area, and there’s a lot of potential to be found. But of course, we need to operate within a framework that keeps ethical principles in mind and is rights-respecting. And in order to do that, we need to continue having these discussions with lawmakers, with the public sector and with the private sector. So yes, there’s a lot still to be done, of course. Thank you.


Tigran Karapetyan: Thank you very much, Ms. Coriz, that was very interesting. Indeed, as you said, there are still lots of questions as things develop, and some of them aren’t going to be very easy to answer. Having said that, I think we are over our time already, but there is room for one or two questions, and we still need five minutes to summarize the session.


Audience: Thank you. My name is Pablo Arrenovo from Telefonica. I’ll ask you: the panel is on the implementation of AI by companies of different sectors or public administrations.


Audience: My question would be whether you feel that human rights and ethics are something taken into account when thinking about implementing AI systems in the private or public sector. Thank you. Would any of the panelists like to respond? Please go ahead.


Domenico Zipoli: Thank you very much. It’s always fascinating to be in a room with stakeholders coming from both companies and the public sector. The short answer is, as I think everyone here and in the morning has said, there’s still a lot of trust to be built around AI. We need to get AI governance right. Whenever we talk about AI design, I personally always point to four main challenges that we have to bear in mind. One is the key, fundamental one, which is bias. AI is only as representative as the data that we have; if the data is unrepresentative, it can reinforce discrimination rather than solve it. Then there’s transparency, of course. Stakeholders must understand how AI tools reach their conclusions. Our colleague from EY was, you know, referring to education, education, education. The International Telecommunication Union has this beautiful initiative that we’re part of called the AI Skills Coalition. So, indeed, coalitions such as this, which educate not just the public but also us stakeholders, are, I think, crucial. And explainability is no longer optional; we need to be part of this discussion. Privacy, of course, and oversight. And this is the last thing I’d like to say, but I keep on repeating it: human rights-based AI demands a human-in-the-loop scenario, or governance. Whether it’s a state actor deploying a rights-tracking system, like the ones we study, or a business automating compliance reviews, accountability cannot be outsourced to an algorithm. So, it’s a work in progress, but I think that with this discussion between companies, regulatory bodies, equality bodies and academia, this conundrum can be solved. I don’t think we have the solution yet, but we’re getting there.


Tigran Karapetyan: Thank you very much, Mr. Zipoli. If you could turn off your mic, please.


Lyra Jakulevičienė: Just two short points. Firstly, if we talk about the impact of the use of AI on human rights, both positive and adverse, I think there is another myth: that we are usually only talking about privacy rights or something like that. So, I just want to underline at the end that actually all human rights are involved, and there could be an impact on all human rights, including, if we talk about the environment, the new right to a healthy and sustainable environment. In this respect, I think there is not a big difference between the state and state authorities using AI and companies using it, because lots of rights could be involved. Now, the second point I would like to mention is just to highlight that here we got good practices from the telecommunications sector. Indeed, the International Telecommunication Union is developing lots of standards for technological companies, but not only, and it increasingly realizes and acknowledges the need for human rights due diligence. But the upcoming report of the UN Working Group on Business and Human Rights that I mentioned, due in June, actually looks into different sectors. We try to look at the state as regulator, the state as deployer, the state as procurer of AI, but we also try to look at different sectors of business, or different functions of businesses, to see if there are differences and specifics. And of course, there are some specifics. So, I just want to say that there will be many good practices, as we managed to identify emerging practices in this report, from which you can also benefit and be inspired as to how businesses, or indeed state authorities, could undertake this road of human rights due diligence with regard to the use of AI.


Tigran Karapetyan: Thank you very much. We’ll all be looking forward to that report. And I’m passing the word to Dr. Erbguth to give us a short summary of the session. Please.


Moderator: Thank you. Thank you for those presentations. They were quite diverse, on different topics, and I have tried to summarize them; please correct me if I’m missing very important things. We will have the platform online, so if you want to improve the wording, this is not the place to do it; otherwise, we might take half an hour to do that. So, I understood from the first presentation that within one year, the use of AI in business has risen from 35% to 75%. Employees have mixed feelings about it, with a slight majority being positive and a strong minority being negative. I think this is the basis we have been starting from. We see there is a lack of upskilling, governance and ethical policies in place. The use of AI needs to be made transparent, monitored and evaluated. Impact assessments are needed when there is a possible risk to human rights. It is too early to see an impact of the EU AI Act, and even more so an impact of the CoE Framework Convention on AI, and legal certainty, as we have learned, of course, has to settle in as well. This needs to be evaluated in the future. The UN Guiding Principles on Business and Human Rights, and if I am correct, they are from 2011, provide important guidance as well, so we do not have to start from scratch regarding regulation. There are human rights tracking tools that can track the adherence to and implementation of human rights by nations and corporations. I think that is it. My personal view is that if you start to use AI-based human rights tracking tools to track people and to check whether they are using hate speech, then you are doing exactly AI not for good. So I would have some concerns about that; but when I think about nations and corporations, I think this is a good point. So, allow me this last comment, which is of course not in the text. If you agree with these messages, we will forward them to be finalized. Thank you very much.
Tigran Karapetyan: As you said in the very beginning, the panelists will have a chance to actually introduce changes later on. Given the time constraints, I think we can do that, and then the final version will be possible. If you see that there is a strong disagreement with a point, please voice it now. If there is a small wording issue, we can handle that later. Okay. With this, then, feeling really pressed for time, I’m going to pass the word back to Alice. But before doing that, I’d like to thank the co-organizers and the panelists, as well as my own colleague, Biljana Nikolic here, who has actually worked hard to organize the session, and Dr. Erbguth for giving us a great summary. With this, and to all those who were present and listened to us here and online, thank you all very much for your interest, your questions, and your participation. Alice, the floor is back to you, please. So, thank you, and the next session, Workshop 8, How AI Impacts Society and Security: Opportunities and Vulnerabilities, will start at 4:30 p.m., and we look forward to seeing you back then. Thank you.



Katarzyna Ellis

Speech speed

125 words per minute

Speech length

1286 words

Speech time

612 seconds

AI use in businesses has risen from 35% to 75% in one year

Explanation

Katarzyna Ellis presented data showing a significant increase in AI adoption by businesses over the past year. This rapid growth demonstrates the accelerating pace of AI integration in business operations.


Evidence

Data from EY’s global report on the future of work


Major discussion point

Current state of AI adoption in business


Agreed with

– Moderator

Agreed on

Rapid increase in AI adoption by businesses


Employees have mixed feelings about AI adoption, with slight majority positive

Explanation

Ellis reported that employees have varied attitudes towards AI in the workplace. While a slight majority view AI positively, there is still significant apprehension among workers.


Evidence

Survey results from Polish companies implementing AI


Major discussion point

Current state of AI adoption in business


Agreed with

– Moderator

Agreed on

Mixed employee attitudes towards AI in the workplace


Lack of upskilling, governance, and ethical policies in many companies

Explanation

Ellis highlighted that many companies are not adequately prepared for AI implementation. There is a significant gap in employee skills, governance structures, and ethical guidelines for AI use.


Evidence

Findings from EY’s research on AI readiness in businesses


Major discussion point

Challenges in implementing AI responsibly


Agreed with

– Moderator

Agreed on

Lack of preparedness in companies for AI implementation



Angela Coriz

Speech speed

120 words per minute

Speech length

935 words

Speech time

464 seconds

Telecom sector already using AI for network optimization and customer service

Explanation

Coriz explained that telecommunications companies are actively implementing AI in various aspects of their operations. This includes optimizing network performance and enhancing customer service capabilities.


Evidence

Examples from telecom companies using AI for network antenna placement and customer engagement


Major discussion point

Current state of AI adoption in business


Regulatory uncertainty and cybersecurity risks for businesses

Explanation

Coriz highlighted the challenges faced by businesses in implementing AI, particularly in the telecom sector. These include unclear regulations and increased cybersecurity threats associated with AI adoption.


Evidence

Mention of the need for clear definitions on high-risk AI scenarios and classification of AI-based safety components


Major discussion point

Challenges in implementing AI responsibly



Lyra Jakulevičienė

Speech speed

149 words per minute

Speech length

2787 words

Speech time

1122 seconds

UN Guiding Principles on Business and Human Rights provide important guidance

Explanation

Jakulevičienė emphasized the relevance of existing frameworks like the UN Guiding Principles in addressing AI-related human rights issues. These principles offer a foundation for responsible AI implementation in business contexts.


Evidence

Reference to the UN Guiding Principles as a global standard for business and human rights


Major discussion point

Human rights considerations for AI


Need for human rights impact assessments when AI poses risks

Explanation

Jakulevičienė stressed the importance of conducting human rights impact assessments for AI systems. This process helps identify and mitigate potential negative effects of AI on human rights.


Major discussion point

Human rights considerations for AI


Agreed with

– Domenico Zipoli
– Audience

Agreed on

Need for human rights considerations in AI implementation


All human rights can potentially be impacted by AI, not just privacy

Explanation

Jakulevičienė highlighted that AI can affect a wide range of human rights beyond just privacy concerns. This underscores the need for comprehensive consideration of human rights in AI development and deployment.


Evidence

Mention of the right to a healthy and sustainable environment as an example of broader human rights implications


Major discussion point

Human rights considerations for AI


Need for interdisciplinary teams and collaboration between sectors

Explanation

Jakulevičienė emphasized the importance of diverse expertise in addressing AI challenges. Collaboration between different sectors and disciplines is crucial for developing responsible AI solutions.


Major discussion point

Tools and frameworks for responsible AI



Domenico Zipoli

Speech speed

139 words per minute

Speech length

1694 words

Speech time

726 seconds

Human rights digital tracking tools can support due diligence

Explanation

Zipoli discussed the potential of digital tools in monitoring human rights compliance. These tools can help businesses and governments track and assess their adherence to human rights standards in relation to AI implementation.


Evidence

Examples of digital tracking tools like CIMORE+, IMPACT, and the OHCHR National Recommendations Tracking Database


Major discussion point

Tools and frameworks for responsible AI


Agreed with

– Lyra Jakulevičienė
– Audience

Agreed on

Need for human rights considerations in AI implementation


Disagreed with

– Jörn Erbguth

Disagreed on

Use of AI-based human rights tracking tools


Explainability and human oversight of AI systems are necessary

Explanation

Zipoli stressed the importance of transparency and human control in AI systems. He argued that AI decisions must be explainable and that human oversight is crucial for ensuring accountability.


Evidence

Reference to the concept of ‘human-in-the-loop’ governance for AI systems


Major discussion point

Building trust in AI systems



Tigran Karapetyan

Speech speed

133 words per minute

Speech length

2175 words

Speech time

981 seconds

Council of Europe and EU developing AI-specific regulations

Explanation

Karapetyan mentioned ongoing regulatory efforts by European institutions. These include the development of AI-specific frameworks to guide responsible AI implementation.


Evidence

Reference to the Council of Europe’s Framework Convention on AI and the EU AI Act


Major discussion point

Tools and frameworks for responsible AI



Moderator

Speech speed

141 words per minute

Speech length

522 words

Speech time

220 seconds

AI use in business has risen dramatically in one year

Explanation

The moderator summarized that AI use in businesses increased from 35% to 75% in just one year. This rapid growth demonstrates the accelerating pace of AI integration in business operations.


Evidence

Data presented earlier in the session


Major discussion point

Current state of AI adoption in business


Agreed with

– Katarzyna Ellis

Agreed on

Rapid increase in AI adoption by businesses


Employees have mixed feelings about AI adoption

Explanation

The moderator noted that employees have varied attitudes towards AI in the workplace. While a slight majority view AI positively, there is still significant apprehension among workers.


Evidence

Survey results mentioned earlier in the session


Major discussion point

Current state of AI adoption in business


Agreed with

– Katarzyna Ellis

Agreed on

Mixed employee attitudes towards AI in the workplace


Lack of preparedness in companies for AI implementation

Explanation

The moderator highlighted that many companies are not adequately prepared for AI implementation. There are significant gaps in employee skills, governance structures, and ethical guidelines for AI use.


Evidence

Findings from research presented earlier in the session


Major discussion point

Challenges in implementing AI responsibly


Agreed with

– Katarzyna Ellis

Agreed on

Lack of preparedness in companies for AI implementation



Jörn Erbguth

Speech speed

129 words per minute

Speech length

97 words

Speech time

45 seconds

Concerns about using AI-based human rights tracking tools for monitoring individuals

Explanation

Erbguth expressed reservations about using AI-based human rights tracking tools to monitor individuals for hate speech. He suggested this could be an example of using AI not for good purposes.


Major discussion point

Ethical considerations in AI applications


Disagreed with

– Domenico Zipoli

Disagreed on

Use of AI-based human rights tracking tools


Support for using AI-based tracking tools for nations and corporations

Explanation

While expressing concerns about individual tracking, Erbguth indicated support for using AI-based human rights tracking tools to monitor nations and corporations. This suggests a differentiation in the ethical considerations based on the subject being monitored.


Major discussion point

Tools and frameworks for responsible AI



Audience

Speech speed

102 words per minute

Speech length

41 words

Speech time

24 seconds

Concern about human rights and ethics consideration in AI implementation

Explanation

The audience member questioned whether human rights and ethics are being taken into account when implementing AI systems in private or public sectors. This reflects concerns about responsible AI development and deployment.


Major discussion point

Ethical considerations in AI implementation


Agreed with

– Lyra Jakulevičienė
– Domenico Zipoli

Agreed on

Need for human rights considerations in AI implementation


Agreements

Agreement points

Rapid increase in AI adoption by businesses

Speakers

– Katarzyna Ellis
– Moderator

Arguments

AI use in businesses has risen from 35% to 75% in one year


AI use in business has risen dramatically in one year


Summary

There is a consensus that AI adoption in businesses has increased significantly over the past year, demonstrating the accelerating pace of AI integration in business operations.


Mixed employee attitudes towards AI in the workplace

Speakers

– Katarzyna Ellis
– Moderator

Arguments

Employees have mixed feelings about AI adoption, with slight majority positive


Employees have mixed feelings about AI adoption


Summary

Both speakers agree that employees have varied attitudes towards AI in the workplace, with a slight majority viewing it positively but significant apprehension remaining.


Lack of preparedness in companies for AI implementation

Speakers

– Katarzyna Ellis
– Moderator

Arguments

Lack of upskilling, governance, and ethical policies in many companies


Lack of preparedness in companies for AI implementation


Summary

There is agreement that many companies are not adequately prepared for AI implementation, with significant gaps in employee skills, governance structures, and ethical guidelines.


Need for human rights considerations in AI implementation

Speakers

– Lyra Jakulevičienė
– Domenico Zipoli
– Audience

Arguments

Need for human rights impact assessments when AI poses risks


Human rights digital tracking tools can support due diligence


Concern about human rights and ethics consideration in AI implementation


Summary

Multiple speakers emphasize the importance of considering human rights implications when implementing AI systems, suggesting the use of impact assessments and digital tracking tools.


Similar viewpoints

Both speakers emphasize the importance of existing frameworks and tools for ensuring responsible AI implementation with respect to human rights.

Speakers

– Lyra Jakulevičienė
– Domenico Zipoli

Arguments

UN Guiding Principles on Business and Human Rights provide important guidance


Human rights digital tracking tools can support due diligence


Both speakers highlight the challenges faced by businesses in implementing AI responsibly and the need for collaboration across different sectors and disciplines.

Speakers

– Angela Coriz
– Lyra Jakulevičienė

Arguments

Regulatory uncertainty and cybersecurity risks for businesses


Need for interdisciplinary teams and collaboration between sectors


Unexpected consensus

Broad impact of AI on human rights

Speakers

– Lyra Jakulevičienė
– Domenico Zipoli

Arguments

All human rights can potentially be impacted by AI, not just privacy


Explainability and human oversight of AI systems are necessary


Explanation

There is an unexpected consensus on the wide-ranging impact of AI on various human rights, beyond just privacy concerns. This broader perspective on AI’s human rights implications was not initially anticipated but emerged as a significant point of agreement.


Overall assessment

Summary

The main areas of agreement include the rapid adoption of AI in businesses, mixed employee attitudes towards AI, lack of preparedness in companies for AI implementation, and the need for comprehensive human rights considerations in AI development and deployment.


Consensus level

There is a moderate level of consensus among the speakers on the current state of AI adoption and the challenges faced in implementing AI responsibly. This consensus suggests a shared recognition of the importance of addressing human rights and ethical concerns in AI development, which could potentially lead to more collaborative efforts in creating responsible AI frameworks and policies.


Differences

Different viewpoints

Use of AI-based human rights tracking tools

Speakers

– Domenico Zipoli
– Jörn Erbguth

Arguments

Human rights digital tracking tools can support due diligence


Concerns about using AI-based human rights tracking tools for monitoring individuals


Summary

While Zipoli advocates for the use of AI-based human rights tracking tools to support due diligence, Erbguth expresses concerns about using such tools to monitor individuals, particularly for hate speech detection.


Overall assessment

Summary

The main areas of disagreement revolve around the specific approaches to implementing responsible AI and the use of AI-based tools for human rights monitoring.


Disagreement level

The level of disagreement among speakers is relatively low. Most speakers agree on the importance of responsible AI implementation and human rights considerations. The disagreements are mainly about specific methods or applications rather than fundamental principles. This suggests a general consensus on the need for ethical AI development, with ongoing discussions on best practices and potential risks.



Takeaways

Key takeaways

AI adoption in businesses has rapidly increased, rising from 35% to 75% in one year


Employees have mixed feelings about AI adoption, with a slight majority being positive


There is a lack of upskilling, governance, and ethical policies in many companies implementing AI


Transparency, monitoring, and evaluation of AI systems are crucial


UN Guiding Principles on Business and Human Rights provide important guidance for responsible AI implementation


All human rights can potentially be impacted by AI, not just privacy rights


Human rights digital tracking tools can support due diligence efforts


Interdisciplinary teams and collaboration between sectors are needed for responsible AI development


Education on AI for employees and stakeholders is crucial for building trust


Explainability and human oversight of AI systems are necessary


Resolutions and action items

Businesses need to conduct human rights impact assessments when AI poses potential risks


Companies should establish policies for AI use and disclosure within their organizations


More education and training on AI is needed for employees and stakeholders


Continued discussions between lawmakers, public sector, and private sector on AI governance


Unresolved issues

How to effectively balance the potential benefits and challenges of AI adoption


Specific ways to implement human oversight in AI systems


How to address the regulatory uncertainty surrounding AI, especially in classifying high-risk scenarios


How to manage the potential spike in data traffic and associated investment needs in telecom networks due to AI


Long-term impacts of the EU AI Act and Council of Europe Framework Convention on AI


Suggested compromises

Using AI to both optimize business operations and support human rights due diligence efforts


Balancing innovation in AI development with ethical considerations and rights-respecting practices


Leveraging open-source public sector tools and frameworks to support private sector AI implementation


Thought provoking comments

75% of employees are reporting that they’re using GenAI for work. But that also is connected with different parts of business operations.

Speaker

Katarzyna Ellis


Reason

This statistic highlights the rapid and widespread adoption of AI in workplaces, raising important questions about its impact.


Impact

Set the stage for discussing both benefits and challenges of AI adoption in business, leading to exploration of topics like employee perceptions, skills gaps, and ethical considerations.


More than 50% of employees of the organizations on average are scared of using AI or have a negative affiliation when it comes to AI at work. And as we know, the AI is here to stay.

Speaker

Katarzyna Ellis


Reason

Reveals a significant disconnect between AI adoption and employee comfort/acceptance, highlighting a major challenge.


Impact

Shifted discussion towards the importance of education, trust-building, and change management in AI implementation.


We have identified around 1000 various standards that exist everywhere in the world that deal with AI technologies and relationship with human rights.

Speaker

Lyra Jakulevičienė


Reason

Illustrates the complex and fragmented regulatory landscape for AI and human rights.


Impact

Prompted discussion on the need for harmonized standards and the challenges businesses face in navigating diverse regulations.


Digital tracking tools like the CIMORE or OHCHR’s National Recommendations Tracking Database, they bring different ministries, courts, civil society into one shared workflow. And everyone sees the same data and tracks the progress that is shared.

Speaker

Domenico Zipoli


Reason

Introduces an innovative use of AI for human rights monitoring and coordination.


Impact

Expanded the conversation to include positive applications of AI in governance and human rights, showing how AI can be used for good.


While AI has raised challenges in terms of energy consumption, specifically with data centers, it also can be potentially beneficial for reducing emissions in telecom networks.

Speaker

Angela Coriz


Reason

Highlights the dual nature of AI’s impact on sustainability, showing both challenges and opportunities.


Impact

Broadened the discussion to include environmental considerations, demonstrating the complex trade-offs involved in AI adoption.


Overall assessment

These key comments shaped the discussion by highlighting the rapid adoption of AI in business, the challenges this poses for employees and regulators, and the complex impacts on human rights and sustainability. The conversation evolved from initial statistics on AI usage to deeper explorations of ethical considerations, regulatory challenges, and innovative applications of AI for monitoring and improving human rights. The discussion emphasized the need for education, transparent governance, and balanced approaches to harness AI’s benefits while mitigating its risks.


Follow-up questions

What will be the impact of the EU AI Act on businesses’ AI implementation and education efforts?

Speaker

Jörn Erbguth


Explanation

This question aims to understand how the newly enacted EU AI Act will affect companies’ AI strategies and training programs, which is crucial for compliance and effective AI adoption.


How can the gap in AI expertise within businesses, both in technological and human rights aspects, be addressed?

Speaker

Lyra Jakulevičienė


Explanation

This area requires further research to ensure companies can effectively implement AI while respecting human rights, addressing a critical skills shortage identified in the discussion.


How can interdisciplinary teams be developed and integrated into businesses to address AI implementation challenges?

Speaker

Lyra Jakulevičienė


Explanation

This question explores how companies can build the necessary diverse skill sets to handle the complex challenges of AI implementation, including technical, ethical, and human rights considerations.


How can AI-powered digital tracking tools be adapted for business use in conducting human rights due diligence?

Speaker

Domenico Zipoli


Explanation

This area for research could help businesses leverage existing public sector tools for their own human rights compliance efforts, potentially improving efficiency and effectiveness.


What will be the impact of increased AI usage on data traffic and telecom network investment needs?

Speaker

Angela Coriz


Explanation

This question addresses the potential infrastructure challenges that may arise from widespread AI adoption, which is important for planning and resource allocation in the telecom sector.


How can clear definitions and classifications of high-risk AI scenarios be developed to provide regulatory certainty for businesses?

Speaker

Angela Coriz


Explanation

This area requires further research to help businesses navigate the regulatory landscape and ensure compliance with AI regulations like the EU AI Act.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.