Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies

13 May 2025 12:30h - 13:30h


Session at a glance

Summary

This workshop at the Council of Europe focused on the perception of AI tools in business operations and building trustworthy, rights-respecting technologies. The session brought together experts from various sectors to examine how companies are implementing AI and the associated human rights challenges.


Katarzyna Ellis from EY presented research showing that AI usage in business has dramatically increased from 35% to 75% within just one year, with 90% of organizations either using or planning to use generative AI. However, employee attitudes remain mixed, with Polish research revealing that while nearly 50% have positive attitudes toward AI at work, 40% express negative feelings due to fears about job displacement and workplace changes. The research highlighted significant gaps in AI skills and governance frameworks, emphasizing the critical need for education, upskilling, and building psychological safety within organizations.


Professor Lyra Jakulevičienė discussed the UN Guiding Principles on Business and Human Rights, explaining how these 2011 standards provide a foundation for responsible AI implementation through human rights due diligence. She emphasized that AI impacts extend beyond privacy concerns to potentially affect all human rights, requiring companies to conduct impact assessments, prioritize risks, and maintain transparency about AI usage. The UN Working Group is preparing a report on AI, business, and human rights to be released in June.


Domenico Zipoli presented how digital human rights tracking tools used in the public sector could support businesses in conducting rights due diligence, offering an “ABC model” of alerts, benchmarking, and coordination. Angela Coriz shared positive examples from the telecommunications sector, including network optimization and energy reduction, while acknowledging challenges like regulatory uncertainty and cybersecurity risks.


The discussion concluded that while AI adoption is accelerating rapidly, significant work remains in developing ethical frameworks, ensuring transparency, and building trust among stakeholders across both public and private sectors.


Key points

## Major Discussion Points:


– **Rapid AI Adoption in Business Operations**: The discussion highlighted dramatic growth in AI usage, with employee adoption rising from 35% to 75% in just one year, and 90% of organizations either using or planning to use AI. However, there’s a significant skills gap and mixed employee sentiment, with many workers fearful that AI will replace their jobs.


– **Human Rights Framework for AI Implementation**: Panelists emphasized the importance of applying existing international standards, particularly the UN Guiding Principles on Business and Human Rights and the Council of Europe’s recommendations, to AI deployment. The discussion stressed that all human rights (not just privacy) can be impacted by AI systems.


– **Need for Transparency, Education, and Due Diligence**: A central theme was the critical importance of transparency in AI use, comprehensive employee education programs, and conducting human rights impact assessments. Companies need clear policies about AI usage and must disclose when AI systems are being employed.


– **Regulatory Compliance and Implementation Challenges**: The conversation addressed the emerging regulatory landscape, including the EU AI Act and the Council of Europe Framework Convention on AI, while noting it’s too early to assess their practical impact. Businesses face uncertainty about compliance requirements and classification of high-risk AI scenarios.


– **Practical Applications and Digital Tracking Tools**: The discussion included concrete examples from the telecommunications sector showing both benefits (energy reduction, network optimization) and risks of AI implementation, plus exploration of how digital human rights tracking tools could support business due diligence processes.


## Overall Purpose:


The workshop aimed to explore how AI is perceived and implemented in business operations while ensuring compliance with human rights standards. The goal was to bridge the gap between rapid AI adoption and responsible, rights-respecting deployment through discussion of frameworks, practical examples, and collaborative approaches between public and private sectors.


## Overall Tone:


The discussion maintained a professional, collaborative tone throughout, characterized by cautious optimism. Participants acknowledged both the tremendous potential and significant risks of AI in business contexts. The tone was educational and solution-oriented, with speakers sharing research findings, practical experiences, and regulatory guidance. There was a consistent emphasis on the urgency of addressing human rights considerations in AI deployment, balanced with recognition that this is an evolving field requiring ongoing dialogue and adaptation.


Speakers

– **Katarzyna Ellis** – Partner and leader of the People Consulting Team at EY Poland


– **Jörn Erbguth** – Dr. Erbguth (specific role/title not clearly mentioned in transcript)


– **Angela Coriz** – Public policy officer at Connect Europe


– **Audience** – Audience member/participant (Pablo Arrenovo from Telefonica mentioned as one audience member)


– **Lyra Jakulevičienė** – Professor, member of the UN Working Group on Business and Human Rights


– **Tigran Karapetyan** – Head of Transversal Challenges and Multilateral Projects Division at the Council of Europe


– **Moderator** – Alice Marns, participant in YouthDig (youth segment of EuroDig), remote moderating the session


– **Domenico Zipoli** – Senior Research Fellow and Project Coordinator at the Geneva Human Rights Platform


**Additional speakers:**


– **Monika Stachon** – Cybersecurity Strategic Analysis Expert from NASK, National Research Institute of Poland (mentioned as co-organizer but participated online)


– **Biljana Nikolic** – Colleague of Tigran Karapetyan who helped organize the session (role/title not specified)


Full session report

# Comprehensive Workshop Report: AI Tools in Business Operations and Human Rights-Respecting Technologies


## Executive Summary


This workshop, held at the Council of Europe under their pilot project on human rights and environmentally responsible business practices, brought together experts from diverse sectors to examine the rapidly evolving landscape of artificial intelligence implementation in business operations and the critical importance of building trustworthy, rights-respecting technologies. The session was moderated by Alice Marns (remote) and Tigran Karapetyan (in-person), with co-organization by Monika Stachon, Cybersecurity Strategic Analysis Expert from NASK, National Research Institute of Poland.


The discussion revealed a striking acceleration in AI adoption across business sectors, with usage rates increasing from 35% to 75% globally over the last 12 months. However, this rapid implementation has exposed significant gaps in skills, governance frameworks, and employee acceptance, creating complex challenges that require interdisciplinary solutions combining technical expertise, human rights knowledge, and effective change management.


## Detailed Discussion Analysis


### Current State of AI Adoption in Business


Katarzyna Ellis from EY Poland presented compelling research data that illustrated the dramatic transformation occurring in business AI adoption. The statistics revealed that AI usage in business operations has increased from 35% to 75% globally over the last 12 months, with 90% of organizations either currently using generative AI technology or having plans to use it very shortly. Surprisingly, the government and public sector showed 60% usage rates, higher than many expected.


During an interactive polling session with the audience, Ellis explored employee attitudes toward AI implementation. The research uncovered significant challenges beneath these impressive adoption figures, with employee attitudes toward AI remaining deeply divided. Polish research indicated that whilst nearly 50% of workers maintain positive attitudes toward AI in workplace settings, approximately 40% express negative feelings, primarily stemming from fears about job displacement and concerns about workplace changes.


The skills gap presents another critical challenge. Ellis emphasized that there exists “a massive gap between what organizations have and what they need for effective AI utilization.” Whilst 32% of Polish companies have established dedicated AI teams, only 16% have hired new AI specialists, suggesting that organizations are attempting to build AI capabilities through internal restructuring rather than external recruitment.


### Human Rights Framework for AI Implementation


Professor Lyra Jakulevičienė provided crucial insights into the human rights dimensions of AI implementation, drawing upon her expertise as a member of the UN Working Group on Business and Human Rights. She highlighted that around 1,000 various standards exist worldwide dealing with the relationship between AI technologies and human rights, creating a complex regulatory landscape that businesses must navigate.


A fundamental point of her presentation was the recognition that all human rights can be impacted by AI use, not merely privacy rights as commonly assumed. This comprehensive view requires businesses to conduct thorough human rights due diligence to “identify, prevent, mitigate and address potential negative impacts.” The UN Guiding Principles on Business and Human Rights, established in 2011, along with the Council of Europe’s 2016 Recommendation on Human Rights and Business, provide foundational frameworks for this approach.


Jakulevičienė emphasized the critical importance of transparency and disclosure in AI implementation. She argued that “stakeholders cannot seek remedy without knowing AI is being used,” highlighting a fundamental principle that connects technical transparency requirements to access to justice and accountability mechanisms.


The professor also addressed the dual nature of AI applications, providing examples of how the same technology can simultaneously benefit and harm workers. For instance, whilst AI can effectively monitor air quality and fatigue levels in workplaces to protect worker safety, it can also be used for productivity monitoring that creates significant stress and anxiety among employees.


### Digital Human Rights Tracking Tools and Business Applications


Domenico Zipoli from the Geneva Human Rights Platform presented innovative approaches to supporting businesses in conducting human rights due diligence through digital tracking tools. He mentioned an “ABC model” for these platforms, though emphasized a crucial principle: “accountability cannot be outsourced to algorithms.” He stressed that human-in-the-loop governance remains essential, regardless of how sophisticated AI systems become.


These AI-powered platforms can serve as early warning systems for human rights violations, scanning social media, news sources, and other digital platforms for potential concerns. The tools provide structured legal insights that can support businesses in conducting due diligence and ESG monitoring more effectively.


The discussion revealed potential for cross-sector collaboration, with digital human rights tracking tools originally designed for public sector use potentially being adapted to support private sector compliance efforts.


### Telecommunications Sector: Practical Applications and Challenges


Angela Coriz from Connect Europe provided concrete examples of AI implementation within the telecommunications sector, demonstrating both the benefits and challenges of real-world AI deployment. Her presentation highlighted that 25% of telecom operators have already deployed AI functionality in radio access networks, with applications including network optimization, predictive maintenance, customer service, and internal navigation tools.


Coriz provided specific examples of successful implementations, including Orange’s achievement of 12% energy consumption reduction and the Telenor-Ericsson collaboration that achieved 4% energy reduction. She also mentioned Telefonica’s TU-Verify solution for detecting AI-generated content.


The telecommunications sector exemplifies AI’s dual nature regarding sustainability. Whilst AI can significantly reduce energy consumption through optimized network management, it simultaneously increases data traffic demands, creating additional infrastructure requirements.


Coriz identified several key challenges facing the telecommunications sector in AI implementation, including regulatory uncertainty, cybersecurity risks, and the need for clear definitions regarding high-risk AI scenarios under emerging regulations such as the EU AI Act.


### Regulatory Landscape and Compliance Challenges


The discussion addressed the emerging regulatory framework governing AI implementation, with particular focus on the EU AI Act and the Council of Europe Framework Convention on AI. Tigran Karapetyan noted that these represent significant developments in AI governance, marking a shift toward more structured regulatory approaches.


However, Dr. Erbguth cautioned that it remains too early to assess the practical impact of the EU AI Act on business practices. This regulatory uncertainty creates challenges for businesses attempting to ensure compliance whilst continuing to innovate and implement AI solutions.


The UN Guiding Principles on Business and Human Rights provide important foundational guidance that predates current AI-specific regulations. An upcoming UN Working Group report focusing specifically on procurement and deployment of AI, scheduled for release in June, may provide important additional guidance.


### Building Trust and Governance Frameworks


The workshop identified trust-building as a fundamental challenge in AI implementation. Ellis emphasized the importance of companies clearly defining allowed areas for AI experimentation whilst ensuring psychological safety for employees, recognizing that successful AI implementation requires not only technical capabilities but also organizational cultures that support innovation whilst addressing legitimate concerns.


Trust-building requires transparency, explainability, and meaningful stakeholder involvement. Jakulevičienė stressed that effective communication is crucial to build trust and demystify AI myths that may be contributing to negative employee attitudes.


The discussion identified key AI governance challenges including bias, transparency, privacy, and oversight. Addressing these challenges requires interdisciplinary approaches that combine technical expertise with human rights knowledge, legal compliance understanding, and effective change management capabilities.


### Skills Development and Education Challenges


Education emerged as a central theme throughout the workshop, with Ellis’s emphasis on “education, education, education” capturing the consensus that addressing workforce concerns and building AI capabilities requires comprehensive educational approaches.


The skills challenge operates on multiple levels. Businesses need technical expertise to implement AI systems effectively, human rights knowledge to ensure compliance with emerging standards, and change management capabilities to address workforce concerns. Jakulevičienė noted that businesses “cannot become tech or human rights specialists,” highlighting the need for collaborative approaches and external partnerships.


The education challenge extends beyond internal workforce development to include stakeholder engagement and public communication about AI capabilities, limitations, and governance measures.


## Key Points of Consensus and Ongoing Challenges


The workshop demonstrated consensus on several fundamental principles. All speakers agreed that education and skills development are essential for successful AI implementation, though they emphasized different aspects of this challenge. Transparency and disclosure emerged as universally accepted principles, with speakers agreeing that stakeholders cannot seek remedy or provide meaningful consent without understanding when and how AI systems are being used.


However, some tensions emerged around the appropriate use of AI for monitoring purposes. While Zipoli advocated for AI-powered platforms that can scan for early warning signs of human rights violations, Dr. Erbguth expressed concerns about using AI to track individuals for hate speech detection, viewing such applications as potentially problematic surveillance.


Several critical issues remained unresolved, including how to effectively address negative employee attitudes toward AI that affect a significant portion of the workforce, and how businesses can navigate the complex landscape of AI-related standards while ensuring meaningful human rights compliance.


## Future Directions and Next Steps


The workshop pointed toward several concrete next steps. EY plans to conduct a follow-up survey within 6-12 months to assess whether regulatory frameworks are effectively addressing identified challenges. The upcoming UN Working Group report on AI procurement and deployment, scheduled for June release, may provide important guidance for businesses.


Pablo Arrenovo from Telefonica raised a final question about ensuring human rights considerations in AI implementation, highlighting the ongoing need for practical guidance in this area.


## Conclusion


This workshop demonstrated that AI implementation in business operations has reached a critical juncture where rapid technological adoption must be balanced with comprehensive governance frameworks, skills development, and human rights protection. The discussion revealed both the tremendous potential and significant risks associated with AI deployment, emphasizing that success requires coordinated efforts across technical, legal, social, and governance dimensions.


The consensus around education, transparency, human oversight, and stakeholder collaboration provides a foundation for moving forward, whilst the identified challenges highlight the urgency of developing practical solutions. The workshop’s emphasis on interdisciplinary approaches and collaborative governance suggests that addressing AI implementation challenges will require continued cooperation between businesses, governments, civil society, and affected communities to ensure that AI development serves human flourishing whilst respecting fundamental rights.


Session transcript

Tigran Karapetyan: Good afternoon everyone, good afternoon to all the people present here in Palais de l’Europe of the Council of Europe and all those who are joining us online. Very nice to see you all and I hope that you had interesting sessions before this one and they will be followed by others afterwards. And I pass the word to the organiser right now to give us the technical details on how this is going to all go. Alice, please.


Moderator: Hello, this is mostly for the online participants. Hello everyone and welcome to workshop six on the perception of AI tools in business operations, building trustworthy and rights respecting technologies. My name is Alice Marns, I’m a participant in this year’s YouthDig, the youth segment of EuroDig, and will be remote moderating this session. We will briefly go over the session rules now. So the first one, please enter with your full name. To ask a question, raise your hand using the Zoom function. You will be unmuted when the floor is given to you. And when speaking, switch on the video, state your name and affiliation. And please do not share links to the Zoom meetings, even with your colleagues. Thank you.


Tigran Karapetyan: Thank you very much, Alice. With these few simple rules, we can now start this session. So warm welcome to everyone once again. And this workshop on perception of AI tools in business operations, where we will speak about building trustworthy and rights respecting technology. My name is Tigran Karapetyan, I’m head of the Transversal Challenges and Multilateral Projects Division at the Council of Europe. And before we start, I would like to say my words of thanks to co-organizers and the speakers that we’re going to be hearing later on. Monika Stachon, Cybersecurity Strategic Analysis Expert from NASK, National Research Institute of Poland, that unfortunately could not be with us in person but will be joining us online. Katarzyna Ellis, partner and leader of the People Consulting Team at EY Poland, that is also joining us virtually. And Angela Coriz, who is joining us personally here from Connect Europe, a public policy officer there. Today’s workshop is the result of strong collaboration and great coordination among all the partners involved, so I’d like to thank you once again for that. Furthermore, I’d like to also extend my gratitude to distinguished panelists that will be speaking today. Professor Lyra Jakulevičienė, member of the UN Working Group on Business and Human Rights, and Domenico Zipoli, Senior Research Fellow and Project Coordinator at the Geneva Human Rights Platform. This workshop has been organized in the framework of the Council of Europe’s pilot project on human rights and environmentally responsible business practices. It’s a project that is run in my division, and the Council of Europe’s initiative reinforces the protection of human rights and environmental sustainability within business operations in line with the existing international frameworks and standards. Through cooperation, the project supports the member states and businesses in aligning with the human rights standards, addresses gaps, and encourages cooperation among governments, businesses and civil society.
As a result of the collective efforts under the project and collaboration with Monika, Katarzyna and Angela, we are pleased to be here today. This interactive workshop is designed to explore how AI is perceived within companies, the challenges involved in its implementation, including human rights challenges, and the vital role of ensuring compliance with human rights. Hopefully, we can also have a word on whether AI can help companies in fact comply with human rights, along with increasing productivity. So, without further ado, I would like to now invite Katarzyna Ellis from EY to present a recently published report on how Polish companies implement AI. Katarzyna, please be mindful of the time. We’ve got one hour for the entire session, so you’ve got your chance now. Please, go ahead.


Katarzyna Ellis: Fabulous. Thank you, Jörn, and thank you for such a warm welcome, really. It’s such a pleasure to be here. If you don’t mind, I will share a presentation to give you some insights from our research. I’ll share now. Three, two, one. Do we all see? I’ll go into the presentation mode. Can everybody see the presentation? I can see now that you can. Yes. What I want to share with you today is at EY we have been doing a global report on how the future of work will look for the next 5, 10, 15 and 20 years. We call it EY Work Reimagined, and that survey has been done across the globe: 15,000 employees, over 1,500 different entities globally that we have spoken to. The results are quite exciting, so I will share with you today not only the Polish insights, which we have done this year, but also what we see globally, which might be a better representation of what you see in your respective countries. Firstly, we have been asking the question about work technology and generative AI for the past two years, and we see a massive, significant impact of the last 12 months on the ways that we work using genAI. At the moment, from what we see, the use of genAI for work is around 75%. That’s what the employees are reporting in our global survey. Just to give you the number from last year, it was 35%. When you see what growth potential genAI technology has across different businesses, it is quite incredible. 90% of the organizations that we have spoken to use genAI technology already or have plans to use it very shortly. And 60% use it in government and public sector. We thought that was going to be much lower, but it isn’t. So surprisingly, public sector is not far behind. When we look at the adoption of GenAI, employee and employer sentiments are still net positive. 
We’re seeing that employees see that GenAI tools will help the employee productivity, it will help the ways of working that we do in specific sectors, and also it enhances the ability to focus on high value work, while giving away the low value administrative repetitive tasks to be managed through either automation or with the help of GenAI. Furthermore, what we see is that in pairing it up, so GenAI technology and the investments that the businesses are making in GenAI goes hand in hand with the need of enabling the upskilling and reskilling the organizations. So what we see is that there is a massive gap when it comes to what organizations have and what organizations should have in order to be able to utilize GenAI effectively, but also ethically. So what we see is that employees and employers are more aligned on the need to learn the skills, but they are less aligned on whether they have the opportunities at work to learn those skills. And then what we also see is that what we at EY call talent health, so the way that the organization is able to deliver business value through high potential talent is directly connected with the amount of skills and the usage of GenAI across the organizations. So as I said at the moment, 75% of employees are reporting that they’re using GenAI for for work. So not only at work, but for work. But that also is connected with different parts of business operations. Because I look at people function, and I believe that people are the greatest asset of every organization, and without people, the organizations cannot perform and bring business value. I looked quite deeply into how the HR or people functions are utilizing those Gen-AI tools, and I think that will be very deeply connected with the conversations that we’ll be having here today, so with ethics and the human rights. So how do we use that within the people function, specifically within recruitment, talent acquisition, performance, and employee engagement? 
And what we see is that quite a small share of HR departments, less than a quarter, are effectively using Gen-AI tools currently. But more and more HR or people function leaders are thinking of deploying those. And this is the question: how are we going to do it in order to impact the organizations in the most positive manner? And I believe we do have a Menti here. So Monika, I think you might have to help me here. I will share the Menti. So what I wanted to ask you is how do you think the employees feel about AI at work? And then I’ll show you what we found out, surprisingly, within the Polish market. Do you think employees have a positive or negative attitude towards the AI tools at work? Please scan the QR code, yes, or you can join by using the code in the upper corner. I think we’ll give you a minute or so. Curious, yeah. Cautious, yeah. Most will like it because it makes their work easier. Positive, mixed feelings, depends on the function. For HR purposes perhaps not positive, for automation of tasks positive. Very good. So, I think that, let me go back to the presentation then. So, you are very, very much spot on to what we’ve discovered. So, in Poland, we wanted to deepen the global research and just focus on the organisations within our geography. And what we have discovered is that, let me just move this a little bit here, that just a little bit less than 50% have a positive attitude towards using Gen AI at work. But 40% were really negative. And they were negative because they really worried that, firstly, Gen AI will take away their jobs, or change their jobs beyond what they actually are. Also, what was very interesting is that 4% of all the people that we’ve asked said that they are already tired of AI discussions because they hear about it everywhere, continuously and constantly, and it doesn’t really bring much value to them. 
But yet, 36% believe that the use of AI is inevitable, and they’re trying to acquire as much knowledge and skills in this area. So why is this a very important message? It is the fact that more than 50% of employees of the organizations on average are scared of using AI or have a negative affiliation when it comes to AI


Tigran Karapetyan: at work. And as we know, the AI is here to stay. It’s not going to go away. It’s there already. So it’s not the question of how or if, it’s the question of when every organization will use it effectively. So what do we do with that over 50% of all the workforce that actually does not believe or that does not have positive feelings about using those tools? It’s a big problem. So what I talk about is education, education, education. And there is one additional point that employees and employers have a very different understanding of the actual utilization of the Gen AI tools. So employers do overestimate how effectively and how much the employees are using the Gen AI tools that they invested in. So that’s another part, education, again, education, education. So what do we need to do? How can we actually, within the private and public sector, when we work with the organizations, empower people to bring us the return on investment that we’ve all committing to while we’re now purchasing the AI solutions? Firstly, it’s building trust. So companies should really clearly define the allowed areas for experimenting with AI. And we need to ensure that employees have the value of increased productivity, but it will not lay to layoffs of the people, of the staff. So we need to create a sense of psychological safety, that is very crucial. Rewarding innovation. So the organizations should really focus on bringing the innovation at the forfeit of every work that we do, and allow that innovation and also reward for it. So significant rewards should be introduced for revealing and sharing the ways to utilize AI. So for example, I’ve worked with multiple clients when we do AI hackathons, or we do prompt competitions, who writes the best prompt within each department. And then there are some prizes at the end of it. So using gamification to ensure that that reward for innovation is there. Driving by example. 
So it’s very often that we, at the management level, we say you should be using it, but we don’t stand and do it ourselves. So the management should be driving, leading by example. Creating spaces for knowledge sharing, communities of practice, or hackathons as well, just to ensure that people know where to go to when they want to get better. Ensuring access to tools and training, and that’s very important. So driving the upskilling and reskilling agenda across the organizations. Mentoring programs and individual support for the individuals. So creating a network of internal mentors, or inviting external experts who can talk to the employees and they can answer them questions. Again, building that trust to the organization, building that psychological safety. And also, last but not least, the regular evaluation and feedback loops. We need to know what’s going right and what’s going wrong, and we need to address what’s going wrong very quickly and reward what’s going right. So these are the elements that we really should focus on. on when driving the GEN-AI agenda across the organizations. And just as a last maybe food for thought for you is that we’ve discovered within the Polish study is that recently, within the last year, I think 18 months, 32% of the companies that we’ve spoken to established new dedicated to AI teams. However, only 16% of those companies have invested in hiring new employees with the necessary skills to implement the AI processes. So where do they get people to fill those vacancies that they created by creating the teams? They take them from the organization. So the companies are very likely redirecting the existing employees to do new AI teams. And we do see that there is a significant shortage of AI specialists in the job market. But that’s why the organization might have difficulty in finding and attracting qualified experts. So they’re looking from within. So how to minimize that gap? Again, it’s education, education, education. 
So re-skilling, up-skilling. That’s very important


Katarzyna Ellis: and changing the cultural aspects of the organization. And in addition, very recently, I read one of the Polish employment studies. And there, for every 1,100 people in Poland who go into retirement, only 435 enter the workforce. So if you think about it in that manner, what a huge gap of talent we actually have, and how are we going to address that? So that’s a question for you, and some food for thought. Hopefully, I have given you an overview, a very brief one. And now I’ll pass the stage to the next speaker. Thank you.


Tigran Karapetyan: Thank you very much, Katarzyna. This was very, very interesting. Thank you for the report. And I think what we’ll do is we’ll open the floor for one or two minutes for quick reactions. Would anyone in the audience, maybe from the panelists or online, like to react? Any of the panelists? Yeah, please go ahead.


Lyra Jakulevičienė: We can start with you, please. Thank you very much. Thank you for this opportunity to participate here. And there are quite a number of UN people here, but the reason why I’m here is also to find synergies in our common work on common issues. Now, on the report, a very interesting report, I only had the possibility to look at it briefly, but a few observations. Firstly, what has been mentioned about the lack of expertise within the business sector and the gap in knowledge concerning the technological aspects. Now, I’m coming from the business and human rights topic, so I can only echo that, not only on the technological aspects but also on the human rights aspects. So it’s another burden, let’s say, another challenge for businesses to address. And as here we are also going to speak about business and human rights while using AI, I think this is very relevant. So just to echo what was said. Secondly, the report emphasizes the growing importance of regulatory compliance. And just to illustrate: at the moment, we have identified around 1,000 various standards around the world that deal with AI technologies and their relationship with human rights. And of course, needless to say, it is extremely important that the first two initiatives for mandatory standards have materialized: the Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law, and of course the EU AI Act that was already figuring in the discussions today. So I think there will be more of this pressure that we see, moving more toward mandatory regulations, so businesses will actually have to embed this, besides the business case and the push for sustainability.
What surprised me a bit, though it is probably the methodology that was used, is that while it is sometimes mentioned that AI in the workplace has to be used responsibly, I only found reference to foundations of sustainable growth of companies through system security and some other aspects. So no real mention of compliance with human rights and human rights due diligence, which is the topic for today, and I hope that we can dwell a little bit more on that. And the last point, which is important, and which goes back a little bit to capacities and the gap in knowledge, is the interdisciplinary approach that will have to be applied in this field, because clearly businesses will not become the tech people, unless they are tech companies, and they will also not become human rights specialists. So clearly there will be a need for interdisciplinary teams, and on this I would really like to echo what was also mentioned in the report. So very briefly on the report. Thank you, thank you very much.


Tigran Karapetyan: Please, Mr. Zipoli.


Domenico Zipoli: Thank you, thank you very much. And as this is the first time that I’m taking the floor, if you can allow me to briefly introduce our work: I represent the Geneva Human Rights Platform of the Geneva Academy, where we lead a global initiative on digital human rights tracking tools and databases. These are essentially digital systems that help governments, UN and regional human rights bodies, civil society, national human rights institutions and equality bodies track how human rights recommendations and decisions, as well as the SDGs, are implemented. Increasingly, AI is being used to manage complexity: clustering data, detecting gaps, generating alerts. And against this backdrop the EY report is highly relevant. I was in fact surprised that 90% of companies report readiness to scale AI and that most have a formal governance framework in place, and this is of course encouraging. In our field, in the public sector, we’ve learned that readiness is not just technical, it’s institutional in fact, and success, I’d say, depends on governance, transparency and ethical safeguards, so of course the emphasis on trustworthiness is key. I think the report also showed that 60% of companies experienced efficiency gains, if I’m not mistaken, yes, so we’ve seen similar trends in the public sector, where AI-supported digital tracking tools can now analyze hundreds of recommendations in seconds, a task that beforehand took weeks, of course. But again, with these gains comes responsibility, and without fairness and inclusivity in design, AI risks amplifying the very inequalities that we’re trying to fix. So in a sense, whether in civic tech or corporate systems, human oversight and bias audits must be built in from the start. I think if we want AI in business to be rights respecting, we don’t need to start from scratch.
The public sector indeed has a blueprint, a little bit following up on what was just said, with tested frameworks that could be adapted for business use. But I’ll talk a little bit more about the use of AI in public sector digital tools in a bit. Thank you.


Tigran Karapetyan: Thank you very much, Mr. Zipoli. Please, Dr. Erbguth.


Jörn Erbguth: Thank you for the presentation. I have a little question. Now that we have the AI Act in force, does the AI Act answer those questions that have been raised? For example, the AI Act requires education of people using AI. This is mandatory and already in force. Do we see that the EU AI Act already has consequences in Poland? And how does this play into this research? Does it go in the right direction? Do we see that it will support this process, or do we see things missing or going in the wrong direction?


Katarzyna Ellis: I think we’ll have the answers to all those questions, which are all very valid, with probably the next iteration of the report, because this is only just the beginning of what we’re seeing. We are already seeing that the education and skills gap is a massive issue, not only in Poland but across the globe, for enabling the AI Act to be enforced properly. So most likely we will repeat this survey within the next six to twelve months, and then we’ll see the impact compared with what we see at the moment.


Tigran Karapetyan: Thank you. Thank you very much. I think, given the time constraints, we have to move on now. Thank you very much, Katarzyna, for this wonderful presentation; it was very interesting. And thank you to Mr. Zipoli, Ms. Jakulevičienė and Mr. Erbguth for their interventions as well. So as we continue, we move on to the existing frameworks and international standards, which have already been mentioned a few times by some of the speakers, that play a crucial role in guiding the responsible use of AI in business operations. In this context, I’d like to draw particular attention to the Council of Europe’s 2016 Recommendation on Human Rights and Business, which reinforces the UN Guiding Principles on Business and Human Rights. Together, these instruments offer a solid foundation for ensuring that AI solutions used in business operations are developed and implemented in alignment with human rights standards. That’s on top of the AI-specific regulations that were mentioned. Many of you also heard yesterday from colleagues working on the Framework Convention on Artificial Intelligence and HUDERIA about the guidance on the risk and impact assessment of AI systems on human rights, democracy and the rule of law. Today’s workshop will hopefully build on those discussions and offer a complementary perspective, with a particular emphasis on how these frameworks intersect with business practices. With that, I’m pleased to pass the floor back to our next speaker, Professor Lyra Jakulevičienė, to share her insights on the link between international standards, such as the UN Guiding Principles, and the implementation of AI tools.


Lyra Jakulevičienė: Thank you. Indeed, I should have done it in the beginning, but it’s never too late. With your permission, Chair, I just want to say a few words on what the UN Working Group on Business and Human Rights does, not just for formality, but because there are ways you could use the work of the Working Group and create synergies with the Council of Europe’s work. So, first of all, we are independent experts in the Working Group, working on a voluntary basis, and we are mandated by the Human Rights Council. The mandate covers several functions. The most important is that we are mandated to disseminate and support the implementation of the UN Guiding Principles on Business and Human Rights, which is at the moment the only global standard in this area. Secondly, we also prepare thematic reports for the UN General Assembly and the Human Rights Council. And just to announce, because it is relevant to the topic we are discussing today, that in June we will be presenting a report on AI, business and human rights, and the use of artificial intelligence by states and businesses in procurement and deployment. So we are not really looking into the technological aspects or the development of AI, but rather into something that has been less explored: procurement and deployment. This report will be out soon, and it might also contribute to the discussion that we are having today. Then, we also have the communication procedure. We are not a quasi-judicial or judicial body, but we have the opportunity, and we are mandated, to examine complaints against companies, and sometimes states, and we engage in dialogue.
So, we don’t have, let’s say, mandatory decisions, but this is also quite a quick way compared with the judicial bodies. We examine around 100 complaints every year, because our capacities are small, but it is always possible to address us through the communication procedure, which is also confidential, so there are no issues to fear. And we also hold country visits, which allow us to discuss issues of business and human rights both with businesses and with states. So just briefly on what the Working Group does, and I really hope that we can engage with some of you. Now, going back to today’s topic, how businesses use AI: we have heard a lot about the report, and the main conclusion is, of course, that companies are increasingly taking up the use of AI in various ways, and it would be difficult to enumerate all the possible uses of AI. But just to mention, when we talk to businesses, an interesting conclusion that sometimes emerges is that even companies themselves say: well, we don’t even know which AI tools we’re using within the company, what tools our employees are using. So this demonstrates that, firstly, we have to start from there, from knowledge: what exactly do we use? Do we use generative AI? Do we use narrow AI? Do we use other systems? And of course, with openly available AI systems, you may have uses of AI within the company that you, as management, may not be aware of. That’s why it is so important to have certain policies and to establish certain rules within companies for the use of AI. Now, with regard to the various ways AI is being used, here I have just put a simple example of use in the workplace. But of course, the use of AI in the workplace is not only about human resources, workforce management and the management of people.
It’s also about people in the workplace using, for example, big data: needing to collect data, to process it and to work with it. Then marketing and customer relations: a lot of AI-driven personalization, targeted advertising, pricing algorithms, a lot of possibilities. Regulatory compliance: AI is being used for human rights due diligence, not always positively, but it is used quite a lot, in particular in value chain assessments. Then in decision making: for example, algorithmic or automated decision making is not really anything new. If you talk with businesses from the healthcare, finance, insurance or retail sectors, they have been using automated business decisions for quite some time. But what is more complicated now with AI is that it introduces new levels and scales of complexity. That’s why we have to not only talk about it, but also educate ourselves, as was also mentioned by our colleague from EY. Now, what is also essential, as we increasingly use those systems both in the private sector and in the public sector, is that these systems are used in a transparent, explainable and understandable way, and that stakeholders are also involved before deployment, in discussing and maybe even auditing, let’s say, some of the systems in order to prevent discriminatory or other uses. So it is extremely important that there are policies, there are practices, and there are also people behind them, in the companies but also in the state institutions that use those systems. Now, indeed, a lot could be said about the benefits and risks of using AI. I have just tried to exemplify this in the slide in front of you. And I just want to emphasize, and this is what I try to show in the slide, that certain uses of AI have two sides; they cut both ways.
So, for instance, we have seen that AI has played a really important role in monitoring, for example, air quality, fatigue levels in the workplace and certain workplace risks, in particular in sectors such as mining, where it is extremely important to observe that people are not tired, because that could create health and safety risks for the workers themselves. So this is, let’s say, the benefit side. But then on the other side, we see that AI is being used for monitoring productivity, for calculating how people work in the workplace, how doctors or lawyers, or even judges, process cases, how many and how quickly, and so on, to have all kinds of indicators. So that, on the one hand, is meant to boost productivity. But on the other hand, it creates a lot of stress. I think something was already said about mental health at the beginning of the panel. So it can also work negatively, because it creates a lot of pressure, a lot of stress, and this sometimes pushes workers to ignore certain safety standards. So it can go both ways. And I think it is important to emphasize that we always have to look at both benefits and risks. And in reality, what we see when we speak about the use of AI is that quite frequently the benefits are emphasized, in order to promote the use of AI in businesses but also in the public sector. Now, if we go back to the standards: indeed, the UN Guiding Principles work on three pillars. The first pillar is what states have to do in order to make sure that companies do not engage in violations, to prevent violations, and, if they do happen, to provide opportunities for remedy. So there are obligations for states, and indeed sometimes people say that the UN Guiding Principles are soft law. But if we look at the first pillar, where the obligations for states are embodied, it relies on mandatory international obligations.
So all the UN treaties that I don’t need to present here; it is not that much soft law, so to say. Then we have the pillar for businesses, and there, of course, the Guiding Principles place a lot of emphasis on human rights due diligence, which should help businesses to identify, prevent, mitigate and address potential negative impacts on human rights. And then we have the pillar on effective remedy. And here is what is particular with AI, and why it is so important to be transparent, to disclose that you use AI, whether in the workplace or elsewhere: if something happens, you cannot have remedy without disclosure, without transparency. Because as a person, as a worker or even as a partner, you sometimes cannot know that AI has been used. And if you don’t know, how can you apply for remedy to an oversight institution, a court or any kind of commission, and so on? So that’s why the remedy issue is very important. Now, I am aware of the time, but let me just summarize some of the steps that are useful for companies to bear in mind, and I think they are equally important for public institutions, because we have seen a number of challenges for certain governments around Europe as well, as a result of which those governments have also started to do human rights due diligence. So, several steps. Firstly, start with knowledge mapping: what exactly are you using in the company or in the state institution? Then work on identification of impacts by doing a human rights impact assessment. Now, the impact assessment does not mean that you will have to address everything; as in the anti-money laundering field, if you identify certain risks, you must address them, because you cannot just leave them for the future. Here, with human rights due diligence, we emphasize that it is important to prioritize, because not everything can be done at the same time.
So, of course, the recommendation is to look, let’s say, at the crucial risks for businesses with regard to the severity of the impacts: something that has to be addressed immediately and something that can be addressed later. The AI Act, for example, also works through risk-based analysis, distinguishing high risk from lower risk, and depending on that, there are different obligations. Then, of course, once risks are identified and prioritized, it is important to address those impacts, be it with preventive or actual measures. And here, what is extremely important is to talk to stakeholders, because talking to stakeholders may also help to understand the severity or the importance of certain risks that the use of AI involves. And when we talk about stakeholders, it’s not only the trade unions and workers, as we speak about the workplace, but also broader stakeholders, civil society organizations. Disclosure and ensuring transparency are extremely important. So if AI is being used, it has to be disclosed, be it to the employees or to other stakeholders. Collaboration among businesses is extremely helpful, in particular when we talk about SMEs, small and medium businesses, because they have even less capacity to address issues, and increasingly they use AI because it helps them with their productivity and other aspects. So if there is collaboration between businesses, in particular in the value chain, then certain issues can be addressed much more easily. Then sometimes state support is also needed, in particular for SMEs. But this collaboration can help to address those challenges more effectively and in a more optimal way.
And it is also extremely important to ensure effective and timely communication, because in this process, where a lot is unknown, both in using the AI itself and in knowing how AI is impacting different stakeholders, it is extremely important to communicate, because that can build trust, strengthen relationships and also dispel certain myths that we have seen as part of the process. And this is where I stop, even though I have many things to say, but I’m aware of the time and I don’t want to be rude to my colleagues.


Tigran Karapetyan: Thank you very, very much. This is very interesting, and I think that this session is way too short to actually discuss all the things that need discussing. So let’s take this only as an inspiration for further reading and further exploration. And on this, I would also like to mention the Council of Europe materials as sources of standards. Your reference to the fact that it is a soft standard, but not really, because it is based on hard standards, was very apt, the positive obligations of the state being a very specific one. And this is where the European Court of Human Rights case law comes in; this is also where monitoring reports by the various Council of Europe monitoring bodies can become helpful for businesses in doing their due diligence. So, given the time, I’m now moving on to the next panelist. I’d like to invite back Mr. Domenico Zipoli to discuss how digital human rights tracking tools can support not only public institutions but also businesses in conducting their rights due diligence. So let’s speak about how to use AI for good.


Domenico Zipoli: Thank you, thank you very much, Chair. And yes, as I said, I come from Geneva, a city that, like Strasbourg after all, has long championed the idea of human rights by design. And today, as AI becomes embedded in business operations and government workflows alike, that principle is more urgent than ever. Our contribution to this discussion builds on our work on digital tracking tools, and as I mentioned earlier, these platforms have transformed how states monitor their human rights obligations. The point that I’d like to make today is that, increasingly, their architecture and logic may be relevant to business actors as well, especially those navigating ESG risks, regulatory pressure and impact investment frameworks. So, essentially, over the last decade we have seen a rise of human rights software. We divide it into different categories. Digital human rights tracking tools, such as CIMORE+, present in the Latin American region; CIMORE stands for Sistema de Monitoreo de Recomendaciones. IMPACT, open-source software that is more present in the Pacific region. Or indeed the Office of the High Commissioner for Human Rights National Recommendations Tracking Database. We then have information management systems, such as the Office of the High Commissioner for Human Rights Universal Human Rights Index, but indeed also the Council of Europe’s very own European Court of Human Rights Knowledge Sharing Platform. So all these systems help governments track progress on human rights recommendations, be it from treaty bodies, special procedures or regional courts. And what these platforms create is a holistic and organized way to understand what’s happening on the ground. What Biljana, thank you so much, is now sharing on the screen is a directory that we hold on our website. We don’t have a fancy QR code as you do, but we’re definitely taking that idea back home. If you want, you can check the directory yourself.
Just type “digital human rights tracking tools directory” into your search engine. And within this directory you can see a selection of what are now more than 20 of these digital tracking tools and databases, with a description of their functions and users, and links to the tools themselves. I think we can describe the value of these tools according to what we call an ABC model: alerts, benchmarking and coordination. And I will keep businesses in mind as I go through this framework. When it comes to alerts, AI-powered platforms can act as early warning systems. They automatically scan social media, news and reports for red flags, such as spikes, for instance, in hate speech or disinformation ahead of elections. And in the business world, this same logic could be applied to supply chain grievance monitoring, for instance, or reputational risk detection, allowing companies to intervene before a situation escalates. B stands for benchmarking. AI-powered databases allow clear benchmarking of human rights performance. Take the European Court of Human Rights Knowledge Sharing Platform; it will be interesting to discuss with your colleagues how you intend to leverage AI for its use. But what is the Knowledge Sharing Platform doing? It essentially organizes and visualizes case law by thematic area and legal principles, helping national authorities understand how rights are being implemented. And for businesses operating in multiple jurisdictions, such a resource can be invaluable. Oftentimes, these resources are only used by us, you know, in the human rights space, in the public sector space. But for businesses, they would allow legal teams and compliance officers to benchmark corporate policy and conduct against emerging human rights standards, for instance in areas like data privacy, freedom of expression and anti-discrimination, so not based on, you know, assumptions or surveys, but on actual jurisprudence.
And this kind of structured legal insight can meaningfully support ESG alignment, risk mitigation and innovation. And finally, coordination. I won’t go into much detail about this, but it is something that I’m particularly fond of. Digital tracking tools like CIMORE or OHCHR’s National Recommendations Tracking Database bring different ministries, courts and civil society into one shared workflow, and everyone sees the same data and tracks the progress that is shared. So let’s talk about Impact OSS. You can go and take a look at the software itself. As it is an open-source system, it allows diverse actors, including the public, to follow implementation efforts. You can see SADATA, the tool that the Ministry of Foreign Affairs of Samoa, for instance, is using. Today, we can all see how Samoa is faring when it comes to UN human rights recommendations. So indeed, for businesses, this is what I believe could be the next frontier, something that one could co-create. Just quickly on how these public sector tools relate to businesses: indirectly, but also increasingly with more direct uses. They establish shared benchmarks, they clarify state commitments, and they illuminate risks. And for companies that wish to align with international norms, the data and logic embedded in the tools that you see in the directory offer a ready-made structure that I think is worth considering for due diligence, ESG monitoring and the like. And the last point I want to make here is that there can also be a compelling investment case, because many digital human rights tracking tools are open-source public goods. They benefit states, as I said, and international organizations. But by supporting these platforms through private funding, businesses can help build infrastructure that they themselves would benefit from. And this aligns with the principles of impact investment.
So this is a space that we, the Geneva Human Rights Platform, would like to create. We have expert roundtables every year where we invite representatives from different sectors to discuss the emergence of these tools and how they can be supported. You were mentioning AI for Good; in July, the AI for Good Global Summit will take place in Geneva, and there will be a dedicated workshop on AI for human rights monitoring. So I am, of course, inviting you all, if you’re in Geneva, to attend. It’s on the 8th of July. And yes, indeed, there has to be more engagement between the private sector and the public sector, specifically when it relates to human rights monitoring, a space that will hopefully receive more attention in the future.


Tigran Karapetyan: Thank you. Thank you very much. This is very interesting. And I just realized that I inadvertently plugged the next summit that is going to be held. But please feel encouraged to take part, absolutely. And I think it is also interesting, and something worth looking into, maybe in another session somewhere, that once the data on AI, sorry, on human rights performance is tracked, it can actually be assigned a certain value. So that’s another area that needs exploration and might become an investment case or a business case. Tracking of human rights data, I think, is extremely important, and that means such data can eventually even turn into a commodity, de facto. So now we move on to the next speaker, Angela Coriz, from Connect Europe, who will share positive case examples from the telecom sector. Please, Angela.


Angela Coriz: Thank you. I will try to be quick. So I work at Connect Europe. This is a trade association that represents the leading providers of telecoms in Europe. And so today, what I wanted to do was to quickly show a snapshot of the business side, and specifically the telecom side, of uses of AI that are already happening. And as we saw in the presentations of the reports already, it’s not a question of when this will happen; it’s already happening. So it’s more a question of how. Also in the telecom sector, our members are still exploring some of the potential benefits and solutions that AI can offer, but also looking at the drawbacks and risks. And in the meantime, we’re operating within the AI Act. Since our members are European, we are entering the implementation phase of the AI Act, and there’s a lot to be considered there, and a lot of questions still remaining from businesses that will need to be answered along the way. So just to share a few examples of how AI is being used in telecoms now. One thing that’s very specific to the sector is AI within the network. It can be very helpful, for example, to optimize network investment choices, from finding the best location to place a network antenna, to improving network capacity planning and optimizing the traffic flow through the networks. There are also a lot of benefits to be found with predictive maintenance: helping technicians to actually fix issues within the network, summarizing trouble tickets, and basically speeding up that process. There are also benefits within the radio access network, or RAN. This is an area where a lot of operators are already using AI solutions: 25% of operators have already deployed some functionality in this area, and over 80% have AI activity of some kind. This can be in the commercial trial, test or R&D phase, but it has already started on quite a big scale.
Then if we look at greener connectivity networks, as we’ve said before, AI offers benefits and risks; it is really a two-sided coin. So while AI has raised challenges in terms of energy consumption, specifically with data centers, it can also potentially be beneficial for reducing emissions in telecom networks. One of our members, Orange, uses AI systems to monitor the energy consumption of their routers, and that has resulted in a 12% decrease in consumption. A couple of our members, Telenor and Ericsson, have also collaborated on a project that saves energy in the RAN, and it has resulted in a 4% reduction of energy usage for radio units. So it’s about balancing these potential rewards and the challenges that AI can bring to sustainability. There are also functions for customer engagement and customer service, and internal solutions that help employees navigate their own companies, finding the right people they need to find internally. That’s also a potential benefit. But as we said, there are plenty of challenges that come along with these. One of them in Europe is regulatory uncertainty. That is, we will need some clarification along the way, and some of it is coming in the form of guidelines from the Commission. But in order to follow, especially, the push from the Commission to embrace AI and become an AI continent, and in order to develop these solutions, clear definitions of how to classify high-risk scenarios, for example, are important. And for telecoms, classifying AI-based safety components is really essential. There are also cybersecurity risks. AI can increase these threats, and there can be modified content; for example, we see impersonation fraud within the telecom sector, and not only the telecom sector, also on platforms. And on the other hand, you also have solutions that are being built with AI.
So again, there is this kind of counterbalance: for example, our member Telefonica has a solution called TU-Verify, which can detect content generated or modified by AI. So you have these kinds of counterbalances as well. Then the other thing to be considered is that AI will likely cause a spike in data traffic. So if data traffic spikes on telecom networks, this raises a big investment need on the telecom side. And it’s also a bit difficult to say exactly how much more data will be used, because it depends on what kind of AI will be used. So this is an area that should also be kept in mind. And, as already mentioned in the report, we see skills development as very important, especially with employees, as several people have already said. So training both citizens and people working in companies is really crucial. We have some members who have taken steps in this direction with training programs, both internal and external, but this is certainly an area that’s crucial. And we’ve seen the statistics within the report; there’s still a lot of work to be done there. So I hope I haven’t gone too fast. I was trying to make that really quick. But just to conclude, it’s an ongoing, developing area, and there’s a lot of potential to be found. But of course, we need to be operating within a framework that keeps ethical principles in mind and is rights-respecting. And in order to do that, we need to continue having these discussions with lawmakers, with the public sector, with the private sector. So yes, there’s a lot still to be done, of course. Thank you.


Tigran Karapetyan: Thank you very much, Ms. Coriz, for this. It was very interesting. Indeed, as you said, there are still lots of questions as things develop, and some of them aren’t going to be very easy to answer. Having said that, I think we are over our time already, but there is room for one or two questions, and we still need five minutes to summarize the session.


Audience: Thank you. My name is Pablo Arrenovo, from Telefonica. My question to the panel is on the implementation of AI by companies of different sectors or public administrations: do you feel that human rights and ethics is something taken into account when thinking about implementing AI systems by the private or public sector? Thank you.


Tigran Karapetyan: Would any of the panelists like to respond? Please go ahead.


Domenico Zipoli: Thank you very much. It’s always fascinating to be in a room with stakeholders coming from both companies and the public sector. The short answer is, as I think everyone here and in the morning has said, there’s still a lot of trust to be built around AI. We need to get AI governance right. Whenever we talk about AI design, I personally always talk about four main challenges that we always have to bear in mind. One is the key fundamental one, which is bias. AI is only as representative as the data that we have. If data is unrepresentative, it can reinforce discrimination rather than solving it. Then there’s transparency, of course. Stakeholders must understand how AI tools reach their conclusions. Our colleague from EY was, you know, referring to education, education, education. The International Telecommunication Union has this beautiful initiative that we’re part of called the AI Skills Coalition. So, indeed, coalitions such as this, which educate not just the public but also us stakeholders, are, I think, crucial. And explainability is no longer optional. We need to be part of this discussion. Privacy, of course, and oversight. And this is the last thing that I’d like to say, but I keep on repeating it: human rights-based AI demands a human-in-the-loop scenario, or governance. Whether it’s a state actor deploying a rights-tracking system, the ones that we study, or a business automating compliance reviews, accountability cannot be outsourced to an algorithm. So, it’s a work in progress, but I think that with this discussion between companies, regulatory bodies, equality bodies, and academia, this conundrum can be solved. I don’t think that we have the solution yet, but we’re getting there.


Tigran Karapetyan: Thank you very much, Mr. Zipoli. If you could turn off your mic, please.


Lyra Jakulevičienė: Just two short points. Firstly, if we talk about the impact of the use of AI on human rights, both the positive and the adverse impact, I think there is also another myth, that we are usually talking about privacy rights or something like that. So, I just want to underline at the end that actually all human rights are involved, and there could be an impact on all human rights, including, if we talk about the environment, this new right to a healthy and sustainable environment. Now, in this respect, I think there is not a big difference between when the state and state authorities are using AI and when companies are, because lots of rights could be involved. Now, the second point that I would like to mention is just to highlight that here we got the good practices from the telecommunications sector. Indeed, the International Telecommunication Union is developing lots of standards for technological companies, but not only, and it increasingly realizes and acknowledges the need for human rights due diligence. But the report that I mentioned, which is coming up from the UN Working Group on Business and Human Rights in June, is actually looking into different sectors. We try to look at the state as regulator, the state as deployer, the state as procurer of AI, but we also try to look at different sectors of businesses, or different functions of businesses, to see if there are some differences and some specifics. And of course, there are some specifics. So, I just want to say that there will be many good practices, as we managed to identify emerging practices in this report, from which you can also benefit and be inspired as to how businesses or state authorities could indeed undertake this road of human rights due diligence with regard to the use of AI.


Tigran Karapetyan: Thank you very much. We’ll all be looking forward to that report. And I’m passing the word to Dr. Erbguth to give us a short summary of the session. Please.


Moderator: Thank you. Thank you for those presentations. They were quite diverse, on different topics, and I have tried to summarize them. Please correct me if I’m missing very important things. We will have the platform online, so if you want to improve the wording, this is not the place to do it; otherwise, we might be taking half an hour to do that. So, I understood from the first presentation that within one year, the use of AI in business has risen from 35% to 75%. Employees have mixed feelings about it, with a slight majority being positive and a strong minority being negative. I think this is the basis we have been starting from. We see there is a lack of upskilling, governance, and ethical policies in place. The use of AI needs to be made transparent, monitored, and evaluated. Impact assessments are needed when there is a possible risk to human rights. It is too early to see an impact of the EU AI Act, and even more so the impact of the CoE Framework Convention on AI, and legal certainty, as we have learned, of course, has to settle in as well. This needs to be evaluated in the future. The UN Guiding Principles on Business and Human Rights, and if I am correct, they are from 2011, provide important guidance as well, so we do not have to start from scratch regarding regulation. There are human rights tracking tools that can track the adherence to and implementation of human rights by nations and corporations. I think that is it. My personal view is that if you start to use AI-based human rights tracking tools to track people and to track whether they are using hate speech, then you are doing exactly AI not for good. So I would have some concerns about that, but when I think about nations and corporations, I think this is a good point. So, if you allow me this last comment, but this is not, of course, in the text. So, if you agree with these messages, we will forward them to be finalized. Thank you very much. 
As you said in the very beginning, the panelists will have a chance to actually introduce changes later on.


Tigran Karapetyan: Given the time constraints, I think we can do that, and then the final version will be possible. If you see that there is a strong disagreement with a point, please voice it now. If there is a little wording thing, we can do that later. Okay. With this, then, feeling really pressed for time now, I’m going to pass the word back to Alice, but before doing that, I’d like to thank the co-organizers and the panelists, as well as my own colleague, Biljana Nikolic here, who has actually worked hard to organize the session, as well as Dr. Erbguth for giving us a great summary. With this, to all those who were present and listened to us here and online, thank you all very much for your interest, for your questions, for your participation. Alice, the floor is back to you, please. So, thank you, and the next session, Workshop 8, How AI Impacts Society and Security: Opportunities and Vulnerabilities, will start at 4:30 p.m., and we look forward to seeing you back then. Thank you.


K

Katarzyna Ellis

Speech speed

125 words per minute

Speech length

1286 words

Speech time

612 seconds

AI usage in business has dramatically increased from 35% to 75% in one year globally

Explanation

Ellis presented data from EY’s global survey showing a massive increase in generative AI usage for work purposes over just 12 months. This represents a more than doubling of adoption rates across organizations worldwide.


Evidence

EY Work Reimagined survey data from 15,000 employees across over 1,500 entities globally


Major discussion point

Current State of AI Adoption in Business


Topics

Economic | Future of work


90% of organizations already use or plan to use GenAI technology very shortly

Explanation

Ellis reported that the vast majority of organizations have either already implemented generative AI or have concrete plans to do so in the near future. This indicates widespread organizational commitment to AI adoption.


Evidence

EY global survey findings showing high organizational adoption rates


Major discussion point

Current State of AI Adoption in Business


Topics

Economic | Future of work


Employee attitudes toward AI are mixed, with slightly less than 50% positive and 40% negative in Poland

Explanation

Ellis presented Polish-specific data showing that employee sentiment is divided, with positive attitudes slightly outnumbering negative ones but a significant minority expressing negative views. The negative attitudes stem from fears about job displacement and job transformation.


Evidence

Polish market research showing 4% are tired of AI discussions, 36% believe AI use is inevitable and are acquiring skills


Major discussion point

Current State of AI Adoption in Business


Topics

Economic | Future of work | Sociocultural


32% of Polish companies established dedicated AI teams but only 16% hired new AI specialists

Explanation

Ellis highlighted a significant gap between organizational structure changes and actual talent acquisition. Companies are creating AI teams but are primarily filling them with existing employees rather than hiring new specialists with AI expertise.


Evidence

Polish study data showing the disparity between team creation and new hiring, plus mention of significant shortage of AI specialists in the job market


Major discussion point

Current State of AI Adoption in Business


Topics

Economic | Future of work


There is a massive gap between what organizations have and what they need for effective AI utilization

Explanation

Ellis emphasized that while organizations are investing in AI technology, there’s a significant skills and capability gap that prevents effective and ethical utilization. This gap affects both technical implementation and ethical considerations.


Evidence

Survey data showing employees and employers are aligned on need to learn skills but less aligned on opportunities to learn those skills at work


Major discussion point

Skills Gap and Education Challenges


Topics

Economic | Future of work | Development


Education is crucial – “education, education, education” to address workforce concerns

Explanation

Ellis repeatedly emphasized that education is the primary solution to address the negative attitudes and skills gaps around AI implementation. She stressed this as the key to helping the over 50% of workforce that has negative feelings about AI tools.


Evidence

Reference to the need to address over 50% of workforce with negative AI attitudes, and the gap between employer expectations and actual employee AI usage


Major discussion point

Skills Gap and Education Challenges


Topics

Development | Sociocultural | Future of work


Agreed with

– Lyra Jakulevičienė
– Angela Coriz

Agreed on

Education is the primary solution to AI implementation challenges


Companies should clearly define allowed areas for AI experimentation and ensure psychological safety

Explanation

Ellis argued that building trust requires organizations to establish clear boundaries for AI use and create an environment where employees feel safe to experiment without fear of job loss. This psychological safety is crucial for successful AI adoption.


Evidence

Recommendations for building trust including ensuring employees understand AI will increase productivity without leading to layoffs


Major discussion point

Building Trust and Governance


Topics

Economic | Future of work | Human rights principles


Agreed with

– Lyra Jakulevičienė
– Domenico Zipoli

Agreed on

Transparency and disclosure are essential for trustworthy AI


Regular evaluation and feedback loops are necessary to address what’s going wrong quickly

Explanation

Ellis emphasized the importance of continuous monitoring and rapid response mechanisms in AI implementation. Organizations need to quickly identify and address problems while rewarding successful practices.


Evidence

Listed as one of the key elements for driving GenAI agenda across organizations, alongside other trust-building measures


Major discussion point

Building Trust and Governance


Topics

Economic | Future of work


L

Lyra Jakulevičienė

Speech speed

149 words per minute

Speech length

2787 words

Speech time

1122 seconds

Around 1000 various standards exist worldwide dealing with AI technologies and human rights

Explanation

Jakulevičienė highlighted the complexity of the regulatory landscape by noting the vast number of existing standards. She emphasized that this creates challenges for businesses trying to navigate compliance requirements.


Evidence

Mentioned the Council of Europe Framework Convention on AI and EU AI Act as the first mandatory standards among these 1000 standards


Major discussion point

Human Rights and AI Implementation


Topics

Legal and regulatory | Human rights principles


Interdisciplinary approach is needed as businesses cannot become tech or human rights specialists

Explanation

Jakulevičienė argued that businesses need to work with interdisciplinary teams because they cannot be expected to develop expertise in both technology and human rights. This requires collaboration with specialists from different fields.


Evidence

Emphasized the need for interdisciplinary teams and echoed the capacity and knowledge gap issues mentioned in the EY report


Major discussion point

Skills Gap and Education Challenges


Topics

Development | Interdisciplinary approaches | Human rights principles


Agreed with

– Katarzyna Ellis
– Angela Coriz

Agreed on

Education is the primary solution to AI implementation challenges


All human rights can be impacted by AI use, not just privacy rights

Explanation

Jakulevičienė corrected a common misconception that AI primarily affects privacy rights. She emphasized that AI implementation can have positive or adverse impacts across the entire spectrum of human rights, including newer rights like environmental rights.


Evidence

Mentioned the new right to healthy and sustainable environment as an example of broader human rights impacts


Major discussion point

Human Rights and AI Implementation


Topics

Human rights principles | Development


Human rights due diligence should help businesses identify, prevent, mitigate and address potential negative impacts

Explanation

Jakulevičienė outlined the systematic approach businesses should take regarding human rights in AI implementation. This process involves proactive identification of risks and comprehensive measures to address them throughout the AI lifecycle.


Evidence

Referenced the UN Guiding Principles framework and emphasized the importance of prioritizing risks based on severity of impacts


Major discussion point

Human Rights and AI Implementation


Topics

Human rights principles | Legal and regulatory


Transparency and disclosure are essential – stakeholders cannot seek remedy without knowing AI is being used

Explanation

Jakulevičienė emphasized that transparency is fundamental to human rights protection in AI systems. Without disclosure of AI use, affected parties cannot seek appropriate remedies when problems occur, making transparency a prerequisite for accountability.


Evidence

Explained that remedy is impossible without disclosure and transparency, as people cannot apply for remedy to oversight institutions if they don’t know AI was used


Major discussion point

Human Rights and AI Implementation


Topics

Human rights principles | Legal and regulatory


Agreed with

– Domenico Zipoli
– Katarzyna Ellis

Agreed on

Transparency and disclosure are essential for trustworthy AI


Trust building requires transparency, explainability, and stakeholder involvement

Explanation

Jakulevičienė argued that building trust in AI systems requires multiple components working together. Organizations must ensure their AI systems are transparent, can be explained to users, and involve relevant stakeholders in the development and deployment process.


Evidence

Emphasized the importance of talking to stakeholders including trade unions, workers, and civil society organizations to understand severity of risks


Major discussion point

Building Trust and Governance


Topics

Human rights principles | Sociocultural


Agreed with

– Domenico Zipoli

Agreed on

Human oversight remains crucial in AI systems


Collaboration among businesses is helpful, particularly for SMEs with limited capacities

Explanation

Jakulevičienė highlighted that small and medium enterprises face particular challenges in AI implementation due to resource constraints. Collaboration between businesses, especially within value chains, can help leverage resources and address challenges more effectively.


Evidence

Noted that SMEs have less capacity to address AI issues but increasingly use AI for productivity, making collaboration in value chains important


Major discussion point

Stakeholder Engagement and Communication


Topics

Economic | Development | Human rights principles


Agreed with

– Angela Coriz
– Domenico Zipoli

Agreed on

AI implementation requires stakeholder engagement and collaboration


Effective communication is crucial to build trust and demystify AI myths

Explanation

Jakulevičienė emphasized that in a context where much about AI impacts remains unknown, clear and timely communication becomes essential. This communication helps build trust, strengthen relationships, and address misconceptions about AI capabilities and risks.


Evidence

Mentioned this as part of the process where a lot is unknown about AI impacts on stakeholders


Major discussion point

Stakeholder Engagement and Communication


Topics

Sociocultural | Human rights principles


D

Domenico Zipoli

Speech speed

139 words per minute

Speech length

1694 words

Speech time

726 seconds

AI-powered platforms can serve as early warning systems for human rights violations

Explanation

Zipoli explained how AI technology can be used proactively to detect potential human rights issues before they escalate. These systems can scan various data sources to identify red flags and enable early intervention.


Evidence

Examples include scanning social media and news for hate speech spikes or disinformation ahead of elections, with applications for supply chain grievance monitoring


Major discussion point

Digital Human Rights Tracking Tools


Topics

Human rights principles | Cybersecurity


Disagreed with

– Jörn Erbguth

Disagreed on

Use of AI for human rights monitoring and tracking


Digital tracking tools follow an ABC model: Alerts, Benchmarking, and Coordination

Explanation

Zipoli presented a framework for understanding the value of digital human rights tracking tools. The ABC model provides a systematic way to categorize and understand how these tools can support both public institutions and businesses in human rights monitoring.


Evidence

Detailed explanation of each component with examples like the European Court of Human Rights Knowledge Sharing Platform for benchmarking


Major discussion point

Digital Human Rights Tracking Tools


Topics

Human rights principles | Legal and regulatory


These tools can help businesses conduct due diligence and ESG monitoring using structured legal insights

Explanation

Zipoli argued that digital human rights tracking tools, originally designed for public sector use, can provide valuable structured legal insights for businesses. This can support corporate policy development and compliance with emerging human rights standards.


Evidence

Mentioned applications in data privacy, freedom of expression, anti-discrimination based on actual jurisprudence rather than assumptions or surveys


Major discussion point

Digital Human Rights Tracking Tools


Topics

Human rights principles | Economic | Legal and regulatory


Agreed with

– Lyra Jakulevičienė
– Angela Coriz

Agreed on

AI implementation requires stakeholder engagement and collaboration


Human-in-the-loop governance is essential – accountability cannot be outsourced to algorithms

Explanation

Zipoli emphasized that regardless of AI sophistication, human oversight remains crucial for accountability. Whether in public or private sector applications, humans must maintain ultimate responsibility for AI-driven decisions and their consequences.


Evidence

Applied this principle to both state actors deploying rights-tracking systems and businesses automating compliance reviews


Major discussion point

Building Trust and Governance


Topics

Human rights principles | Legal and regulatory


Agreed with

– Lyra Jakulevičienė

Agreed on

Human oversight remains crucial in AI systems


Four main AI governance challenges: bias, transparency, privacy, and oversight

Explanation

Zipoli identified the key challenges that must be addressed to achieve trustworthy AI governance. He emphasized that AI systems can reinforce discrimination if not properly designed and that explainability is no longer optional.


Evidence

Explained that AI is only as representative as the data used, and mentioned the ITU AI Skills Coalition as an example of educational initiatives


Major discussion point

Building Trust and Governance


Topics

Human rights principles | Legal and regulatory | Privacy and data protection


Agreed with

– Lyra Jakulevičienė
– Katarzyna Ellis

Agreed on

Transparency and disclosure are essential for trustworthy AI


A

Angela Coriz

Speech speed

120 words per minute

Speech length

935 words

Speech time

464 seconds

AI is used for network optimization, predictive maintenance, and energy consumption reduction

Explanation

Coriz provided concrete examples of how the telecommunications sector is already implementing AI solutions. These applications focus on improving operational efficiency and reducing environmental impact through better resource management.


Evidence

Specific examples include Orange’s 12% decrease in router energy consumption and Telenor-Ericsson collaboration achieving 4% reduction in radio unit energy usage


Major discussion point

Telecom Sector AI Applications


Topics

Infrastructure | Economic | Development


25% of telecom operators have deployed AI functionality in radio access networks

Explanation

Coriz presented statistics showing significant AI adoption in a specific technical area of telecommunications. The data indicates that AI implementation in telecom is not just planned but actively deployed, with even more operators in various stages of AI development.


Evidence

Over 80% of operators have some sort of AI activity in commercial trial, test, or R&D phases


Major discussion point

Telecom Sector AI Applications


Topics

Infrastructure | Economic


AI offers both benefits and risks for sustainability – can reduce energy consumption but also increase data traffic

Explanation

Coriz highlighted the dual nature of AI’s environmental impact in telecommunications. While AI can optimize energy usage and reduce consumption, it also drives increased data traffic that requires additional infrastructure investment and energy.


Evidence

Contrasted energy reduction examples with the challenge that AI will likely cause a spike in data traffic requiring significant telecom investment


Major discussion point

Telecom Sector AI Applications


Topics

Infrastructure | Development | Economic


Regulatory uncertainty remains a challenge, with need for clear definitions on high-risk scenarios

Explanation

Coriz identified regulatory uncertainty as a key challenge for telecom companies implementing AI. Clear guidance on classification of high-risk scenarios and AI-based safety components is essential for compliance and development.


Evidence

Mentioned the need for Commission guidelines and emphasized the importance of classifying AI-based safety components for telecoms


Major discussion point

Telecom Sector AI Applications


Topics

Legal and regulatory | Infrastructure


Skills development and training programs are crucial for both internal and external stakeholders

Explanation

Coriz emphasized that the telecommunications sector recognizes the importance of education and training in AI implementation. This includes both training employees within companies and broader citizen education programs.


Evidence

Mentioned that some telecom members have implemented training programs both internally and externally


Major discussion point

Skills Gap and Education Challenges


Topics

Development | Infrastructure


Agreed with

– Katarzyna Ellis
– Lyra Jakulevičienė

Agreed on

Education is the primary solution to AI implementation challenges


Discussions between lawmakers, public sector, and private sector must continue

Explanation

Coriz emphasized the need for ongoing dialogue between different stakeholders to ensure AI development proceeds within appropriate ethical and rights-respecting frameworks. This collaborative approach is essential for addressing the complex challenges of AI implementation.


Major discussion point

Stakeholder Engagement and Communication


Topics

Legal and regulatory | Economic


Agreed with

– Lyra Jakulevičienė
– Domenico Zipoli

Agreed on

AI implementation requires stakeholder engagement and collaboration


J

Jörn Erbguth

Speech speed

129 words per minute

Speech length

97 words

Speech time

45 seconds

It’s too early to see the impact of the EU AI Act on business practices

Explanation

Erbguth questioned whether the AI Act addresses the challenges identified in the EY report and noted that it’s still early in the implementation phase. He suggested that the next iteration of research will provide better insights into the Act’s effectiveness.


Evidence

Referenced that the AI Act’s requirement to educate people using AI is mandatory and already in force


Major discussion point

Regulatory Framework and Compliance


Topics

Legal and regulatory | Economic


UN Guiding Principles from 2011 provide important guidance so businesses don’t start from scratch

Explanation

Erbguth emphasized that existing frameworks like the UN Guiding Principles on Business and Human Rights offer established guidance for AI implementation. This means businesses have foundational standards to build upon rather than developing entirely new approaches.


Major discussion point

Regulatory Framework and Compliance


Topics

Legal and regulatory | Human rights principles


T

Tigran Karapetyan

Speech speed

133 words per minute

Speech length

2175 words

Speech time

981 seconds

The Council of Europe’s Framework Convention on AI and EU AI Act are the first mandatory standards

Explanation

Karapetyan highlighted the significance of these two regulatory instruments as the first binding legal frameworks specifically addressing AI and human rights. This represents a shift from voluntary guidelines to mandatory compliance requirements.


Major discussion point

Regulatory Framework and Compliance


Topics

Legal and regulatory | Human rights principles


M

Moderator

Speech speed

141 words per minute

Speech length

522 words

Speech time

220 seconds

Impact assessments are needed when there are possible risks to human rights

Explanation

The moderator summarized that when AI implementation poses potential risks to human rights, organizations must conduct thorough impact assessments. This represents a key takeaway from the session discussions about responsible AI deployment.


Major discussion point

Regulatory Framework and Compliance


Topics

Human rights principles | Legal and regulatory


A

Audience

Speech speed

102 words per minute

Speech length

41 words

Speech time

24 seconds

Questions from audience show interest in whether human rights and ethics are considered in AI implementation

Explanation

An audience member from Telefonica asked whether human rights and ethics are being taken into account when implementing AI systems in both private and public sectors. This question reflects broader stakeholder concerns about responsible AI deployment.


Evidence

Question posed by Pablo Arrenovo from Telefonica about implementation across different sectors


Major discussion point

Stakeholder Engagement and Communication


Topics

Human rights principles | Economic


Agreements

Agreement points

Education is the primary solution to AI implementation challenges

Speakers

– Katarzyna Ellis
– Lyra Jakulevičienė
– Angela Coriz

Arguments

Education is crucial – “education, education, education” to address workforce concerns


Interdisciplinary approach is needed as businesses cannot become tech or human rights specialists


Skills development and training programs are crucial for both internal and external stakeholders


Summary

All speakers emphasized that education, training, and skills development are fundamental to successful AI implementation, whether addressing workforce concerns, building interdisciplinary expertise, or ensuring proper deployment across sectors.


Topics

Development | Future of work | Human rights principles


Transparency and disclosure are essential for trustworthy AI

Speakers

– Lyra Jakulevičienė
– Domenico Zipoli
– Katarzyna Ellis

Arguments

Transparency and disclosure are essential – stakeholders cannot seek remedy without knowing AI is being used


Four main AI governance challenges: bias, transparency, privacy, and oversight


Companies should clearly define allowed areas for AI experimentation and ensure psychological safety


Summary

Speakers agreed that transparency is fundamental to AI governance, enabling accountability, remedy mechanisms, and building trust with stakeholders.


Topics

Human rights principles | Legal and regulatory


Human oversight remains crucial in AI systems

Speakers

– Domenico Zipoli
– Lyra Jakulevičienė

Arguments

Human-in-the-loop governance is essential – accountability cannot be outsourced to algorithms


Trust building requires transparency, explainability, and stakeholder involvement


Summary

Both speakers emphasized that despite AI sophistication, human oversight and involvement remain essential for accountability and trustworthy AI governance.


Topics

Human rights principles | Legal and regulatory


AI implementation requires stakeholder engagement and collaboration

Speakers

– Lyra Jakulevičienė
– Angela Coriz
– Domenico Zipoli

Arguments

Collaboration among businesses is helpful, particularly for SMEs with limited capacities


Discussions between lawmakers, public sector, and private sector must continue


These tools can help businesses conduct due diligence and ESG monitoring using structured legal insights


Summary

Speakers agreed that successful AI implementation requires ongoing collaboration between different stakeholders, including businesses, government, and civil society.


Topics

Economic | Development | Human rights principles


Similar viewpoints

Both speakers from the business sector identified significant skills gaps and emphasized the critical need for training programs to bridge these gaps in AI implementation.

Speakers

– Katarzyna Ellis
– Angela Coriz

Arguments

There is a massive gap between what organizations have and what they need for effective AI utilization


Skills development and training programs are crucial for both internal and external stakeholders


Topics

Development | Economic | Future of work


Both human rights experts emphasized the comprehensive nature of AI’s impact on human rights, extending beyond privacy to encompass all human rights categories.

Speakers

– Lyra Jakulevičienė
– Domenico Zipoli

Arguments

All human rights can be impacted by AI use, not just privacy rights


Four main AI governance challenges: bias, transparency, privacy, and oversight


Topics

Human rights principles | Legal and regulatory


Both speakers recognized the challenge of mixed attitudes toward AI and the importance of communication in building trust and addressing concerns.

Speakers

– Katarzyna Ellis
– Lyra Jakulevičienė

Arguments

Employee attitudes toward AI are mixed, with slightly less than 50% positive and 40% negative in Poland


Effective communication is crucial to build trust and demystify AI myths


Topics

Sociocultural | Future of work | Human rights principles


Unexpected consensus

Public sector AI tools can benefit private sector compliance

Speakers

– Domenico Zipoli
– Lyra Jakulevičienė

Arguments

These tools can help businesses conduct due diligence and ESG monitoring using structured legal insights


Human rights due diligence should help businesses identify, prevent, mitigate and address potential negative impacts


Explanation

There was unexpected consensus that digital human rights tracking tools originally designed for public sector use could be adapted and beneficial for private sector compliance and due diligence processes.


Topics

Human rights principles | Economic | Legal and regulatory


AI has dual nature – both benefits and risks across all applications

Speakers

– Angela Coriz
– Lyra Jakulevičienė

Arguments

AI offers both benefits and risks for sustainability – can reduce energy consumption but also increase data traffic


All human rights can be impacted by AI use, not just privacy rights


Explanation

Speakers from different sectors (telecom and human rights) unexpectedly converged on recognizing AI’s dual nature, acknowledging both positive and negative impacts across their respective domains.


Topics

Infrastructure | Human rights principles | Development


Overall assessment

Summary

Strong consensus emerged around the need for education, transparency, human oversight, and stakeholder collaboration in AI implementation. Speakers agreed that existing frameworks provide guidance but implementation challenges remain significant.


Consensus level

High level of consensus on fundamental principles and challenges, with speakers from different sectors (business, human rights, telecom, public sector) aligning on key issues. This suggests a mature understanding of AI governance challenges and points toward actionable solutions that can be implemented across sectors.


Differences

Different viewpoints

Use of AI for human rights monitoring and tracking

Speakers

– Domenico Zipoli
– Jörn Erbguth

Arguments

AI-powered platforms can serve as early warning systems for human rights violations


Personal view that using AI-based human rights tracking tools to track people for hate speech is ‘doing exactly AI not for good’


Summary

Zipoli advocates for AI-powered platforms that can scan social media and news for red flags like hate speech, while Erbguth expresses concerns that using AI to track people for hate speech detection is problematic and not ‘AI for good’.


Topics

Human rights principles | Cybersecurity | Legal and regulatory


Unexpected differences

Scope of AI applications for human rights monitoring

Speakers

– Domenico Zipoli
– Jörn Erbguth

Arguments

Digital tracking tools follow an ABC model: Alerts, Benchmarking, and Coordination


Personal view that using AI-based human rights tracking tools to track people for hate speech is ‘doing exactly AI not for good’


Explanation

This disagreement was unexpected because both speakers are focused on human rights protection, yet they have fundamentally different views on whether AI should be used to monitor individual behavior for human rights violations. Zipoli sees it as a beneficial early warning system, while Erbguth sees it as potentially harmful surveillance.


Topics

Human rights principles | Privacy and data protection | Cybersecurity


Overall assessment

Summary

The discussion showed remarkable consensus on most major issues, with speakers generally agreeing on the importance of education, transparency, human oversight, and stakeholder engagement in AI implementation. The main area of disagreement centered on the appropriate use of AI for monitoring and tracking purposes in human rights contexts.


Disagreement level

Low to moderate disagreement level. The speakers largely aligned on fundamental principles and challenges, with most differences being matters of emphasis or approach rather than fundamental opposition. The most significant disagreement was philosophical about the appropriate boundaries of AI surveillance for human rights protection, which has important implications for balancing security and privacy in AI governance frameworks.




Takeaways

Key takeaways

AI adoption in business has dramatically increased from 35% to 75% globally in just one year, with 90% of organizations already using or planning to use GenAI technology


There is a significant skills gap and education challenge – organizations lack the necessary expertise for effective and ethical AI implementation


Employee attitudes toward AI are mixed, with concerns about job displacement and stress from productivity monitoring creating resistance


Human rights due diligence is essential for AI implementation, requiring transparency, stakeholder engagement, and impact assessments


All human rights can be impacted by AI use, not just privacy rights, and businesses must identify, prevent, mitigate and address potential negative impacts


Digital human rights tracking tools can support both public institutions and businesses in conducting rights-respecting AI implementation


The telecom sector demonstrates both benefits (network optimization, energy reduction) and challenges (increased data traffic, cybersecurity risks) of AI adoption


Regulatory frameworks like the EU AI Act and Council of Europe Framework Convention provide guidance, but implementation clarity is still needed


Trust building requires transparency, explainability, human-in-the-loop governance, and clear policies defining allowed AI experimentation areas


Interdisciplinary collaboration between technical experts, human rights specialists, and business leaders is crucial for responsible AI deployment


Resolutions and action items

EY to repeat their AI adoption survey within 6-12 months to assess the impact of the EU AI Act


UN Working Group on Business and Human Rights to present their report on AI, business and human rights in June 2025


Geneva Human Rights Platform to host AI for human rights monitoring workshop at AI for Good Global Summit on July 8th


Panelists to review and finalize the session summary with any necessary corrections


Continue discussions between lawmakers, public sector, and private sector stakeholders on AI governance


Unresolved issues

How to effectively address the more than 50% of employees who do not have positive feelings about AI in the workplace


Specific implementation guidance for the EU AI Act and how it will practically impact business operations


How to balance AI benefits with human rights protection without stifling innovation


Clear definitions and classifications for high-risk AI scenarios in business contexts


How small and medium enterprises can access the resources needed for proper AI governance and human rights due diligence


The challenge of tracking and managing AI tools that employees use without management knowledge


How to quantify and manage the expected spike in data traffic from increased AI usage


Long-term sustainability implications of AI deployment across different sectors


Suggested compromises

Risk-based approach to AI governance, prioritizing the most severe human rights impacts for immediate attention while addressing less critical issues over time


Collaboration between businesses, especially in value chains, to share the burden of AI governance and human rights compliance


Using existing frameworks like UN Guiding Principles rather than creating entirely new regulatory structures


Leveraging open-source digital human rights tracking tools that can benefit both public and private sectors


Balancing AI automation with human oversight through ‘human-in-the-loop’ governance models


Gradual implementation approach with regular evaluation and feedback loops to adjust AI deployment strategies


Public-private partnerships to support SMEs in developing AI governance capabilities they cannot afford independently


Thought provoking comments

So what do we do with that over 50% of all the workforce that actually does not believe or that does not have positive feelings about using those tools? It’s a big problem. So what I talk about is education, education, education.

Speaker

Katarzyna Ellis


Reason

This comment reframed the AI adoption challenge from a technical implementation issue to a fundamental human acceptance problem. It highlighted a critical gap between corporate AI investment and employee readiness, suggesting that the success of AI initiatives depends more on human factors than technological capabilities.


Impact

This observation became a recurring theme throughout the session, with multiple speakers referencing the education gap and the need for building trust. It shifted the discussion from focusing solely on AI capabilities to addressing the human element of AI adoption.


Now, I’m coming from business and human rights topics, so I can only echo that not only on the technological aspects, but also on the human rights aspects. So it’s another burden, let’s say, another challenge for businesses to address these aspects.

Speaker

Lyra Jakulevičienė


Reason

This comment introduced a crucial complexity by highlighting that businesses face a dual knowledge gap – both technological and human rights expertise. It challenged the assumption that AI implementation is primarily a technical challenge and introduced the concept of interdisciplinary requirements.


Impact

This observation elevated the discussion beyond technical implementation to encompass ethical and legal compliance dimensions. It led to deeper exploration of how businesses can navigate multiple complex domains simultaneously and influenced subsequent discussions about the need for interdisciplinary teams.


So, for instance, if we see that use of AI has played a really important role in monitoring, for example, the air quality, fatigue levels in the workplace… But then on the other side, we see that AI is being used for monitoring productivity… So that, on the other hand, is meant to boost the productivity. But on the other hand, it creates a lot of stress.

Speaker

Lyra Jakulevičienė


Reason

This comment provided a nuanced perspective on AI’s dual nature, showing how the same technology can simultaneously benefit and harm workers. It moved beyond simple pro/con discussions to illustrate the complex, context-dependent nature of AI impacts.


Impact

This observation introduced sophisticated thinking about AI’s paradoxical effects, influencing the discussion to consider unintended consequences and the importance of implementation context. It helped frame AI not as inherently good or bad, but as a tool whose impact depends on application and governance.


And without fairness and inclusivity in design, AI risks amplifying the very inequalities that we’re trying to fix… human rights-based AI demands a human-in-the-loop scenario, or governance. Whether it’s a state actor deploying a rights-tracking system… or a business automating compliance reviews, accountability cannot be outsourced to an algorithm.

Speaker

Domenico Zipoli


Reason

This comment articulated a fundamental principle about AI governance – that technology alone cannot solve problems without proper human oversight and ethical design. It challenged the notion that AI can be a neutral tool and emphasized the critical importance of maintaining human accountability.


Impact

This insight reinforced the session’s focus on human-centered AI governance and influenced the discussion toward practical implementation strategies. It provided a philosophical foundation for the technical and regulatory discussions that followed.


So if you don’t know [that AI has been used], how can you apply for certain remedy to certain oversight institution court or any kind of commission and so on. So that’s why the remedy issue is very important.

Speaker

Lyra Jakulevičienė


Reason

This comment revealed a critical gap in AI governance – the impossibility of seeking redress for AI-related harms without transparency about AI use. It connected technical transparency requirements to fundamental rights and access to justice.


Impact

This observation linked technical implementation details to broader justice and accountability frameworks, influencing the discussion to consider the downstream effects of AI opacity. It reinforced arguments for mandatory disclosure and transparency requirements.


Overall assessment

These key comments fundamentally shaped the discussion by introducing multiple layers of complexity to AI implementation in business. Rather than focusing narrowly on technical capabilities or regulatory compliance, the comments collectively steered the conversation toward a more holistic understanding of AI adoption challenges. The discussion evolved from initial optimism about AI benefits to a nuanced exploration of implementation challenges, human factors, dual-use concerns, and governance requirements. The speakers’ insights created a framework for understanding AI adoption as an interdisciplinary challenge requiring technical expertise, human rights knowledge, change management skills, and robust governance structures. This multi-dimensional perspective influenced the session’s conclusion that successful AI implementation requires ongoing collaboration between diverse stakeholders and cannot be solved through technology or regulation alone.


Follow-up questions

How will the EU AI Act impact Polish companies and does it address the education and skills gaps identified in the research?

Speaker

Jörn Erbguth


Explanation

This question addresses whether current AI regulation is sufficient to solve the implementation challenges businesses face, particularly around mandatory education requirements and skills development.


What will be the results of repeating the AI adoption survey within the next 6-12 months to measure the impact of AI Act implementation?

Speaker

Katarzyna Ellis


Explanation

This follow-up research is needed to assess whether regulatory frameworks are effectively addressing the challenges identified in the current study, particularly around education and compliance.


How can companies effectively address the more than 50% of the workforce that does not have positive feelings about AI implementation?

Speaker

Katarzyna Ellis


Explanation

This represents a critical business challenge where more than half of employees are resistant to AI adoption, requiring specific strategies for change management and trust building.


How can the significant talent gap be addressed when 1,100 people retire for every 435 entering the workforce in Poland?

Speaker

Katarzyna Ellis


Explanation

This demographic challenge compounds the AI skills shortage and requires strategic workforce planning and development solutions.


How can businesses conduct effective human rights due diligence when using AI, particularly given the more than 1,000 AI standards that exist globally?

Speaker

Lyra Jakulevičienė


Explanation

The complexity of navigating multiple standards while ensuring human rights compliance presents a significant challenge for businesses implementing AI systems.


How can small and medium enterprises (SMEs) effectively implement AI while ensuring human rights compliance given their limited resources?

Speaker

Lyra Jakulevičienė


Explanation

SMEs face particular challenges in AI implementation due to resource constraints, requiring tailored approaches and potentially collaborative solutions.


What will be the findings of the UN Working Group’s upcoming report on AI, business and human rights focusing on procurement and deployment?

Speaker

Lyra Jakulevičienė


Explanation

This report will provide important guidance on less-explored aspects of AI implementation in business contexts, particularly procurement and deployment practices.


How can businesses effectively map and identify all AI tools being used within their organizations, including those used by individual employees?

Speaker

Lyra Jakulevičienė


Explanation

Many companies lack awareness of all AI tools being used internally, which is essential for proper governance and risk management.


How can digital human rights tracking tools be adapted and scaled for business use in conducting due diligence and ESG monitoring?

Speaker

Domenico Zipoli


Explanation

There’s potential to leverage public sector human rights monitoring infrastructure for private sector compliance and risk management.


How can the value of tracked human rights performance data be quantified and potentially turned into investment cases or business commodities?

Speaker

Tigran Karapetyan


Explanation

This explores the potential economic value of human rights data and its role in investment decisions and business valuation.


How can clear definitions for classifying high-risk AI scenarios and AI-based safety components in telecommunications be established?

Speaker

Angela Coriz


Explanation

Regulatory uncertainty around AI classification creates implementation challenges for telecom companies operating under the AI Act.


How much additional data traffic will AI generate and what investment needs will this create for telecom infrastructure?

Speaker

Angela Coriz


Explanation

The infrastructure impact of increased AI usage is difficult to predict but will require significant investment planning from telecom providers.


Are human rights and ethics genuinely being considered when implementing AI systems in both private and public sectors?

Speaker

Pablo Arrenovo (Audience member)


Explanation

This fundamental question addresses whether ethical considerations are being meaningfully integrated into AI implementation decisions across sectors.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.