Empowering Workers in the Age of AI
9 Jul 2025 14:00h - 15:00h
Session at a glance
Summary
This discussion featured four International Labour Organization (ILO) representatives presenting their work on artificial intelligence and skills development in the context of the world of work. Juan Ivan Martin Lataix opened by highlighting the persistent global digital divide, with 2.6 billion people still lacking internet access, which creates fundamental challenges for digital skills training. He emphasized that while technology adoption happens rapidly—citing how quickly ChatGPT reached 100 million users—the speed of technological change far outpaces the ability to retrain workers, with UNESCO estimating that 9 out of 10 jobs will need reskilling by 2030.
The ILO’s research suggests that rather than causing mass job displacement, AI will primarily augment workers, particularly in managerial roles, while automation will mainly affect clerical positions, disproportionately impacting women. Manal Azzi discussed occupational safety and health implications, noting that while AI technologies like robotics and smart monitoring systems can remove workers from hazardous situations and predict workplace risks, they also introduce new concerns around human-robot interaction, privacy, and over-reliance on automated systems. She stressed the importance of considering the entire AI supply chain, including data annotators, content moderators, and electronic waste workers who face their own safety challenges.
Sher Verick presented findings from the ILO’s AI Observatory, indicating that while one in four jobs globally may be exposed to AI, only 3.3% face automation risk, with the impact concentrated in high-income countries and knowledge work sectors. Tom Wambeke from the ITC-ILO emphasized the need to move beyond simply digitizing existing processes to fundamentally transforming learning and training approaches. The discussion concluded with audience questions about AI’s role in diplomacy, universal basic income, regulation challenges, and youth unemployment in developing countries, with presenters advocating for human-centered approaches to AI adoption and comprehensive social protection systems.
Key points
## Major Discussion Points:
– **Digital Skills and the Global Digital Divide**: The discussion highlighted that 2.6 billion people still lack internet access, creating a massive challenge for digital skills development. The ILO advocates for three levels of digital skills: basic literacy for all, intermediate skills for specific industries, and advanced STEM skills for specialized roles.
– **AI’s Impact on Jobs – Augmentation vs. Automation**: Research shows that while AI will affect many jobs, most impacts will involve augmentation (workers using AI to do jobs better) rather than complete automation. Only 3.3% of global employment faces automation risk, though this still represents 130 million jobs worldwide, with women and clerical workers disproportionately affected.
– **Workplace Safety and Health in the AI Era**: AI and automation technologies offer significant benefits for worker safety by removing humans from hazardous environments and enabling predictive risk management. However, they also introduce new risks including human-robot interaction dangers, privacy concerns from monitoring systems, and the dehumanization of work through algorithmic management.
– **Institutional Capacity Building and AI Literacy**: Beyond individual training, there’s a critical need for institutional transformation in how organizations adopt AI. This includes debunking AI myths, asking better questions about implementation, and viewing AI as part of a broader technological ecosystem rather than an isolated solution.
– **Regulatory Challenges and the Future of Work**: The ILO is working on new international standards for platform economy work and addressing concerns about algorithmic management, bias in AI systems, and the need for human-centered approaches to technology adoption in the workplace.
## Overall Purpose:
The discussion aimed to present the International Labour Organization’s comprehensive approach to AI and skills development, covering research findings, policy recommendations, and practical implementation strategies for managing AI’s impact on the world of work while promoting decent work standards globally.
## Overall Tone:
The tone was professional and informative, with speakers presenting evidence-based research while acknowledging both opportunities and challenges. The discussion maintained a balanced, cautiously optimistic perspective throughout – neither dismissing AI’s transformative potential nor succumbing to apocalyptic predictions about job displacement. The interactive Q&A session at the end introduced more practical concerns from the audience, but the overall tone remained constructive and solution-oriented.
Speakers
– **Juan Ivan Martin Lataix**: Works on digital skills at the International Labour Organization (ILO)
– **Manal Azzi**: Works at the International Labour Organization (ILO) in the occupational safety and health branch, focusing on protection of workers from exposure to hazards
– **Sher Verick**: Advisor to the Deputy Director General of the ILO, speaks about the ILO’s AI Observatory that conducts research on AI in the world of work
– **Tom Wambeke**: Chief Innovation Officer at the ITC-ILO (International Training Centre of the ILO), leads the learning innovation lab in Turin
– **Audience**: Multiple audience members who asked questions during the session, including:
– Melissa from the CTBTO in Vienna (part of the UN system)
– Representative from One Goal initiative for governance
– Someone asking about youth unemployment in developing countries
**Additional speakers:**
None identified beyond those in the speakers list.
Full session report
# Comprehensive Discussion Report: AI and the Future of Work – International Labour Organization Perspectives
## Executive Summary
This discussion featured four representatives from the International Labour Organization (ILO) presenting comprehensive research and policy perspectives on artificial intelligence’s impact on the world of work. The session addressed critical challenges including the persistent global digital divide, AI’s transformative rather than replacement effects on employment, workplace safety implications, and the need for fundamental transformation in learning and training approaches. The speakers presented complementary perspectives while acknowledging the massive scale of workforce transformation required, with UNESCO estimating that 9 out of 10 jobs will need reskilling by 2030.
## Opening Context: The Scale of the Digital Challenge
Juan Ivan Martin Lataix, working on digital skills at the ILO, opened the discussion by establishing the fundamental context of global digital inequality. He highlighted that 2.6 billion people worldwide still lack internet access, with India alone having 900 million people in rural areas without Internet access. This digital divide represents a critical challenge as technology adoption accelerates rapidly—ChatGPT reached 100 million users in just a couple of months—whilst the capacity for retraining workers lags significantly behind.
Lataix noted the complexity of measuring digital access, pointing out challenges with double-counting in phone and SIM card statistics due to multiple devices per person. He also referenced discussions with colleagues from Microsoft and Google about the intense competition for AI talent, illustrating the uneven distribution of AI expertise.
The magnitude of the skills challenge is substantial. According to UNESCO research cited by Lataix, 9 out of 10 jobs will require reskilling by 2030, affecting billions of people globally. This statistic reframes the AI discussion from a technical implementation issue to a massive human development challenge requiring coordinated response.
## AI’s Impact on Employment: Augmentation Over Automation
Sher Verick, advisor to the ILO’s Deputy Director General and representative of the AI Observatory, presented research findings published in late 2023 and May that challenge common narratives about AI-driven job displacement. The ILO’s analysis indicates that whilst one in four jobs globally may be exposed to AI, only 3.3% face genuine automation risk. This represents approximately 130 million jobs worldwide—a significant number, but far from the “job apocalypse” often predicted.
The research reveals that AI’s primary impact will be augmentation rather than replacement, particularly affecting managerial roles where AI can enhance decision-making capabilities. However, the impact is not evenly distributed. Clerical positions face higher automation risk, and there is a correlation between these roles and women workers, creating particular challenges for female employment. Additionally, the effects are concentrated in high-income countries and knowledge work sectors, creating uneven global impacts.
Verick emphasised that the benefits of AI adoption are similarly unequal, with the global north positioned to capture more advantages whilst the global south faces different challenges. This geographical disparity in both risks and benefits represents a critical policy consideration for international organisations.
## Workplace Safety and Health: Opportunities and New Risks
Manal Azzi, representing the ILO’s occupational safety and health branch, provided a nuanced analysis of AI’s implications for worker protection. She highlighted significant opportunities for improving workplace safety through AI and robotics technologies, which can remove workers from hazardous environments and enable predictive risk management. Smart monitoring systems can anticipate workplace dangers before they materialise, potentially preventing injuries and fatalities.
However, Azzi also identified emerging risks that require careful consideration. Human-robot interaction introduces new safety challenges, whilst algorithmic management systems raise concerns about worker autonomy and the dehumanisation of work processes. She provided a stark example of algorithmic management’s potential for dehumanisation: an Uber driver being fired while in an ambulance, illustrating how automated systems can lack human judgement and compassion.
Privacy concerns arise as AI systems collect extensive personal and professional data, and there are risks of over-reliance on automated systems that may fail or make errors. Azzi referenced the ILO’s Violence and Harassment Convention as part of the broader framework for worker protection in the digital age.
Crucially, Azzi expanded the discussion beyond end-users to consider the entire AI supply chain. She noted that workers throughout this chain face their own safety and health challenges, including miners extracting critical minerals, factory workers assembling technology, data annotators processing content, content moderators exposed to harmful material, and electronic waste workers handling toxic substances. This comprehensive view reveals the human costs throughout the AI production process.
## Transforming Learning and Training Approaches
Tom Wambeke, Chief Innovation Officer at the International Training Centre of the ILO in Turin, brought a critical perspective to current AI adoption in educational settings. Having attended an “AI Skills Coalition” session earlier, he argued that much of what passes for AI innovation in training represents “old stuff in new jackets”—superficial digitisation of existing processes rather than genuine transformation. He specifically criticised current applications like “automated grading” and “AI chatbots” as examples of this superficial approach.
Wambeke noted that colleagues often immediately ask for chatbots without reflecting on whether this represents meaningful innovation. Citing Stephen Hawking, he proposed defining intelligence as “the ability to adapt to change,” suggesting this should also define effective learning. Drawing on a principle from organisational change management, he warned that when the rate of change outside an organisation exceeds the rate of change inside, “the end is near.”
Rather than using AI to provide ready answers, Wambeke advocated for using AI to ask better questions. He emphasised that effective teaching involves “the art of assisting discovery” rather than information transmission, highlighting irreplaceable human skills in educational processes. He used a specific example from Belgium/Antwerp with a building sign to illustrate his points about learning and discovery.
Wambeke warned that the biggest risk of AI is automating ineffective practices, noting that “feeding an AI the entire internet does not make you a teacher.” He also referenced “Sophia,” the humanoid robot from Saudi Arabia, as part of his discussion about the limitations of current AI applications.
## Skills Development Framework and Implementation Challenges
Lataix outlined a three-tier framework for digital skills development that addresses different levels of need across the workforce. The first tier involves basic digital literacy for all workers, enabling fundamental interaction with digital systems. The second tier focuses on intermediate skills tailored to specific industries and roles. The third tier encompasses advanced STEM skills for specialised positions requiring deep technical expertise.
However, implementing this framework faces significant challenges. Training institutions struggle with the speed of curriculum development, which typically takes years whilst technology evolves in months. The gap between technological advancement and educational adaptation creates ongoing difficulties in maintaining relevant skills training programmes. Lataix emphasised that training institutions themselves need digital transformation to effectively deliver these programmes.
Furthermore, Lataix highlighted that AI models contain inherent bias due to training data predominantly sourced from the global north and historical records dating back centuries. This bias affects AI system outputs and recommendations, potentially perpetuating existing inequalities and limiting effectiveness in diverse global contexts.
## Regulatory and Governance Responses
The ILO is actively developing new international standards to address AI-related workplace challenges. Azzi reported that the organisation is working on labour standards for the platform economy, with constituent discussions ongoing and a final draft scheduled for discussion in June 2026. These standards will address algorithmic management, worker classification, and protection measures for platform workers.
The AI Observatory, part of the ILO’s research department, continues research on critical areas including algorithmic management practices, digital labour platforms, data governance frameworks, and AI-enabled skills matching systems. This research informs policy development and provides evidence-based guidance for member states and social partners.
However, significant regulatory challenges remain unresolved. Balancing innovation with worker protection, addressing AI model bias, managing privacy concerns, and ensuring quality in partially automated systems all require ongoing attention and international cooperation.
## Audience Engagement and Practical Applications
The discussion, which included both in-person and online participants, concluded with extensive audience participation revealing practical concerns about AI implementation across various sectors. Melissa from the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization) in Vienna inquired about AI’s role in diplomacy, particularly its potential for pattern recognition in speeches and anticipating delegate questions. The speakers acknowledged AI’s capabilities in these areas whilst emphasising the continued importance of human judgement in diplomatic processes.
Questions about Universal Basic Income (UBI) as a response to AI-driven job displacement, including specific questions about UBI management, prompted discussion of alternative approaches. Verick advocated for universal social protection systems rather than UBI, arguing for comprehensive safety nets that address various forms of economic insecurity rather than focusing solely on income replacement.
A particularly significant question addressed youth unemployment in developing countries, where rates can reach around 40%. The speakers acknowledged that AI alone cannot solve these structural economic challenges, emphasising the need for broader economic policies and development strategies that create employment opportunities for young people.
## Different Perspectives and Approaches
The speakers presented complementary but distinct perspectives reflecting their different areas of expertise. Wambeke’s critique of current AI adoption as superficial contrasted with Lataix’s more structured, tiered approach to digital skills development. While both recognised the need for transformation, they differed on implementation strategies and the pace of change.
Azzi’s focus on practical safety applications and comprehensive supply chain considerations provided a different lens from Wambeke’s emphasis on AI’s potential for transformative learning and questioning. These differences reflect the complexity of AI implementation across different domains while maintaining shared commitment to worker-centred approaches.
Verick’s research-based perspective on employment impacts offered empirical grounding to complement the more operational and policy-focused presentations from his colleagues.
## Unresolved Challenges and Future Directions
Several critical issues remain unresolved and require ongoing attention. The fundamental challenge of AI model bias persists, particularly when training data from diverse regions remains limited or non-digitised. Balancing the speed of technological change with the time needed for quality skills development and institutional adaptation continues to challenge policymakers and practitioners.
Questions about maintaining human agency in increasingly automated workplaces, ensuring quality in AI-enhanced educational systems, and addressing the global inequality in AI benefits and risks all require sustained international cooperation and innovative policy solutions.
## Implications for International Labour Policy
This discussion reveals the ILO’s comprehensive approach to AI governance, combining research, standard-setting, and capacity-building activities. The organisation’s focus on human-centred AI development, attention to supply chain impacts, and commitment to addressing global inequalities positions it as a crucial actor in shaping the future of work in the AI era.
The speakers’ emphasis on transformation over replacement, augmentation over automation, and adaptation over resistance provides a balanced framework for approaching AI adoption in workplace contexts. Their recognition of both opportunities and risks, combined with practical policy recommendations, offers valuable guidance for member states and social partners navigating AI implementation.
The discussion demonstrates that whilst AI presents significant challenges for the world of work, thoughtful policy responses, comprehensive skills development, and sustained commitment to human dignity can help ensure that technological advancement serves human flourishing. The ILO’s multifaceted approach, including resources available at their conference stand, provides a model for international cooperation in addressing one of the defining challenges of the 21st century.
Session transcript
Juan Ivan Martin Lataix: There are people online too. It’s open. We are conducting this session among the many others that are concurring at the same time. Can I ask how many of you were in the sessions this morning with ITU and ILO? Okay, some of you. Okay, right, right. Okay, so this session is about the ILO, so the International Labor Organization, and what are we doing surrounding AI and skills, right? So we have four colleagues here today. Myself, Juan Martín, I will talk about digital skills. We have Manal Azzi that will talk about OSH and the latest report they just have published. Then we’ll have Sher Verick, he’s an advisor to our DDG, and he will be talking about the AI Observatory that is doing research in the world of work around AI. And last but not least, we’ll have Tom Wambeke, that is the Chief Innovation Officer at the ITC-ILO. We would like to make it as dynamic as we can. This is a small gathering, so I think it’s good. So we’ll start with the presentations. We have like 15 minutes per presentation, but at the end of each one of them, you can shoot some questions. At the end, if you want to add questions to any of us, we’ll be happy to take them. All right, so first of all, in this event, you’ve probably heard a lot of data and a lot of numbers. So I took some from ITU from end of last year. There is still a very big digital divide in the world, right? And this is a very important starting point for us because when it comes to skilling people and skilling them digitally, they need to have access, right? So I think this number is enhancing. There is more and more people having a global Internet, but still there is 2.6 billion people without access. In the previous session, we had a speaker from India saying that in India, there is 900 million in the rural areas of people without Internet access. And he said this is more than all the population of Europe and the U.S. together. So, you know, we always kind of forget about the dimensions of this.
Similarly, with the number of phone devices, this is enhancing, but still it’s far. And also there is a lot of double counting because, you know, in many parts of the world, people don’t have one phone, but they have two or a connected watch or whatever. So the SIM cards are sometimes more than one per person in some countries. Therefore, this number is very skewed. And last but not least, the speed. The speed dimension is also a big challenge for us in skills because it takes time to train people, to upskill them or re-skill them. But technology goes so fast that it’s very difficult to catch up. So one of the examples is ChatGPT adoption that in just a couple of months reached 100 million users and now many more, right? So this is something that we try to keep in mind when it goes about upskilling people around the world. The UNESCO did a report end of 2023 saying that out of 10 jobs, 9 will need to be re-skilled by 2030. This is billions of people. So the size of these challenges is enormous. We work with governments and we work with training institutions around the world and they’re trying to see how they can cope with this very fast speed of change and how they can help their populations. And also in all those presentations, should we have digital literacy for all? Or should we do more the STEM? Should we take care of minorities? What happens with elderly people, with women, with people? So it is very difficult to really tackle this challenge. This is from a report the ILO published also last year, well, end of 2023. That was very interesting because it was looking at what is the impact of AI in jobs. And they really looked into all the ranges of jobs using ISCO, right? So it’s really the taxonomy that ranges all the jobs that there are. And the finding was pretty positive that a lot of people will be impacted by AI but most of the impacts will be people being augmented. Augmented meaning that we will upskill to use AI to do our jobs better or differently.
And this is mostly managerial jobs. But on the other hand, there will be a lot of people that will see their jobs disappearing. And you have a full list of jobs here but mostly it’s clerical jobs. So people doing data entry, people doing things that are prone to automation of sorts. And that raised a number of concerns because typically there is a correlation between these type of jobs and women. So therefore AI will come as an imbalance, having even more women, let’s say, losing jobs or not having a proper job due to the adoption of this technology. The other finding is that AI alone is impacting mostly the global north. And it’s impacting mostly knowledge work. People that are kind of more in the white collar kind of jobs. Whereas the global south and blue collar are not that impacted. At least with AI in isolation. If you combine it with robotics and other things then maybe the picture is different. So areas of concern that we looked at from the world of work. First, bias and discrimination. So there is a big concern that a lot of the models that are being used today have been trained with the data available. The data available is mostly emanating from the global north, from the white man’s pen. And it’s emanating since the 1400s when Gutenberg invented the press. So this is the data that is being used to train the models. It’s very difficult to have a model that is being trained with the data of specific regions or countries. The data is not there or it’s not digitalized. So it is a big challenge and it will take a lot of time and a lot of money to build models that are not biased. So today most of the models that are used around the world are fairly biased and they are dominated by a few companies from the global north. There is a risk of unfair treatment based on these characteristics, of course. And then this could have systemic problems in the world of work. So we are working on the platform economy.
There is an ongoing exercise with our constituents to create a standard. And there is a lot of concern about using algorithmic management so that the computers are making decisions on behalf of companies or people and that this could also lead to biases and problems. So we believe this needs to be regulated to a certain extent. Secondly, privacy and data. So these things are consuming data. And all of us, we use it daily. A lot of people say, no, I use ChatGPT daily. Great. But at the same time, at some point, ChatGPT knows much more about you than your family. Because not only do you use it for private things, but for personal and corporate things. So you start thinking, oh, I have a daughter that has this age, that has this problem, what do you think I should do? People use it for all kinds of things. And sometimes maybe not very consciously knowing that they are exposing to private companies a lot of personal information. So it’s great. Maybe the answer is very accurate. Maybe it kind of connects the dots that you asked me something similar six months ago. But at the end of the day, we’re exposing a lot of private data. And we believe, again, this will require some regulation. And then, yes, lack of human touch. So we all have seen all the issues, again, with the platform economy, the Uber driver that had an accident, and while he was going to the hospital in the ambulance, he was fired because he didn’t deliver the pizza on time. So how dehumanizing is this? We’re going towards models that are just driven by algorithms, driven by productivity, and driven by numbers, and not driven by what makes us human. And I think this is another area of big concern for us. So very quickly, for us, we are advocating towards having digital skills at different levels across the world. The more basic ones, anyone should have them. So this is agnostic of what type of job you do and what type of industry you’re in. 
Everybody should have this basic literacy, know how to browse the Internet, to send an email, etc. So we believe that this should happen. A lot of governments around the world are doing big campaigns so that this happens. Then there is a second level, more intermediate digital skills. And this, we’re seeing that more and more industries are asking for this. So not only do they want you to send emails, but they would like you to be able to use it to do some digital marketing, or to use some more social media. So this is something that we’re seeing in certain industries is being asked more and more. And then the third part, which is more the advanced digital skills. So this is the STEM part of things. We’re here. There is a lot of competition. We had a conversation with our colleagues from Microsoft and Google, etc., that are here. They are paying more and more to data scientists, to experts in artificial intelligence. So the tip of the pyramid is just incredible how this is going. But there is a big lack of sufficient persons that are trained to that level. So it is a very big issue at the three levels today to tackle. And just for you to know, when we work with training institutions and when we work with governments, what are the biggest challenges? So for training institutions, the speed, right? So every time we talk to them, say, yeah, it took us three years to put together this new program, and now that it’s in the market, it’s too late because it’s already old. So speed is of the essence, and it’s a big challenge. Secondly, the demand. So it’s both sides. So they work with the private sector, and the private sector is the one demanding and giving feedback. Say, we received your last cohort of students that are great in this, but they are not great on that. So you should have. So that also is adding to the complexity and having the things in the market on time. Then themselves, they are undergoing digital transformation. 
So not only are they skilling people in digital skills, but they themselves need to go through the digital transformation. So how do they invest in AI, VR, 3D printing, whatever it is, so that they can train more people? And this is very important, because if you continue training people in a face-to-face instructor-led kind of fashion offline, there is only so much you can do. You will never be able to upskill and reskill the millions of people that we saw before. The only way is through technology, where people with their mobile, while they are commuting, they can do bite-sized training in five minutes, get something. And that will require a digital transformation. And then, well, apprenticeships is an area very interesting for the ILO, where we are pushing for changes also in regulations. And then for governments. So governments need to make sure that they regulate in an equanimous manner, so that people have a fair chance at all levels of the society. They need to promote lifelong learning. So this is not about creating people that have a certain skill, and then they have it for life. You need to continue maintaining and re-educating yourself as you go. Developing the relevant digital skills. So sometimes the governments don’t necessarily have a long-term strategy. So that is linked to their skill strategy. So it is important to know, how do you see your country in 20 years’ time? How many doctors, lawyers, et cetera, data scientists do you need? So that then you can go for that. And that often we see it is a challenge to really foresee what is coming. Again, the speed. And then the last one, ensuring the quality of learning. This is a big area of concern. Typically, governments don’t have sufficient capacity. So they do RFPs, RFQs, et cetera, and they hire companies. But it’s very difficult for them to kind of make sure that the quality is there. So this is in a nutshell what we do from the ILO skills. With that, I’m going to hand over to my colleague, Manal.
Thank you very much.
Manal Azzi: Thank you. Unless there are any pressing questions later. Thank you. Good morning, again. Good afternoon, everyone. So I also work in the International Labor Organization based here in Geneva. We have a branch on occupational safety and health, protection of workers from exposure to hazards. And we work on a variety of issues, of course, at work. Biological risks, chemical risks, psychosocial risks at work, ergonomics, and physical risks. And recently this year, we published our new report on robotics, automation, digitalization, and AI, and how it’s had an impact on the discipline of safety and health. How it’s helped improve some of the very hazardous situations that workers find themselves in. But of course, it doesn’t come without other risks that we can be prone to once we start introducing some of these technologies. What we looked at were five different major areas, if we were to group them. The first one is automation and advanced robotics, which is used a lot in many sectors. What we’ve seen is, and these are not new, of course, I mean, automation has been around for more than decades, the move towards more automated work, the introduction of different machines to do certain tasks. And the use of robotics, it could be robotic arms, complete robotics, half robots. So they’ve been used in a lot of industries. And the positive impact has been that workers have been removed from highly hazardous jobs. And so where we work, for example, in high temperatures, where we’re melting metal, it’s robotic arms that’s doing that. It’s no longer a human exposed to such high temperatures. Even using drones to enter into confined spaces where we need to be, or drones that actually spray pesticide in big agricultural fields instead of exposing workers to these hazardous substances. Also, a lot of the repetitive movements that cause a lot of strain on humans are now being done in operation processes with robots.
So we do need to acknowledge that some of these robots and the automation of some of our tasks are allowing workers to move on to more meaningful tasks, to do things that challenge them more, rather than doing monotonous or repetitive tasks. Recently, from a psychosocial health perspective, it's been seen that a mix of more mundane tasks and challenging tasks is ideal for people, for their growth and identity at work, and for them to feel that they're meeting a certain higher objective by going to work rather than just doing something very basic. So we see that for the safety and health of workers. If we look at smart tools and monitoring systems: before, we used to know about some hazards in construction, in agriculture, but now we're able to predict and take quick action when it comes to certain hazards. And that's, for example, through workers wearing smart wearables. They could be wristbands, they could be earmuffs, so many different detectors that actually help detect hazards. For example, in construction, you've got workers wearing certain sensor material that can detect the risk of a fall from height, which is one of the biggest causes of death globally across many sectors. And when it detects that risk, it can warn the worker to prevent the fall. And not only that, it's linked to medical teams that can come on site very quickly, as opposed to relying on phone calls previously. So it's connected immediately, and this improves the survival outcomes for a lot of these workers as they're doing these jobs.
Manal Azzi: And so this idea that it can predict risk, based on all of these algorithmic systems and all the big data that we can now manage, can increase the prospects of living and the quality of life for a number of workers across many sectors. And virtual reality, for safety and health mostly, has been used to create environments that are very similar to the hazardous environments workers may face. The first thing that comes to mind would be firefighters. Instead of them training in real fires, being exposed to fumes, foams, and heat, you can do that through virtual reality. And they can even wear the exact gear that they need to wear to protect themselves while they're doing the virtual reality practice, without being exposed to the dangers that practicing and training in the real environment would mean. So it has helped a lot from the safety and health perspective. The increased use of algorithmic management of work, managing work through algorithms: if we look at the positive side, it has allowed us to make schedules more adapted to the needs of workers and also to distribute workloads evenly, or in a more equal manner, based on the data that we can get. And this improves efficiency, obviously, and it could also help identify where there's a gap in certain skills. So it has worked positively in that sense. The last thing we looked at in the report is that, thanks to the introduction of a lot of this technology and digitalization, we're able to have different forms of work and different ways of working away from offices and designated workspaces. This has allowed, for example, people with disabilities or people with caregiving responsibilities, whether for the elderly or the young, to access the labor market. Otherwise they would not have been able to be part of the labor market.
And of course, it reduces some of the risks from commuting and the time wasted, and promotes this kind of inclusion. So there are a lot of positives to expanding the labor market to include the different kinds of work that have been made possible by the technologies we're talking about. But of course, all this does not come without new risks that these technologies can introduce, which we need to manage, prevent, and also regulate. For automation, of course, when you're working with robots, the human-robot interaction can be very risky. These systems are not necessarily reliable all the time; it really depends how we manage them. Also, for example, with exoskeletons: when we're designing them to protect workers from musculoskeletal disorders and other strains, they have to be designed according to the needs and shape of the worker. They have to be tailored and personalized. And when we see a lot of these technologies taking over some of the tasks we used to do, we can sense that we are losing control of our workspace, our jobs, and what we're actually meant to be doing. So those are some of the risks, and they are physical risks, organizational risks, and psychosocial risks for workers. So it's not just about the physical risks. And when we rely a lot on these smart tools and devices to detect hazards, with sensors that monitor and give us different signals, we are also exposed to the malfunctions that can occur with these systems. So we need to be careful of that. We need to make sure that the human being remains at the center of decision-making and not over-rely on these aids; we still have to use our own sense of judgment when it comes to safety and health and not completely rely on them.
Manal Azzi: And to get all the information we need into these monitoring systems, we may obviously run into various privacy and ethical concerns, gaining access to more data than we need for safety and health purposes. And that's one of the biggest problems: the confidentiality, and workers who feel that they don't know what data are being gathered about them. Sometimes it starts off as being to protect your safety and health and ends up being just too much data that does not need to be shared. For virtual reality, of course, it's not the most comfortable. I don't know if you've tried wearing the headsets. Tom's not going to like this, because at our Turin Center a lot of our trainings now happen through these virtual systems and goggles that you wear, where you're not really aware of the space you're in, and it could create more physical hazards and dangers, loss of balance, et cetera. So it's something you need to get used to, and you should only use it for a certain amount of time. So it has its own challenges, to work with virtual reality and rely on that. For algorithmic management, obviously, like the example you just mentioned, Ivan, we're dealing with systems and no longer dealing with people. So there isn't that flexibility to understand what's behind the number or the digit that you're receiving, which can advise you on scheduling preferences, et cetera, but may not take into account other nuances that we would only understand if we know the person, the human being. But of course, another positive part, like I said earlier, is that we are relieving workers of some tedious tasks. Even in the healthcare sector, if robots are able to take vital signs and do some of the basic diagnostic testing for COVID or other conditions, then healthcare workers can give more time to talking more meaningfully with the patient or understanding other issues.
So they just need to be used smartly, obviously, and then they can be used to our advantage. The last one, about changing work arrangements: of course, it brings the positives that we discussed, but it also blurs the line between what is work and what is our personal and private life. When do we start work? When do we stop work? Do we have the safety and health setup, the ergonomic and environmental setup, wherever we decide to work, in different workspaces? There's less control from, let's say, an employer or the safety and health experts over where we are working, what we are doing, and what we're exposed to. Not to mention cyberbullying: with the increased reliance on the Internet and the push for everyone to be fully connected, we are exposing ourselves to even more types of harassment and cyberbullying online. These are just a few key points; of course, the report goes through in detail what all of this means. Another thing I always like to mention, and it is sometimes forgotten: when we're talking about this whole digitalization chain, the people powering AI have their own safety and health concerns. We're talking about data annotators, the people who actually prepare the data for the AI models that we rely on. They perform very repetitive tasks, and they're sometimes exposed to toxic and disturbing material over long periods of time, without being protected or provided with the psychosocial support that is necessary. The same goes for content moderators, who analyze huge amounts of data; machine learning engineers, who develop the AI systems using these large, complex databases that require managing huge volumes of data; not to mention the big data analysts who use AI and machine learning to extract summaries and insights to advise our policymaking and other areas.
So these are the people powering AI, who have their own safety and health concerns. The other workers along the chain are the miners excavating the critical minerals that allow us to use our computers and other tools: cobalt, lithium, and copper. Some of these workers are working in very dangerous conditions, sometimes without the right protections in their countries. Then there are the factory workers who actually assemble all this technology, and at the end you've got electronic waste, the business of electronic waste, which most of the time sits in the informal economy and is not regulated in any sense. People are exposed to all the chemical substances oozing out of these electronics, which are dumped in huge areas, not even proper landfills, and they're exposed to mercury and so many other substances as they deal with what's left of this digital equipment and technology. So it's important to consider the whole supply chain. Here are just the final parts. I just want to emphasize some of the responses that already exist, including international standards, where we recognize the duty of employers to ensure that all equipment is safe for workers to use. We also need to make sure that when something is introduced into our workplace, workers are involved in understanding why it is being introduced. We shouldn't be introducing technologies just because they exist. They need to be fit for purpose, and it needs to be explained why they will help or support a worker; then you get more compliance and more collaboration in using and implementing these different kinds of equipment. We have other instruments, of course, such as our Violence and Harassment Convention, which aims to protect against violence and harassment in different workplaces, including violence and harassment that occur in a digital mode, like cyberbullying.
At the International Labour Conference, which we just held in June, with a second discussion coming next June, we are working on potentially a new instrument to promote decent work in the platform economy. It's been quite an exciting discussion these past couple of years, which will hopefully end in the adoption of binding instruments on managing and promoting decent work in the platform economy. And we have the ILO Observatory, which my colleague Sher Verick will be speaking about in a minute. And of course, for a lot of countries, in the report you will see there are so many examples. Our constituents, workers' and employers' organizations and governments, mainly ministries of labour, are very interested in the list that we've been able to compile: what are countries doing, what examples can we learn from, what is applicable in my country, in my sector, what priorities can we draw from it? Here are some examples; I won't go through them. Some clearly regulate automation and advanced robotics; some regulate the right to disconnect. What does that mean today for workers? Is a job 9 to 5? Is it better to just give flexibility, when, for example, parents need to step out for two or three hours during the day to take care of different needs but may be able to connect again in the evening to finish their work? What does a job look like when connection and disconnection from the Internet are involved and our job relies on it? In addition to regulating remote work, telework, and digital platforms, because they're quite different things with different needs as well. And here are some collective bargaining agreements, for where regulation is not up to speed or where you really need specific agreements. Here are some examples from countries where they've been able to secure more rights for workers through these negotiated collective agreements.
And, of course, if you think of the compliance and enforcement arm of ministries of labour, labour inspectors have been using some of this technology to their advantage, trying to predict where accidents can happen and which sectors are going to be more prone, by using existing data to make such predictions and carry out more proactive investigations. Everybody is getting used to looking in more detail at technology and the risks that come with it. And this is, I think, my last slide, at the workplace level. We talked a lot about the national framework and regulation, but even at the workplace level, if you have any safety and health background, you know that we work by hierarchy. If you are exposed to something that's dangerous, we try to eliminate it; if not, we substitute it; if not, we apply engineering controls or administrative controls, or we provide people with personal protective equipment. And this is how this translates to the area of AI and technology: what it means to eliminate a hazard. You actually replace physical entry with drones or robotic crawlers if it's really hazardous. So it's how you use the technology to eliminate, how you substitute so you don't expose workers to unnecessary dangers, you use virtual reality for training when you can, et cetera.
Engineering controls, some examples. Normally we say there's a hierarchy of importance, right, and the last thing you should do is give people personal protective equipment; you have to do the others first. But now, with technology, personal protective equipment has a dual role: it also contains sensors and detectors, so it is preventive and not just protective in a sense. It's an evolution of the science of safety and health that has become interesting with the support and help of these technologies. These are just some key takeaways. What we need in the end is a little more research. It was very difficult to find data on how many injuries or accidents have been prevented, or decreased, because of the introduction of technology, or the other way around. So we don't have a lot of inputs globally on this. And we need to make sure we personalize and adapt everything we're using to workers' needs and specific characteristics. Those are the two key takeaways on our end.
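The hierarchy of controls described above (eliminate, substitute, engineering, administrative, PPE) can be sketched as an ordered preference list. This is a purely illustrative toy in Python, not an ILO tool; the helper function and the AI-era examples attached to each level are assumptions drawn from the talk.

```python
# Hierarchy of controls, ordered from most to least preferred.
# Examples at each level echo the talk; the helper is illustrative only.
HIERARCHY = [
    ("eliminate", "replace physical entry with drones or robotic crawlers"),
    ("substitute", "use virtual reality instead of live hazardous training"),
    ("engineering", "sensors and smart monitoring that detect hazards"),
    ("administrative", "algorithmic scheduling to limit exposure time"),
    ("ppe", "smart PPE that is both protective and a detector"),
]

def most_preferred(available):
    """Return the highest-ranked control level that is actually feasible."""
    for level, example in HIERARCHY:
        if level in available:
            return level, example
    raise ValueError("no feasible control")

# e.g. elimination is not feasible, but substitution and PPE are:
print(most_preferred({"substitute", "ppe"}))
# -> ('substitute', 'use virtual reality instead of live hazardous training')
```

The point of the ordering is exactly what the speaker stresses: PPE is the fallback, chosen only when every higher level is infeasible.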
Sher Verick: Well, good afternoon. You've been hearing a lot from the ILO so far, and I'm going to continue. My name is Sher Verick, I'm advisor to the Deputy Director-General of the ILO, and I've been asked to speak about the ILO's Observatory on AI and Work in the Digital Economy. But I will just cover a few of the issues that have already been highlighted before coming to what we're doing on the Observatory. I think what is happening with AI doesn't need any introduction in this room; everyone here gets it, is looking at it, is part of this whole process of developing new tools. How that's impacting the world of work is the issue we are looking at, as you've already heard from Juan Ivan and Manal. So when we look at tools like this, this is Midjourney generating an image of Harry Potter, what does that mean for the labour market? Is this taking a job? So let me ask this question; in fact, I wanted to ask you a question instead of just us talking to you. How many of you think your job can be automated through AI? Completely. How many think some of your tasks can be automated? I think we all can think that, and we are already using AI for certain tasks, right? Meeting notes even, simple things like that, or creating images. My brother works in the design area, he also has an AI start-up, and I was asking, what does this mean for jobs in your industry? And he was saying, okay, you know, you used to employ an illustrator to produce an image, which would take a few days; now, of course, you expect that to be done in a couple of hours. There's still an illustrator using a tool such as Midjourney, but working with AI to develop that image, right? So that's a real key issue, and for those who know that whole approach of looking at what technology means for jobs, it's about tasks. Tasks, we are bundles of tasks, right?
Occupations are made up of tasks, and so this looks suddenly very scientific. I'm not going to go into the details; it's on the Observatory website, and I'll come back to that. But the bottom line is, and Juan Ivan talked about some of that research from our research department that's on the Observatory, this is even more recent data. What a study by our colleague Paweł Gmyrek and others, and don't worry, you'll get to see the link as well at the end, has done is look at occupations in terms of their tasks. There's a complex way of looking at the exposure of all those tasks to AI, how easy it is to automate those tasks; it's been a very thorough process to identify that, and then to look at how variable those tasks are within an occupation. So the bottom line, without going into all those details, is that you have occupations up here which have the highest exposure and the lowest task variability. These are, as Juan Ivan talked about, administrative jobs, clerical roles, where you have a lot of tasks that can be automated. In the same way, we have other occupations where, like we just discussed, some of our tasks can be automated but not all of them, so the heterogeneity of tasks is still much greater for some of these other occupations further down here. And that has a fundamental implication for skilling, right? So if you're here, in an administrative role, or increasingly in some other roles like web development, which is now being done with AI, you're going to need to think about other occupations, about shifting your job. If you're in some of these other occupations, well, then it's about the opportunity to be augmented by AI, right? To be transformed by AI. And then, of course, an issue from the skilling perspective is how to ensure that we have those skills in order to be transformed by AI.
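The task-based logic described here can be sketched roughly as follows. This is a deliberately simplified toy, not the ILO study's actual methodology: score each task's exposure to AI, then characterise an occupation by the mean and the variability of those scores. All occupation names, scores, and thresholds below are hypothetical.

```python
from statistics import mean, stdev

# Toy task-level AI exposure scores (0 = not automatable, 1 = fully automatable).
# Occupations and scores are invented, for illustration only.
occupations = {
    "data entry clerk": [0.90, 0.85, 0.80, 0.95],  # uniformly high exposure
    "illustrator":      [0.70, 0.20, 0.90, 0.10],  # mixed: some tasks exposed
    "farmer":           [0.10, 0.05, 0.30, 0.10],  # mostly physical, low exposure
}

def classify(task_scores, high=0.7, low_var=0.15):
    """Crude split: 'automation risk' when exposure is uniformly high,
    'augmentation potential' when only some tasks are exposed."""
    avg = mean(task_scores)
    var = stdev(task_scores)
    if avg >= high and var <= low_var:
        return "automation risk"
    if avg >= 0.3:
        return "augmentation potential"
    return "low exposure"

for job, scores in occupations.items():
    print(f"{job}: {classify(scores)}")
```

The invented thresholds stand in for the study's much more careful gradient-based approach, but the structure matches the argument: clerical roles land in the high-exposure, low-variability corner, while heterogeneous occupations are candidates for augmentation rather than replacement.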
So there's going to be both transformation and some automation, but this translates into bigger figures for us at the ILO. The research that we've done is on the Observatory, and the bottom line, based on our own estimates, including those that came out this year in May, is that a jobs apocalypse is unlikely, right? Just based on that story of automation versus augmentation that I explained. If you look, and it's a bit hard for everyone to see, I know, at the top are the figures for the world, which basically tell us that one in four jobs are potentially exposed to AI. But that includes not only those at risk of automation but also those facing augmentation, and in fact the transformation part, that augmentation part, is far greater than the share at risk of automation that I've been explaining, which is just 3.3% of global employment according to our latest estimates from May. However, 3.3% of global employment still means around 130 million jobs around the world, so we can't just dismiss that and say it's nothing, but it is a smaller share compared to the darker blue parts that are up there. And of course, this is much higher in high-income countries, as already mentioned, as opposed to low-income countries, which reflects the structures of economies, of course. If you're in agriculture, you're not going to be automated, right? I mean, you may use AI tools as a farmer, but the basic functions of farming, or cutting wood, there are certain tasks that are not going to be automated through AI; it can be other technologies in agriculture, of course. And as already mentioned, it also leaves women more vulnerable to this process, particularly because they are over-represented in admin roles.
Now, I won't go into any more details; we're running out of time, as I can see already. But I think the really important message from the ILO is, first, that we don't expect a jobs apocalypse; we think the transformation part is going to be greater. But second, we need to look at the implications for job quality. This really came up in what both Manal and Juan Ivan talked about, and it's about issues around wages; really, that's where you see the action in terms of what happens on the demand and supply side. The ones with the stronger demand, top-end jobs, we talked about Google, Microsoft: wages are going up. Those in less demand, or those being automated, or at least a number of whose tasks have been automated, don't necessarily lose their jobs, but their wages may flatline or fall in real terms. And a really important part, as Manal said, is algorithmic management. It's already in the workplace; what are the implications? I won't repeat those. And of course we have this issue of new jobs, and again it's been mentioned: you have the machine learning specialists, but you also have the content moderation and the supply chains that were highlighted. So, as you will have heard throughout this week, it's about the digital divide as well. At the ILO, yes, obviously these changes are driven by big forces in the world in terms of technological change, but they pose both opportunities and challenges, and ultimately much will depend on how tech is adopted and what it is used for. This will also be influenced by policy choices, including on skills. Obviously it's not all about policy or all about regulation, but there is an opportunity here to promote those opportunities and respond to those challenges. So I'm going to keep this very short, because we've still got one more presentation.
So that is just a snapshot of the really key part of the research we're doing in the ILO Observatory on AI and Work in the Digital Economy. If you go to the website, you'll see a lot of our material there. And what we are focusing on at the ILO is the following. Thematically, a lot of work has gone into the AI side, as you've heard. We're also working on algorithmic management; it's a critical area when it comes to its impact in the workplace, and there's a lot of country-level work going on in that area as well. Digital labor platforms: you heard about the platform standard-setting process, trying to look at how we can have a new labor standard that responds to those new challenges. Data is an area that we're increasingly bringing back into focus as an emerging area. And skills. Now, of course, this whole session today is about skills, and Juan Ivan has given you a really good overview of that. But at the ILO, we look at all of this and how it fits together, not only from a supply side but from a demand side, how those match, and the implications not only for job quantity but for job quality. So let me stop there. It's not just about a jobs apocalypse; it's about transformation. But we need to keep an eye on job quality, not just the quantity side, and think about different entry points, which are not always obvious. It's not just about jobs being lost, et cetera. It's about what's happening in the workplace and what's happening in the supply chains, as mentioned. And this is what we hope to do with the Observatory as well. Thank you.
Tom Wambeke: Good afternoon. This is the last input before we can go a little more interactive. As you see from the title, one of my hobbies is finding new expansions for the abbreviation AI: here, moving beyond artificial ignorance. So I'm Tom Wambeke, chief of the Learning Innovation Lab in Turin, at the International Training Centre of the ILO, which is the organization's capacity-building arm. And that's where my reflections will come from today. What I'm doing there is leading an innovation lab. It might be important to mention that, because we are creating in Turin, in the north of Italy, a kind of safe experimentation zone, a zone of learning. I was in a session earlier on the AI Skills Coalition, and they were showing some survey results. For example, one of the major needs detected among leaders and policy makers was having a kind of safe space where they can learn about AI without immediately being perceived as asking a dumb question. So what I'm saying comes from that safe experimentation zone. A second element I wanted to mention: there are a lot of different things I would like to cover in this presentation, but I'm going to focus on three specific ideas in the area of AI upskilling. One element is that in a training center, you would think we are training individuals. We are training ILO constituents, employers, governments, and workers, and also beyond that. But it's not just about individual upskilling. I also want to make the link with the previous session we were in. They said, well, we are now in a phase where everybody is doing pilots but having issues scaling them up. And that's why I would like to jump from the individual training level to the more institutional training level.
So what the center is trying to do, also in the area of AI, is provide institutional capacity development opportunities. And when we look at that curriculum, it's much more than just individual upskilling of some specific target audiences. It has to do with a much broader angle than just technological change; it's almost organizational change management. And that's an important reflection if we are going to look into our AI curriculum. So that's a little overview of what we do, from individual capacity towards institutional capacity, also with the hope that we can contribute to more system-level capacity development. But that's just the background. Let me come back to the topic of today. One of the big things, as I said, is that we have a mandate. If I look at our organization in terms of AI adoption, I think we really have a mandate to innovate; there's really an active mandate. And if I look at the culture in the training center, it's rather an agile culture where innovation is embraced. So what we're trying to do is jump from casual experiments towards more systemic innovation, by launching a whole bunch of different AI projects. One of the things we have seen is that everybody is talking about AI but few are actually doing it, so we really decided to start with projects. So what can we learn from these AI projects within institutions? That's a little bit the entry point. By the way, at the ILO stand, you will find different courses and programs that we are organizing. One of them is called the AI Forum, where we also help organizations on how to deploy AI at a systemic level. The first thing that we have to do in a training center is, I would say, upskilling in terms of AI literacy. Actually, I took this picture many years ago here at AI for Good. You remember this, how would I say, humanoid lady?
You remember her, Sophia? When I walk around AI for Good now, it's still full of robots. But almost 10 years ago, there was already a pledge that in training there would be some space for these kinds of humanoid robots. So in a way, much of our work is actually about debunking myths, where I would say that Sophia, who is actually also an official citizen of Saudi Arabia, is not yet ready for the classroom. If you look at the whole skills portfolio that we have, I actually took this picture in Belgium, in Antwerp. It's an interesting one: "Hey ChatGPT, finish this building. Your skills are irreplaceable." So there are a lot of different narratives that we're trying to build on: what do you want to achieve with this kind of AI upskilling or retraining? And these are the five myths that we are encountering all the time when we talk about AI adoption. It's a little similar to what Sher Verick was saying: it's not that AI is simply going to replace jobs, full stop. No. Likewise, AI is not going to replace face-to-face training. It's a bit more of a nuanced view that we need to have. So our initial initiatives are all about debunking AI myths. But once you have done that, once you have a bit more realistic view of what AI can mean for capacity building, then a whole field of learning opportunities opens up. And there are many of them. As I said, I only have 10 minutes, so I'm not going to go through all the different sections here. That's why I'm going to focus on three specific ideas, and they can be linked to any of the different topics out here. I will share these slides afterwards. So my first idea is actually inspired by maybe a different notion of what it means to be intelligent. I always like to refer to Stephen Hawking's, let's say, non-readable language, but you can understand it.
He defines intelligence as the ability to adapt to change. For me, that's actually also an excellent definition of what learning should be: learning is adapting to change. And if you take that inspiration from the individual level up to the organizational level, you arrive at another old quote: when the rate of change outside an organization is greater than the rate of change inside, the end is near. And that's where I want to focus my three ideas. They serve a larger view of how AI can be deployed in an organizational setting, rather than just introducing some technologies, and specifically in capacity development. Here's my first idea, from a foresight angle, and also related to change management. As a famous foresight saying goes: if we always do what we have always done, we will get what we have always got. And if I look at how AI is being deployed in training, then I would expect new things. I would expect all kinds of new innovations: new curricula, new organizations, new architecture, new methods, new connections, new administrative procedures, many other elements. But that's not what I currently see when I look at the current AI initiatives. What I currently see is basically old stuff in new jackets, one way or another. That's how I would call it. To give you a few examples in training: I see AI-powered things, I see automated grading, I see AI chatbots all over the place. They have, of course, a value. They are digitized; they create some practical add-ons in my curriculum. But do they really change something in my whole educational setup? That's the bigger question that I would like to ask you. So my question is: how can we really transform learning and training so that we are creating added value?
And not just, let's say, substitute it with a new technology or augment it a little bit. Where can we do new stuff in new ways? That requires a somewhat more radical approach, which is not always obvious in education and training, which, let's say, changes very slowly. That's the first idea: adding value. The second idea I would like to share comes again from a foresight angle. One of the laws of foresight is: any useful statement about the future should at first seem ridiculous. What do I mean by that? If I rewind about 150 years, I can show you this picture. This is Sir William Preece, Chief Engineer of the British Post Office, who said: "The Americans have need of the telephone, but we do not. We have plenty of messenger boys." The internet is full of these kinds of statements. And I would like to ask you the same question. We are now confronted with AI; we all want to use it. What would be your intelligent what-if question for the next 50 years? Anyone want to give it a try? What if? I'll leave it silent; just think about it. But I think that's a fundamental question to ask, because AI actually allows us to ask a whole bunch of new questions that maybe we previously did not ask ourselves. Some of these questions might sound a little bit existential or philosophical. What can we learn from AI about human learning is one question. Or, with machine-human interaction, who are the new actors and partners in AI learning and training? To a few of these you might say: yes, very nice, you have time for philosophical questions. But if I bring it back to curriculum development, there is a whole set of very concrete questions that we need to ask ourselves these days when AI gets infused into a training institute.
Let me read one or two. How do we quality-assure partially automated teaching and assessment, just as one single question. Or another one: how do we curate and share knowledge to build the right and responsible AI? So there is a whole set of new questions we need to ask ourselves in order to innovate. And that brings me to my favorite what-if question: what if we used AI to ask better questions? It's a question I always share with my colleagues before we start a discussion. And then I have the last idea. That idea is a bit of a criticism: we always treat AI as somewhat isolated, while if you see it in daily practice, it is completely embedded in a larger system. What do I mean by that when we look at the future of AI? That a kind of Cambrian explosion of AI offspring will occur, in my view, at the intersection with other technologies and systems. When we use AI in training, for example, it's a combination of AI and VR in the rollout of, say, soft skills training. It's not just AI on its own; it's really at the intersection of many other things. And therefore we need a more, let's say, ecological, intersectional approach to assess not only the opportunities but also to mitigate the many risks that are connected with it. And when I say other technologies, I think it was already mentioned: not only immersive technologies; we have to go much broader. If I look at the upcoming wave, it's not only about immersive technologies; it's about artificial intelligence, quantum computing, neurotechnology. There are many other angles that we should also bring into our reflections. So when I talk about AI, it's part of a broader network, where I can actually ask a lot of different questions than before. I think I'm almost at time.
Maybe one final reflection before we stop. Often when my colleagues said, okay, we need to start with AI and learning, the first reason they gave was: we're going to do it faster, stronger, and better. It's like the popular Daft Punk song. But it reminds me that a lot of these questions are always linked to efficiency and productivity in one way or another. And I think, as I have also seen in the many conversations this morning, that we need to go beyond that. More is not always better, I would say. And then a second critical reflection: progress is not about size or speed as much as it is about direction. Specifically within educational technology, I see too many projects starting not from a vision or from a reflection. I had a discussion with Juan, I remember: everybody comes immediately with "I need a chatbot." That's the first thing they say, without asking: what do you really need? Have you had a broader reflection about that? And that's dangerous, because if you don't have the right question, if you don't have that vision, you go in the wrong direction. And what happens if you go in the wrong direction? Technology will get you there faster. That leads to one of my favorite quotes, I know I use it a lot: the biggest risk of AI is that it would automate ineffective practice. A quote from Professor Dan Schwartz. Because I think learning and training is much more than just integrating a few chatbots here and there. If you have to come up with a task definition of a teacher: well, feeding an AI the entire internet does not make it a teacher. And I could maybe say that of a lot of other professions. Teaching or training is something much more complex, and it won't be replaced as such. For me, teaching is the art of assisting discovery.
And that's very difficult to capture in a single task definition. But having said that, these were three ideas to reflect upon and maybe to feed into the three conversations that we have already had. So I'll give it back to you, Juan.
Juan Ivan Martin Lataix: Thank you very much, Tom. We are almost at time, but there is no other session right after, so we might spend another 10 minutes, for those of you who so wish, to take your questions. Thank you very much.
Audience: Yes. Hello, my name is Melissa. I work in Vienna with the CTBTO, which is part of the UN system. Thank you so much for this very interesting and thought-provoking presentation. I actually have two questions. I find this very interesting; as an observer, I'm taking notes on where things are heading, and they are moving fast. First, how do you see AI influencing the UN in the area of diplomacy? Because, as I was thinking, we are talking about clerical tasks that can of course be automated in the UN as well. But when you have been in the UN for many years and you listen to some of the speeches, you start recognizing a pattern of repetition, and you can go into ChatGPT and say: these are the delegates, this is the topic, what do you think are the questions that are going to be asked, and what do you suggest should be in my presentation? You can anticipate, based on the projects you are doing, what could be asked. So this is my first question: AI and diplomacy. The other, which I think links to the idea that we don't need to do more but to work on the right things: I feel there is a lot of pressure to catch up with AI, and maybe a bit of imposter syndrome, because there is so much to learn and not enough hours in the day to learn it all. So where do you see the other stream, of let's focus on being better humans rather than just being very good with AI and maybe coding? How do you see the role of the ILO, and the UN in general, in supporting a human approach to AI investments? Thank you.
Juan Ivan Martin Lataix: Thank you. Thank you very much. We're going to take two or three questions, and then we'll try to answer them. Yes, from the One Goal Initiative for Governance.
Audience: This is a question to all of the presenters. How do you see the role of AI in managing a universal basic income? And maybe, if the job market is not supposed to be a market anymore, in managing who does what then, you know.
Juan Ivan Martin Lataix: Thank you. Good question. Anyone? Here.
Audience: Thank you. My question is about the role of the ILO in all that. Ms. Azzi mentioned that there will soon be a conference in which you will try to address how to regulate this. Do you expect that you can do something? Because the problem is no longer one that can be regulated through negotiation between labor organizations and their counterparts; there is a third actor in the process, government and global governance. So it's a more complex venture than what you have done until now.
Juan Ivan Martin Lataix: Thank you. Thank you very much. Another one here. Thank you.
Audience: For me, it's the challenge we have been facing lately of high unemployment, especially in developing countries. Where I come from, youth unemployment is somewhere around 40% right now. So the question is: how do we embrace AI while also combating unemployment in the midst of the AI era? How do we bridge the unemployment gap if we acknowledge that in this era there will be some job losses, some repetitive tasks that we need to get rid of?
Juan Ivan Martin Lataix: Thank you. Thank you. Let's take these four questions already, because we need to remember them, and then we'll take three or four more. And we will also check online. Yes, thank you for bridging the digital divide. The first one was about diplomacy and AI. Maybe, Sher Verick, since you have this diplomatic role?
Sher Verick: There are some interesting issues that have been discussed; I've heard from the ITU on the use of AI in diplomacy, so I would refer you to our ITU colleagues there. But I think, indeed, we are not racing to embrace AI to replace a human-centered approach. It wouldn't make any sense; we wouldn't have a clear justification for that either. Do you want me to stop at that question? There are others there. The second one was about universal basic income; do you want me to answer that one as well? The ILO advocates for universal social protection. We don't advocate for universal basic income specifically, but for universal social protection from childhood up to old-age pension. Of course, this is an ongoing debate. It has been going on for a couple of decades, particularly during that robotics period, the discussions 10, 15 years back: do we need UBI because people are going to lose their jobs and won't have anything else to do? Well, the fact is we're not seeing that happen, right? So we still very much focus on what's happening in the labor market, and on how social protection can support workers and their families around that. There's a long story behind it, but in short, that's our focus: social protection, how you get workers into decent work, how you support them and their families.
Juan Ivan Martin Lataix: If I can build on this one. No, no, thank you, very good question. For me, the ILO started more than 100 years ago, and one of the first regulations that emanated from it was to reduce work from seven days a week to six, and then we went down to five. So maybe, yes, AI for good: if we do it properly, with the quality side as we said before, maybe we end up as humans, as an entire planet, needing to work less to survive. Maybe the machines and the algorithms will do a lot of the things, and if that is true, then maybe this is conducive to working four days a week. We don't know. There are a lot of people and movies talking about dystopian futures, but maybe there is also a positive outcome, so maybe it will go in that direction. Then we have the question about the role of the ILO, and the question about developing countries and the impact on unemployed people.
Manal Azzi: Yeah, I could talk about the role of the ILO, but maybe first back to question one. It focused on diplomacy, but it also applies to a lot of public administrations and public organizations and the way they are working, and it's important not to blame AI for all of the changes, because we are facing a lot of restructuring and depletion of resources across many of these organizations, and tasks have been changing. We can see internally in the ILO that the need for admin roles, tasks, and staff has decreased as people do their own admin management. So there has been that shift; it is happening, and it will be reflected in a lot of the work we do. But also, as we said, while we can guess what the discussions, questions, and answers of some of our delegates will be a year from now, we can't forget the role of humans and their ability to be agile and to change their perspectives as the world changes, even politically. The world, as we know, can change overnight in a number of countries globally, so we need to keep up to date, and our trust and sense of responsibility need to apply even in the diplomatic world. On the second point, can the ILO achieve, as you asked, this dream of decent work in the platform economy? At least we know we have the mechanism for it. It is a process, and it's a process where we give the floor to worker organizations and their representatives, and to the employers; but as you said, governments play an important role, mainly ministries of labor. These are the people who come around the table, and we give them the time to do so, and the research and evidence to inform the decisions they are making.
So over the course of at least two years, even more, of preparation, we provide that platform and the necessary information to at least come up with something that could be a compromise. Not the best standard, sometimes, but a compromise of what could work and could support the implementation process and the policy changes we want to see at the workplace and national levels. I think the ILO is the right mechanism to do it. Can it achieve its ultimate goal? That is something we are observing as we go. The draft is already there: it was discussed in June, more questionnaires will now go out to our constituents, and a final draft will be discussed in June 2026.
Juan Ivan Martin Lataix: Thank you. Tom, do you want to take the last one?
Tom Wambeke: There are a few questions there, and I'll combine them. I also feel empathy for the imposter syndrome, because even as a specialist there are too many tools, too much information; I'm also overwhelmed by it. And that brings me to the point that maybe we have to focus on some of the essentials. I get a lot of critique from colleagues who say that ChatGPT and other generative AI tools are basically easing up the whole learning process, and people copy-paste ready-made answers for whatever they need to do. But bring it back to learning: what is learning? Learning is friction, learning is suffering, learning is having different viewpoints. And if you look into the field of AI, there are some super interesting tools that can help you reach that. I'm thinking about the whole emerging field of antagonistic AI, where the AI does not give you the answer but instead questions you: a kind of digital devil's advocate that critically questions everything you say. That is actually a perfect learning tool. That's why I said go back to the essentials of learning, which could also be teaching, which could be asking questions, and see what kind of AI-related tools could help you augment and accelerate the objectives you're trying to achieve. Then, bit by bit, the imposter syndrome will fade away.
Juan Ivan Martin Lataix: Excellent, thank you. Let me try to take a couple of the questions online, and for those of you who need to go, please feel free. I'm sorry. Okay. So: what have been some success stories in capacity building, particularly around AI literacy? Any lessons and challenges from member states? I don't know, from your experience with governments, any capacity building around AI literacy and lessons learned?
Yeah. On translating theory into practice: there is a whole field of growing use cases, and I think that's what we basically need to go through, because, as I mentioned in my presentation, everybody talks about AI, but nobody is actually doing it. So, in a way: what are the things that have been tested out? Let me give you one specific example. We were working with colleagues at the ILO who work on norms. We had an idea to use an AI tool on some very complicated legislative issues. People were immediately thinking: oh, you just write the requirements and then translate them into a chatbot that functions. But bit by bit we started to see the complexities, the difficulties out there. And that was a whole dialogue which led to increased AI literacy, but not necessarily to an AI solution. The fact that a whole group of people from different angles, from legal experts to IT experts, started to address in a joint language what they wanted to achieve: that, in my view, is a successful AI improvement, rather than showcasing the newest chatbot out there. It's a gradual process. And given the exponential change of the technologies out there, we need to continue that kind of interdisciplinary dialogue among the different stakeholders. That would be, for me, a very modest definition of success.
Juan Ivan Martin Lataix: Okay. Thank you. Would you like to take a – we have to go. So thank you very much, everybody. We will be happy to take more questions in the coffee break. Thank you.
Sher Verick: Yeah, sorry, because this is a very important question. The bottom line is that AI is neither the biggest challenge nor the biggest solution for youth unemployment in the African region, right? There are other critical factors, in terms of investment and trade, that are going to drive job creation. Then there is the question about the quality of jobs, et cetera. So I wouldn't want to say that AI is going to change that. The solutions will need to come from a broader macro, industrial, and sectoral perspective. But what is true is that we need to look at how the development of AI can benefit the region, not just the global north. Kenya, for example, has been very active, and Rwanda and others have been very active in trying to develop their own digital industries. So yes, look at how the region can benefit from it. But it is not going to be either the biggest challenge or the solution to the youth unemployment rates you see in the region. That requires investment, job creation, and a broader set of policies, would be my response. Thank you.
Juan Ivan Martin Lataix: Thank you. Thank you very much.
Juan Ivan Martin Lataix
Speech speed
169 words per minute
Speech length
3209 words
Speech time
1133 seconds
2.6 billion people still lack internet access globally, creating barriers to digital skills development
Explanation
Despite improvements in global internet connectivity, a significant portion of the world’s population remains without access to the internet, which creates fundamental barriers to developing digital skills. This digital divide is particularly pronounced in rural areas of developing countries.
Evidence
ITU data from end of last year showing 2.6 billion people without access; example from India where 900 million people in rural areas lack internet access, which is more than the combined population of Europe and the U.S.
Major discussion point
Digital divide and access challenges
Topics
Development | Infrastructure
Technology adoption speed outpaces training capacity, making it difficult for skills development to keep up
Explanation
The rapid pace of technological change, particularly in AI, creates challenges for training institutions and governments trying to upskill populations. By the time training programs are developed and implemented, the technology has often already evolved significantly.
Evidence
ChatGPT reached 100 million users in just a couple of months; training institutions report taking three years to develop new programs that are already outdated by the time they reach market
Major discussion point
Speed of technological change vs. training capacity
Topics
Development | Economic
Global north benefits more from AI while global south faces different challenges
Explanation
AI’s impact is primarily felt in knowledge work and white-collar jobs, which are more prevalent in high-income countries. The global south, with more agriculture and blue-collar work, faces less direct impact from AI alone, though this may change when combined with robotics.
Evidence
ILO research showing AI impact is mostly on knowledge workers and global north economies, while global south and blue collar workers are less affected by AI in isolation
Major discussion point
Unequal global impact of AI
Topics
Development | Economic
Most jobs will require reskilling by 2030, affecting billions of people worldwide
Explanation
The scale of the reskilling challenge is enormous, with the vast majority of jobs expected to require some form of retraining or upskilling within the next decade. This represents a massive global workforce transformation challenge.
Evidence
UNESCO report from end of 2023 stating that 9 out of 10 jobs will need to be re-skilled by 2030, affecting billions of people
Major discussion point
Massive scale of reskilling needs
Topics
Economic | Development
Agreed with
– Sher Verick
– Tom Wambeke
Agreed on
Transformation over replacement in AI impact
Women face disproportionate risk as they are overrepresented in clerical jobs prone to automation
Explanation
AI’s impact on employment is not gender-neutral, with women facing higher risks of job displacement because they are more likely to work in clerical and administrative roles that are susceptible to automation. This could exacerbate existing gender inequalities in the labor market.
Evidence
ILO research showing correlation between clerical jobs (prone to automation) and women’s employment, with data entry and similar administrative tasks being particularly at risk
Major discussion point
Gender implications of AI automation
Topics
Human rights | Economic
Three-tier digital skills approach needed: basic literacy for all, intermediate skills for specific industries, and advanced STEM skills
Explanation
Digital skills development should be structured in three levels: universal basic digital literacy (email, internet browsing), intermediate skills for industry-specific needs (digital marketing, social media), and advanced technical skills (data science, AI expertise). Each level serves different workforce needs and career paths.
Evidence
Examples include basic skills like sending emails and browsing internet for everyone, intermediate skills like digital marketing for specific industries, and advanced skills where companies like Microsoft and Google are paying premium wages for data scientists and AI experts
Major discussion point
Structured approach to digital skills development
Topics
Development | Economic
Disagreed with
– Tom Wambeke
Disagreed on
Approach to AI adoption in training and education
Training institutions struggle with speed of curriculum development and digital transformation requirements
Explanation
Educational institutions face multiple challenges including the slow pace of curriculum development relative to technological change, managing industry feedback and demands, and undergoing their own digital transformation while training others. They must also scale beyond traditional face-to-face instruction to reach millions of learners.
Evidence
Training institutions report taking three years to develop programs that are outdated upon release; need for mobile-based, bite-sized training during commuting; industry feedback on graduate skills gaps
Major discussion point
Institutional challenges in skills development
Topics
Development | Sociocultural
AI models are biased due to training data predominantly from global north and historical sources
Explanation
Current AI models suffer from significant bias because they are trained primarily on data from developed countries and historical sources dating back to the invention of the printing press. This creates systemic problems when these models are applied globally, as they don’t represent diverse regional or cultural perspectives.
Evidence
Training data predominantly from the global north and the ‘white man’s pen’ since the 1400s, when Gutenberg invented the printing press; difficulty in building models with region-specific data that is often not digitalized
Major discussion point
AI bias and representation issues
Topics
Human rights | Legal and regulatory
Agreed with
– Manal Azzi
Agreed on
Need for comprehensive regulatory frameworks
Algorithmic management risks dehumanizing work and removing human flexibility in decision-making
Explanation
The increasing use of algorithms to manage workers creates risks of dehumanization, where decisions are made purely based on productivity metrics without considering human circumstances. This can lead to unfair treatment and loss of the human element in workplace management.
Evidence
Example of Uber driver fired while in ambulance after accident because he didn’t deliver pizza on time; concerns about platform economy and algorithmic decision-making
Major discussion point
Dehumanization through algorithmic management
Topics
Human rights | Economic
Agreed with
– Sher Verick
– Manal Azzi
– Tom Wambeke
Agreed on
Human-centered approach to AI adoption
Privacy concerns arise as AI systems collect extensive personal and professional data
Explanation
AI systems like ChatGPT accumulate vast amounts of personal information through daily use, potentially knowing more about users than their families. Users often share sensitive personal and professional information without fully understanding the privacy implications or how this data is stored and used by private companies.
Evidence
Example of people using ChatGPT for personal advice about family issues, connecting dots from previous conversations months ago, exposing private data to companies
Major discussion point
Privacy and data protection in AI systems
Topics
Human rights | Legal and regulatory
Agreed with
– Manal Azzi
Agreed on
Need for comprehensive regulatory frameworks
Governments need long-term strategies linking skills development to economic planning
Explanation
Governments face challenges in developing comprehensive strategies that connect skills development to long-term economic planning. They need to forecast future workforce needs and ensure quality in training programs, but often lack the capacity to effectively manage and oversee these initiatives.
Evidence
Need to forecast how many doctors, lawyers, data scientists will be needed in 20 years; challenges in ensuring quality of learning through RFPs and RFQs with limited government capacity
Major discussion point
Government role in strategic skills planning
Topics
Legal and regulatory | Development
Sher Verick
Speech speed
183 words per minute
Speech length
2086 words
Speech time
682 seconds
AI will primarily augment jobs rather than replace them, with only 3.3% of global employment at risk of automation
Explanation
Contrary to fears of widespread job displacement, ILO research indicates that most AI impact will be augmentative rather than replacement-based. While one in four jobs are potentially exposed to AI, the vast majority will involve workers being enhanced by AI tools rather than replaced entirely.
Evidence
ILO research published in May showing 3.3% of global employment at risk of automation (around 130 million jobs), with much higher shares involving augmentation; higher impact in high-income countries versus low-income countries
Major discussion point
AI augmentation vs. replacement in employment
Topics
Economic | Development
Agreed with
– Juan Ivan Martin Lataix
– Tom Wambeke
Agreed on
Transformation over replacement in AI impact
AI alone won’t solve youth unemployment in developing countries – broader economic policies are needed
Explanation
While AI presents both opportunities and challenges, it is neither the primary cause nor the solution to high youth unemployment rates in regions like Africa. Addressing unemployment requires broader macro-economic, industrial, and sectoral policies focused on investment and job creation.
Evidence
Reference to 40% youth unemployment rates in developing countries; emphasis that solutions require investment, trade, and broader policy interventions beyond AI
Major discussion point
AI’s limited role in addressing structural unemployment
Topics
Economic | Development
Need to maintain human-centered approach rather than racing to embrace AI for efficiency alone
Explanation
The ILO advocates against rushing to adopt AI simply for efficiency gains, emphasizing the importance of maintaining human-centered approaches to work and development. The focus should be on how AI can support rather than replace human-centered practices.
Evidence
ILO’s advocacy for universal social protection rather than universal basic income; emphasis on decent work and supporting workers and families
Major discussion point
Human-centered approach to AI adoption
Topics
Human rights | Economic
Agreed with
– Juan Ivan Martin Lataix
– Manal Azzi
– Tom Wambeke
Agreed on
Human-centered approach to AI adoption
Manal Azzi
Speech speed
162 words per minute
Speech length
2797 words
Speech time
1035 seconds
AI and robotics remove workers from hazardous environments and enable predictive safety measures
Explanation
Automation and AI technologies have positive safety impacts by removing workers from dangerous situations such as high-temperature environments, confined spaces, and exposure to hazardous substances. Smart monitoring systems can also predict and prevent workplace accidents before they occur.
Evidence
Robotic arms handling metal melting at high temperatures; drones entering confined spaces and spraying pesticides; smart wearables detecting fall risks in construction with immediate medical team alerts; sensors predicting workplace hazards
Major discussion point
Positive safety impacts of AI and automation
Topics
Development | Infrastructure
Disagreed with
– Tom Wambeke
Disagreed on
Primary focus of AI implementation
New risks emerge from human-robot interaction, algorithmic management, and privacy concerns
Explanation
While AI and robotics offer safety benefits, they also introduce new risks including unreliable human-robot interactions, loss of worker control over their workspace, system malfunctions, and privacy concerns from extensive data collection. These risks span physical, organizational, and psychosocial dimensions.
Evidence
Exoskeletons requiring personalized design; system malfunctions in monitoring devices; excessive data collection beyond safety needs; virtual reality causing physical hazards and balance issues
Major discussion point
New risks from AI and robotics in workplace
Topics
Human rights | Legal and regulatory
Workers throughout the AI supply chain face safety and health challenges, from data annotators to electronic waste handlers
Explanation
The AI ecosystem creates safety and health risks for workers across the entire supply chain, including those who prepare data, moderate content, mine critical minerals, assemble technology, and handle electronic waste. These workers often lack adequate protection and support.
Evidence
Data annotators performing repetitive tasks exposed to toxic material; content moderators analyzing large amounts of data without psychosocial support; miners extracting cobalt, lithium, copper in dangerous conditions; electronic waste workers exposed to mercury and chemical substances in informal economy
Major discussion point
Supply chain worker safety in AI ecosystem
Topics
Human rights | Development
Agreed with
– Juan Ivan Martin Lataix
– Sher Verick
– Tom Wambeke
Agreed on
Human-centered approach to AI adoption
ILO is developing new labor standards for platform economy and algorithmic management
Explanation
The ILO is actively working on creating binding international instruments to address decent work in the platform economy, including protections against algorithmic management and digital workplace issues. This involves a multi-year process with worker, employer, and government representatives.
Evidence
Second discussion scheduled for June 2026 on platform economy instrument; Violence and Harassment Convention covering digital workplace harassment; ongoing international labor conference discussions
Major discussion point
Regulatory responses to digital work challenges
Topics
Legal and regulatory | Economic
Agreed with
– Juan Ivan Martin Lataix
Agreed on
Need for comprehensive regulatory frameworks
Tom Wambeke
Speech speed
183 words per minute
Speech length
3094 words
Speech time
1009 seconds
Individual upskilling must be complemented by institutional capacity development and organizational change management
Explanation
Effective AI adoption requires moving beyond individual training to institutional transformation and system-level capacity development. This involves organizational change management that addresses broader structural and cultural changes needed for AI integration.
Evidence
ITC-ILO’s approach moving from individual capacity to institutional capacity to system capacity development; emphasis on organizational change management rather than just technological change
Major discussion point
Holistic approach to AI capacity building
Topics
Development | Sociocultural
Agreed with
– Juan Ivan Martin Lataix
– Sher Verick
– Manal Azzi
Agreed on
Human-centered approach to AI adoption
Current AI adoption in training often represents ‘old stuff in new jackets’ rather than true transformation
Explanation
Many current AI implementations in education and training are simply digitized versions of existing processes rather than genuinely transformative approaches. True innovation requires doing new things in new ways, not just automating or augmenting existing practices.
Evidence
Examples of AI-powered automated grading and chatbots that don’t fundamentally change educational setup; criticism that these are practical add-ons rather than transformative changes
Major discussion point
Need for genuine transformation vs. superficial AI adoption
Topics
Sociocultural | Development
Agreed with
– Juan Ivan Martin Lataix
– Sher Verick
Agreed on
Transformation over replacement in AI impact
Disagreed with
– Juan Ivan Martin Lataix
Disagreed on
Approach to AI adoption in training and education
AI should be used to ask better questions and enable new forms of learning rather than just automate existing processes
Explanation
The most valuable application of AI in learning is not to provide easy answers but to help learners ask better questions and engage in more meaningful learning processes. This includes using AI tools that challenge and question learners rather than simply providing information.
Evidence
Concept of antagonistic AI that questions rather than answers; example of ‘digital queen’ that critically questions everything; emphasis on learning as friction and suffering rather than ease
Major discussion point
AI as tool for better questioning and learning
Topics
Sociocultural | Development
Disagreed with
– Manal Azzi
Disagreed on
Primary focus of AI implementation
AI works best when integrated with other technologies in an ecological approach rather than in isolation
Explanation
Effective AI implementation requires understanding it as part of a broader technological ecosystem that includes virtual reality, quantum computing, neurotechnology, and other emerging technologies. This intersectional approach is necessary to both maximize opportunities and mitigate risks.
Evidence
Examples of AI combined with VR for soft skills training; mention of upcoming wave including quantum computing and neurotechnology; emphasis on Cambrian explosion of AI offspring at intersections
Major discussion point
Ecological and intersectional approach to AI
Topics
Infrastructure | Development
Audience
Speech speed
163 words per minute
Speech length
582 words
Speech time
214 seconds
AI in diplomacy and public administration requires maintaining human judgment despite pattern recognition capabilities
Explanation
While AI can recognize patterns in diplomatic discourse and predict likely questions and responses, the dynamic nature of international relations and politics requires human judgment and adaptability. The world can change overnight politically, requiring responses that go beyond predictable patterns.
Evidence
Example of using ChatGPT to anticipate delegate questions and responses based on patterns; observation that diplomatic speeches often follow repetitive patterns
Major discussion point
Role of AI in diplomacy and governance
Topics
Legal and regulatory | Sociocultural
Agreements
Agreement points
Human-centered approach to AI adoption
Speakers
– Juan Ivan Martin Lataix
– Sher Verick
– Manal Azzi
– Tom Wambeke
Arguments
Algorithmic management risks dehumanizing work and removing human flexibility in decision-making
Need to maintain human-centered approach rather than racing to embrace AI for efficiency alone
Workers throughout the AI supply chain face safety and health challenges, from data annotators to electronic waste handlers
Individual upskilling must be complemented by institutional capacity development and organizational change management
Summary
All speakers emphasized the importance of keeping humans at the center of AI development and implementation, warning against purely efficiency-driven approaches that could dehumanize work or ignore worker welfare
Topics
Human rights | Economic | Development
Need for comprehensive regulatory frameworks
Speakers
– Juan Ivan Martin Lataix
– Manal Azzi
Arguments
AI models are biased due to training data predominantly from global north and historical sources
Privacy concerns arise as AI systems collect extensive personal and professional data
ILO is developing new labor standards for platform economy and algorithmic management
Summary
Both speakers agreed that current AI systems require regulation to address bias, privacy concerns, and workplace management issues, with the ILO actively working on new standards
Topics
Legal and regulatory | Human rights
Transformation over replacement in AI impact
Speakers
– Juan Ivan Martin Lataix
– Sher Verick
– Tom Wambeke
Arguments
Most jobs will require reskilling by 2030, affecting billions of people worldwide
AI will primarily augment jobs rather than replace them, with only 3.3% of global employment at risk of automation
Current AI adoption in training often represents ‘old stuff in new jackets’ rather than true transformation
Summary
Speakers agreed that AI’s primary impact will be transformative rather than replacement-based, requiring massive reskilling efforts but not leading to widespread job displacement
Topics
Economic | Development
Similar viewpoints
Both speakers highlighted the challenge of educational institutions keeping pace with rapid technological change, with curriculum development taking years while technology evolves in months
Speakers
– Juan Ivan Martin Lataix
– Tom Wambeke
Arguments
Technology adoption speed outpaces training capacity, making it difficult for skills development to keep up
Training institutions struggle with speed of curriculum development and digital transformation requirements
Topics
Development | Sociocultural
Both speakers recognized that AI and automation create disproportionate impacts on vulnerable groups, particularly women, and introduce new forms of workplace risks
Speakers
– Juan Ivan Martin Lataix
– Manal Azzi
Arguments
Women face disproportionate risk as they are overrepresented in clerical jobs prone to automation
New risks emerge from human-robot interaction, algorithmic management, and privacy concerns
Topics
Human rights | Economic
Both speakers emphasized that AI adoption should focus on enhancing human capabilities and learning rather than simply pursuing efficiency or automation
Speakers
– Sher Verick
– Tom Wambeke
Arguments
Need to maintain human-centered approach rather than racing to embrace AI for efficiency alone
AI should be used to ask better questions and enable new forms of learning rather than just automate existing processes
Topics
Human rights | Sociocultural | Development
Unexpected consensus
Positive potential of AI for workplace safety
Speakers
– Manal Azzi
– Juan Ivan Martin Lataix
Arguments
AI and robotics remove workers from hazardous environments and enable predictive safety measures
Three-tier digital skills approach needed: basic literacy for all, intermediate skills for specific industries, and advanced STEM skills
Explanation
Despite focusing on risks and challenges, there was unexpected consensus on AI’s positive potential for improving workplace safety and on the structured approach needed for skills development, reflecting a balanced perspective rather than a purely cautionary stance
Topics
Development | Infrastructure | Economic
Global inequality in AI impact and benefits
Speakers
– Juan Ivan Martin Lataix
– Sher Verick
– Manal Azzi
Arguments
Global north benefits more from AI while global south faces different challenges
AI alone won’t solve youth unemployment in developing countries – broader economic policies are needed
Workers throughout the AI supply chain face safety and health challenges, from data annotators to electronic waste handlers
Explanation
All speakers unexpectedly converged on recognizing that AI’s benefits and risks are unevenly distributed globally, with developing countries facing different challenges and often bearing hidden costs in the AI supply chain
Topics
Development | Human rights | Economic
Overall assessment
Summary
The speakers demonstrated strong consensus on maintaining human-centered approaches to AI, the need for comprehensive regulation, and the transformative rather than replacement nature of AI’s impact on work. They agreed on the challenges of keeping skills development pace with technological change and the unequal global distribution of AI’s benefits and risks.
Consensus level
High level of consensus among ILO speakers on core principles and challenges, with complementary expertise areas reinforcing shared institutional perspective. This strong alignment suggests coordinated organizational approach to AI governance and indicates potential for effective policy development and implementation in the international labor context.
Differences
Different viewpoints
Approach to AI adoption in training and education
Speakers
– Tom Wambeke
– Juan Ivan Martin Lataix
Arguments
Current AI adoption in training often represents ‘old stuff in new jackets’ rather than true transformation
Three-tier digital skills approach needed: basic literacy for all, intermediate skills for specific industries, and advanced STEM skills
Summary
Tom Wambeke criticizes current AI implementations as superficial automation of existing processes, advocating for genuine transformation, while Juan Ivan Martin Lataix presents a structured, tiered approach to digital skills development that could be seen as more conventional
Topics
Development | Sociocultural
Primary focus of AI implementation
Speakers
– Tom Wambeke
– Manal Azzi
Arguments
AI should be used to ask better questions and enable new forms of learning rather than just automate existing processes
AI and robotics remove workers from hazardous environments and enable predictive safety measures
Summary
Tom emphasizes AI’s potential for transformative learning and questioning, while Manal focuses on practical safety applications and risk mitigation in workplace settings
Topics
Development | Infrastructure
Unexpected differences
Role of efficiency in AI adoption
Speakers
– Tom Wambeke
– Juan Ivan Martin Lataix
Arguments
AI should be used to ask better questions and enable new forms of learning rather than just automate existing processes
Technology adoption speed outpaces training capacity, making it difficult for skills development to keep up
Explanation
While both work for ILO training institutions, Tom explicitly criticizes efficiency-focused AI adoption (‘faster, stronger, better’ approach) while Juan’s presentation implicitly accepts the need to keep up with technological speed, creating an unexpected philosophical divide within the same organization
Topics
Development | Sociocultural
Overall assessment
Summary
The speakers show broad consensus on AI’s transformative impact on work and the need for comprehensive responses, but differ on implementation approaches and priorities
Disagreement level
Low to moderate disagreement level. The speakers are largely aligned on fundamental issues but show nuanced differences in emphasis and methodology. The disagreements are more about approach and priorities rather than fundamental opposition, which is typical for colleagues from the same organization working on related but distinct aspects of AI and work policy.
Takeaways
Key takeaways
AI will primarily augment jobs rather than replace them, with only 3.3% of global employment at risk of automation, contradicting fears of a ‘job apocalypse’
A three-tier digital skills framework is needed: basic digital literacy for all, intermediate skills for specific industries, and advanced STEM skills for specialized roles
The digital divide remains a fundamental barrier, with 2.6 billion people lacking internet access, creating inequitable access to AI benefits
Women face disproportionate risk from AI automation as they are overrepresented in clerical jobs prone to automation
AI models contain inherent bias due to training data predominantly from the global north and historical sources dating back centuries
Current AI adoption in training often represents incremental improvements rather than transformational change
Workplace safety can be enhanced through AI and robotics removing workers from hazardous environments, but new risks emerge from human-robot interaction and algorithmic management
The entire AI supply chain involves workers facing safety and health challenges, from data annotators to electronic waste handlers
Speed of technological change outpaces institutional capacity for curriculum development and skills training
AI should be integrated with other technologies in an ecological approach rather than implemented in isolation
Resolutions and action items
ILO is developing new labor standards for the platform economy through ongoing constituent discussions, with a second discussion on the instrument scheduled for June 2026
ILO Observatory on AI and Work will continue research on algorithmic management, digital labor platforms, data governance, and skills matching
Training institutions need to undergo digital transformation to scale up reskilling efforts for millions of workers
Governments should develop long-term strategies linking skills development to economic planning and promote lifelong learning
Organizations should create safe experimentation zones for AI learning without fear of asking ‘dumb questions’
Focus on using AI to ask better questions and enable new forms of learning rather than just automating existing processes
Unresolved issues
How to effectively regulate algorithmic management while maintaining workplace flexibility and human decision-making
How to address the fundamental challenge of AI model bias when training data from diverse regions is limited or non-digitized
How to balance the speed of technological change with the time needed for quality skills development and institutional adaptation
How to ensure quality assurance in partially automated teaching and assessment systems
How to manage privacy concerns as AI systems collect extensive personal and professional data
How to prevent AI from automating ineffective practices rather than transforming them
How to address youth unemployment in developing countries where AI impact may be limited compared to broader economic factors
How to maintain human-centered approaches in diplomacy and public administration while leveraging AI capabilities
How to scale individual AI literacy training to institutional and systemic capacity development
Suggested compromises
Universal social protection rather than universal basic income as a response to AI-driven job displacement
Gradual implementation of AI in workplace safety with humans remaining at the center of decision-making rather than complete automation
Balanced approach between efficiency gains and maintaining human touch in work processes
Interdisciplinary dialogue among different stakeholders (legal experts, IT experts, workers, employers) to develop AI solutions rather than top-down implementation
Focus on augmentation and transformation of existing roles rather than wholesale job replacement
Combination of face-to-face and AI-enhanced training methods rather than complete digitalization
Regulation that allows innovation while protecting worker rights and privacy
Emphasis on asking better questions with AI assistance rather than seeking ready-made answers that bypass learning friction
Thought provoking comments
The UNESCO did a report end of 2023 saying that out of 10 jobs, 9 will need to be re-skilled by 2030. This is billions of people. So the size of these challenges is enormous.
Speaker
Juan Ivan Martin Lataix
Reason
This statistic reframes the AI skills challenge from a technical problem to a massive human development crisis, highlighting the unprecedented scale of transformation needed across the global workforce.
Impact
This comment established the foundational urgency for the entire discussion, setting the tone that this isn’t just about technology adoption but about a fundamental restructuring of human capabilities on a global scale. It influenced subsequent speakers to address systemic rather than incremental solutions.
At some point, ChatGPT knows much more about you than your family. Because not only do you use it for private things, but for personal and corporate things… People use it for all kinds of things. And sometimes maybe not very consciously knowing that they are exposing to private companies a lot of personal information.
Speaker
Juan Ivan Martin Lataix
Reason
This observation cuts through technical discussions to reveal the intimate and unconscious nature of AI integration into personal lives, highlighting how users may be unknowingly surrendering unprecedented levels of personal data.
Impact
This comment shifted the discussion from viewing AI as an external tool to recognizing it as an entity that develops intimate knowledge of users, leading to deeper considerations about privacy, regulation, and the human-AI relationship throughout the session.
We are having to shift, and when we see a lot of these technologies taking over some of the tasks we’ve done, we do sense that we are losing control of our workspace and our jobs and what we’re actually meant to be doing.
Speaker
Manal Azzi
Reason
This comment captures the psychological and existential dimension of AI adoption – the loss of agency and identity that workers experience, moving beyond technical capabilities to human meaning and purpose.
Impact
This insight introduced the critical theme of human agency and workplace identity, influencing the discussion to consider not just what AI can do, but what it means for human dignity and self-determination in work environments.
It’s important to consider the whole supply chain. Here are just the final parts… those miners excavating critical minerals… factory workers that actually assemble all this technology… the electronic waste, the business of electronic waste… people are exposed to mercury and so many different other substances.
Speaker
Manal Azzi
Reason
This comment dramatically expanded the scope of AI’s impact by revealing the hidden human costs in the supply chain, challenging the clean, digital narrative of AI with the reality of environmental and labor exploitation.
Impact
This observation fundamentally broadened the discussion from AI’s impact on knowledge workers to include global supply chains and environmental justice, forcing participants to consider AI’s full lifecycle impact on human welfare.
Intelligence as the ability to adapt to change. For me, that’s also actually an excellent definition what learning should be. Learning is also adapting to change… when the rate of change outside an organization is greater than the rate of change inside, the end is near.
Speaker
Tom Wambeke
Reason
This redefinition of intelligence and learning reframes the entire AI skills challenge from acquiring specific technical skills to developing adaptive capacity, offering a more fundamental approach to preparing for technological change.
Impact
This philosophical reframing influenced the discussion to move beyond specific AI tools toward developing organizational and individual adaptability, shifting focus from ‘what to learn’ to ‘how to keep learning.’
What I currently see is basically new stuff or old stuff in new jackets… But do they really change something in my whole educational setup? How can we really transform learning and training that we are creating an added value? And not just, let’s say, substituted by a new technology or augmented a little bit.
Speaker
Tom Wambeke
Reason
This critique challenges the assumption that technological adoption equals innovation, demanding deeper transformation rather than superficial digitization of existing practices.
Impact
This comment prompted critical reflection on whether current AI initiatives represent genuine transformation or merely technological substitution, elevating the discussion from implementation tactics to fundamental questions about educational innovation.
What if we would use AI to ask better questions? It’s a question that I always share with my colleagues before we start a discussion.
Speaker
Tom Wambeke
Reason
This reframes AI’s role from providing answers to enhancing human inquiry, suggesting a more collaborative and intellectually stimulating relationship between humans and AI systems.
Impact
This insight shifted the conversation from AI as a replacement tool to AI as an intellectual partner, influencing how participants considered the future of human-AI collaboration in learning and problem-solving.
The biggest risk of AI is that it would automate ineffective practice… feeding an AI the entire internet does not make you a teacher. Teaching or training is almost the art of assisting discovery.
Speaker
Tom Wambeke
Reason
This comment distinguishes between information processing and genuine human expertise, highlighting that effective teaching involves complex human skills that cannot be replicated through data processing alone.
Impact
This observation grounded the discussion in the irreplaceable value of human expertise and relationship-building, countering technological determinism and emphasizing the continued centrality of human skills in education and development.
Overall assessment
These key comments collectively transformed what could have been a technical discussion about AI implementation into a profound examination of human adaptation, dignity, and purpose in the age of artificial intelligence. The speakers moved the conversation through multiple levels – from individual skills to systemic transformation, from technical capabilities to human meaning, and from local implementation to global supply chains. The most impactful comments challenged participants to think beyond immediate technological solutions toward fundamental questions about human agency, organizational transformation, and the kind of future we want to create with AI. The discussion evolved from ‘How do we use AI?’ to ‘How do we remain human while working with AI?’ – a much more sophisticated and necessary conversation for policymakers and practitioners.
Follow-up questions
How do you see that AI can influence the UN in the area of diplomacy?
Speaker
Melissa (CTBTO Vienna)
Explanation
This explores the potential impact of AI on diplomatic processes, including pattern recognition in speeches and anticipating delegate questions, which could transform how diplomatic work is conducted.
How do you see the role of AI in UBI management?
Speaker
Audience member from One Goal initiative
Explanation
This addresses the intersection of AI automation and universal basic income policies, particularly relevant as job markets may be transformed by AI technologies.
How do we embrace AI in combating unemployment in developing countries with high youth unemployment rates?
Speaker
Audience member from developing country context
Explanation
This is critical for understanding how AI can be leveraged positively in regions with 40% youth unemployment rather than exacerbating job losses.
How do we quality-assure partially automated teaching and assessment?
Speaker
Tom Wambeke
Explanation
This addresses a fundamental challenge in educational technology as AI becomes integrated into learning systems, requiring new frameworks for maintaining educational standards.
How do we curate and share knowledge to build the right and responsible AI?
Speaker
Tom Wambeke
Explanation
This focuses on the ethical development and deployment of AI systems, particularly important for training institutions and capacity building organizations.
What would be your intelligent what-if question for the next 50 years regarding AI?
Speaker
Tom Wambeke
Explanation
This encourages long-term strategic thinking about AI’s future impact, moving beyond current applications to anticipate transformational changes.
What can we learn from AI about human learning?
Speaker
Tom Wambeke
Explanation
This explores the philosophical and practical implications of AI for understanding and improving human learning processes.
With machine-human interaction, who are the new actors and partners in AI learning and training?
Speaker
Tom Wambeke
Explanation
This addresses the changing ecosystem of educational stakeholders as AI becomes integrated into learning environments.
Need for more research on quantifying injuries/accidents prevented or caused by technology introduction
Speaker
Manal Azzi
Explanation
There is insufficient global data on the safety impact of AI and automation technologies, making it difficult to assess their true occupational health effects.
How to develop AI models that are not biased, particularly for regions outside the global north
Speaker
Juan Ivan Martin Lataix
Explanation
Current AI models are trained primarily on data from the global north, creating bias issues that will take significant time and resources to address.
How to balance human-centered approaches with AI adoption pressures
Speaker
Melissa (CTBTO Vienna)
Explanation
This addresses the tension between rapid AI adoption and maintaining human-focused approaches in work and learning environments.
Success stories in capacity building around AI literacy and lessons learned from member states
Speaker
Online participant
Explanation
This seeks practical examples and best practices for implementing AI literacy programs at national and organizational levels.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.