WSIS Action Line C2 Information and communication infrastructure
8 Jul 2025 11:30h - 12:30h
Session at a glance
Summary
This discussion focused on the role of artificial intelligence and emerging technologies in advancing ICT infrastructure development, particularly in the context of WSIS Action Line C2. The session brought together experts from academia, government regulators, industry, and international organizations to explore how AI can bridge digital divides and enhance connectivity planning.
Archana Gulati from ITU-D opened by emphasizing AI’s potential to accelerate reliable and inclusive ICT infrastructure development, particularly for underserved communities. She highlighted AI’s capabilities in smart infrastructure planning, operational cost reduction, and network optimization while stressing the need for ethical and equitable implementation frameworks. Renata Figueiredo from Brazil’s telecommunications regulator Anatel shared her country’s approach to AI governance, including regulatory impact assessments and partnerships with academic institutions to ensure responsible AI deployment in telecom services.
Industry representative Gonzalo Suardiaz from Ericsson addressed a critical challenge in AI implementation: the “garbage in, garbage out” problem, where poor data quality leads to unreliable results. He outlined strategies for ensuring data quality in connectivity planning platforms, including standardization, governance frameworks, and validation through machine learning. Sandor Farkas from ITU demonstrated practical AI applications in infrastructure mapping, specifically using computer vision to detect cell towers in satellite imagery for coverage analysis.
From an academic perspective, Aleksandra Jastrzebska warned against over-reliance on AI tools, citing research showing cognitive decline when students depend too heavily on AI for thinking tasks. She emphasized that AI should challenge rather than replace human minds. Joshua Ku from GitHub concluded by advocating for open-source approaches to AI development, explaining how the ITU has embraced open-source practices to accelerate innovation through community collaboration. The discussion reinforced that while AI offers powerful tools for infrastructure development, success requires clean data, ethical frameworks, human oversight, and collaborative partnerships across sectors.
Keypoints
## Overall Purpose/Goal
This discussion was a panel session focused on the role of emerging technologies and artificial intelligence in advancing Action Line C2 within the World Summit on the Information Society (WSIS) framework. The session aimed to explore how AI can be leveraged to develop reliable, inclusive, and sustainable ICT infrastructure, particularly for underserved and remote communities, while addressing associated challenges and best practices.
## Major Discussion Points
– **AI as an Infrastructure Planning Tool**: Multiple speakers emphasized AI’s potential to optimize ICT infrastructure deployment through smart planning, analyzing geospatial and demographic data, enabling dynamic spectrum allocation, and facilitating predictive maintenance. The ITU’s Connectivity Planning Platform (CPP) was highlighted as a practical example of AI-driven infrastructure planning.
– **Data Quality and the “Garbage In, Garbage Out” Problem**: A significant focus was placed on the critical importance of clean, standardized, and well-governed data for AI systems. Speakers discussed the need for data standards, governance frameworks, validation mechanisms, and crowdsourcing feedback to ensure AI tools provide accurate and reliable results for infrastructure planning.
– **Ethical AI Implementation and Regulatory Frameworks**: The discussion covered the necessity of inclusive policy frameworks to ensure ethical and equitable AI deployment. Brazil’s regulatory approach through Anatel was presented as a case study, emphasizing transparency, privacy protection, cybersecurity, and alignment with international standards like UNESCO’s AI ethics recommendations.
– **Open Source as a Catalyst for Innovation**: The session explored how open source development can accelerate AI and infrastructure solutions by leveraging global developer communities. GitHub’s collaboration with ITU was presented as an example of how organizations can embrace open source while maintaining security and managing community contributions.
– **Academic Perspectives on AI Limitations**: A critical examination of AI’s impact on learning and research was presented, highlighting concerns about “cognitive debt” when humans over-rely on AI tools. This included examples of AI-generated academic papers and emphasized that AI should augment rather than replace human thinking and decision-making.
## Overall Tone
The discussion maintained a professional and collaborative tone throughout, with speakers presenting both opportunities and challenges in a balanced manner. The tone was optimistic about AI’s potential while remaining realistic about implementation challenges. There was a strong emphasis on inclusivity, ethical considerations, and the importance of human-centered approaches. The session fostered knowledge-sharing among diverse stakeholders (government, academia, industry, and international organizations) and maintained a forward-looking perspective focused on practical solutions and partnerships.
Speakers
– **Archana G. Gulati**: Speaking on behalf of Dr. Cosmas Luckyson Zavazava, Director of the Telecommunications Development Bureau at ITU-D
– **Gonzalo Suardiaz**: Program manager within digital inclusion and digital education at Ericsson, working on the Connect2Learn program and connectivity planning platform (CPP), 17 years’ experience in mobile networks from 2G to 5G
– **Renata Figueiredo Santoyo**: Regulation expert in international affairs at Anatel (Brazilian telecommunications regulator)
– **Sandor Farkas**: Expert on geospatial artificial intelligence at ITU, specializing in AI tools for ICT infrastructure mapping
– **Aleksandra Jastrzebska**: Junior mapping expert at ITU, recent graduate from Universitat Jaume I, academic perspective on AI and emerging technologies, researcher in generative AI for map generation
– **Joshua Ku**: Senior solution architect at GitHub, expert on open source software development and community management
**Additional speakers:**
– **Walid Mahmoudli**: Head of FNS (Future Networks and Spectrum Division) within BDT – mentioned as participating remotely but connection could not be established
Full session report
# Report: AI and Emerging Technologies in ICT Infrastructure Development – WSIS Action Line C2 Panel Discussion
## Executive Summary
This panel discussion examined the role of artificial intelligence and emerging technologies in advancing ICT infrastructure development under WSIS Action Line C2. The session brought together experts from ITU-D, industry, national regulators, and the open-source community to explore AI’s potential in bridging digital divides while addressing implementation challenges including data quality, ethical deployment, and human oversight requirements.
A planned remote presentation by Walid Mahmoudli from ITU-D’s Future Networks and Spectrum Division could not proceed due to technical connectivity issues. The session was moderated by Gonzalo Suardiaz from Ericsson, who also presented industry perspectives alongside his moderating role.
## Opening Framework: AI as Infrastructure Bridge
**Archana G. Gulati**, representing Dr. Cosmas Luckyson Zavazava from ITU-D, opened the discussion by positioning AI as a transformative tool for ICT infrastructure development. She emphasized that “AI should be a bridge, not a barrier, to inclusive digital infrastructure,” establishing the session’s focus on equitable technology deployment.
Gulati outlined AI’s applications across the infrastructure lifecycle, including smart site selection for mobile towers, dynamic spectrum allocation, predictive maintenance, and real-time network monitoring. She highlighted AI’s potential to reduce operational costs while enhancing network efficiency, making infrastructure deployment more economically viable in underserved areas.
The ITU-D representative stressed that AI deployment must be “rights-based, secure and human-centric with multi-stakeholder engagement.” She emphasized the need for inclusive policy frameworks to ensure ethical AI use and address concerns about algorithmic bias and transparency. Gulati concluded by highlighting the importance of human capacity-building and AI literacy for public authorities.
## Industry Perspective: Data Quality as Foundation
**Gonzalo Suardiaz** from Ericsson, serving as both moderator and industry representative, emphasized the critical importance of data quality in AI implementation. Drawing from his 17 years of experience in mobile networks from 2G to 5G, he introduced the “garbage in, garbage out” (GIGO) principle as fundamental to AI success.
“No matter how good your algorithms, no matter how good your tools or your models are, you will get bad results if the input data is bad,” Suardiaz stated, identifying data quality as the primary challenge in AI deployment for infrastructure planning.
He outlined Ericsson’s approach through their Connect2Learn programme and connectivity planning platform (CPP), emphasizing the need for data standards and schemas across all infrastructure data types. Suardiaz described how AI can be used to validate data quality by cross-referencing different datasets and identifying inconsistencies, while stressing the importance of robust data governance frameworks.
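As an illustration of the cross-referencing idea described above, the following minimal sketch (editorial, not code from the session) flags records whose attributes disagree across two datasets; the records, coordinates and distance threshold are hypothetical stand-ins for the CPP’s real inputs.

```python
# Minimal sketch of cross-dataset validation: flag records whose attributes
# disagree across datasets. All names, coordinates and thresholds below are
# hypothetical illustrations, not CPP data.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Dataset 1: school records supplied by a government (hypothetical).
schools = [
    {"id": "sch-001", "lat": -1.2921, "lon": 36.8219, "reported_4g": True},
    {"id": "sch-002", "lat": 0.0, "lon": 0.0, "reported_4g": True},  # suspicious "null island" geocode
]

# Dataset 2: known cell-site locations from a second source (hypothetical).
cell_sites = [{"lat": -1.30, "lon": 36.82}, {"lat": -1.28, "lon": 36.80}]

MAX_PLAUSIBLE_4G_KM = 10.0  # assumption: 4G coverage is unlikely far beyond ~10 km from a site

for s in schools:
    # Rule 1: (0, 0) coordinates usually mean a missing or defaulted geocode.
    if s["lat"] == 0.0 and s["lon"] == 0.0:
        print(f"{s['id']}: coordinates look like a placeholder, needs review")
        continue
    # Rule 2: a 4G claim far from every known site contradicts the site dataset.
    nearest_km = min(haversine_km(s["lat"], s["lon"], c["lat"], c["lon"]) for c in cell_sites)
    if s["reported_4g"] and nearest_km > MAX_PLAUSIBLE_4G_KM:
        print(f"{s['id']}: reports 4G but nearest known site is {nearest_km:.1f} km away")
```

In practice, rules of this kind would run over full national datasets and feed a review queue rather than printing warnings, but the validation logic is the same.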
The Ericsson representative concluded that multi-stakeholder collaboration is essential for making AI meaningful beyond being merely a powerful tool.
## Regulatory Approach: Brazil’s AI Governance Framework
**Renata Figueiredo Santoyo** from Brazil’s telecommunications regulator Anatel presented the country’s approach to AI governance in telecommunications. She outlined Brazil’s regulatory impact assessment process for AI in telecom services, emphasizing alignment with UNESCO’s AI ethics recommendations and the recent BRICS declaration on AI governance.
Santoyo discussed how AI and 5G technologies increase network complexity and associated cybersecurity risks, requiring adaptive regulatory frameworks. She highlighted Anatel’s partnership with ITA university to examine AI’s impact on telecom regulation, cybersecurity, and consumer rights.
The Brazilian regulator emphasized that strong partnerships between regulators, academia, and the private sector are essential for responsible AI deployment while maintaining consumer protection and service quality standards.
## Technical Implementation: AI in Infrastructure Mapping
**Sandor Farkas**, ITU’s expert on geospatial artificial intelligence, demonstrated practical AI applications in ICT infrastructure mapping. His presentation focused on using YOLO11 object detection to identify cell towers in satellite imagery for coverage analysis and infrastructure planning.
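For readers unfamiliar with the tooling, the following is a minimal sketch of the typical Ultralytics YOLO11 train-and-predict loop for oriented bounding boxes; the dataset YAML, checkpoint name and image path are placeholder assumptions, not the ITU’s actual pipeline.

```python
# Minimal sketch of the Ultralytics YOLO11 workflow described in the session:
# fine-tune a detector, then run it on new imagery. The dataset config,
# weights name and image path are placeholders, not ITU assets.
from ultralytics import YOLO

# Start from a pretrained oriented-bounding-box checkpoint and fine-tune it
# on a labelled cell-tower dataset described by a YOLO-format YAML file.
model = YOLO("yolo11n-obb.pt")
model.train(data="cell_towers.yaml", epochs=100, imgsz=640)

# Run the trained model on a new satellite tile; each result carries the
# predicted oriented boxes, classes and confidences, which can then be
# post-processed into a vector layer for GIS analysis.
results = model.predict("satellite_tile.png", conf=0.25)
for r in results:
    print(r.obb.xyxyxyxy, r.obb.cls, r.obb.conf)
```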
Farkas explained the technical challenges of AI model training, including dataset preparation, labeling, and validation using precision and recall metrics. He discussed the trade-offs involved in balancing these metrics, noting that accepting false positives that can be post-validated may be preferable to missing actual infrastructure.
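The trade-off he described can be made concrete with a small worked example; the counts below are invented purely for illustration.

```python
# Worked numeric sketch of the metrics discussed: precision, recall and
# their harmonic mean (F1). Counts are made up to illustrate the trade-off
# between missing towers and accepting post-validatable false alarms.
true_positives = 80    # towers correctly detected
false_positives = 30   # detections that are not towers (can be post-validated)
false_negatives = 10   # real towers the model missed (silently lost)

precision = true_positives / (true_positives + false_positives)   # 0.727
recall = true_positives / (true_positives + false_negatives)      # 0.889
f1 = 2 * precision * recall / (precision + recall)                # 0.800

print(f"precision={precision:.3f} recall={recall:.3f} F1={f1:.3f}")
```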
The ITU expert described ongoing work to extend AI object detection datasets and develop tools for smooth data pipeline implementation, emphasizing the iterative nature of AI development and the need for continuous improvement.
## Academic Perspective: Human-AI Interaction Concerns
**Aleksandra Jastrzebska**, a junior mapping expert at ITU, raised important concerns about AI’s impact on human cognitive development. She introduced the concept of “cognitive debt,” referencing MIT research that showed reduced memory retention when students relied heavily on ChatGPT compared to traditional learning methods.
Jastrzebska warned about AI-generated academic content potentially compromising scientific integrity without proper human oversight. She emphasized that “AI is calculating, not thinking – it processes data through mathematical operations rather than genuine understanding.”
Her key message was that “AI doesn’t need to replace our minds. It should challenge them,” advocating for humans to think “with the help of AI, but not letting it think for us.” This perspective on maintaining human intellectual agency resonated throughout the session.
## Open Source Innovation: Collaborative Development
**Joshua Ku** from GitHub concluded the panel by demonstrating how open-source approaches can accelerate AI and infrastructure solutions. He presented statistics showing over 150 million developers making more than 1 billion contributions to open-source projects in the past year.
Ku highlighted that even major technology companies like Google make their core products (such as Chrome) fully open source, challenging assumptions about competitive advantage and demonstrating how openness can accelerate innovation.
The GitHub representative discussed ITU’s partnership with GitHub to open-source software tools, outlining best practices including proper licensing, code preparation, security scanning, and community management. He emphasized how open-source communities can extend software capabilities beyond original visions through collective innovation.
## Key Themes and Consensus
Several important themes emerged from the discussion:
**Human-Centric AI**: All speakers agreed that AI should augment rather than replace human decision-making, requiring proper oversight and ethical frameworks.
**Data Quality Imperative**: Speakers consistently emphasized that high-quality, well-governed data is essential for effective AI systems.
**Multi-Stakeholder Collaboration**: Strong agreement emerged on the necessity of partnerships across government, academia, private sector, and civil society.
**Practical Implementation Focus**: The discussion balanced AI potential with realistic assessment of implementation challenges and prerequisites.
## Practical Applications Highlighted
The session featured several concrete AI applications:
– **Connectivity Planning Platform**: The AI-driven connectivity planning platform (CPP) developed by Ericsson in collaboration with ITU, with an MVP presentation scheduled for the GIGA Connectivity Forum, a first version due in October 2025, and general availability planned for June 2026
– **Infrastructure Detection**: ITU’s use of YOLO11 for cell tower identification in satellite imagery
– **Regulatory Assessment**: Brazil’s systematic approach to AI impact assessment in telecommunications
– **Open Source Tools**: ITU-GitHub collaboration for developing and sharing infrastructure planning tools
## Conclusion
The panel discussion demonstrated a balanced understanding of AI’s role in ICT infrastructure development, combining technological optimism with practical wisdom about implementation challenges. The strong emphasis on data quality, human oversight, and multi-stakeholder collaboration provides a foundation for responsible AI deployment in infrastructure development.
The session successfully addressed WSIS Action Line C2 objectives by exploring how AI can accelerate progress toward universal connectivity while ensuring that technology serves human development goals. The integration of perspectives from international organizations, regulators, industry, and open-source communities created a comprehensive examination of both opportunities and challenges in AI-driven infrastructure development.
Most importantly, the discussion established that successful AI implementation requires not just technical excellence but also ethical frameworks, quality data governance, and collaborative partnerships to ensure AI serves as a bridge to inclusive digital infrastructure.
Session transcript
Archana G. Gulati: Distinguished guests, esteemed colleagues, ladies and gentlemen, on behalf of Dr. Cosmas Luckyson Zavazava, Director of the Telecommunications Development Bureau, it is my pleasure to address you and to set the scene for this important discussion. As you know, AI is a tool with the potential to accelerate the development of reliable, inclusive and sustainable ICT infrastructure. At ITU-D, we are especially excited about its potential to help deliver meaningful connectivity, especially in underserved and remote communities. We believe that AI can be used to support the implementation of the Kigali Action Plan and BDT strategic priority to bridge the digital infrastructure gap. AI enables smart infrastructure planning by analyzing geospatial, demographic and economic data so that we can optimize where and how to deploy connectivity solutions. AI can also help us to reduce operational costs and enhance network efficiency in both urban and rural settings through predictive analysis and automation. And AI can facilitate real-time monitoring and maintenance, increasing infrastructure resilience and service continuity in disaster-prone or hard-to-reach areas. We also believe that AI has a key role to play in network rollout planning. For example, by enabling smarter site selection for mobile towers or fiber routes, AI will facilitate dynamic spectrum allocation to increase capacity where it is most needed, and also energy-efficient network management. That is particularly important both for environmental, sustainability and rural power-constrained deployments. That is why we are keen to apply AI in various ITU-supported initiatives in partnerships with member states and private sector players, including pilot projects and toolkits. Ladies and gentlemen, while AI is a powerful tool indeed, we also need inclusive policy frameworks to ensure that its use is both ethical and equitable. Key considerations include bias and transparency in AI algorithms, as well as data governance and privacy, especially for vulnerable populations. It is also important to give due consideration to workforce deployment to ensure that those working for public authorities are AI literate. Above all, AI deployment must be rights-based, secure and human-centric, with policies shaped through multi-stakeholder engagement. In the same spirit, we must ensure that the adoption of AI in infrastructure does not leave anyone behind. That requires strong partnerships and human capacity-building efforts. This will include the ITU Academy platform, which promotes the sharing of information and education in an affordable manner, as well as partnerships with governments to define national digital strategies and with the private sector to co-develop tools and models. It is also essential to include academia and civil society in assessing impact and ensuring inclusivity. So, BDT encourages open innovation systems and collaborative platforms for knowledge-sharing and capacity-building. AI should be a bridge, not a barrier, to inclusive digital infrastructure. And once again, I would like to reiterate that we must ensure that no one is left behind in the next wave of digital transformation. Thank you. With these words, I hand over back to you.
Gonzalo Suardiaz: Thank you so much, Archana, for those opening remarks, and a warm welcome, everyone, to this session on the role of emerging technologies and artificial intelligence in advancing the goals of Action Line C2 within WSIS. So today, we have a great panel of experts here. As Archana mentioned, we have people representing academia, we have people representing the ITU, people representing the government, regulator, and the open source community, and even myself representing the industry sector. Our first panelist today is participating remotely. Let’s hope that we can establish a connection with him and that there are no technical issues. And I’m pleased to introduce Mr. Walid Mahmoudli, who is the head of FNS, which is the Future Networks and Spectrum Division within BDT. So let’s try to establish a connection with Walid, see if that works. All right, it seems that we have no Walid online. We can try to see if we can establish the connection within a few minutes. But then we can present the first speaker here in the room, who is Renata Figueiredo, coming all the way from Brazil. Renata is a regulation expert in international affairs. She works at Anatel, which is the regulator in Brazil, and she’s going to give us some perspectives about how the Brazilian government is doing in terms of Action Line C2. So over to you, Renata.
Renata Figueiredo Santoyo: Thank you. Good morning, everyone. It’s a true honor to join the WSIS Action Line C2 session and share the perspectives of the Brazilian telecommunication regulator, Anatel. Well, my name is Renata, as I was presented, and I’m grateful for this space that values diverse voices, including women in tech and policymaking. Information and communication infrastructure is much more than cables, antennas, and data centers. It’s about enabling people to connect, learn, work, and participate in the economy. Innovative technologies and AI offer powerful tools to advance this goal, but they also pose real challenges in ethics, equity, and security. As regulators, our mission is to ensure that innovation delivers inclusive and secure connectivity for all. Let’s start with artificial intelligence. Anatel is conducting a regulatory impact assessment to establish clear guidelines for the ethical and responsible use of AI in telecom services. This includes managing risks related to how we collect data, process it, and use it for decision-making. We are aligned with international standards, such as UNESCO’s recommendation on AI ethics and ITU guidelines. Public consultation is underway to help us balance innovation with transparency, privacy, and accountability. We know regulation cannot happen in isolation. That’s why Anatel signed a term of decentralized execution, that’s called TED, with ITA. That’s one of Brazil’s leading engineering institutions, a huge university in Brazil. This partnership is a cornerstone of our approach to AI and emerging technologies in telecom. Together with ITA researchers, we are examining the many dimensions where AI is transforming telecom regulation: quality of service, how AI can improve but also potentially compromise reliability and fairness; cybersecurity, how to identify and mitigate new attack surfaces created by AI-driven networks; consumer rights, ensuring transparency and avoiding discriminatory or opaque algorithmic decisions; platform oversight, addressing converging markets and ensuring fair competition and accountability for services that deliver telecom-like functions; spectrum management, exploring how AI can optimize spectrum use while ensuring equitable access. This academic collaboration is not theoretical, it’s producing real research that will shape our regulatory framework in the coming years. At the same time, our second strategic priority is expanding 5G with inclusion in mind. Brazil’s roadmap includes annual updates to our structural network plan, revising spectrum management rules and pushing Open RAN and spectrum sharing. We are simplifying local licensing processes and supporting smaller providers to make deployment cheaper and faster. Our universal service fund, FUST, is being used to connect schools and remote towns. For us, 5G isn’t just about speed or latency, it’s about affordable, reliable connectivity that reaches every part of our country. Talking also about cybersecurity, AI, 5G and hyper-connected services increase complexity and with it, risks. Anatel is reviewing its cybersecurity regulation for telecom. We are working with the world’s top telecom networks to ensure strong, clear standards as these technologies evolve. We are actively studying and adapting lessons from other regions, South Korea’s National Coordination Centers, the EU’s NIS2 Directive, China’s Data Security Framework, and the US supply chain security initiatives.
Data security isn’t a nice-to-have, it’s the foundation of trust, and trust is the true currency of the digital economy. I would like to emphasize, good regulation depends on evidence and partnership. Our work with ITA is just one example of how regulators can collaborate with academia to anticipate challenges and design better rules. We believe emerging technologies like AI can either deepen inequality and risks or help us build a safer, fairer, more connected world. The difference lies in our choices, in how seriously we take this debate today. Because we just had a BRICS leaders’ declaration two days ago, I would just add a little bit about this. It’s not really regulators, it’s more like Brazil’s government, but I would just like to emphasize that on the 6th of July in Rio, as part of this commitment, Brazil also aligns with broader Global South priorities, calling for inclusive AI governance. Brazil, as a government, believes AI development must respect digital sovereignty, promote fair and open access to technologies, and ensure that no country is left behind. That means supporting data governance frameworks that protect privacy while enabling equitable innovation, encouraging open science and open-source models, and fostering international cooperation to reduce technological gaps and empower local talent. That’s how we can turn AI into a truly global public good. Just a parenthesis, because I think it’s very good news, very fair news. So here’s my invitation: let’s work together across borders, sectors and disciplines to make emerging technologies serve the public good. Thank you for the opportunity to share Anatel’s perspective. I look forward to learning a lot with all my colleagues here and collaborating to advance inclusive, secure and resilient digital infrastructure for all. Thank you.
Gonzalo Suardiaz: Obrigado, Renata. Thank you so much. So next in turn is actually myself. I’m here not only as a moderator today, but also as a panelist. So my name is Gonzalo Suardiaz. I work at Ericsson, a telecom vendor. I’ve been doing that for the last 17 years. So I’ve been working in different parts of mobile networks from 2G to 5G. And as of today, I’m a program manager within digital inclusion and in particular digital education. So as part of our program, Connect2Learn, we’ve been working a lot with the ITU, especially on a platform which is called the CPP, the connectivity planning platform. You will hear a little bit more about it in a few minutes. And I think some of the other panelists may refer to it. But what we’re trying to achieve with this platform is to allow different types of stakeholders to take better, more informed decisions about their connectivity and infrastructure planning. So how we do this is by feeding a lot of data into that particular platform. The data is, of course, about the locations of the points of interest that we’re trying to connect, POIs as we call them. But then there’s as well a bunch of data about elevation or terrain. So geospatial data, for instance. There’s data about fiber infrastructure, about mobile network infrastructure. Where are the closest radio sites to those points of interest? How high are those towers? Is there a line of sight? This kind of stuff, right? Then we also have population density data. We have cost data for the models, et cetera. So the idea is that then investors or connectivity planners, regulators, governments, administrations, GIGA people. So I guess many of you are familiar with the GIGA initiative, which aims to connect all the schools of the world to the Internet by 2030. It’s a joint initiative of ITU and UNICEF. So, of course, GIGA will be one of the use cases for the CPP. So we currently have an MVP available. We’ll actually present it during the GIGA Connectivity Forum in a couple of days. And then there will be a first version available by October this year and a general availability version by June 2026. But what I’m here to talk about today, rather than CPP, is about one of the major risks that we have identified when we’ve been planning and designing for the CPP. And I think it’s a general problem that applies to generative AI and that applies to AI or to any data science topic in general, which is GIGO. Garbage in, garbage out. I think many of you are familiar with the term. But basically, GIGO, what it means is that if you feed bad data into a system, you will get bad results. That’s it. No matter how good your algorithms, no matter how good your tools or your models are, you will get bad results if the input data is bad. So this has huge implications for us. Of course, we don’t want to be creating a platform that then gives the user wrong perspectives or makes the user take the wrong connectivity alternative. For instance, connecting a school or a point of interest with, I don’t know, satellite if fiber is closer or mobile networks might be a more sustainable or better way, more efficient to connect the school. So what are we doing today to fight GIGO and to avoid it as much as we can? Well, the first point, of course, is it’s about data standards and data schemas. So, for example, for fiber, there are available already some very good open standards to define fiber data. But we need the same for all types of data, right? For points of interest, let’s say schools, metadata, where are they located?
Not only the latitudes and the longitude, the geocoordinates, but also how big are the schools, what needs do they have, what sort of facilities are available there. Same applies to coverage data, mobile coverage. Same applies to backhaul data or cost data, demand, et cetera. Any type of data that we input into the platform needs to be properly schematized and standardized. And by that, we’re reducing the risk of having wrong data. The other part, of course, is about creating and maintaining a very serious data governance framework, right? Data is alive and data changes all the time, right? We need to keep track of who owns the data, who has changed it and when and how, right? So those are also super important aspects. The next one would be to interconnect different data sets. So, for example, we have access to coverage data from operators on the ground, but we also have data sets from OpenCelliD, for instance. And we can triangulate those and realize when there are mismatches. When one of the data sets says, here, there’s great coverage, but according to this other data set, there’s no mobile network infrastructure in this area. So how could that happen? So probably one of the two data sets is wrong. So then it allows us to investigate and deep dive a little bit into that. Then the next point, and I think my colleague here, Sandor, will touch on that briefly during his presentation, but it’s using AI and machine learning to validate the data. So, for example, if you have two data sets, one is about school geolocations, the other one is about geospatial data, and then you realize that the school coordinates fall, I don’t know, on a lake, well, then you know there’s something wrong. Assuming that the geospatial data is right, which most likely it is, then the coordinates that you have received from the school, they’re wrong, right? And we’ve seen that. We’ve been receiving sometimes geo-coordinates from governments and not all the data is 100% accurate. So that would be one way to validate the data. The other one could be, for instance, using satellite imagery and school metadata. The data is saying that there’s a school with over 500 students and then from the satellite image we see that there’s like no rooftop on that particular area, then we also can imagine that there’s something fishy about that information, so we can act accordingly. The last idea about fighting GIGO is of course embedding crowdsourcing as much as possible, a feedback mechanism which could be crowdsourced via a mobile app or service or even via the CPP tool itself, right, allowing the user to provide feedback as it goes, because the ground truth is what the user knows at the end, right, like this school was there but now it’s closed for whatever reason, or the tool says that here there’s great 4G coverage but in reality it’s not, it’s actually poor coverage, so then we can also correct that. So that’s a little bit of what I wanted to touch on today. In summary, we believe that GIGO, garbage in, garbage out, is a risk to almost every digital tool that is available today, so be mindful of it, and the bottom line is that we need clean, trusted and transparent data to ensure that connectivity planning is inclusive and efficient. So that’s it from my end, and I will be handing over now to Mr Sandor Farkas, who works at the ITU and is an expert on geospatial artificial intelligence, and he’s going to be mentioning a few AI tools for ICT infrastructure mapping, so over to you, Sandor.
Sandor Farkas: Thank you, Gonzalo. Good morning everyone. Today I’m going to talk about object detection using AI. For constant coverage analysis, information is needed about cell towers’ locations in a country. Often this information is missing. AI computer vision can be used for finding objects in satellite images. The goal is to create a vector layer with cell towers for further analysis and to support decision making. We use YOLO11 from Ultralytics. This is an open source Python module based on PyTorch. It can be used with an Azure compute instance and also on an offline machine. Using oriented bounding boxes, there’s less distracting background around target objects when digitizing our objects. It also has models pre-trained on satellite images, which is useful for us, unfortunately not with our target objects. Here you see the basic workflow for AI object detection consisting of two parts, training a model and using a trained model. If you already have your trained model for your target objects, then you can just skip the first part and go for using a trained model for detection. If you don’t have a classified model for your target objects, you have to create one. Training a model is practically teaching a machine learning algorithm how to recognize target objects on images. Before training, you need to prepare the data set. That is, acquiring images with your target objects and labeling the images. This is the process of digitization and classification of your target objects on your images. We used four classes in this research: two types of cell towers and their shadows, to improve our findings. To run a model training, the data set has to be split into train, test and validation sets. During the learning process, train and test data are used for calculating the weights for the target objects. In the final step, the model makes predictions on the validation data set using the weights generated in the previous step. As a final result, you get the trained model. With the trained model, metrics and visual outcomes are also generated. Here you see the F1-confidence curve. I have to explain it a bit. Precision and recall are two key metrics used to validate classification models in machine learning. Precision quantifies the true positives among all positive predictions, assessing the model’s ability to avoid false positives. On the other hand, recall calculates the true positives among all actual positives, assessing the model’s ability to capture all instances of a class. The F1 score shows these two metrics in one, as a harmonic mean of the two. The higher the value, the better the model. Here you see the confusion matrix, showing counts for true positives, true negatives, false positives and false negatives for each class. This version of the confusion matrix, the normalized version, shows the values in proportions rather than counts. This format makes it easier to compare performance across classes. These are the loss results during the training. This is a precision-confidence curve at different thresholds. This is a recall curve for the same. These two curves are shown together. The closer the curve is to the top right corner, the better the model is. But there is a trade-off between these two, precision and recall. You have to decide which one is more important in your use case. I think in this model we have to focus on recall, because in the end we will get a vector layer with cell towers.
If the precision is bad and we get lots of false positives, then we can do some post-validation on that data. But we don’t know anything about the false negatives that the model didn’t recognize as targets. These two mosaics show the labels and the predicted bounding boxes. As an important measure, intersection over union quantifies the overlap between the predicted bounding box and the ground truth bounding box. It is very important to evaluate accuracy in object localization. You can fine-tune parameters, called hyperparameter tuning, to improve your model. That is not just a one-time configuration but an iterative process optimizing your model’s metrics. This is our to-do list in this research. We have to extend our dataset in numbers and to a wider variety of cell towers to improve our recall. We want to develop useful tools for a smooth data pipeline. And of course, we want to try it and use it on new datasets.
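A minimal sketch of the intersection-over-union measure mentioned above (an editorial illustration with made-up, axis-aligned boxes rather than the oriented boxes used in the presentation):

```python
# Editorial sketch (not from the presentation) of intersection over union
# for two axis-aligned boxes given as (x_min, y_min, x_max, y_max).
# Oriented bounding boxes need polygon intersection, but the idea is the same.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero if the boxes do not intersect).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A prediction overlapping a made-up ground-truth box:
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # 0.1429 = 400 / (1600 + 1600 - 400)
```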
Gonzalo Suardiaz: Thank you. Thank you very much, Sandor. Next presentation is going to be made by Aleksandra, who is a junior mapping expert at ITU. She is actually working also with the connectivity planning platform that I mentioned during my presentation. And Aleksandra is going to give us a perspective on AI and emerging technologies from the academic world. So the floor is yours, Aleksandra.
Aleksandra Jastrzebska: Thank you so much, Gonzalo. So good morning, everyone. I’m Aleksandra Jastrzebska, a recent graduate from Universitat Jaume I. And I’m thrilled to bring an academic perspective to this incredibly timely topic: how we use AI in industry or infrastructure, but also in how we learn, how we teach, and how we think. So let me start with a question. What happens when AI thinks for us? A recent MIT study explored how students use ChatGPT to write essays. It turned out that the more they relied on it, the less they remembered. So they cited less, they took less ownership of their text, and they reported weaker learning outcomes. In fact, the study used functional MRI scans to show something even more striking. Reduced activity, which is marked in blue, was observed in ChatGPT users, while increased activity, which was marked in red, appeared in brain regions associated with memory and critical thinking. So basically this cognitive shortcut is what the researchers called cognitive debt. And like financial debt, it actually accumulates in a silent way. So we offload thinking to the machine, and in doing so, we actually weaken our own mental muscles. So as the slide shows, when participants wrote with AI, their brains literally went quiet. And this is a reminder for us that the cost of convenience is really striking. So what happens when researchers start relying on AI tools too heavily? Let me show you a few examples. This paper looks perfectly legitimate. It was published on ScienceDirect. The topic is about lithium batteries, and the formatting is flawless. But when we take a closer look, the introduction starts with, Certainly, here is a possible introduction for your topic, which I think sounds familiar to several of us. And here’s another one. In a paper summarizing a study on radiology cases, the conclusion states, I’m sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model. Well, that’s not just bad science. It’s a direct copy-paste from a large language model. And of course, after all, both papers were retracted. But the fact that they actually passed peer review, and they were published at all, reveals a deeper concern. Because when academia starts to rely too heavily on AI-generated content, without the proper editing, it’s not just that errors might slip through. They actually can become a part of the scientific record. So we all know that we are in the middle of an AI hype. It’s literally everywhere. However, here are the real numbers. According to the Stanford AI Index, AI papers now represent over 40% of all computer science research worldwide. And that’s nearly a quarter of a million publications in just one year. And yet, despite that massive output, many people still don’t understand how AI actually works. So even something like recognizing a handwritten digit can seem like magic. So let me walk you through a simple example using artificial neural networks, one of the most fundamental classes of machine learning algorithms. And this example is based on the MNIST dataset, which stands for Modified National Institute of Standards and Technology, and it’s been a classical training ground. This flow starts from a simple view of how AI actually sees. It processes the pixel intensity values, flattens them into a vector, and then multiplies them by a set of weights. And it’s just matrix and vector multiplication. So it’s not something intuitive to the majority of humans, but yet it’s very logical.
So what lies behind AI, it’s not a mystery. It’s just mathematics. And these weights, they aren’t programmed. They’re learned from the data through training. And that means that AI is not thinking, it’s just calculating. So last but not least, I would like to share a bit about my personal research. It’s rooted in generative AI, the models that generate new images. And in my case, those images are maps. And in this slide, you can see an example that links OpenStreetMap, one of the most widely used open-source mapping platforms, with text prompts. And these prompts request the generation of maps that mimic the characteristics of a particular region, such as residential areas or coastal zones. And these models are powerful. But yet, they’re only as good as the data which we provide and the human choices which are behind them. So basically, what we prioritize, what we filter, and how we evaluate the outcomes. So with this being said, I want to conclude that AI doesn’t need to replace our minds. It should challenge them. And as people from academia, our role is to make sure that in every context, from satellite imagery to school essays, we are thinking with the help of AI, but not letting it think for us. And thank you so much for your time. I hope that this talk reminds you to use large language models wisely and to never stop using your own brain in the process.
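A minimal sketch of the forward pass described above (an editorial illustration with random stand-in weights, not a trained MNIST model): flatten the pixel grid, multiply by a weight matrix, and pick the highest score.

```python
# Editorial sketch (not from the talk) of the "just matrix and vector
# multiplication" point: flatten the pixels, apply learned weights, take the
# argmax. The weights here are random stand-ins, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))                # stand-in for a 28x28 MNIST digit
x = image.reshape(-1)                       # flatten to a 784-dimensional vector

W = rng.standard_normal((10, 784)) * 0.01   # one row of weights per digit 0-9
b = np.zeros(10)                            # bias term

scores = W @ x + b                          # plain matrix-vector multiplication
predicted_digit = int(np.argmax(scores))
print(predicted_digit)
```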
Gonzalo Suardiaz: Thank you so much, Aleksandra. That was super interesting. All right. Last but not least, we have… Yeah, you also need… Here you go. So here on my right-hand side, we have Joshua Ku, coming all the way from the US. He’s a senior solution architect at GitHub. And Joshua is going to give us some hints about the why and how of open sourcing. So over to you, Joshua.
Joshua Ku: Great. Thank you very much. It’s been a privilege and an honor to be able to present among you guys. So I’m going to be talking about the why and the how of open sourcing, and also highlighting the journey that I’ve had working with the ITU in open sourcing some of their software. So the first question we want to answer is, why open source? What is the point of open sourcing, and why should the ITU, and also the rest of the UN in general, be adopting open source? There’s a lot of challenges in software development today. There’s a lot of technological challenges, and velocity is always the huge one. In terms of getting skills and resources in headcount, that’s always the struggle in today’s world. Finding enough developers to work on technology to help build new features. So open source is one of the ways that we can answer this problem. Within GitHub alone, last year, we had over 150 million developers contribute to open source projects. Open source projects are defined as projects that are open to the public and can be edited by anybody in the public. Among the 150 million developers, we had over 1 billion contributions to open source projects last year alone. And that number is growing year over year. So it’s really exciting to see that developers around the world are embracing open source and contributing. And that is helping a lot of key technologies that power the world today to move faster. So you may be wondering, open source, is this just for developers? Or are companies actually contributing to open source as well? Who is keeping this community alive? Well, companies in the world are all active contributors and among the largest contributors to open source. You look, and there’s Google, there’s Microsoft, there’s Amazon, and even Huawei. And these are companies that power the fundamental things that we do in our lives. Google has a huge search base. Their search engine is large, but they also have one of the most popular web browsers in the world. And that web browser, Google Chrome, is fully open source. And that is powering a lot of the development today. Microsoft, Amazon, all have key open source technologies that their own technologies rely on. Without open source, these tech companies can’t move forward as quickly as they can today. So in learning more about why and the power of open source, let me talk to you about the journey that the ITU has taken this past year in embracing open source and what are the steps they had to do to get there. First of all, we had to look at licensing. Within the open source community, there’s a lot of things about licensing to ensure that software can be presented correctly. So we had to look at all of the pros and cons and the restrictions of licenses to see how open source can contribute, but also to make sure that all of the software that we write as the ITU can be released and sent out to other people, and how other people can also build upon our technologies to build other solutions. It’s really amazing to see all the different solutions that can be made, and we want to make sure that everything is open source downstream as well. Second of all, we took a look at how to prepare our code. Preparing our code is very important.
It’s not as simple as just copying and pasting your code and just dumping it on the internet. We had to go through all of the ITU source code to make sure that we strip out anything that contained potentially sensitive information. We had to delete anything that had secrets in it, but also we had to make sure that what we’re presenting is readable by other people. A lot of times when you build internal code bases, it’s just technical jargon and things that only you in the company would know about. When you’re in the open source community, you need to make sure that everything you put out there is readable by anybody who’s joining, because they don’t have the context of your company. We had to look into that, how to simplify a lot of the text to make it very readable and accessible for all. Lastly, we had to prepare a repository. We had to create a repository in GitHub and also create ways to allow developers to access it, to view it, and make a very readable interface for all users. Of course, open sourcing technology is only half the battle. The next part of it is how to maintain that and how to build velocity over that. One of the most important aspects of open sourcing technology is, as I mentioned, that a lot of people can look at it. When there’s a lot of people that can look at it, there are things that might make your software very dangerous as well. One of the aspects we had to take care of is ensuring that our code base was as secure as possible. We had to look at our supply chain, looking at all of the other open source dependencies we depended on, to make sure that none of them were vulnerable to any sort of attacks, so that our software would also be secure as well. We also had to look at our code scanning as well. As developers are writing code, it is known that developers are human and we do make mistakes. When we make mistakes, that introduces vulnerabilities that bad actors can use to come in, take over and exploit our application. We had to look at ways to secure our open source software, how to make sure that there are no vulnerabilities in there that bad actors can use to exploit it. After we’ve secured our code base, then we had to start looking at managing community contributions. This is going to be really important because if we want to draw more developers to help contribute to the ITU’s projects, we have to make sure that people know where to go. We were working together to devise ways of using issues so that we can document what are the next feature requests, what are some issues with the software today that we would like developers to help contribute to. By adding tags like good first issue, we allow brand new developers to take a look at the ITU’s projects and also see what are some of the ways they can contribute without having to go through the code base and learn that on their own. We provide a very curated way for new developers to come in, take a look at the ITU’s projects and see where they can contribute very quickly and add value to the project. We are also working on building out a roadmap so that when users come to the ITU’s projects, they can see what is the vision of the ITU, what are the next steps for these technologies and where we want to bring them. That allows developers to also have their own ideas of what they can contribute to the ITU’s projects as well.
You’d be very surprised at a lot of open source technologies today: we had a set vision in the very beginning, we laid out our roadmap, but then our open source developers saw potential in how to apply these technologies in other aspects as well. They’re able to contribute and expand the software to do more than what we originally imagined. This is the dream that we’re building here at the ITU. We want to see how we can use these technologies today to accomplish the vision we have, but also to inspire the next generation of developers to see how they can use our technologies to solve other problems as well. Thank you very much for listening, and I hope that this really inspires you on open source, to see how the technology that we’re building today can be empowered with open source and how we can propel that forward.
Gonzalo Suardiaz: Thank you so much, Joshua. I think we have a couple of minutes before we close up the session to allow some Q&A from the audience, so maybe we can get support from the facilitators if there are any questions. Or questions from any of the panelists to the other presenters. It seems that we were super clear, there are no questions. Before closing, I wrote down a few takeaways and notes as you guys were presenting. Just to close this session, things that I take with me at least is that AI is an enabler, definitely not a replacement for the human decision maker. Thank you, Aleksandra, for the quote, AI is not thinking, it’s calculating. I like that one. I also like what Archana mentioned about AI being a bridge and not a barrier. I also think that partnerships are essential. AI is very powerful, but multi-stakeholder collaboration is what really makes it meaningful. Open source is a great example of that. But we have the public and the private sector represented as well. And that collaboration is also quite important to achieve the goals of Action Line C2. We also saw how AI-driven planning can unlock financing by giving governments data-backed infrastructure investment plans. And I think that CPP is a really good use case of that. We also talked about the need for clean, trusted and transparent data as a foundation for everything we’re trying to build. People come first, then it’s processes and governance, and then it’s technology. But I think this was a very interesting session, so I would like to thank each and every one of you, Aleksandra, Sandor, Renata and Joshua. Thank you for your presentations today. And thank you very much to all of you in the audience and to everyone who followed online. It’s been a pleasure, and if you want to connect after the session, it will be a pleasure to have a chat about this topic. So thanks again, and we’ll be in touch. Thanks. Thank you. Thank you.
Archana G. Gulati
Speech speed
116 words per minute
Speech length
484 words
Speech time
249 seconds
AI enables smart infrastructure planning through geospatial, demographic and economic data analysis to optimize connectivity deployment
Explanation
AI can analyze various types of data including geospatial, demographic, and economic information to help determine the optimal locations and methods for deploying connectivity solutions. This supports the implementation of the Kigali Action Plan and helps bridge the digital infrastructure gap.
Evidence
Mentioned as supporting the Kigali Action Plan and BDT strategic priority to bridge the digital infrastructure gap
Major discussion point
AI’s Role in ICT Infrastructure Development
Topics
Infrastructure | Development
Disagreed with
– Aleksandra Jastrzebska
Disagreed on
Role of AI in human cognitive processes
AI facilitates network rollout planning including smarter site selection for mobile towers and dynamic spectrum allocation
Explanation
AI can improve network deployment by enabling more intelligent decisions about where to place mobile towers and fiber routes. It also allows for dynamic spectrum allocation to increase capacity where it’s most needed.
Evidence
Examples given include smarter site selection for mobile towers or fiber routes and dynamic spectrum allocation to increase capacity where most needed
Major discussion point
AI’s Role in ICT Infrastructure Development
Topics
Infrastructure
AI can reduce operational costs and enhance network efficiency through predictive analysis and automation
Explanation
AI technologies can help lower the costs of operating networks while improving their efficiency in both urban and rural environments. This is achieved through predictive analysis capabilities and automation of network management tasks.
Evidence
Mentioned as applicable in both urban and rural settings through predictive analysis and automation
Major discussion point
AI’s Role in ICT Infrastructure Development
Topics
Infrastructure | Economic
AI enables real-time monitoring and maintenance, increasing infrastructure resilience in disaster-prone areas
Explanation
AI can provide continuous monitoring and maintenance capabilities that help maintain service continuity and increase the resilience of infrastructure systems. This is particularly valuable in areas prone to disasters or that are difficult to reach.
Evidence
Specifically mentioned as important for disaster-prone or hard-to-reach areas for increasing infrastructure resilience and service continuity
Major discussion point
AI’s Role in ICT Infrastructure Development
Topics
Infrastructure | Development
Inclusive policy frameworks are needed to ensure AI use is ethical and equitable, addressing bias and transparency in algorithms
Explanation
While AI is powerful, it requires comprehensive policy frameworks to ensure its implementation is both ethical and equitable. Key areas of concern include addressing algorithmic bias, ensuring transparency, and protecting vulnerable populations through proper data governance and privacy measures.
Evidence
Key considerations mentioned include bias and transparency in AI algorithms, data governance and privacy especially for vulnerable populations, and workforce AI literacy for public authorities
Major discussion point
Regulatory Frameworks and Policy Considerations
Topics
Legal and regulatory | Human rights
Agreed with
– Gonzalo Suardiaz
– Aleksandra Jastrzebska
Agreed on
Data quality and governance as fundamental requirements
AI deployment must be rights-based, secure and human-centric with multi-stakeholder engagement
Explanation
The deployment of AI systems should prioritize human rights, security, and put humans at the center of the design process. This requires involving multiple stakeholders in shaping the policies that govern AI implementation.
Evidence
Emphasized that policies should be shaped through multi-stakeholder engagement
Major discussion point
Regulatory Frameworks and Policy Considerations
Topics
Human rights | Legal and regulatory
Agreed with
– Gonzalo Suardiaz
– Aleksandra Jastrzebska
Agreed on
AI as an enabler requiring human oversight and collaboration
Human capacity-building efforts and workforce AI literacy are crucial for public authorities
Explanation
It’s essential to ensure that people working in public authorities have the necessary knowledge and skills to understand and work with AI systems. This requires dedicated capacity-building initiatives and educational efforts.
Evidence
Mentioned the ITU Academy platform for promoting information sharing and education in an affordable manner, and partnerships with governments for national digital strategies
Major discussion point
Partnership and Capacity Building
Topics
Development | Sociocultural
Agreed with
– Gonzalo Suardiaz
– Renata Figueiredo Santoyo
– Joshua Ku
Agreed on
Importance of partnerships and multi-stakeholder collaboration
Gonzalo Suardiaz
Speech speed
137 words per minute
Speech length
1968 words
Speech time
859 seconds
Garbage in, garbage out (GIGO) is a major risk where bad input data leads to bad results regardless of algorithm quality
Explanation
GIGO represents a fundamental challenge in AI and data science where poor quality input data will produce poor results, no matter how sophisticated the algorithms or tools being used. This poses significant risks for connectivity planning platforms that could provide wrong recommendations to users.
Evidence
Example given of potentially recommending satellite connectivity when fiber or mobile networks might be more sustainable or efficient for connecting schools
Major discussion point
Data Quality and Governance Challenges
Topics
Infrastructure | Legal and regulatory
Agreed with
– Archana G. Gulati
– Aleksandra Jastrzebska
Agreed on
Data quality and governance as fundamental requirements
Data standards and schemas are essential for all types of infrastructure data including fiber, coverage, and cost data
Explanation
Proper standardization and structuring of data are crucial for reducing the risk of errors in connectivity planning systems. This covers not just location data but also metadata about facilities, coverage information, and cost models (a minimal schema sketch follows this entry).
Evidence
Examples provided include fiber data standards, school metadata (size, needs, facilities), mobile coverage data, backhaul data, and cost data
Major discussion point
Data Quality and Governance Challenges
Topics
Infrastructure | Legal and regulatory
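To make the schema point concrete, below is a minimal sketch, assuming illustrative field names rather than the actual CPP or GIGA data model, of how a point-of-interest record such as a school entry could be typed and validated before it enters a planning pipeline.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SchoolRecord:
    """One point-of-interest entry in a connectivity-planning dataset (illustrative fields)."""
    school_id: str
    latitude: float            # WGS84 decimal degrees
    longitude: float
    num_students: Optional[int] = None
    has_electricity: Optional[bool] = None
    nearest_fiber_km: Optional[float] = None   # distance to nearest known fiber node

    def validate(self) -> list[str]:
        """Return a list of schema violations instead of silently accepting bad data."""
        errors = []
        if not (-90.0 <= self.latitude <= 90.0):
            errors.append("latitude out of range")
        if not (-180.0 <= self.longitude <= 180.0):
            errors.append("longitude out of range")
        if self.num_students is not None and self.num_students < 0:
            errors.append("negative student count")
        return errors
```

Rejecting or flagging records that fail such checks at ingestion is one practical way to keep garbage out of downstream recommendations.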
Data governance frameworks must track data ownership, changes, and maintain data lifecycle management
Explanation
Since data is constantly changing and evolving, robust governance systems are needed to track who owns data, who has modified it, when changes were made, and how the data has evolved over time (a minimal audit-trail sketch follows this entry).
Evidence
Emphasized that data is alive and changes all the time, requiring tracking of ownership, changes, timing, and methods
Major discussion point
Data Quality and Governance Challenges
Topics
Legal and regulatory | Infrastructure
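As a rough illustration of the lifecycle-tracking idea, the sketch below (all names hypothetical, not an ITU implementation) keeps an append-only history of who changed a record, when, and from what value to what value.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DataChange:
    """One entry in an audit trail for a dataset record."""
    record_id: str
    changed_by: str          # organization or user responsible for the edit
    changed_at: datetime
    old_value: dict
    new_value: dict

class AuditLog:
    """Append-only history so every change to infrastructure data stays traceable."""
    def __init__(self) -> None:
        self._entries: list[DataChange] = []

    def record(self, record_id: str, changed_by: str, old: dict, new: dict) -> None:
        self._entries.append(
            DataChange(record_id, changed_by, datetime.now(timezone.utc), old, new)
        )

    def history(self, record_id: str) -> list[DataChange]:
        """All recorded changes for one record, oldest first."""
        return [e for e in self._entries if e.record_id == record_id]
```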
AI and machine learning can be used to validate data by cross-referencing different datasets and identifying inconsistencies
Explanation
By comparing multiple data sources, AI systems can identify potential errors or inconsistencies that indicate data quality problems, allowing problematic data to be investigated and corrected before it affects decision-making (a minimal cross-referencing sketch follows this entry).
Evidence
Examples include comparing operator coverage data with open cell ID data, and using satellite imagery with school metadata to validate coordinates and facility information
Major discussion point
Data Quality and Governance Challenges
Topics
Infrastructure | Legal and regulatory
Disagreed with
– Aleksandra Jastrzebska
Disagreed on
Approach to AI validation and quality control
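A minimal cross-referencing sketch, using pandas with made-up values standing in for real operator and OpenCelliD feeds, of how disagreement between two sources can be surfaced for review:

```python
import pandas as pd

# Hypothetical inputs: operator-declared coverage and coverage inferred from
# crowdsourced measurements, both keyed by a shared location identifier.
operator = pd.DataFrame({
    "location_id": ["A1", "A2", "A3"],
    "claimed_4g": [True, True, False],
})
crowdsourced = pd.DataFrame({
    "location_id": ["A1", "A2", "A3"],
    "observed_4g": [True, False, False],
})

merged = operator.merge(crowdsourced, on="location_id", how="outer")

# Flag locations where the two sources disagree so they can be investigated
# before the planning tool recommends a technology for that area.
merged["needs_review"] = merged["claimed_4g"] != merged["observed_4g"]
print(merged[merged["needs_review"]])
```

The same pattern extends to comparing declared school coordinates or facility metadata against what satellite imagery shows.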
Multi-stakeholder collaboration makes AI meaningful beyond just being a powerful tool
Explanation
While AI has significant technical capabilities, its real value comes from collaborative efforts involving multiple stakeholders working together to achieve common goals.
Evidence
Mentioned representation from public and private sectors and their importance in achieving Action Line C2 goals
Major discussion point
Partnership and Capacity Building
Topics
Development | Legal and regulatory
Agreed with
– Archana G. Gulati
– Renata Figueiredo Santoyo
– Joshua Ku
Agreed on
Importance of partnerships and multi-stakeholder collaboration
Renata Figueiredo Santoyo
Speech speed
116 words per minute
Speech length
802 words
Speech time
414 seconds
Brazil’s Anatel is conducting regulatory impact assessments for AI in telecom services, aligned with UNESCO recommendations
Explanation
Brazil’s telecommunications regulator is taking a systematic approach to AI regulation by conducting thorough impact assessments to establish clear guidelines for ethical and responsible AI use. This work is being aligned with international standards and involves public consultation processes.
Evidence
Mentioned alignment with UNESCO’s recommendation on AI ethics and I2 guidelines, with public consultation underway to balance innovation with transparency, privacy, and accountability
Major discussion point
Regulatory Frameworks and Policy Considerations
Topics
Legal and regulatory | Human rights
Cybersecurity regulation must evolve as AI and 5G increase network complexity and risks
Explanation
The introduction of AI and 5G technologies creates new complexities and security risks that require updated regulatory frameworks. Anatel is reviewing its cybersecurity regulations to address these emerging challenges.
Evidence
Referenced studying lessons from South Korea’s National Coordination Centers, EU’s NIS2 Directive, China’s Data Security Framework, and US supply chain security initiatives
Major discussion point
Regulatory Frameworks and Policy Considerations
Topics
Cybersecurity | Legal and regulatory
Strong partnerships between regulators, academia, and private sector are essential for responsible AI deployment
Explanation
Effective AI regulation and deployment requires collaboration across different sectors, with regulators working closely with academic institutions and private companies to develop comprehensive approaches to emerging technologies.
Evidence
Anatel’s partnership with ITA university through a term of decentralized execution (TED) to examine AI’s impact on various aspects of telecom regulation
Major discussion point
Partnership and Capacity Building
Topics
Development | Legal and regulatory
Agreed with
– Archana G. Gulati
– Gonzalo Suardiaz
– Joshua Ku
Agreed on
Importance of partnerships and multi-stakeholder collaboration
Brazil’s partnership with ITA university examines AI’s impact on telecom regulation, cybersecurity, and consumer rights
Explanation
This academic collaboration is producing practical research that will shape Brazil’s regulatory framework by examining how AI transforms various dimensions of telecommunications regulation, from service quality to consumer protection.
Evidence
Research areas include quality of services, cybersecurity, consumer rights, platform oversight, and spectrum management, with focus on ensuring transparency and avoiding discriminatory algorithmic decisions
Major discussion point
Partnership and Capacity Building
Topics
Development | Legal and regulatory | Human rights
Sandor Farkas
Speech speed
78 words per minute
Speech length
742 words
Speech time
565 seconds
YOLO11 object detection can identify cell towers in satellite imagery to support coverage analysis and decision making
Explanation
Computer vision, specifically YOLO11 from Ultralytics, can automatically detect and locate cell towers in satellite images, producing vector layers that can be used for further analysis and to support infrastructure planning decisions (a minimal inference sketch follows this entry).
Evidence
Uses YOLO11 from Ultralytics, an open-source Python module based on PyTorch; can run on an Azure Compute Instance or on offline machines; uses oriented bounding boxes to reduce distracting background
Major discussion point
Technical Implementation and Tools
Topics
Infrastructure
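A minimal inference sketch, assuming the Ultralytics Python API and a hypothetical weights file fine-tuned for cell towers; attribute names follow the library's oriented-bounding-box results, but exact usage should be checked against the Ultralytics documentation.

```python
from ultralytics import YOLO

# "cell_towers_obb.pt" is a hypothetical name for weights fine-tuned on
# labeled satellite tiles; any YOLO11 OBB checkpoint would load the same way.
model = YOLO("cell_towers_obb.pt")

# Run detection on one satellite image tile with a confidence threshold.
results = model.predict("satellite_tile.png", conf=0.25)

# Each detection carries a class id, a confidence score, and an oriented box
# whose corner coordinates can be exported as features in a GIS vector layer.
for det in results:
    if det.obb is None:
        continue
    for cls_id, score, corners in zip(det.obb.cls, det.obb.conf, det.obb.xyxyxyxy):
        print(int(cls_id), float(score), corners.tolist())
```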
AI model training requires proper dataset preparation, labeling, and validation using metrics like precision and recall
Explanation
Creating effective AI models involves a systematic process of preparing training data, labeling target objects, and validating model performance using established metrics. The process includes splitting data into training, testing, and validation sets, with careful attention to precision-recall trade-offs (a worked metrics sketch follows this entry).
Evidence
Used four classes (two types of cell towers and their shadows), explained F1 confidence curves, precision-recall trade-offs, confusion matrices, and intersection over union metrics for evaluating model accuracy
Major discussion point
Technical Implementation and Tools
Topics
Infrastructure
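The evaluation terms mentioned above reduce to simple arithmetic. The sketch below computes intersection over union for axis-aligned boxes and precision, recall, and F1 from detection counts; the numbers are illustrative only.

```python
def iou(box_a, box_b):
    """Intersection over union for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall_f1(tp, fp, fn):
    """Detection metrics from counts of true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# A detection usually counts as a true positive when its IoU with a labeled
# tower exceeds a chosen threshold (e.g. 0.5). With 40 towers found correctly,
# 10 false alarms, and 5 towers missed:
print(precision_recall_f1(40, 10, 5))   # -> (0.8, ~0.889, ~0.842)
```

Lowering the confidence threshold typically raises recall at the cost of precision, which is the trade-off discussed for infrastructure detection where missed towers are costlier than false alarms that can be post-validated.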
Aleksandra Jastrzebska
Speech speed
138 words per minute
Speech length
847 words
Speech time
366 seconds
Over-reliance on AI tools like ChatGPT reduces student memory retention and critical thinking abilities
Explanation
Research shows that when students rely heavily on AI tools for writing tasks, they experience reduced learning outcomes, weaker memory retention, and decreased critical thinking skills. Brain imaging studies reveal reduced activity in regions associated with memory and critical thinking when using AI assistance.
Evidence
MIT study showing students using ChatGPT cited less, took less ownership of text, reported weaker learning outcomes, and fMRI scans showed reduced brain activity in memory and critical thinking regions
Major discussion point
Academic Perspectives on AI Learning
Topics
Sociocultural | Development
Disagreed with
– Archana G. Gulati
Disagreed on
Role of AI in human cognitive processes
AI-generated academic content without proper editing can compromise scientific integrity and research quality
Explanation
When researchers rely too heavily on AI-generated content without proper review and editing, it can lead to the publication of flawed research that compromises the scientific record. Examples show papers with obvious AI-generated text passing peer review before being retracted.
Evidence
Examples of retracted papers including one with introduction starting ‘Certainly, here is a possible introduction for your topic’ and another concluding ‘I’m sorry, but I don’t have access to real-time information… as I am an AI language model’
Major discussion point
Academic Perspectives on AI Learning
Topics
Sociocultural | Legal and regulatory
Disagreed with
– Gonzalo Suardiaz
Disagreed on
Approach to AI validation and quality control
AI is calculating, not thinking – it processes data through mathematical operations rather than genuine understanding
Explanation
AI systems, including neural networks, operate through mathematical calculations such as matrix and vector multiplication rather than actual thinking or understanding. The weights used in these calculations are learned from data through training, but the process remains fundamentally mathematical rather than cognitive (a minimal numerical sketch follows this entry).
Evidence
Explained using MNIST dataset example showing how AI processes pixel intensity values, flattens them into vectors, and multiplies by weights – demonstrating it’s ‘just mathematics’ and ‘just calculating’
Major discussion point
Academic Perspectives on AI Learning
Topics
Sociocultural
Agreed with
– Archana G. Gulati
– Gonzalo Suardiaz
Agreed on
AI as an enabler requiring human oversight and collaboration
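To ground the "calculating, not thinking" point, here is a minimal NumPy sketch of what a single-layer classifier actually does with an MNIST-style image; random values stand in for trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in for one 28x28 MNIST image: a grid of pixel intensities in [0, 1].
image = rng.random((28, 28))

# The "reasoning" is only arithmetic: flatten the grid into a 784-element vector...
x = image.reshape(-1)                      # shape (784,)

# ...multiply it by a weight matrix (learned during training, random here),
# add a bias, and squash the ten scores into a probability-like distribution.
W = rng.standard_normal((10, 784)) * 0.01  # one row of weights per digit class
b = np.zeros(10)
scores = W @ x + b
probabilities = np.exp(scores) / np.exp(scores).sum()   # softmax

print(probabilities.round(3), "predicted digit:", int(probabilities.argmax()))
```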
Generative AI models can create maps from text prompts, but quality depends on input data and human oversight
Explanation
AI models can generate new images, including maps, based on text descriptions that specify characteristics of particular regions. However, the effectiveness of these models is directly dependent on the quality of data provided and the human decisions made in prioritizing, filtering, and evaluating the outputs.
Evidence
Research linking OpenStreetMap with text prompts to generate maps mimicking characteristics of regions like residential areas or coastal zones
Major discussion point
Technical Implementation and Tools
Topics
Infrastructure | Sociocultural
Agreed with
– Archana G. Gulati
– Gonzalo Suardiaz
Agreed on
Data quality and governance as fundamental requirements
Joshua Ku
Speech speed
177 words per minute
Speech length
1416 words
Speech time
479 seconds
Open source enables faster software development, with 150 million developers making over 1 billion contributions annually
Explanation
Open source development addresses key challenges in software development, particularly around velocity and resource constraints. The massive scale of global participation in open source projects demonstrates its effectiveness in accelerating technological progress.
Evidence
GitHub statistics showing over 150 million developers and over 1 billion contributions to open source projects in the previous year, with numbers growing year over year
Major discussion point
Open Source Development and Community Collaboration
Topics
Development | Economic
Major tech companies rely on open source technologies, with projects like Google Chrome being fully open source
Explanation
Leading technology companies are not just users but major contributors to open source projects. These companies depend on open source technologies for their core products and services, demonstrating that open source is essential for modern technology development.
Evidence
Examples include Google (with Chrome browser being fully open source), Microsoft, Amazon, and Huawei as active contributors, with these companies powering fundamental technologies in daily life
Major discussion point
Open Source Development and Community Collaboration
Topics
Economic | Development
Successful open sourcing requires proper licensing, code preparation, security scanning, and community management
Explanation
Open sourcing software involves multiple technical and organizational steps beyond simply making code public. Organizations must carefully consider legal frameworks, prepare code for public consumption, ensure security, and establish systems for managing community contributions.
Evidence
ITU’s journey included examining licensing pros/cons/restrictions, preparing code by removing sensitive information and making it readable, creating accessible repositories, implementing security scanning for vulnerabilities and supply chain issues
Major discussion point
Open Source Development and Community Collaboration
Topics
Legal and regulatory | Cybersecurity
Open source allows developers to expand software beyond original vision and solve additional problems
Explanation
One of the key benefits of open source development is that external contributors can take projects in directions that the original creators never imagined. This leads to software solutions that address a broader range of problems than initially intended.
Evidence
Mentioned how open source developers often see the potential to apply technologies in other domains and contribute to expanding software capabilities beyond the original vision, which is the ‘dream’ being built at ITU
Major discussion point
Open Source Development and Community Collaboration
Topics
Development | Economic
Agreed with
– Archana G. Gulati
– Gonzalo Suardiaz
– Renata Figueiredo Santoyo
Agreed on
Importance of partnerships and multi-stakeholder collaboration
Agreements
Agreement points
AI as an enabler requiring human oversight and collaboration
Speakers
– Archana G. Gulati
– Gonzalo Suardiaz
– Aleksandra Jastrzebska
Arguments
AI deployment must be rights-based, secure and human-centric with multi-stakeholder engagement
Multi-stakeholder collaboration makes AI meaningful beyond just being a powerful tool
AI is calculating, not thinking – it processes data through mathematical operations rather than genuine understanding
Summary
All speakers agree that AI should augment rather than replace human decision-making, requiring proper human oversight, collaboration, and ethical frameworks to be truly effective.
Topics
Human rights | Development | Sociocultural
Importance of partnerships and multi-stakeholder collaboration
Speakers
– Archana G. Gulati
– Gonzalo Suardiaz
– Renata Figueiredo Santoyo
– Joshua Ku
Arguments
Human capacity-building efforts and workforce AI literacy are crucial for public authorities
Multi-stakeholder collaboration makes AI meaningful beyond just being a powerful tool
Strong partnerships between regulators, academia, and private sector are essential for responsible AI deployment
Open source allows developers to expand software beyond original vision and solve additional problems
Summary
There is strong consensus that effective AI deployment and infrastructure development requires collaboration across sectors, including government, academia, private sector, and civil society.
Topics
Development | Legal and regulatory
Data quality and governance as fundamental requirements
Speakers
– Archana G. Gulati
– Gonzalo Suardiaz
– Aleksandra Jastrzebska
Arguments
Inclusive policy frameworks are needed to ensure AI use is ethical and equitable, addressing bias and transparency in algorithms
Garbage in, garbage out (GIGO) is a major risk where bad input data leads to bad results regardless of algorithm quality
Generative AI models can create maps from text prompts, but quality depends on input data and human oversight
Summary
Speakers unanimously emphasize that high-quality, well-governed data is essential for effective AI systems, with proper frameworks needed to ensure ethical and accurate outcomes.
Topics
Legal and regulatory | Infrastructure
Similar viewpoints
Both speakers emphasize the critical need for comprehensive regulatory frameworks that ensure AI deployment is ethical, transparent, and aligned with international standards and human rights principles.
Speakers
– Archana G. Gulati
– Renata Figueiredo Santoyo
Arguments
Inclusive policy frameworks are needed to ensure AI use is ethical and equitable, addressing bias and transparency in algorithms
Brazil’s Anatel is conducting regulatory impact assessments for AI in telecom services, aligned with UNESCO recommendations
Topics
Legal and regulatory | Human rights
Both speakers focus on the technical aspects of AI implementation, emphasizing the importance of proper data validation, quality control, and systematic approaches to AI model development.
Speakers
– Gonzalo Suardiaz
– Sandor Farkas
Arguments
AI and machine learning can be used to validate data by cross-referencing different datasets and identifying inconsistencies
AI model training requires proper dataset preparation, labeling, and validation using metrics like precision and recall
Topics
Infrastructure | Legal and regulatory
Both speakers recognize that emerging technologies create new security challenges that require updated frameworks and systematic approaches to risk management.
Speakers
– Renata Figueiredo Santoyo
– Joshua Ku
Arguments
Cybersecurity regulation must evolve as AI and 5G increase network complexity and risks
Successful open sourcing requires proper licensing, code preparation, security scanning, and community management
Topics
Cybersecurity | Legal and regulatory
Unexpected consensus
Academic integrity and AI over-reliance concerns
Speakers
– Aleksandra Jastrzebska
– Archana G. Gulati
Arguments
Over-reliance on AI tools like ChatGPT reduces student memory retention and critical thinking abilities
Human capacity-building efforts and workforce AI literacy are crucial for public authorities
Explanation
While coming from different perspectives (academic research vs. policy implementation), both speakers converge on the concern that AI should enhance rather than replace human cognitive abilities, emphasizing the need for proper education and capacity building.
Topics
Sociocultural | Development
Open source as solution to resource constraints
Speakers
– Joshua Ku
– Archana G. Gulati
Arguments
Open source enables faster software development with 150 million developers contributing over 1 billion contributions annually
Human capacity-building efforts and workforce AI literacy are crucial for public authorities
Explanation
The GitHub representative’s emphasis on open source community contributions aligns unexpectedly well with the ITU’s focus on capacity building and inclusive development, suggesting open source as a mechanism for addressing resource and skill gaps.
Topics
Development | Economic
Overall assessment
Summary
The speakers demonstrate strong consensus on key principles: AI as an enabler requiring human oversight, the critical importance of multi-stakeholder partnerships, the fundamental need for data quality and governance, and the requirement for ethical frameworks in AI deployment.
Consensus level
High level of consensus with complementary perspectives rather than conflicting viewpoints. This strong alignment suggests a mature understanding of AI’s role in infrastructure development and creates a solid foundation for collaborative action on WSIS Action Line C2 goals. The consensus spans technical, regulatory, and social dimensions, indicating comprehensive agreement on both challenges and solutions.
Differences
Different viewpoints
Approach to AI validation and quality control
Speakers
– Gonzalo Suardiaz
– Aleksandra Jastrzebska
Arguments
AI and machine learning can be used to validate data by cross-referencing different datasets and identifying inconsistencies
AI-generated academic content without proper editing can compromise scientific integrity and research quality
Summary
Gonzalo advocates for using AI to validate and improve data quality through cross-referencing datasets, while Aleksandra warns about AI-generated content compromising quality without proper human oversight and editing
Topics
Infrastructure | Legal and regulatory | Sociocultural
Role of AI in human cognitive processes
Speakers
– Archana G. Gulati
– Aleksandra Jastrzebska
Arguments
AI enables smart infrastructure planning through geospatial, demographic and economic data analysis to optimize connectivity deployment
Over-reliance on AI tools like ChatGPT reduces student memory retention and critical thinking abilities
Summary
Archana emphasizes AI as an enabler for smart planning and optimization, while Aleksandra warns about cognitive dependency and the need to maintain human thinking capabilities
Topics
Infrastructure | Development | Sociocultural
Unexpected differences
Fundamental nature and capabilities of AI
Speakers
– Archana G. Gulati
– Aleksandra Jastrzebska
Arguments
AI deployment must be rights-based, secure and human-centric with multi-stakeholder engagement
AI is calculating, not thinking – it processes data through mathematical operations rather than genuine understanding
Explanation
While both speakers advocate for human-centric approaches, they have fundamentally different views on AI’s nature – Archana discusses AI in terms that suggest more sophisticated capabilities requiring rights-based frameworks, while Aleksandra emphasizes that AI is purely mathematical calculation without genuine understanding
Topics
Human rights | Legal and regulatory | Sociocultural
Overall assessment
Summary
The speakers show relatively low levels of direct disagreement, with most differences being in emphasis and approach rather than fundamental opposition. The main areas of disagreement center around the role of AI in validation processes, the balance between AI capabilities and human oversight, and the fundamental nature of AI systems.
Disagreement level
Low to moderate disagreement level. The speakers generally align on the importance of responsible AI deployment, data quality, and human-centric approaches, but differ in their specific methodologies and philosophical perspectives on AI’s role and nature. These disagreements are constructive and reflect different professional perspectives rather than fundamental conflicts, suggesting a healthy diversity of approaches within a shared commitment to responsible AI development.
Partial agreements
Takeaways
Key takeaways
AI serves as an enabler and bridge for ICT infrastructure development, not a replacement for human decision-making
Data quality is fundamental – ‘garbage in, garbage out’ principle means poor input data leads to poor results regardless of algorithm sophistication
Multi-stakeholder partnerships between government, academia, private sector, and open source communities are essential for meaningful AI implementation
Inclusive policy frameworks must ensure AI deployment is ethical, equitable, rights-based, and human-centric
AI can optimize infrastructure planning through geospatial analysis, predictive maintenance, and smart resource allocation
Open source development accelerates innovation with 150 million developers contributing globally
Academic institutions must balance AI tool usage with maintaining critical thinking and learning outcomes
Regulatory frameworks need continuous evolution to address cybersecurity risks and ensure transparency in AI-driven telecom services
Clean, trusted, and transparent data with proper governance frameworks forms the foundation for effective AI applications
AI-driven planning can unlock financing by providing governments with data-backed infrastructure investment plans
Resolutions and action items
ITU-D to continue applying AI in various supported initiatives through partnerships with member states and private sector
Brazil’s Anatel conducting ongoing regulatory impact assessment for AI in telecom services with public consultation
ITU partnership with GitHub to open source software tools and create accessible repositories for developer contributions
Connectivity Planning Platform (CPP) development with MVP presentation at GIGA Connectivity Forum and general availability by June 2026
Extension of AI object detection datasets and development of tools for smooth data pipeline implementation
Continued collaboration between Brazil’s Anatel and ITA university to examine AI’s impact on telecom regulation
Unresolved issues
Technical connection issues preventing remote participation of Walid Mahmoudli from ITU’s Future Networks and Spectrum Division
Need for comprehensive data standards and schemas across all types of infrastructure data beyond fiber
Balancing AI innovation with transparency, privacy, and accountability requirements
Addressing the trade-off between precision and recall in AI model performance for infrastructure mapping
Managing the risk of cognitive debt from over-reliance on AI tools in academic and professional settings
Ensuring AI deployment doesn’t leave anyone behind in digital transformation
Developing effective crowdsourcing mechanisms for data validation and feedback
Suggested compromises
Focus on recall over precision in AI models for infrastructure detection, accepting false positives that can be post-validated rather than missing actual infrastructure
Implement iterative hyperparameter tuning to optimize AI model metrics rather than seeking perfect initial results
Use AI as a thinking aid rather than replacement, maintaining human oversight and critical evaluation
Adopt open innovation systems and collaborative platforms for knowledge-sharing while maintaining security standards
Balance innovation velocity with proper security scanning and vulnerability management in open source development
Thought provoking comments
AI should be a bridge, not a barrier, to inclusive digital infrastructure. And once again, I would like to reiterate that we must ensure that no one is left behind in the next wave of digital transformation.
Speaker
Archana G. Gulati
Reason
This comment reframes AI from a purely technical tool to a social equity instrument, establishing the ethical foundation for the entire discussion. It introduces the critical tension between technological advancement and inclusivity that runs throughout the session.
Impact
This opening statement set the tone for the entire discussion, with subsequent speakers consistently returning to themes of inclusivity, ethics, and ensuring technology serves all populations. It established the framework that AI deployment must be measured not just by technical success but by its impact on digital equity.
GIGO. Garbage in, garbage out… No matter how good your algorithms, no matter how good your tools or your models are, you will get bad results if the input data is bad.
Speaker
Gonzalo Suardiaz
Reason
This comment cuts through AI hype to identify a fundamental, practical challenge that undermines all AI applications. It shifts focus from algorithmic sophistication to data quality as the critical success factor.
Impact
This observation fundamentally shifted the discussion from celebrating AI capabilities to examining its limitations and prerequisites. It introduced a sobering reality check that influenced subsequent speakers to address data governance, validation, and the practical challenges of implementation.
What happens when AI thinks for us? A recent MIT study explored how students use ChatGPT to write essays. It turned out that the more they rely on it, the less they remembered… The cognitive shortcut is what the research called cognitive debt.
Speaker
Aleksandra Jastrzebska
Reason
This comment introduces the profound concept of ‘cognitive debt’ – a hidden cost of AI adoption that parallels financial debt. It challenges the assumption that AI assistance is purely beneficial and reveals unintended consequences on human cognitive development.
Impact
This insight dramatically deepened the discussion by introducing neurological evidence of AI’s impact on human cognition. It moved the conversation beyond technical implementation to examine fundamental questions about human-AI interaction and long-term societal implications, adding a crucial dimension about preserving human intellectual capacity.
AI doesn’t need to replace our minds. It should challenge them… we are thinking with and help of AI, but not letting it think for us.
Speaker
Aleksandra Jastrzebska
Reason
This comment provides a philosophical framework for healthy human-AI collaboration, distinguishing between AI as a cognitive enhancer versus a cognitive replacement. It offers a practical principle for maintaining human agency in an AI-driven world.
Impact
This statement provided a synthesizing principle that tied together various concerns raised throughout the session. It offered a constructive path forward that acknowledges AI’s power while preserving human intellectual sovereignty, influencing the moderator’s final takeaways about AI being an enabler, not a replacement.
AI is not thinking, it’s calculating.
Speaker
Aleksandra Jastrzebska
Reason
This succinct statement demystifies AI by clearly delineating what AI actually does versus human perception of its capabilities. It cuts through anthropomorphic language that often obscures AI’s true nature as mathematical computation.
Impact
This clarification was so impactful that the moderator specifically highlighted it in his closing remarks. It provided a grounding reality check that helped participants and audience maintain appropriate expectations and understanding of AI capabilities throughout the discussion.
Without open source, these tech companies can’t move as forward and as quickly as they can today… Google Chrome, is fully open source. And that is powering a lot of the development today.
Speaker
Joshua Ku
Reason
This comment reveals the counterintuitive reality that even the world’s largest tech companies depend on collaborative, open development models. It challenges assumptions about competitive advantage and demonstrates how openness accelerates rather than hinders innovation.
Impact
This insight reframed open source from a nice-to-have community initiative to a fundamental requirement for technological progress. It strengthened the argument for the ITU’s open source initiatives and demonstrated how collaboration, rather than competition, drives the most significant technological advances.
Overall assessment
These key comments fundamentally shaped the discussion by establishing multiple critical frameworks: ethical (AI as bridge vs. barrier), practical (GIGO and data quality), cognitive (thinking with vs. letting AI think for us), and collaborative (open source as essential infrastructure). The comments created a sophisticated dialogue that moved beyond technical implementation to examine deeper questions about human-AI interaction, societal impact, and sustainable development approaches. The discussion evolved from initial optimism about AI’s potential to a more nuanced understanding of its challenges, prerequisites, and proper role in human society. The interplay between these insights created a comprehensive examination that balanced technological enthusiasm with practical wisdom and ethical considerations.
Follow-up questions
How can we ensure that AI algorithms used in telecommunications are free from bias and maintain transparency, especially when serving vulnerable populations?
Speaker
Archana G. Gulati
Explanation
This is crucial for ensuring ethical and equitable AI deployment in ICT infrastructure, particularly for underserved communities
What specific data governance and privacy frameworks are needed when AI is applied to telecommunications infrastructure in vulnerable populations?
Speaker
Archana G. Gulati
Explanation
Essential for protecting privacy rights while enabling AI-driven connectivity solutions in underserved areas
How can we ensure workforce deployment and AI literacy among public authorities working with AI-enabled telecommunications systems?
Speaker
Archana G. Gulati
Explanation
Critical for effective implementation and governance of AI tools in public sector telecommunications planning
How can we develop comprehensive data standards and schemas for all types of infrastructure data (points of interest, coverage data, backhaul data, cost data) to prevent GIGO issues?
Speaker
Gonzalo Suardiaz
Explanation
Essential for ensuring accurate connectivity planning and avoiding wrong infrastructure investment decisions
What governance frameworks are needed to track data ownership, changes, and version control in dynamic infrastructure datasets?
Speaker
Gonzalo Suardiaz
Explanation
Critical for maintaining data quality and accountability in AI-driven infrastructure planning platforms
How can we effectively triangulate and validate data from multiple sources to identify and correct inconsistencies?
Speaker
Gonzalo Suardiaz
Explanation
Important for ensuring reliability of infrastructure planning decisions based on multiple data sources
How can we extend datasets to include a wider variety of cell tower types and improve recall in AI object detection models?
Speaker
Sandor Farkas
Explanation
Necessary to improve the accuracy and coverage of AI-based cell tower detection for infrastructure mapping
What tools need to be developed for a smooth data pipeline in AI-based infrastructure object detection?
Speaker
Sandor Farkas
Explanation
Essential for operationalizing AI object detection for practical infrastructure planning applications
How can we balance precision and recall in AI object detection models for infrastructure mapping, and what are the trade-offs?
Speaker
Sandor Farkas
Explanation
Critical for optimizing AI model performance based on specific use case requirements in infrastructure detection
What are the long-term cognitive effects of over-reliance on AI tools in academic and professional research?
Speaker
Aleksandra Jastrzebska
Explanation
Important for understanding how AI dependency might affect critical thinking and learning outcomes in research and education
How can we develop better quality control mechanisms to prevent AI-generated content from entering the scientific record without proper human oversight?
Speaker
Aleksandra Jastrzebska
Explanation
Essential for maintaining the integrity of scientific research and preventing contamination of academic literature with unedited AI content
What are the best practices for maintaining and building velocity in open source projects after initial release?
Speaker
Joshua Ku
Explanation
Critical for ensuring long-term sustainability and community engagement in open source infrastructure projects
How can organizations effectively manage community contributions and create accessible entry points for new developers in open source projects?
Speaker
Joshua Ku
Explanation
Important for building and maintaining active developer communities around open source infrastructure tools
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.