AI for Good Technology That Empowers People
20 Feb 2026 10:00h - 11:00h
Summary
The session opened with Speaker 1 introducing Fred Werner, Chief of Strategic Engagement at the ITU, to give opening remarks [1-3]. Fred framed AI as potentially the last human invention and argued that ensuring AI serves “good” requires proactive governance, citing a conversation with AI-safety expert Roman Yampolskiy [4-12]. He outlined the evolution of the ITU’s AI for Good initiative from its initially hype-driven 2017 launch to today’s focus on generative AI, autonomous agents, robotics, brain-computer interfaces and space computing, noting a “zero-click” world where agents act on our behalf [13-22]. Emphasising the breadth of applications, Fred listed sectors such as affordable healthcare, education, food security and disaster response as key use cases for AI for Good [23-24]. He described AI for Good as a year-long movement built on three pillars (solutions, skills and standards) and highlighted over 400 AI standards in development, including work on future networks and AI-native architectures relevant to the session’s edge-AI theme [55-71].
Brijesh Lal then presented IIT Delhi’s edge-AI research, focusing on haptic feedback, split-control architectures and intent-based signal conversion to enable low-latency, safety-critical applications [87-112]. He also reported collaborative technical reports from TSDSI on dynamic AI models for V2X, security, digital twins and AI-native 6G RAN, underscoring the importance of global standards forums such as the ITU-R IMT-2030 framework and 3GPP for edge development [115-133].
Ranjitha Prasad followed with a technical overview of federated learning, explaining how data explosion and sub-10 ms latency requirements in 6G networks drive edge-centric training to preserve privacy and reduce bandwidth use [136-152]. She illustrated two use cases, traffic prediction during a football event and V2X road-condition sharing, showing how federated models run at the edge while only metadata is sent to the cloud, and noted her role leading the Intellicom Lab’s collaboration with IIT Delhi [153-166].
The panel, moderated by Fred, began with Mala Kumar describing XR-assisted medical emergency care using public 5G, and private 5G for on-premise XR, and expressing a desire to open-source these solutions through AI for Good platforms [172-188]. Alagan Mahalingam recounted deploying edge AI for small-scale farmers in Portugal and Sri Lanka, adapting hardware such as the Raspberry Pi to deliver AI services where connectivity is limited, and emphasized model quantisation to fit edge constraints [200-236]. Sakshi Gupta of Qualcomm highlighted the shift of inference to devices (from smartphones to cars and IoT), pointing to on-device large models and Qualcomm’s “Tech for Good” program that supports startups like India’s Raksa Health with edge AI health assistants [246-286].
Ambassador Egriselda Lopez summarized that human-centred AI (“HAI”) means placing AI close to people and services, improving speed, cost and privacy in low-connectivity settings, and called for continued cooperation to avoid fragmented approaches [313-317]. Ambassador Reintam Saar outlined the upcoming UN Global Dialogue on AI Governance, stressing inclusive, outcome-oriented discussions that will build on the practical insights shared during the panel, thereby linking standards work with real-world impact [338-350].
Key points
Major discussion points
– AI for Good – its evolution, goals and operating pillars – Fred introduced the AI for Good programme, noting its start in 2017, the shift from hype to concrete solutions, the rise of generative AI and autonomous agents, and its mission to “unlock AI’s potential to serve humanity” [4-13][14-21][24-28]. He later clarified that AI for Good is a year-long ecosystem built around three pillars (solutions, skills and standards), with hackathons, machine-learning challenges and a standards portfolio of over 400 AI standards [55-60][68-70].
– Edge AI as a catalyst for the Global South – Multiple speakers highlighted why edge computing is essential for low-latency, privacy-preserving and context-aware AI. Brijesh described the convergence of communication, compute and control and the need for strong edge capabilities for haptics and V2X [90-99]. Ranjitha explained federated learning as a way to keep data at the edge, improving privacy, latency and bandwidth for telecom use-cases [136-144]. Alagan gave concrete examples such as AI-enabled soil-sensing and farmer advisory systems that were adapted to offline villages in Sri Lanka using edge devices [200-215]. Mala showcased XR-assisted medical emergency care that leverages private and public 5G edge networks [172-182]. Sakshi described Qualcomm’s on-device AI, edge-cloud and automotive deployments, and highlighted startup pilots like India’s Raksa Health that run AI locally [246-286].
– Standards, collaboration and capacity-building across UN agencies and industry – Fred emphasized the role of the ITU and its 50 UN sister agencies in co-creating AI standards and standards-related work on future networks, AI-native RAN and edge AI [29-31][68-70]. He later noted that the “standards work … will emerge … to make this work at scale” [292-296]. Panelists referenced existing standardisation frameworks (e.g., the ITU-R IMT-2030 framework, 3GPP, oneM2M) and the need for inclusive, interoperable specifications [130-133][338-345].
– Human-centred AI governance and the upcoming Global AI Governance Dialogue – The Ambassador of El Salvador stressed that AI must stay “close to people, services, communities” and protect privacy while delivering speed and cost benefits [309-317]. She called for people to remain at the centre, for closing the digital divide, and for avoiding fragmented approaches [322-330]. Ambassador Reintam outlined the mandate of the first UN Global AI Governance Dialogue, stressing inclusivity, capacity-building and actionable outcomes [338-345]. Throughout, speakers linked these governance goals back to the AI for Good ethos of putting humanity first [45-48].
Overall purpose / goal of the discussion
The session was convened to showcase how the AI for Good programme is mobilising research, standards development and multi-stakeholder collaboration to harness edge AI for solving concrete societal challenges, especially in the Global South, while embedding those efforts within a human-centred governance framework that the UN will advance through its upcoming Global AI Governance Dialogue.
Overall tone and its evolution
– The opening remarks were formal and optimistic, framing AI for Good as a visionary movement.
– As technical speakers took the floor, the tone shifted to informative and pragmatic, focusing on specific edge-AI use-cases, challenges, and engineering solutions.
– During the panel, the conversation became collaborative and solution-oriented, with participants sharing real-world deployments and emphasizing standards and open-source sharing.
– The closing remarks adopted a hopeful and inclusive tone, stressing human-centred values, the need for global cooperation, and the promise of forthcoming governance dialogues. Throughout, the tone remained constructive, moving from high-level vision to concrete action and back to a unifying call for collective responsibility.
Speakers
– Fred Werner – Chief of Strategic Engagement Department, ITU; moderator and panelist; expertise in AI for Good, AI standards, edge AI. [S18][S16]
– Speaker 1 – Host/moderator of the session; role not specified.
– Brijesh Lal – Professor; former Bharti School Chairman; researcher focusing on edge AI, haptics, and Global South initiatives. [S8]
– Mala Kumar – Technologist at the Center of Excellence Wired and Wireless Technologies, Art Park; former post-doctoral researcher at Technical University Berlin; visiting researcher at UC Davis and TU Berlin; works on AI-enabled XR applications.
– Alagan Mahalingam – Founder, CEO, and Chief Software Architect of RootCode; ICT Entrepreneur of the Year 2021; Young Entrepreneur of the Year 2024; Envoy for Estonia e-residency. [S1]
– Sakshi Gupta – Global Government Affairs lead for Qualcomm; tech-policy professional focusing on AI, emerging technologies, market research and stakeholder engagement.
– Ambassador Egriselda Lopez – Her Excellency, Ambassador, Permanent Representative of the Republic of El Salvador to the United Nations Office and other International Organizations in Geneva.
– Ambassador Reintam Saar – Co-chair of the UN Global Dialogue on AI Governance; responsible for organizing the dialogue and producing summary reports. [S2]
– Ranjitha Prasad – PhD researcher specializing in causal inference, survival analysis, Bayesian neural networks, and federated learning; Principal Investigator of the Intellicom Lab at IIIT Delhi. [S11]
Additional speakers:
– Vijay Singh – Mentioned in the introduction; no role or title provided.
– Vishnu ji – Referenced as a host/introducer in the transcript; no specific role or title provided.
The session opened with Speaker 1 thanking the audience and quickly introducing Fred Werner, Chief of Strategic Engagement at the ITU, who delivered the opening remarks [1-3]. Fred began with a provocative question – “What if the last thing that humans ever invent is invention itself?” – and recounted a conversation with AI-safety expert Roman Yampolskiy about whether AI should be “for good” or “for good, forever” [4-12]. He used this dialogue to stress that, if AI becomes humanity’s final invention, it must be deliberately guided toward beneficial outcomes.
Fred traced the evolution of the ITU’s AI for Good programme. Launched in 2017, the initiative moved from an early focus on hype and “fear, promise and hype” [15-16] to a concrete effort that now embraces generative AI, autonomous agents, robotics, brain-computer interfaces and even space-based computing [18-23]. He described a “zero-click” world where agents act without explicit prompts [20-21] and highlighted the breadth of societal challenges AI can address, from affordable healthcare to food security and disaster response [24-25]. Fred underscored that AI for Good cannot succeed in isolation; it relies on partnerships with more than 50 UN sister agencies that contribute expertise, drive standards work and foster cooperation on AI governance [28-31].
Fred noted that AI for Good is a year-long movement and global community, not just an annual summit, organised around three pillars-solutions, skills and standards-each supporting concrete activities such as machine-learning challenges, the AI Skills Coalition sandbox, and over 400 emerging AI standards for future networks and AI-native architectures [55-60][61-63][63-66][68-71].
Brijesh Lal from IIT Delhi then presented his research on edge AI, beginning with the convergence of communication, compute and control that makes edge capability essential for safety-critical applications such as haptics [90-95]. He argued that latency-sensitive haptic feedback cannot tolerate errors, so strong edge processing is required [96-99]. Brijesh described a “split-control” architecture that moves substantial processing from the cloud to the edge, and an “intent-based” signal conversion that abstracts raw pressure data into higher-level commands [106-111]. He also highlighted collaborative technical reports from TSDSI on dynamic AI models for V2X, security aspects, AI-enhanced digital twinning and AI-native 6G RAN, and pointed to standards forums such as the ITU-R IMT-2030 framework, ITU-T, 3GPP and oneM2M as crucial venues for global edge development [115-133].
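The “intent-based” conversion Brijesh describes, where the edge abstracts raw, device-specific pressure samples into a device-independent command that the far-end edge re-maps to its own actuator, can be sketched roughly as below. All names, thresholds and the command vocabulary are invented for illustration; the actual IIT Delhi encoding is not detailed in this summary.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """Device-independent haptic command sent across the network
    instead of raw, device-specific pressure samples."""
    action: str      # hypothetical vocabulary: "grip", "release", "press"
    strength: float  # normalised 0..1, re-mapped by the far-end edge

def pressure_to_intent(samples, grip_threshold=0.6, release_threshold=0.2):
    """Near-end edge: abstract a raw pressure stream into one intent.
    Thresholds are hypothetical tuning parameters."""
    level = sum(samples) / len(samples)      # smooth sensor noise
    if level >= grip_threshold:
        return Intent("grip", min(level, 1.0))
    if level <= release_threshold:
        return Intent("release", 0.0)
    return Intent("press", level)

def intent_to_actuator(intent, max_force_newtons):
    """Far-end edge: map the intent into this device's own force range,
    so differently built manipulators can interoperate."""
    return intent.strength * max_force_newtons

cmd = pressure_to_intent([0.7, 0.8, 0.75])
print(cmd.action, intent_to_actuator(cmd, max_force_newtons=20.0))  # → grip 15.0
```

The point of the abstraction is the interoperability Brijesh mentions: only the small, normalised `Intent` crosses the network, and each endpoint keeps its own calibration local.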
Ranjitha Prasad, Principal Investigator of the Intellicom Lab at IIIT Delhi and a collaborator of the IIT Delhi team, provided a technical overview of federated learning (FL) as an enabler of edge-centric AI. She linked the exponential growth of mobile data traffic and the sub-10 ms latency requirements of 6G services to the need for privacy-preserving, distributed intelligence [136-144]. FL brings the code to the data, allowing training to occur locally while only model updates are shared with the cloud, thereby reducing bandwidth and safeguarding user privacy [139-144]. Ranjitha illustrated two concrete use cases: traffic prediction during a football-match event, where edge base stations aggregate local traffic before a MEC controller optimises routing [147-152]; and V2X road-condition sharing, where each vehicle communicates with a local edge server before contributing to a global model [153-166].
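The federated pattern Ranjitha describes, local training with only model updates leaving the device, can be sketched as a minimal FedAvg-style loop. This is a toy illustration on synthetic linear-regression data, not any system discussed in the session; all names and hyperparameters are invented.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's training: gradient descent on its own private data.
    Only the updated weights (never X or y) leave the device."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg(client_weights, sizes):
    """Server step: average client models, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three edge clients, each holding its own private data.
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=40)))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = fed_avg(updates, [len(y) for _, y in clients])

print(w_global)  # converges toward true_w without raw data ever being pooled
```

The bandwidth and privacy argument from the talk is visible in the loop: each round exchanges only a 2-element weight vector per client, never the 40 raw samples.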
The panel, moderated by Fred, opened with Mala describing XR-assisted medical emergency care. Using public 5G, first-responders equipped with XR glasses and IoT wearables receive real-time vitals overlaid on video, enabling remote medical experts to guide CPR or AED deployment [172-180]. A separate private 5G deployment supports on-premise XR tours for Industry 5.0 applications [182-186]. Mala expressed a desire to open-source these solutions through the AI for Good sandbox so that the international community can test, fine-tune and scale them [187-188].
Alagan Mahalingam then shared real-world edge-AI deployments. He recounted a farmer-advisory system built for Portugal that combines soil-sensing hardware, mobile-app image analysis and AI models to advise small-scale growers [204-207]. When the solution was trialled in a remote Sri Lankan village with unreliable connectivity, the team introduced a Raspberry Pi edge node running lightweight models (e.g., GemR) to retain functionality offline [208-214]. He also described a “tuk-tuk data-center” concept, edge compute mounted on a mobile vehicle to serve rural villages, illustrating creative deployment ideas. This experience reinforced his view that edge is indispensable not only in the Global South but also in well-connected regions when users move out of coverage, as illustrated by a remote-patient-monitoring service in the United States [224-227]. Alagan stressed a “task-first” design philosophy: start from the specific problem, then distil or quantise models to fit edge constraints, avoiding unnecessary large-language-model deployments [230-236].
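Alagan’s “distil or quantise” point can be illustrated with the simplest form of post-training quantisation: mapping float32 weights to 8-bit integers with a per-tensor scale and zero point, which cuts memory roughly 4x at a small accuracy cost. This is a hedged toy sketch, not the pipeline his team used; real deployments would typically rely on a framework’s quantisation tooling.

```python
import numpy as np

def quantize_int8(w):
    """Affine per-tensor quantisation: float32 -> uint8 plus (scale, zero_point)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0           # guard against constant tensors
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights for inference."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)

print(w.nbytes / q.nbytes)        # 4.0: uint8 storage is a quarter of float32
print(np.abs(w - w_hat).max())    # small: error bounded by about one quant step
```

The trade-off matches the task-first argument: a bounded per-weight error (at most about one quantisation step) in exchange for models that fit on a Raspberry Pi-class edge node.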
Sakshi Gupta of Qualcomm discussed why edge AI matters for the Global South, highlighting key considerations (latency, security, privacy, personalization, cost and power) that shape edge-AI deployments. She noted that modern smartphones already run on-device models with up to ten billion parameters, enabling AI use even in flight mode [262-266], and that similar capabilities are emerging in cars, IoT devices and smart glasses [267-270]. When Fred asked for concrete evaluation metrics for edge AI, Sakshi responded by outlining these considerations but did not propose specific metrics. She also described Qualcomm’s “Tech for Good” programme, which mentors startups such as India’s Raksa Health that have built on-device AI health assistants capable of offline symptom checking and prescription lookup [284-286].
Fred linked these examples back to standards work, observing that the proliferation of edge AI creates a need for new specifications on hardware availability, connectivity quality, privacy safeguards and data handling [240-246]. He warned that without appropriate standards, scaling these solutions will be difficult, yet he also acknowledged the urgency of fast-track deployments that address immediate needs [292-296].
Ambassador Egriselda Lopez, based in New York, framed the discussion in human-centred terms, defining “HAI” as AI placed close to people, services and communities, which improves speed, reduces cost and enhances privacy in low-connectivity settings [313-318]. She reiterated three policy messages: keep people at the centre of AI development, provide decisive support to close the digital divide, and avoid fragmented national approaches by fostering cooperation [322-330].
Ambassador Reintam Saar outlined the forthcoming UN Global AI Governance Dialogue. He explained that the dialogue will bring together governments and multi-stakeholder groups to exchange best practices, focus on practical outcomes, align with existing UN processes and avoid duplication [338-345]. Capacity-building, trust, transparency and a human-rights grounding were identified as core principles, and the dialogue will draw on the “wisdom” of participants to produce a roadmap for future action [346-350].
Across the session, participants reached strong consensus on several points. All agreed that edge AI is essential for delivering low-latency, privacy-preserving services in both underserved and well-connected environments [91-95][202-215][260-266]; that task-driven, lightweight models are preferable to large generic LLMs for edge deployment [230-236][262-264]; that privacy preservation justifies moving processing to the edge [139-144][274-279][313-318]; and that AI for Good must be pursued through inclusive, multi-stakeholder governance [26-27][338-345][55-60].
However, the discussion also revealed disagreements. Alagan argued that edge solutions should rely on quantised, task-specific models and avoid large LLMs [230-236], whereas Sakshi highlighted that current smartphones already host 10-billion-parameter models on-device [262-264], illustrating a tension between perceived resource constraints and actual hardware capabilities. Sakshi responded to Fred’s request for concrete evaluation metrics by outlining key considerations but did not provide specific metrics; this request remained unaddressed by other speakers, who focused on use cases and standards [68-70][230-236]. Finally, Fred emphasised the importance of standards development for AI-native networks [68-70], while Alagan’s fast-track, task-first deployment approach implied that waiting for formal standards could delay impact [230-236].
Key takeaways
(i) AI for Good’s overarching goal is to unlock AI’s potential for humanity through a continuous ecosystem of solutions, skills and standards [55-60][S1];
(ii) edge AI is a critical enabler because the convergence of communication, compute and control permits safety-critical and context-specific applications, especially in the Global South [90-99][200-215];
(iii) practical edge-AI use cases demonstrated include XR-assisted emergency care, farmer advisory systems, remote patient monitoring, traffic-prediction and V2X services [172-188][200-207][224-227][147-152];
(iv) task-driven, lightweight models-achieved via quantisation, pruning or distillation-are preferred over large foundation models for edge deployments [230-236];
(v) federated learning provides a privacy-preserving pathway to edge training while reducing bandwidth and latency [139-144][145-152];
(vi) the ITU is advancing standards for AI-native networks, future 5G/6G architectures and multimodal QoE, laying the groundwork for scalable edge AI [68-70][71];
(vii) human-centred AI (HAI) that brings intelligence close to people improves speed, cost and privacy [313-318];
(viii) the upcoming UN Global AI Governance Dialogue will focus on inclusive, outcome-oriented discussions, capacity-building and alignment with existing UN processes [338-345].
Unresolved issues
– Defining concrete metrics and benchmarks for evaluating edge-AI deployments (e.g., latency thresholds, hardware availability, privacy safeguards).
– Ensuring interoperability of heterogeneous edge devices and haptic interfaces across manufacturers.
– Establishing sustainable funding and business models for scaling edge solutions in under-connected regions.
– Clarifying the balance between on-device training versus cloud-based training in federated learning, especially for large-scale models.
– Creating mechanisms for coordinated data sharing and knowledge transfer among UN agencies, national governments and private-sector partners.
Suggested compromises
Adopt a task-centric approach that designs lightweight, purpose-built models for edge deployment; combine cloud-based heavy training with edge-based inference and federated updates to preserve privacy; leverage open-source repositories within the AI for Good sandbox to enable community testing and rapid iteration; pilot regional edge solutions (e.g., in India, Sri Lanka, Portugal) that can be adapted elsewhere; and align standards development with existing UN frameworks to avoid duplication while running in parallel with fast-track pilots.
In closing, the session reaffirmed the collective commitment to advance edge AI as a means of realising the AI for Good vision, to develop inclusive standards and governance structures, and to continue collaborative research and capacity-building activities throughout the year. The speakers thanked the participants and invited them to contribute further to the forthcoming Global AI Governance Dialogue in July 2026 [338-350].
Thank you. Thank you very much. We have very little time, so I want to first of all introduce Fred. Fred Werner is the Chief of Strategic Engagement Department at ITU. Welcome, Fred, to give the opening remarks.
Hello. Let me start with a question: what if the last thing that humans ever invent is invention itself? Now what do I mean by this? If you’re familiar with Roman Yampolskiy, he’s a leading AI safety expert, and I met him in New York at the UNGA last fall. And he said, Fred, what is AI for good? I said, well, what do you mean? He said, well, is it for good or for good? Well, what do you mean? And he said, well, for good as in beneficial, as in good, or as in for good, forever. I said, hmm, good point. And he said, what if AI is the last thing that humans ever invent?
Now, you might agree or disagree with that statement, but it’s not hard to imagine a future where most future inventions will either be invented by an AI or with the help of an AI. And if that is the case, then I think we do need to make sure that AI, if it’s going to be for good, is indeed for good. So my name’s Fred Werner from the ITU. It’s the United Nations Specialized Agency for Digital Technologies, and we’re also the organizers of AI for Good with 50 -plus UN sister agencies. Now, AI for Good was created in 2017, and if you think about that, that’s basically an eternity in terms of AI years, looking at how fast it’s been developing.
And back then, it was really all about the fear and the promise and the hype of AI. Most solutions existed in fancy PowerPoint slides, but there wasn’t a whole lot of substance. But that changed rather quickly. In 2023, we saw the advent of generative AI. Last year, the unofficial theme of the summit was the rise of the AI agents. And now we’re looking at a world where you’re basically entering a zero -click world where agents are not waiting for our prompts. They’re actually acting on our behalf. And in addition, you have the physical embodiment of AI in the form of robotics, embodied AI, brain -computer interfaces, and we’re even looking at space AI computing now. Now, so I think we’re safe to say there’s no shortage of high -potential AI use cases that can be used to help solve global challenges.
Anything from affordable healthcare to education for all, food security, disaster response, the use cases are definitely there. So what is the goal of AI for Good? Well, simply put, it’s to unlock AI’s potential to serve humanity. And how do we do this? Well, first of all, we can’t do this alone. Nobody can. That’s why we have AI. We have 50 UN sister agencies as partners of AI for Good, contributing knowledge, sharing expertise, helping to drive our standards work. building cooperation around AI governance and we’re very privileged to have here the two co -chairs and facilitators of the UN AI Global Dialogue who will be doing the closing remarks. Now, I could talk about AI for Good for days but to save us some time, I just want to show you a little video so you can actually see AI for Good in action from our last summit.
If we could please play the video. I have a joke that I always say for these occasions. AI is easy, AV is difficult. Actually, we don’t need to see the video. Oh, ah. Is it going to happen? Yes. But now we need sound. Since there’s no sound, that’s lovely, Geneva. Ah, that’s good. We are more than the AI generation. We are the generation that is determined, ladies and gentlemen, determined to shape AI for good. So no matter how fast technology moves, let us never stop putting AI at the service of all people and our planet. If you want an AI literate society, meaning resilient and ready for the future, we need to integrate these new tools into schools, curricula.
Let’s build a future where AI advances progress for all humanity. A shared digital future that is again inclusive, equitable, prosperous and sustainable for all. It is no coincidence that this era of profound innovation has prompted many to reflect on what it means to be human and on humanity’s role in the world. AI must help bring us closer, not to divide us apart. That’s one of the foundational promises of AI for good. We all now have, I think, a much greater level of awareness around AI, and we all need to shift into that as fast as possible because this technology is moving so fast. Ladies and gentlemen, this was a real… fast -track operation that we did, which we call the International AI Standards Exchange Database.
in your domain or industry that require this type of trigger. And we have just started the last step right from the general division. Let’s go! I think it’s fair to say that AI for Good is indeed more than a summit. It’s a movement, it’s a global community, and it would be nothing without you, the participants. 3, 2, 1! Thanks for watching. I’m not sure who that last guy was. Now, I think one of our… I think people often misunderstand that AI for Good, it’s known as a summit that takes place each year in Geneva. But it’s actually a year -long activity. We have online events almost every day of the week, all year long. And we’re organized around three pillars.
Solutions, skills, and standards. And if you look at the solutions pillar, we have machine learning challenges, we have startup pitching competitions, all types of activities to identify real practical applications of AI that you can use here and today. And on the topic of Edge AI, we had a build -a -thon on Edge AI just a few weeks ago here in India. And we also had machine learning challenges on tiny ML, tiny machine learning devices. And when we’re looking at skills, we launched the AI Skills Coalition. And a big piece of that is going to be creating basically machine learning environment sandboxes where we can do training and mentoring for governments to upskill their constituencies on the use of AI using the data from our machine learning challenges.
So it’s not hypothetical. It’s using real data for real solutions. And the last piece, of course, the bread and butter of ITU, is standards. And we have over 400 AI standards published or in development covering a whole suite of topics. But more specifically related to the session, we have a standards work on future networks, basically 5G, 6G and beyond, and a pre -standardization effort on AI native networks. So basically, these are examples of AI for good in action. And the theme of this session is actually edge AI in action in the global south. And I’m very much looking forward to the discussion. And thank you for your time and attention.
Thank you so much, Fred. Now, we have the keynotes coming. Thank you. First of all, let me call Professor Lal. Brijesh is my great friend as well as colleague. He was the Bharti School Chairman, but also right now he is currently looking at edge AI research. Our touch points with ITU are many, where he has hosted AI for Good Challenges, WTSA Hackathon. He was a judge, as well as Kaleidoscope. He is very active. Thank you very much, Vijay Singh, for coming, and over to you.
So it’s been a while. Thank you, Vishnu ji, for having me. I’ve been participating in these AI for Good activities, so there’s been a lot happening, not just these talks that you have, but also something on the ground. The hackathon is an example of that, with participation from all over the globe. So today I’m going to talk about some of the work that’s happening here at IIT Delhi, where we’re trying to leverage the edge. And the other thing that I’m going to run through very quickly is TSDSI and its role in edge. Because we’re focusing here on accelerating development across the Global South, I’m going to pick up those two examples today. Right. So what we’re trying to say is that you have lots and lots of edge agents that will now act simultaneously and in coordination.
So the reason why edge is becoming more and more important is this convergence of communication, compute and control. And this convergence is now quite real. And because this convergence is real, it is enabled, at least with today’s technology, only by strong edge control, specifically for tasks in the area of haptics. As I will show in the next slide, these require you to not miss or make mistakes, because some of them are catastrophic. And for that reason, strong development in the area of edge is important. The other reason why looking at edge is important from the perspective of the Global South is that, while it might not be easy to have foundation models that solve all the problems of the world, at least.
to an extent context has become increasingly important in modern times. People want to provide solutions which are very very specific to the task at hand and context can be best leveraged or used if there is a strong edge capability that is present. So in that light it is important that the global south focuses on building its strength in the area of edge. This slide here talks about some of the work that we are doing with respect to haptics. Haptics as you know is this sense of touch primarily consists of two aspects. One is kinesthetics which is the pressure that we feel and the second is tactile or texture which is the quality of surface that we you know the fine grained texture of the surface that we are able to measure using our skin.
So the thing with this kind of a modality is that while it seems to be almost abstract, it is quite pervasive. It is all around us: the temperature, the hardness, the softness, or the way people meet each other, greet each other. All of that is very, very important. It’s not overt, but it’s important nonetheless, so we sort of take it for granted. However, it is very, very important, and therefore it needs to be looked at a little carefully. Now the challenge with haptics is that, as we moved from speech to video, people did talk about bandwidth and latency, and there were quality of experience measures that evolved. With haptics it goes to the next level, because if you have unsynced and delayed haptic inputs or feedback, then it becomes quite confounding and it confuses the person, and it sometimes can be quite disconcerting. So for this reason it is extremely important that the haptics data that you receive is accurate and received on time.
So for this it becomes extremely important that there is a strong capability that is present at the edge. Now here at IIT we are trying to implement it using two ways. One is what we term as split control where we have tried to move from having solutions deployed only in the cloud and the endpoint. We try to put in significant amount of capability on the edge itself. The other aspect that we are looking at carefully is trying to convert signals which are haptic in form to signals which give you the intent rather than actual measurements of pressure as what haptics is to machines. So these two things are primarily handled at the edge. The first one is quite clear.
Let me just say a few words about the second. So when we talk about intent: in today’s world, whenever you look at a haptic solution, it is sort of locked in, right from the operator to the end point where you have some kind of dexterous manipulation of the environment around the device. However, it’s very hard for devices of different manufacturers to interoperate, and this happens because everything is very tightly coupled to the signals that are generated and the form factor of the devices. It’s not as simple as picking up any camera and showing the image you get on any display. So for that reason, the idea is to convert those signals into intent, send the intent to the other side, and the edge on the other side makes sense of the intent and converts it into a signal that the far point can then use to do whatever work is needed. So these are the two things that we look at with a reasonable amount of interest at IIT Delhi, and we continue to contribute to standards, primarily in the area of MSC and quality of experience where multi-modality is involved. Right, now this is the edge foundation network.
I'll skip this in the interest of time, because I have a couple of slides I want to walk you through on work being done by TSDSI, which is our SDO here in India; in conjunction with ITU, they are doing quite a lot of interesting work which is edge-centric. Let me talk about a few of those. A few technical reports have come out of late. There is work on dynamic AI/ML models for self-sustainable V2X applications, so V2X is being looked at carefully. There is also work in the area of security aspects and on advanced, AI-enhanced passive digital-twinning initiatives. We have some technical reports in this area.
There is also standards-development work happening: architectural support for the tactile applications I just spoke about, 6G AI architecture for the RAN, and AI-native scalable reference architectures. Maybe we'll talk about quality of experience in the next slide, but that's another thing we're looking at. We're also carrying out technical studies in all of these areas; in the interest of time, you'll have the slides and can go through them when you find the time. This is the other thing they wanted me to bring to light for this audience, just a couple of minutes: the global standards forums of interest to people here who look at the edge carefully.
There is the ITU-R IMT-2030 framework, including ubiquitous intelligence for overarching design; then there are related ITU-T standards, 3GPP standards, and of course M2M. All of these standards are of interest to the audience here and to people trying to do research in this area. Besides this, TSDSI has been trying to be inclusive by holding annual flagship conferences so that more and more people get insight into what is happening. With that I'll close, because we're really short of time here. Vishnu ji, back to you.
Thank you, Brijesh ji, for bringing out the Indian research on the topic and for bringing out the edge-AI framework as well. We have very little time, so let me invite Ranjitha. Ranjitha obtained her PhD from IIC; her current research involves causal inference, survival analysis and Bayesian neural networks. Over to you, Ranjitha.
Yeah, so something he also missed: I actually work in federated learning and many other learning paradigms. So let me just start. Mine is going to be a technical talk, where I'll give you the motivation for using federated learning, especially the role of federated learning in telecom networks, and why people are really discussing this. The motivation is of course data explosion: there is exponential growth in mobile data traffic, and there are all these diverse services in 6G, eMBB, URLLC; I'm sure this audience is well aware of this. Then there are bottlenecks in these legacy networks, which motivated moving towards edge-centric architectures. The goal, and I think this is something very important that most of the standards are looking at, is predictive zero-touch automation and closed-loop wireless control, with a loop-closure latency requirement of less than 10 milliseconds for mission-critical optimization. And this is exactly where federated learning comes in, as a key enabler of privacy-preserving, distributed intelligence. All of this is captured in the AI-native network concept, where AI is no longer a peripheral layer but is actually coming into the RAN. This is enabled by the O-RAN Alliance, particularly the RIC, the RAN Intelligent Controller, and this is how the whole sub-10-millisecond latency requirement is fulfilled. Something that is not very clear here is: why do you really require edge intelligence? To make it even faster and achieve the sub-10 milliseconds, you actually have to bring inference and training to the edge, rather than taking data to the cloud.
So that's where the whole paradigm shifted, and this argument about edge intelligence, or edge-native intelligence, came in. In particular, something called MEC, multi-access edge computing, was introduced. This brought in a huge architectural change: now we have the core network talking to the RAN, and the RAN talking to the UEs, and the UEs now have the intelligence along with the MEC controllers. So, federated learning. On top of all this, one very important aspect, and this is how we relate to AI for Good, is privacy. Think of the use case of traffic prediction, where there is a need for loads and loads of data, but this data consists of raw user logs and location history; if you share it with a centralized controller, it's just a privacy violation. The solution is to bring code to the data, not data to the code. That's where federated learning comes in: the intelligence, the training, happens at the edge, and only certain metadata is given to the cloud. So what is its implication in telecom? There's an impact on privacy, which is exactly where it's supposed to make the impact. There's an impact on latency and bandwidth: personalization of AI models is possible in real time; large-scale training can still happen in the core network, but smaller models are personalized for the edge; and there's an impact on bandwidth because I no longer need to send data to the server. And of course there's a huge impact on architecture, because as you saw, it becomes a hierarchical style of architecture with the core network at the top and the UEs at the bottom. I just wanted to quickly introduce two use cases. The first is in fact a use case from France, for traffic prediction: predicting certain traffic spikes when they had a football match. This is a scenario where you need to dynamically allocate resources for that particular stadium event. Each of these UEs or base stations picks up the traffic in its local area and shares it with the MEC controller and the core network, and the core network is then able to say how to route the traffic so there is less congestion. The other one is V2X, for sharing road conditions, accident information and other things. It's very easy to see why FL may be useful here: each car can talk to its own edge server, which then goes to the cloud server where the global model is trained. This sort of envisages how federated learning has become a very important technology. Last but not least: I am the PI of the lab called the Intellicom Lab at IIIT-Delhi, and we have a collaboration with IIT Delhi for this entire work.
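The "code to the data" pattern Ranjitha describes can be sketched as a minimal federated-averaging loop: each edge node trains on its own private data, and only model weights, never raw user logs, travel to the server. This is an illustrative toy (plain linear regression with NumPy, synthetic data), not a production FL stack; real deployments add frameworks, scheduling and secure aggregation on top of this idea.

```python
# Minimal federated averaging (FedAvg) sketch: local training at the edge,
# only weight updates sent to the server. Illustrative toy with synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a node's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three edge nodes, each holding private data drawn around the same true model.
true_w = np.array([1.0, -2.0])
nodes = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    nodes.append((X, y))

global_w = np.zeros(2)
for _round in range(100):
    # Each node trains locally; only the updated weights leave the edge.
    updates = [local_step(global_w.copy(), X, y) for X, y in nodes]
    global_w = np.mean(updates, axis=0)  # server aggregates (FedAvg)

print(np.round(global_w, 1))  # converges toward the true weights, approx [1, -2]
```

The raw `(X, y)` pairs never leave their node; the server only ever sees the averaged weight vector, which is the privacy property the talk emphasises.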
Thank you, Ranjitha, for the excellent talk. We had at least an introduction to federated learning, and the framework, the architecture she explained, is really interesting. Last time, when ITU colleagues were here, we visited the lab; if you haven't done that, please talk to her. It's very exciting research that they do, and we also have great collaboration with Bapi and colleagues at IIT Delhi. Thank you, Ranjitha, for coming. We have a panel now, with approximately 20 minutes. Let's kick off the panel. Can I invite Fred to moderate, and can I invite the panelists, Mala, Alagan and Sakshi, to please take their seats? Fred, to kick off, thank you very much. Over to you, Fred.
Thank you. I'm looking forward to this panel, where we aim to demystify edge AI a little and explore practical use cases and AI strategies. But first I'll introduce the panel. The first panelist is Mala; she has a full name, but she personally asked me to just call her Mala, and I wish all panelists would do that, it's much easier that way. Mala is currently a technologist at the Centre of Excellence in Wired and Wireless Technologies at ARTPARK. Prior to this, she was a postdoctoral researcher in the Teikian Group at Technical University Berlin. She is also involved in 6G initiatives such as AI-RAN for efficient resource allocation and millimeter-wave communications.
She has also been a visiting researcher at UC Davis and TU Berlin. Welcome. Our next panelist is Alagan Mahalingam, founder, CEO and chief software architect of RootCode. Alagan founded RootCode, and in his early 20s he worked as a researcher at international research organizations such as the Geoinformatics Center at the Asian Institute of Technology in Thailand, and the University of Tokyo in Japan, where he worked on satellite communications and solar-panel optimization algorithms. He was awarded the special title of ICT Entrepreneur of the Year at the National ICT Awards in 2021, and Young Entrepreneur of the Year in 2024. He is also an envoy for the government of Estonia's e-Residency programme. So I see a lot of Estonia connections here today.
Last but not least, we have Sakshi Gupta. She is responsible for Global Government Affairs at Qualcomm, a tech-policy professional working on AI and emerging-technology policy analysis, market research and stakeholder engagement. Could we please have a warm welcome for the panel? So, the first question is for Mala. You work on AI-enabled XR applications, split between private 5G and on-premise public 5G. Could you please give us some examples of XR applications in different scenarios, the trends, and the trade-offs in scalability and security?
They get the immersive experience in their own preferred regional languages. Another application we have built is XR-assisted medical emergency care. Here the focus is on providing a timely medical response to a patient suffering a cardiac arrest and so on. An SOS alert would be sent by the bystanders, from the life circuit exact, to the first responders, the medical experts and the ambulance, through 5G connectivity. Once the first responder gets the alert, he arrives at the scene with XR glasses, IoT wearables and the AED kit. While CPR is being given, the IoT patient vitals are displayed, augmented onto the real-time video.
The real-time video is also sent to the medical expert, who guides whether to continue CPR or move to the AED, and so on. The timely response can save multiple lives. In this case we used the public 5G network, but for the XR-assisted facility tour we used a private 5G network. The private 5G network is mainly for on-premise HCI applications; this brings the core next to the data generation, and then we can also do real-time decision-making for Industry 5.0 applications. Going forward, we would like some of our applications to be open source and hosted in the best place, like ITU's AI for Good, right?
Then the international community can access these open-source AI models, fine-tune them, and do rigorous testing before bringing them to real-world deployment. That is what I'm looking forward to.
Yeah, thanks so much, Mala. I think this really is a good example of AI for Good in action, and to your point, these solutions don't happen by magic; there are a lot of difficult problems to solve. By putting these solutions in the AI for Good sandbox, that might lead to future standards which could make them replicable, and then you could have adoption at scale. So I'll go to the next panelist, Alagan. Given your rich experience in developing AI solutions for partners in different geographies, can you please give us some examples of edge-AI deployment in real-world scenarios, their impact, and the nuances you see in edge-AI strategies in the different regions?
Because from your bio you’ve been involved in many different parts of the world. Thanks.
I started RootCode 11 years ago because I fell in love with building AI solutions as a college student, and now, 11 years later, the technology we have built is used by more than 92 million people across 27 countries, including many European governments, like the governments of Estonia and Portugal and many others. We chose to build edge AI in many cases: the obvious one, to bring technology to under-connected spaces, and also to increase speed in many cases, and for sovereignty. The most interesting project we have done recently, let me tell that story: a couple of years back, Portugal realized that their farmers, especially the small-scale farmers, didn't get enough access to advisory and intelligence to grow their crops.
And things had been changing: with climate change and unpredictability in growing crops, a lot of people were leaving farming. So we built a solution comprising hardware, a software product and an AI model. The hardware goes into the soil, so you understand the soil nutrition, and you take pictures with the mobile app, and we can process the pictures to understand whether there is a problem with the plant, right? We built it, and it worked out fantastically well. Then I tried to bring that to Sri Lanka. I grew up in Sri Lanka, and to date a big part of our development team is in Colombo, more than 120 people. So we went into one of these villages in the middle of the mountains of Sri Lanka, Nuwara Eliya, and I was super fascinated. But when we tried to deploy this, we realized they don't have reliable connectivity in some corners of the villages, and our solution was worthless.
And that's where we started bringing in edge. We brought in a new version: we had a Raspberry Pi, and we started testing models like GemR, and we also built our own convolutional networks, 2D things, to figure out where you optimize. You don't want to use an LLM for everything, right? By the end of it, we managed to bring the same value the software gave to connected users to people who didn't even have internet in some parts of Sri Lanka. That reminded me how much edge is needed, especially in the Global South. And yesterday I was at a dinner talking to some development-finance colleagues from DZ, and somebody was asking: why don't we put compute on wheels, in a tuk-tuk?
Imagine: we can't process too many things on a small device like a Raspberry Pi. What if a tuk-tuk came to your village every other day, or once a week, with a data center built in, with Wi-Fi LAN, so farmers could connect and do their processing? Smaller banks and smaller institutions could too. And I was like, yeah. So this week has been super fascinating. Sometimes when we think about edge, we think it's needed only in places that are not really connected, like rural parts. But we have built a beautiful solution that's used in America. If you think America is well connected, you should take a road trip; when you go out of the city, you realize some parts are very disconnected.
For one of our clients, we built a solution that helps rural patients who are at high risk, with remote patient monitoring. So yes, edge works all around the world, not just in the South. When I think about all my learnings, because there are so many from building edge for multiple geographies, multiple customers, multiple communities, if I were to single out one, it would be this: when you are trying to do something at the edge, you shouldn't think of the model first and then go find a solution. Instead, think of the task, and then work backwards on how to build, distill or fine-tune a smaller model.
And that runs on the edge, because at the edge you can't do everything, right? If you are building an AI assistant for farmers, you don't want the AI to be able to tell you why two famous CEOs didn't want to hold hands; that doesn't matter. You want it to answer about plants and agriculture. The heavier the model is, the harder it becomes to deploy. So we work on multiple techniques to quantize or prune the models in a way that creates a smaller version that does exactly what's supposed to happen. And I think the Global South needs to grow with this AI transformation of the world, because infrastructure takes decades, but the next few years are going to change the way we live.
And that’s why we are here. So I’m excited
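The quantization Alagan mentions can be sketched in a few lines. This is a toy illustration of post-training quantization, not RootCode's pipeline: weights are stored as int8 plus a per-tensor scale, shrinking the tensor roughly 4x versus float32. Real edge pipelines combine this with pruning, distillation and per-channel scales, but the core scale-and-round step looks like this:

```python
# Toy post-training quantization: float32 weights -> int8 + per-tensor scale.
# Illustrative only; real toolchains add calibration, per-channel scales, etc.
import numpy as np

def quantize(w: np.ndarray):
    """Map float32 weights to int8 plus a scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(4, 4)).astype(np.float32)
q, scale = quantize(w)

print(q.nbytes, w.nbytes)  # 16 vs 64 bytes: the int8 copy is 4x smaller
# Round-off error is bounded by half the scale step, so accuracy loss is small.
print(np.abs(dequantize(q, scale) - w).max() <= scale)
```

The 4x size reduction (and the cheaper int8 arithmetic on supporting hardware) is what makes a distilled, task-specific model fit on a device like the Raspberry Pi described above.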
Yeah, thanks a lot. I think what you're saying has almost been the theme of this week: you don't need the biggest AI or the biggest large language model. If you look at the example of India, where they've managed to enable a billion-plus people to have a digital ID, enabling financial inclusion and payments with the public interest at heart and with relatively low-tech solutions, you can indeed bring AI to the edge in cases where it makes a lot of sense. So thanks for that. Sakshi, a question for you. In your experience with Europe, at the intersection of technology, innovation and AI strategies, what do you think are the metrics to evaluate the usage of edge AI, such as availability and capability of hardware at the edge, and also the connectivity, privacy and data issues you see in your line of work?
Thank you, Fred. Let me start by saying it's an absolute pleasure to be sitting with fellow panelists and the speakers who preceded me, who are deploying edge AI and doing research on edge AI. At Qualcomm we are very focused on edge AI and think that is going to be the future of how not just the Global South, but the whole world, will use AI. And I really relate to what you said: the way to think about deployment of AI is actually to work backwards from the use case you're trying to solve, and then think about the best architecture to use.
Is it just cloud? Is it on-prem? Is it an edge cloud? Or is it on-device AI? We have to think about it from a distributed-architecture point of view when we consider the use cases we have here in the Global South. And I do want to mention one important distinction, which was touched upon earlier: when we think about AI, there's a training part and an inferencing part. Inferencing is where the thinking, the processing, is actually happening. So while training can continue to happen largely in the cloud, a lot of the inferencing, as we're seeing, is moving towards the edge.
Now, in terms of availability: we're increasingly seeing, and Qualcomm is deploying, AI at the edge. The most basic thing we all use every day is our smartphones: we have on-device capabilities coming to smartphones, with 10-billion-parameter models already running on-device. That means you do not need to be connected; you can be in flight mode, with no internet, and still use AI. That's amazing, in my view. It is also coming to cars: Qualcomm has developed, or is actually developing, technology where you can now use HCI in the car.
We have demos at the Qualcomm booth, which I'll come to later; HCI is coming to the cars as well, and it's increasingly coming to IoT devices and your smart glasses too. So in terms of availability, we are seeing it come to all types of devices that are connected to the internet. Now, why is edge AI relevant? Some of my fellow panelists have already touched on it: latency, security, privacy, personalization, low cost and low power are all very important factors for why edge AI becomes important for the Global South. We may not have access to as much power, or as much water, as needed.
But with edge AI, we don't have to worry about that. Apart from that, I do want to touch on one thing: at Qualcomm we have a program called Tech for Good, where we partner and work with startups and small businesses around the world. We invest in them, we mentor them, and they use Qualcomm hardware to develop solutions at the edge. In fact, in Hall 4 at our Qualcomm booth, we have some of these startups displaying their technology. One example is actually from India, called Raksa Health. They've built an on-device AI healthcare assistant for both doctors and patients, where doctors can take down symptoms and provide solutions for their patients, and patients can look up their prescriptions, access all their records offline, and ask questions about them.
So, yeah, I think that’s how we’re seeing the transition happen. Thank you.
Thank you. Some amazing use cases. This week, coming out of Davos, the narrative was all about go, go, go, the insatiable demand for energy; there's talk of putting data centers in space. But I think this panel brings things a bit down to earth: you can have AI at the edge, and of course there's a lot to solve there in connectivity, data and compute. I think a lot of standards-development work will need to emerge from this to make it work at scale. But your use cases and the way you're approaching the problem, especially starting from what you're trying to solve and working backwards, are very refreshing compared to all the headlines we've been seeing lately.
And I don't see it as either/or; I see it as a big, complementary piece of the puzzle. With that, I really want to thank the panel. Could we have a round of applause for them? Thank you.
Thank you very much, Fred, for running a tight panel. Now we are coming to the closing. Thank you, panelists, for the insightful remarks. Can I ask for a quick group photo of the panelists, please? Thank you very much. There are excellent closing remarks coming. May I please request Her Excellency Ms. Lopez, Ambassador, Permanent Representative of the Permanent Mission of the Republic of El Salvador to the United Nations Office and other international organizations in Geneva, to give her closing remarks.
I'm actually based in New York. Thank you. Well, good afternoon. I know I don't have much time, but I just had to say that this discussion was very enlightening. Thank you so much for sharing everything you're doing on the ground. It was very clear to me that edge AI simply means using AI closer to where things happen: closer to people, closer to services and communities, rather than depending only on faraway systems. It's amazing what you're already doing. This can be important for development because it can work better in places with limited connectivity, as we were hearing, and it can help with speed, cost and privacy, since not everything has to be sent everywhere.
I also have to mention something: I am also the co-chair of the Global Dialogue on AI Governance. This is going to happen in July this year, and it will be the first dialogue of its kind. Trying to bring together what we have been hearing from member states and other stakeholders over these months, I can tell you three specific things connecting with what we just heard today. First, people must remain at the center, and we have heard that in all these examples. A common message we have been hearing this week is that AI should be developed and used in a way that protects but also helps people.
Second, closing the gap is not a slogan; we are hearing this a lot, and it requires decisive support. I was very pleased to hear, for instance, that you've been trying to replicate in some countries what has worked in others. This information sharing is critical if we're talking about closing the gaps. And the third and final message is that we should avoid a world of disconnected approaches. Aligned with what I was just saying, cooperation across different national and regional approaches will help us reduce fragmentation. With that, I just have to tell you that we are very much looking forward to seeing some of you in Geneva in July, so we can hear and learn more about what AI is doing.
It is my pleasure to give the floor to my distinguished co-chair, Ambassador Reintam Saar, who is going to explain very briefly what the Global Dialogue on AI Governance is. This is really important work that we are putting a lot of effort into. Thank you so much again for the invitation.
Thank you. Hello, everyone. Frankly, I really feel humbled among real experts, not to say helpless, so please allow me to do a little awareness-raising about the first Global Dialogue on AI Governance; maybe this way I'll fit into the discussion we've heard here today. Three points from my side. First, about tasking: the tasking was to put together a distinctive, identifiable UN global dialogue with all the elements prescribed in the mandate, bringing governments and stakeholders together to exchange best practices, focusing on cooperation, executing it back-to-back with the ITU AI for Good Summit in July in Geneva, and producing a co-chairs' summary.
So this is what we are going to do. So far we've engaged with member states and with multi-stakeholders, and from member states we've covered, I would say, three different tensions: risks versus opportunities, a state-centric approach versus a multi-stakeholder approach, and closing the AI divide versus free-market innovation. But we were also able to pick up three convergences: practical outcomes preferred over endless theoretical discussions; alignment with existing UN processes, avoiding duplication; and clear timelines, formats and thematic focus to produce actionable insights. The unifying element in these discussions is that the dialogue needs to be inclusive, and capacity building was an absolutely crucial element, of course one of the most important things for the Global South.
From multi-stakeholders, the key words we heard were trust, transparency, no duplication, interoperability, equal access and participation for everyone, rooting the dialogue in human rights, and being of practical value and innovative in form. So what are we going to do? We will guide the discussions, but we will not predetermine the outcome; that is for member states and for you, the stakeholders. Of course, we will also engage with the international scientific panel that was established through the same resolution. We will rely on member states and on your wisdom; we need to collect this wisdom somehow, and this is something we are going to do so that the dialogue is really inclusive. At a certain point we will come out with a roadmap to Geneva, where you will see the building blocks towards the dialogue and the opportunities to engage in it. And of course I very much hope that all these fantastic ideas, and frankly, chapeau to the panel, because you are already changing life on the ground, which is absolutely fantastic, will inform our dialogue, so that the dialogue is also result-oriented on the ground.
Thank you very much.
Thank you, Excellencies. At this point I would like to call Fred to give out the mementos, if you don't mind. Fred, please. Can we have the mementos for Brijesh ji? Thank you very much. Ranjitha, please. Mala. Can I request the nodal officer to please felicitate Fred? Thank you very much for attending; the session is closed. Thank you.
EventThe tone was largely collaborative and solution-oriented. Panelists built on each other’s points and offered complementary perspectives. There was a sense of urgency about addressing connectivity gaps…
EventThe tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementations and expressing confidence in achieving ambitious goals. There’s a sense of ur…
EventThe discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mutual respect. While there were some tensions around specific content (particularl…
EventThe tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looking atmosphere, with speakers expressing mutual respect and shared commitment to …
EventThe tone began earnestly optimistic about dialogue and cooperation, with leaders acknowledging criticisms of elite gatherings while committing to greater transparency and inclusion. It maintained dipl…
EventThe tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and complexity of the challenges. Speakers maintained a pragmatic optimism, recognizing si…
EventThe discussion maintained a consistently collaborative and optimistic tone throughout, with speakers emphasizing partnership, shared responsibility, and collective action. While acknowledging signific…
Event“Fred Werner, Chief of Strategic Engagement at the ITU, delivered the opening remarks of the session.”
The knowledge base records Frederic Werner, Head of Strategic Engagement at the ITU, delivering opening remarks at the AI Policy Summit [S102].
“Fred began with the question “What if the last thing that humans ever invent is invention itself?” and referenced a conversation with AI‑safety expert Roman Jampolsky.”
The transcript excerpt in the knowledge base contains the same opening question and mentions Roman Yampolsky as the AI safety expert [S2].
“AI for Good relies on partnerships with more than 50 UN sister agencies.”
The knowledge base states that 47 UN organizations are collaborating on the AI for Good initiative, not “more than 50” [S19].
“AI for Good works together with many UN agencies that contribute expertise and drive standards work.”
The AI Governance Dialogue notes ITU’s AI for Good programme partners with numerous UN agencies on standards and governance, confirming the collaborative nature of the effort [S21].
“AI for Good is organised around three pillars—solutions, skills and standards—supporting activities such as machine‑learning challenges, the AI Skills Coalition sandbox, and over 400 emerging AI standards.”
A related source describes AI for Good’s three pillars (including comprehensive skills development and inclusive governance) and highlights its focus on standards and collaborative projects [S23].
The discussion shows strong convergence around four core ideas:
– **Edge AI is indispensable** for delivering inclusive, low‑latency, privacy‑preserving services in both underserved and well‑connected environments.
– **Edge solutions must be task‑driven and lightweight**, avoiding reliance on massive LLMs.
– **Privacy is a primary justification** for moving AI processing to the edge.
– **The AI for Good initiative should be pursued through inclusive, multi‑stakeholder governance** that keeps humanity at the centre.
High consensus – the majority of speakers independently arrived at the same conclusions about the role of edge AI, privacy, and inclusive governance, indicating a solid shared understanding that will likely shape future standards, deployments and policy frameworks.
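The privacy argument above rests on a concrete mechanism the panel repeatedly invoked: training at the edge so that raw data never leaves the device, with only model parameters shared. A minimal illustrative sketch of federated averaging (FedAvg) on a linear model — not code presented in the session, and simplified to plain gradient descent with hypothetical client data — shows the pattern:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: gradient descent on a linear model,
    using only that client's private data (which never leaves it)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server step: average client weight updates, weighted by
    each client's dataset size. Only weights are communicated."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Two simulated edge clients whose private data follow y = 2x.
rng = np.random.default_rng(0)
clients = []
for n in (50, 80):
    X = rng.normal(size=(n, 1))
    y = 2.0 * X[:, 0] + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(1)
for _ in range(20):  # 20 communication rounds
    w = fed_avg(w, clients)
print(float(w[0]))  # converges near 2.0
```

The design choice matches the panel's framing: bandwidth use scales with model size rather than dataset size, and the raw observations stay on each edge device.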
The panel largely converged on the importance of edge AI for the Global South and on AI for Good as a multi‑pillar, year‑round movement. Disagreements emerged around the appropriate size of models for edge deployment, the need for formal standards versus rapid, task‑driven implementations, and the definition of evaluation metrics. Unexpected tensions appeared between claims that massive models can run on‑device and the resource‑constrained view, as well as between a human‑centred AI narrative and a technology‑first edge strategy.
Moderate – while there is strong consensus on the overall goal (bringing AI to underserved regions), the differing technical approaches and measurement frameworks indicate a need for coordinated policy and research to reconcile model size expectations, standardisation timelines, and metric definitions. These divergences could affect the speed and inclusivity of edge AI roll‑out if not addressed.
The discussion was driven forward by a series of pivotal remarks that moved the conversation from abstract philosophical concerns about AI’s ultimate role, through concrete technical challenges of edge computing, to real‑world applications and finally to policy and governance. Fred Werner’s opening question set a high‑level purpose, which was then grounded by Brijesh’s edge‑haptics argument, Ranjitha’s federated learning solution, and Alagan’s farmer‑centric deployment story. These insights reframed edge AI as a universal necessity rather than a niche technology. Subsequent contributions from Mala, Sakshi, and the ambassadors linked the technical possibilities to societal impact and global coordination, culminating in a clear call for inclusive standards and actionable outcomes. Collectively, these comments shaped a narrative that progressed from vision to implementation to governance, ensuring the panel remained focused, interdisciplinary, and outcome‑oriented.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.