From Human Potential to Global Impact: Qualcomm’s AI for All Workshop

20 Feb 2026 14:00h - 15:00h


Session at a glance

Summary

This discussion focused on Qualcomm’s vision for AI deployment across edge devices and cloud infrastructure, featuring insights from company executives and industry partners. Durga Malladi, Qualcomm’s Executive Vice President, opened by highlighting how AI model sizes are decreasing while performance improves, with models shrinking from 175 billion to 7-8 billion parameters while delivering superior results. This trend enables powerful AI capabilities on consumer devices like smartphones, glasses, and PCs without requiring constant cloud connectivity.


Malladi emphasized that edge AI provides consistent user experiences regardless of internet connectivity and keeps personal data local for privacy. He demonstrated how AI is transforming user interfaces from traditional touch-based interactions to voice-driven AI agents that can orchestrate multiple applications seamlessly. The presentation included examples of AI-first devices already in market, such as ByteDance’s agent-based smartphone in China and the Humane PC launched in Saudi Arabia with Qualcomm’s involvement.


The discussion explored Qualcomm’s hybrid AI approach, distributing processing between edge devices, on-premises servers, and data centers based on specific use cases. Malladi outlined how lessons learned from energy-efficient mobile processors are being applied to data center solutions, introducing the AI 250 platform with innovative memory architecture optimized for inference workloads.


A panel discussion with startup founders from robotics, legal tech, and enterprise AI sectors revealed practical challenges and opportunities in AI adoption. Key themes included the importance of local processing for enterprise security, the emergence of “shadow AI” usage, and the need for thoughtful integration rather than wholesale replacement of human workflows. The panelists emphasized that successful AI implementation requires understanding both technological capabilities and human factors, with edge AI expected to become ubiquitous and taken for granted by 2030.


Key points

Major Discussion Points:

Edge AI Evolution and Model Efficiency: The discussion highlighted how AI models are becoming smaller yet more powerful (from 175 billion to 7-8 billion parameters while improving quality), making edge AI viable for consumer devices like smartphones, glasses, and PCs without requiring constant cloud connectivity.


AI as the New User Interface: A significant focus on how AI agents will replace traditional app-based interfaces, with voice becoming the primary interaction method. Examples included ByteDance’s AI-first phone in China where users interact primarily through an agent rather than individual apps.


Hybrid AI Architecture (Edge-to-Cloud): The speakers emphasized a distributed AI processing approach where different tasks are handled at different levels – devices for personal/immediate tasks, edge servers for medium complexity, and data centers for training and complex operations.


Enterprise AI Adoption Challenges: Panel discussion covered key enterprise concerns including “shadow AI” (unauthorized AI tool usage), data sovereignty, the need for local processing capabilities, and the importance of setting proper expectations about AI limitations rather than overpromising capabilities.


Future of Human-AI Interaction: The conversation explored how AI will fundamentally change work patterns, with humans shifting from doing tasks to making decisions, and the societal implications of increasingly intelligent and autonomous systems.


Overall Purpose:

The discussion aimed to showcase Qualcomm’s comprehensive AI strategy from edge devices to data centers, demonstrate real-world AI applications, and facilitate dialogue between industry leaders about practical AI implementation challenges and opportunities. It served as both a technology showcase and a platform for discussing the broader implications of AI adoption across different sectors.


Overall Tone:

The discussion maintained an optimistic and forward-looking tone throughout, with speakers expressing excitement about AI’s potential while acknowledging realistic challenges. The tone was professional yet accessible, with technical depth balanced by practical examples. During the panel discussion, the tone became more conversational and candid, with panelists sharing both successes and failures, and expressing genuine concerns about AI’s societal impact alongside their enthusiasm for its capabilities.


Speakers

Speakers from the provided list:


Moderator: Role not specified, facilitated the discussion and introduced speakers


Durga Malladi: Executive Vice President and General Manager, Technology Planning, Edge Solutions and Data Center at Qualcomm Technologies. Expertise in the AI landscape from edge to cloud, cellular technology, and 6G development


Shreenivas Chetlapalli: Leads the innovation track at TechMahindra for AI and emerging technologies (including blockchain and the metaverse). Responsible for creating an innovation ecosystem across a global network of labs


Madhav Bhargav: Co-founder and CTO at SpotDraft. Expertise in AI for legal applications, creating AI agents for contract review, drafting, and negotiation


Siddhika Nevrekar: Senior Director and Head of Qualcomm AI Hub. Expertise in developer ecosystem acceleration and AI model deployment


Ritukar Vijay: Works in robotics and autonomous systems. Expertise in edge AI for robotics, fleet orchestration, and physical AI applications


Praveer Kochhar: Co-founder of Kogo AI. Expertise in full stack private agentic operating systems, edge-to-cloud AI solutions, and enterprise data sovereignty


Additional speakers:


None identified beyond the provided speaker names list.


Full session report

This comprehensive discussion at Qualcomm’s AI summit explored the company’s vision for distributed artificial intelligence deployment across edge devices and cloud infrastructure, featuring insights from senior executives and industry partners. The session, moderated by Siddhika Nevrekar, Senior Director and Head of Qualcomm AI Hub, provided both technical depth and practical perspectives on AI implementation challenges, revealing a mature understanding of how AI will transform computing architectures and human-machine interactions.


The Edge AI Revolution: From Massive Models to Efficient Processing

Durga Malladi, Qualcomm’s Executive Vice President for Technology Planning, Edge Solutions and Data Centre, opened with a compelling argument for why edge AI represents a fundamental shift in computing paradigms. His central thesis challenged conventional wisdom about AI model requirements: whilst GPT models announced in November 2022 demonstrated the potential of large language models, today’s smaller models achieve superior performance with just 7-8 billion parameters. This dramatic reduction in model size whilst simultaneously improving quality represents what Malladi termed “an AI law” – a trend line similar to Moore’s Law that fundamentally enables edge AI deployment.


These are not merely small language models, but small multimodal models capable of processing text, images, and other data types. The evolution has profound practical implications: premium smartphones can now run 10 billion parameter models “without breaking a sweat,” personal computers can handle up to 30 billion parameters, AR glasses can process 1-2 billion parameter models locally, and on-premises servers can manage 100-300 billion parameter models. These capabilities transform the user experience by providing consistent AI functionality regardless of internet connectivity quality – a critical advantage that eliminates the frustration of switching between AI-enhanced and basic experiences based on network availability.


The privacy implications are equally significant. By processing personal data locally, edge AI addresses growing concerns about data sovereignty and enterprise security. As Malladi emphasised, there exists “a large amount of data that happens to be very personal” that users and organisations may prefer not to store in the cloud. This local processing capability becomes particularly crucial for enterprise applications where regulatory compliance and data protection requirements often mandate on-premises solutions.


Qualcomm’s Developer Ecosystem and Infrastructure

A significant portion of Malladi’s presentation focused on Qualcomm’s comprehensive developer platform through the AI Hub. This platform offers developers flexibility to “pick a model, bring a model, or if you don’t have a model, we’ll create one for you if you bring your data.” The service provides free cloud-native access to device farms, enabling developers to test AI applications across various hardware configurations without requiring physical devices.


On the data centre front, Qualcomm’s AI 250 solution addresses the fundamental challenge of AI inference through innovative memory architecture. Malladi explained the distinction between the compute-bound pre-fill stage and the memory bandwidth-bound decode stage of AI processing. This technical insight drives Qualcomm’s approach to building solutions that optimise for both phases. The company maintains an annual development cadence, with the AI 300 already in planning, applying lessons learned from designing within 4-watt smartphone power budgets to 150-kilowatt data centre racks.
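The pre-fill/decode distinction can be made concrete with a back-of-envelope calculation. During decode, generating each token requires streaming essentially all model weights from memory once, so throughput is capped by memory bandwidth rather than compute. The sketch below illustrates this; the figures used are illustrative assumptions, not Qualcomm specifications.

```python
# Illustrative sketch of why the decode stage is memory-bandwidth-bound.
# All numbers below are hypothetical, not Qualcomm hardware figures.

def decode_tokens_per_sec(params_billion: float,
                          bytes_per_param: float,
                          bandwidth_gb_s: float) -> float:
    """Upper bound on batch-1 decode throughput.

    Each generated token requires reading every weight from memory once,
    so throughput is at most (memory bandwidth) / (model size in bytes).
    """
    model_gb = params_billion * bytes_per_param
    return bandwidth_gb_s / model_gb

# An 8-billion-parameter model quantized to 4 bits (0.5 bytes/param) on a
# hypothetical device with 100 GB/s of memory bandwidth:
cap = decode_tokens_per_sec(8, 0.5, 100)
print(round(cap, 1))  # 25.0 tokens/sec at most, regardless of compute
```

This is why an inference-oriented memory architecture, rather than raw compute, governs decode performance: doubling the assumed bandwidth doubles the token-rate ceiling.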


AI Agents: The New User Interface Paradigm

Perhaps the most transformative aspect of the discussion centred on how AI will fundamentally reshape human-computer interaction. Malladi traced the evolution of user interfaces from command-line systems in the 1960s and 1970s, through graphical interfaces in the 1980s, to touch-based interactions over the past two decades. The current era represents another paradigm shift towards voice-driven AI agents that can orchestrate multiple applications seamlessly.


This vision extends beyond incremental improvements to existing interfaces. Instead of managing cluttered app collections on smartphones, users will interact with a single AI agent through natural voice commands. The agent authenticates the user’s voice, processes complex requests, maps them to appropriate applications running in the background, and accesses personal knowledge graphs to provide contextualised responses.
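The agent flow described above — authenticate, interpret a request, route it to a background app, and enrich it from a personal knowledge graph — can be sketched as follows. Every name and value here is hypothetical, invented purely for illustration; a real agent would use an on-device model for intent extraction rather than keyword matching.

```python
# Hypothetical sketch of the agent-as-UI flow: the user speaks, the agent
# interprets the request, consults a personal knowledge graph, and invokes
# apps running in the background. Names and data are illustrative only.

KNOWLEDGE_GRAPH = {"home_city": "Bengaluru", "preferred_airline": "IndiGo"}

# Background "apps" the agent can orchestrate on the user's behalf.
APPS = {
    "weather": lambda args: f"Sunny in {args['city']}",
    "flights": lambda args: f"Booking {args['airline']} to {args['city']}",
}

def handle_request(utterance: str) -> str:
    # Stand-in for on-device intent extraction by a small language model.
    if "weather" in utterance:
        intent, args = "weather", {"city": KNOWLEDGE_GRAPH["home_city"]}
    elif "flight" in utterance:
        intent, args = "flights", {"city": "Delhi",
                                   "airline": KNOWLEDGE_GRAPH["preferred_airline"]}
    else:
        return "Sorry, I can't help with that yet."
    # The agent, not the user, selects and invokes the backing app.
    return APPS[intent](args)

print(handle_request("what's the weather like?"))  # Sunny in Bengaluru
```

The point of the sketch is the inversion of control: the user states an intent once, and the agent decides which apps to invoke and what personal context to apply, rather than the user navigating app by app.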


Real-world implementations are already emerging. ByteDance has introduced an AI-first smartphone in China that eliminates traditional app interfaces entirely, requiring users to interact primarily through an agent. Whilst still early and occasionally “clumsy,” this device demonstrates the practical viability of agent-based computing. As Malladi noted, “some of us do have the luxury of actually visiting China quite frequently” to observe these developments firsthand. Similarly, Malladi highlighted the Humane PC launched in Saudi Arabia, which Qualcomm helped develop, showing how this paradigm extends beyond mobile devices to traditional computing platforms.


Industry Panel: Practical AI Implementation Insights

The panel discussion featured four startup founders and executives who provided practical insights from their AI implementation experiences:


Praveer Kochhar, co-founder of Kogo AI, which develops full-stack private agentic operating systems


Shreenivas Chetlapalli, who leads the innovation track for TechMahindra


Madhav Bhargav, co-founder and CTO at SpotDraft, a legal AI company


Ritukar Vijay from Autonomy, focusing on robotics applications


Shadow AI: The Hidden Enterprise Challenge

Kochhar introduced the critical concept of “shadow AI” – unauthorised use of cloud-based AI tools with enterprise data. His revelation that 78% of enterprise users engage in shadow AI highlighted what he considered the most underrated pain point in current AI adoption. This statistic exposes the tension between productivity gains and data governance: employees recognise AI’s value for improving efficiency but often circumvent official policies to access these tools, inadvertently creating security vulnerabilities.


“Shadow AI is still driving efficiency. So not a lot of eyeballs are going there. But I think that’s going to become one of the critical issues as we move forward,” Kochhar warned, emphasising the need for organisations to address this gap between policy and practice.


Enterprise Adoption Strategies

Chetlapalli provided insights from TechMahindra’s work with Indian enterprises, including public sector banks, PSU units, and state government AI centres. He emphasised the importance of expectation management in successful AI adoption, arguing that understanding AI’s limitations is as crucial as recognising its capabilities: “If we can set the expectations right, that AI will augment their work to a certain extent, that will be one. Second, the complete misnomer that it is here to take away jobs has to be removed.”


His company’s collaboration with Qualcomm on fraud call detection using edge LLMs demonstrates practical applications addressing the growing problem of fraudulent communications. This solution provides real-time protection whilst maintaining privacy by processing calls locally rather than transmitting them to cloud services.


Legal AI Innovation

Bhargav shared insights from SpotDraft’s implementation of AI in legal services, revealing that successful deployment requires capturing data “as lawyers were using the technology that they anyways use” rather than forcing adoption of entirely new systems. Their Word plugin approach enables them to provide grounded answers using customers’ own data whilst maintaining familiar workflows.


The company’s AI can infer company policies from historical data, creating always-updated knowledge bases that would be prohibitively expensive to develop manually. This capability enables knowledge work that would otherwise not be done due to cost constraints.


Robotics and Edge AI

Vijay discussed how Autonomy distributes AI processing in robotics applications: “We do orchestration on the cloud which is for the fleets of robots, but we were doing all autonomous navigation on the edge.” This exemplifies thoughtful decomposition of AI workloads, with fleet coordination handled centrally whilst safety-critical navigation decisions occur locally with minimal latency.


He provided specific examples including Rio Tinto mining robots that maintain satellite connectivity for fleet management whilst processing navigation decisions locally, and the use of Vision Language Models (VLMs) for autonomous navigation that can understand and respond to complex visual environments.


Data Sovereignty and Hybrid Architectures

The discussion revealed sophisticated thinking about optimal AI deployment strategies. Rather than viewing edge and cloud AI as competing approaches, the speakers advocated for hybrid architectures that distribute processing based on specific use case requirements. This nuanced approach recognises that different AI tasks have varying computational, latency, and security requirements.


Chetlapalli argued that “too much data leaving the device” poses greater security risks than too little data, advocating for training AI models with smaller datasets and synthetic data rather than large-scale data collection. This perspective aligns with growing enterprise preferences: “A large number of requirements that have come to us is how do I process things in my own premises rather than doing an API call or taking it to the cloud.”


However, the discussion also acknowledged operational complexities. Vijay pointed out that connectivity remains crucial for robot fleet management and remote access capabilities, even in edge-first architectures, creating tension between security preferences for local processing and operational requirements for remote monitoring and control.


6G and AI Convergence

Malladi outlined Qualcomm’s vision for 6G and AI convergence, targeting the 2028 Summer Olympics as a showcase for next-generation capabilities, with first commercial deployments planned for 2029. This timeline reflects the industry’s recognition that AI and connectivity technologies must evolve together rather than as separate innovations. The convergence will enable new applications that require both high-bandwidth, low-latency connectivity and sophisticated local AI processing capabilities.


Societal Implications and Future Concerns

The conversation extended beyond technical challenges to explore broader implications of AI advancement. Kochhar expressed concerns about AI systems becoming “extremely addictive” as they develop the ability to generate personalised content and adapt based on individual user behaviour. He warned: “It will be very difficult to keep attention away from a device when you have a hyper intelligent system on the other side that’s changing itself based on you.”


This concern extends to fundamental questions about human agency and purpose in an AI-augmented world. Kochhar suggested that AI will create significant amounts of free time by handling routine tasks, leaving humans primarily to make decisions, raising profound questions about how society will adapt to such changes.


Regarding regulation, Kochhar advocated for an innovation-first approach, arguing that “regulation in the age of AI is always going to play catch up because technology, the speed at which it’s growing, it’s very difficult to regulate it before it goes because we don’t even know the social implications of what we are building.” The speakers generally favoured “innovation at the side of caution” rather than waiting for comprehensive regulatory frameworks.


Rapid-Fire Insights: Industry Priorities

The panel concluded with rapid-fire questions that revealed each company’s strategic priorities:


When asked about 6G versus AI, most panelists emphasised AI as the more immediate priority, though acknowledging the eventual convergence. On data centre versus local processing, responses varied by use case, with robotics favouring edge processing for safety-critical functions whilst legal AI balanced local privacy with cloud-based model training.


The discussion of artificial versus human intelligence revealed consensus that AI should augment rather than replace human capabilities, with each panelist emphasising different aspects of human-AI collaboration in their respective domains.


Looking Towards 2030: Predictions for Edge AI

Each panelist provided one-word predictions for edge AI in 2030:


– Vijay predicted it will be “taken for granted” – becoming as ubiquitous and invisible as current connectivity infrastructure


– Bhargav envisioned “ubiquitous” deployment across all applications


– Others suggested “emergent” capabilities that we cannot yet fully anticipate


Bhargav elaborated on his vision that everything will become “generative,” with interfaces and applications created on-demand based on specific user needs rather than requiring users to learn complex software platforms. “Everything is going to move away from just being SaaS that people learn, and it as an individual persona is actually caring about. And that actually opens a lot more, specifically in the Indian context where you might not, like people might not have to go through so much training and learning, and they can just go and start using it because the platform can actually understand your needs.”


Conclusion: Towards Invisible, Ubiquitous AI

This discussion demonstrated a sophisticated understanding of AI implementation that moves beyond simple edge-versus-cloud debates to embrace hybrid architectures tailored to specific use cases. The speakers showed remarkable consensus on key principles: AI should augment rather than replace human capabilities, data sovereignty and security must be prioritised, and successful adoption requires careful expectation management.


The conversation revealed honest acknowledgement of significant challenges, from shadow AI security risks to potential societal addiction concerns, suggesting a maturing industry moving beyond hype towards practical, responsible deployment. Most significantly, the discussion highlighted how AI is becoming a foundational technology that will be embedded invisibly throughout computing infrastructure, fundamentally changing how humans interact with technology while respecting privacy and security constraints.


Session transcript

Moderator

To share how these pieces come together and how Qualcomm is unlocking AI’s full economic potential, it’s my privilege to invite on stage Durga Malladi, Executive Vice President and General Manager, Technology Planning, Edge Solutions and Data Center at Qualcomm Technologies. Please join me in welcoming Durga.

Durga Malladi

Okay, so we’re reaching towards the latter half of the afternoon, and hopefully everyone had their lunch and their coffee. So I hope to talk over the next 25 minutes, I won’t take that much time, but about 25 minutes, about what is going on in the AI landscape from edge all the way into the cloud. Starting from yesterday, there was a lot of discussion on the relevance of edge AI, what exactly is happening in that space, what the opportunities at the edge should be, and where we should be going in the cloud as well. So I’ll try to distill that in a few slides, and I’ll probably go through a little faster so that we have enough time later on for the team to actually go through the panel discussion.

All right, I’m just going to click through this. This is good. This is probably a good indication of why the edge matters. If you go back in time three years, when GPT was originally announced back in November of 2022, that was a very large 175 billion parameter model. And if you take a look at what the model sizes today look like, they’re more like 7 to 8 billion parameters, but they actually outperform that original model by quite a bit. Model sizes are coming down quite dramatically, while the model quality continues to increase. This is the equivalent of an AI law that seems to be emerging as far as models themselves are concerned. It’s an important trend line because this actually is the foundation for why edge AI is actually a big part of the story.

And if you take a look at the actual model size, you’ll see that the model size is actually relevant. In other words, you don’t necessarily have to use the trillion-parameter models to be able to get through a large number of use cases that average consumers actually care about. And when you think about it that way, this is a depiction of, just in the last one year alone, how much progress has been made in terms of the model quality index itself. There are several parameters over here, but the punch line is that model quality is getting extremely powerful, and now the question is: what should we do about it, what can we actually build on top of it? So we’ve already established the fact that the model sizes are coming down. These are sometimes known as SLMs, though I would argue that it’s not just small language models, but small multimodal models that are coming in. And there are increased capabilities coming with them: much larger context length, a lot of on-device learning and personalization that can be built on top of that, and reasoning models which actually mimic what we typically expect to see from some of the larger models. When you put both of these together and build the right kind of an innovative architecture, that’s what actually leads to edge AI in devices that you and I care about. So is it here? Is this just a PowerPoint presentation, or are there actual consumer devices where you can do edge AI? The answer is absolutely yes. In fact, today you can get any of the premium smartphones where you can easily run a 10 billion parameter model without breaking a sweat, or glasses which can easily run models of up to a billion to 2 billion parameters, or PCs with up to 30 billion parameter models, and so on.

These are devices that you and I use very frequently, at least the PCs and the smartphones, with more people adopting AR glasses as well. But one thing that’s nice about running on-device AI, or AI inference that runs directly on devices, is that the quality of the AI experience is invariant to the quality of connectivity those devices have to the back end of the network. That is a key attribute. I don’t want to keep going back and forth between a regular experience and an AI experience just because I don’t have internet connectivity. That would not be very compelling for any of the consumer or enterprise use cases, and that’s key. The second part is that there’s a large amount of data that happens to be very personal.

It can be consumer-centric or it can be enterprise-centric, but either way, I might or might not be interested in storing the data in the cloud. And if you kind of think about it that way, then that’s another vector that takes us towards what you can do at the edge. And as you put it all together, what exactly are we trying to do with the AI to begin with? Now, I was not around in the 60s or the 70s, well, I was there in the 70s, but I was not involved in, you know, what people used to do with very large mainframe computers, where there was just a command line interface: there is a gigantic machine in front of you and you just keep typing something onto it.

That was the user interface between a human and a machine. The 80s changed that with the advent of the mouse and the PC: there is a graphical interface, you actually get to see something, not just a command line interface, and that changes things. Fast forward to where we are today: about 20 years back, we started using touch as the main UI. We all have our smartphones, which happen to be touch-based, and increasingly laptops and tablets, and these are places where the UI shifted from typing on a keyboard to a touch interface as well. Well, we are now at a different era. It’s a place where we are increasingly using voice as an interface towards devices.

And if you put it together, you have a combination of different modalities, whether it is text or voice or video, any other camera interaction, some sensors which tell you exactly where you are located, provide some context to what you’re doing. All of that gets ingested by a single interface, an AI agent. Imagine the following. Let’s take a smartphone because one can easily relate to it. You have your smartphone. Right now, people are either looking at it or scrolling through their apps. We all have a clutter of apps on our phone today. If I wanted to use one app, I’ll have to click that one. If I wanted to then correlate that information with another app, I have to go back, then open up a new app and go in again.

Instead, all you have, and this is a future where all you have is a voice UI, where the device is sitting somewhere. It’s in your pocket. You talk to it. Your voice gets authenticated, and then it says, OK, I’m ready for you. How can I help you? That’s your agent right there. I would always love to say “talk to my agent,” but this is the beginnings of that. That agent distills all the information that you’re saying, encapsulates it, and maps it to apps that are running somewhere in the background. The models only provide a means towards an end goal; they perform a job, but that’s not the end job by itself. So the agent actually picks one or two from a bouquet of models and also accesses some of the personal attributes that could be sitting right there; we call it the personal knowledge graph. When you put it all together, you end up seeing a glimpse into how AI can then become the new UI to all the devices around us, and this is a very powerful concept. Is this also just on PowerPoint? Till about last year, that was the case. Not anymore. ByteDance has introduced a new phone in China very recently, and it’s not available everywhere in the world; some of us do have the luxury of actually visiting China quite frequently. This phone is fundamentally different. It’s designed AI-first. All you have is an agent, by the way, and all the other apps are actually missing.

They’re somewhere in the back, but you don’t get to see them. And if you think about it, it’s a very disruptive mechanism. It’s still early. Of course, it’s going to be a little clumsy and it doesn’t work all the time in a picture-perfect manner, but it’s something that is beginning to change the conversation of how you take AI agents from something that happens to be in presentations to something that is far more practical in devices. So I’m going to just skip through this part of it. A lot of it is in Mandarin, so it’s kind of hard to see, but at the same time, you get the picture of how it can do things for you when you give it a very generic, nebulous task and it figures out exactly what it is that you need and then does things for you.

It’s like: shop something for me, check my bank balance, if I have enough over there, I want to buy that thing, and when it is done, do let me know. It does it. You actually don’t know it’s happening, but it actually does it. All right. So far, we talked about the edge. What about the cloud? Well, a lot of the data actually comes in from the edge. It’s the consumers who are actually generating the data. That’s where the AI action really is. But the cloud has an important role to play as well, as the data actually gets used both between the edge and in the cloud. And so our philosophy over here is to make sure that we have AI processing that is distributed across the network depending upon what the use case is.

For instance, the cloud is extremely powerful for training foundational models, creating new kinds of models. That’s very helpful. At the same time, there’s a large number of enterprises with on-prem servers where, using air-cooled cards, it’s very easy to run 100 to 200, 300 billion parameter models. Very useful for small and medium enterprises which don’t necessarily have to rely on the data center. Just buy a server, plug in an AI accelerator card, maybe a handful of them, and you end up with extremely sophisticated processing. And keep in mind, in the beginning, we talked about the fact that model sizes continue to come down while the quality continues to improve.

So whatever you have, if tomorrow there’s a new model that comes in or you just want to replace your existing AI accelerator card, take out one, plug in another one, as opposed to rolling in a new rack, fundamentally different in terms of the network economics. And finally, we just talked about devices as well. So bottom line is, when you think of AI processing as a hybrid AI, it’s a mix and match of processing between devices, the edge cloud, and in the data center. And speaking of what is it that you can do with it, imagine the following. This is one of the PCs that was launched in Saudi Arabia. It’s called the Humane PC. We had a lot to do with it.

It’s a place where, in fact, the only interface is what you see in front of you. This is not a standard PC which you open up and you have the regular kind of screensaver and all the other apps that are there, and you open up your, you know, your mail client, your calendar, and so on. You ask a question, and in real time, and it doesn’t matter what it is, in real time it decides: should I run it on the device or should I run it on the cloud? Maybe some questions that you ask are so complicated that it wants to run them on the cloud, and the other questions, yeah, without breaking a sweat, it can just run on the device. And this is a place where you actually switch back and forth between what’s running on the device and what runs on the cloud. It’s the beginnings of where we can go with it. Another step: when we actually talk about devices, we all have a universe of devices around us. Glasses, which tomorrow could be connected directly to the network and today are tethered through a phone. Your earbuds, your wearables; it could be a watch that you’re wearing, and increasingly a ring as well. I think they’re running out of places where you put devices, but every time I think that, there is a new device that comes up. Already we’ve reached four. This is like a universe of devices around you, and perhaps the hub happens to be a phone. How do you actually go back and forth between these, and how exactly do you make sure? I wouldn’t even want my smartphone with me. I want to keep it somewhere, just have my earbuds and constantly talk to my phone, and do some of the processing perhaps in my earbuds itself, the rest of it on the phone, some of it on the edge server, and the rest of it on the data center.

That is the vision of how the evolution of AI ought to be. Speaking of the number of things that we just discussed, it’s important. This is now more from a Qualcomm perspective. We have made sure that we have a good, easy way for developers to onboard our platforms, bring in their applications, their platforms and actually run from there. And in the subsequent session, as we go through that, there might be a little bit more talk about it. But suffice it to say, if you go to the Qualcomm AI Hub, it’s a place where any developer can pick a model, bring a model. Or if you don’t have a model, we’ll create one for you if you bring your data.

Once you do that, we’ll give you free, cloud-native access to a device farm, which exists somewhere; you just have an IP address that you log into, and you take it from there. The rest of it is: you write your application, and you have the ability to test it without ever having the device in your hand. If you’re comfortable with that, you get to deploy that app in any kind of app store. It’s a very powerful concept that we’ve worked on for a long time. And this is a place where, you know, we are not a model creator. We ingest models, which means we work closely with every single model provider out there on the planet, and we’re happy to discuss a lot more offline as it comes to it.

All right. How am I doing on time? Maybe I have 10 minutes. I don’t see the timer here; that’s why I was asking. So let me talk a little bit about the data center. What happens in data centers? Well, one thing that’s clear is that data center capabilities are becoming more and more sophisticated. And as we learned a lot of lessons from the edge, one thing that became very clear for us is that it’s important to pay attention to energy efficiency in addition to performance. We call it energy-efficient, high-performance computing, and we started bringing that paradigm into the data center. A few other observations came in.

One is that the processors designed for training are not necessarily the best processors for inference. They’re actually different kinds of problem statements. It’s a little subtle, but once you understand it, you get past the whole notion of “let’s just buy the biggest GPU out there,” and you realize that’s a bit of an overkill for the inference task you might have. A different architecture is needed. The second part is that we want to make sure that, in addition to the rollouts currently occurring, we bring in solutions that lower the total cost of ownership. So when we put it together, we introduced our family of solutions for the data center as well, learning from what we learned in devices and bringing those lessons into the data center.

A smartphone today operates at four watts at best. The battery inside a smartphone is 4,500 milliamp-hours at best. In a data center, if you buy a state-of-the-art rack, it’s about 150 kilowatts. Fundamentally different. It’s directly liquid-cooled; you need water. There’s no water-cooled or liquid-cooled smartphone. Two different universes, but there is a way to learn lessons from one universe and apply them on the other side. I would argue that, in AI terminology, that’s transfer learning, seriously applied going from devices all the way into data centers. So we entered that space, and we have two different categories of solutions. The second one, AI 250, is where we focused on an innovative memory architecture. It’s a little more of a subtle argument, but as it turns out, when we talk of inference, the prefill stage is extremely compute-bound: the more computation horsepower you throw at it, the better it is; tokens per second is higher. However, the decode stage is fully memory-bandwidth-bound: you can throw as much compute as you want, and it makes zero difference whatsoever. So the memory architecture is actually equally important, and we innovated on that, putting it together for our AI 250 solution. This is the one that’s actually rolling out in the Middle East, and it was part of that earlier demo we just talked about, with a PC and something else running in the cloud. We have an annual cadence that’s coming up.
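The prefill-versus-decode asymmetry can be shown with a back-of-the-envelope roofline estimate. The hardware numbers below are illustrative placeholders, not figures for the AI 250 or any real accelerator.

```python
# Roofline-style sketch: prefill is compute-bound, decode is
# memory-bandwidth-bound. All hardware figures are made up for
# illustration.

def prefill_time_s(prompt_tokens, params_b, tflops):
    # Prefill processes all prompt tokens in one batch: roughly 2*P
    # FLOPs per token for a P-parameter model, so compute sets the time.
    flops = 2 * params_b * 1e9 * prompt_tokens
    return flops / (tflops * 1e12)

def decode_time_s(new_tokens, params_b, bytes_per_param, gbps):
    # Each decoded token rereads (roughly) all the weights from memory,
    # so memory bandwidth, not FLOPs, sets the time.
    bytes_per_token = params_b * 1e9 * bytes_per_param
    return new_tokens * bytes_per_token / (gbps * 1e9)

# Illustrative: 8B-parameter model, int8 weights,
# 400 TFLOPs of compute, 800 GB/s of memory bandwidth.
p = prefill_time_s(prompt_tokens=1000, params_b=8, tflops=400)
d = decode_time_s(new_tokens=1000, params_b=8, bytes_per_param=1, gbps=800)
print(f"prefill: {p:.3f}s  decode: {d:.3f}s")
```

Even with these rough numbers, decoding the same number of tokens takes orders of magnitude longer than prefilling them, which is why adding compute alone does nothing for decode and why a memory-centric architecture matters for inference.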

This is table stakes at this point in time, with the innovative memory architecture continuing into the second generation by the time we get to AI 300, which is not yet announced but is in planning. Now, finally, and I want to move a little faster here: there is a buzz in the industry about the next generation of cellular platforms, and usually one would scratch their head and say, wait a minute, we just launched 5G. I don’t know exactly why we’re talking about 6G here. And besides, isn’t this all about AI? What does AI have to do with 6G? Are we just throwing AI pixie dust on top of every technology right now and saying there’s a hype cycle associated with it?

That’s not the case. It is true that cellular communications and AI have evolved as two parallel sets of innovation. But the time has come to put both of those together, because cellular technology at the end of the day does involve the very same devices that we just talked about. It involves a network through which all the devices are connected; the data goes through it and eventually into a data center as well. So we have a view of how 6G can unlock the full potential of AI. And, you know, if you think about how the G transitions occur, it’s only 10 years or so each. The earliest 5G launches were in 2019, so we are in year seven of the journey.

It’s not that far off. And it turns out we have a convenient Summer Olympics coming up right next door. Our headquarters is in San Diego; that’s where I live. And there’s the 2028 Summer Olympics. So there’s going to be a lot of show-and-tell of what 6G capabilities can be, and there’ll be technology trials at that point in time, culminating in the first set of deployments that we are driving towards in 2029. And we have another two minutes; I’m just about done. I want to stop with one final thing, and that is this part over here. What you heard is just a glimpse into the kind of world that we as Qualcomm live in.

We are probably the only ones in the industry that work on everything from doorbells to data centers. There are a lot of others who focus on data centers, maybe on servers, but they don’t exist below phones. We actually work ground-up from everywhere there. So, happy to talk with you.

Moderator

Thank you, Durga, for this insightful presentation. As we talk about inclusive AI at scale, enabling developers is critical. Innovation only moves as fast as the tools behind it. Through the Qualcomm AI Hub, we are simplifying how developers access optimized models, and test and deploy high-performance on-device AI from edge to cloud. To share how we are accelerating this developer ecosystem, please join me in welcoming Siddhika Nevrekar, Senior Director and Head of Qualcomm AI Hub, to moderate our panel discussion with leading startup founders, exploring the evolving AI ecosystem and what excites them about building with on-device AI. Please join me in welcoming Siddhika.

Siddhika Nevrekar

I would like to welcome the panel over here. You all know who you are, so I don’t need to introduce you. Can we just take a moment for a quick picture, if that’s possible? Thank you.

Shreenivas Chetlapalli

So, I’m Shreenivas Chetlapalli. I lead the innovation track at Tech Mahindra for AI and emerging technologies, which includes blockchain and the metaverse. I’m also responsible for creating an innovation ecosystem across a network of labs that we’ve created globally. Thanks.

Madhav Bhargav

Hi, I’m Madhav. I’m the co -founder and CTO at SpotDraft. We do AI for legal. We’ve created a bunch of agents that help lawyers not just review contracts, but also draft them and negotiate them faster and better.

Praveer Kochhar

Hi, everyone. I’m Praveer Kochhar. I’m one of the co-founders of Kogo AI. We run a full-stack private agentic operating system from the edge to the cloud, so we are bringing agents closer to enterprise data rather than taking data to agents. We are a 100% sovereign platform, built from scratch. And we do some very exciting work with Qualcomm; I hope I get to share that with you today.

Siddhika Nevrekar

All right. Let’s start with some questions. None of you know these, so these are fun because they’ll be a surprise to you. They’re not hard. They’re very easy. We’ll start with you, Praveer. We’ll go in the reverse order because that kind of throws a curveball. What’s the most underrated pain point for enterprise users that AI will solve? You can perhaps talk specific to your product.

Praveer Kochhar

Did you say underrated?

Siddhika Nevrekar

Yes.

Praveer Kochhar

So there’s a concept called shadow AI. I don’t know how many of you know about it. Shadow AI is when people working in companies share critical enterprise data on the cloud while using unauthorized AI tools like OpenAI or Claude. Some 78% of enterprise users use shadow AI, and that’s a big concern. It’s underrated, but it’s still driving efficiency, so not a lot of eyeballs are going there. But I think it’s going to become one of the critical issues as we move forward: things get more complex, agentic systems get more complex, more data is shared on the cloud. So yes, for me, it would be the shadow AI that people are using.

Siddhika Nevrekar

That’s a good answer. It was a curveball, but you caught it. Okay, let’s go to Madhav. You work in a very, very niche field, legal, and you still dabble with technology, right? Yes, you like it. So: your biggest and favorite AI failure building SpotDraft that set you up for success. Can you remember any?

Madhav Bhargav

That’s a great question. It sort of goes back to our founding years, where we were a little early to the game. This was around six to eight years back, when transformers were what people were talking about, not LLMs. Back then, we came in with the idea that, you know, cars are driving themselves, so why can’t AI review contracts for you? We spent a bunch of time with enterprise customers trying to deploy AI, and realized we would have to train a model for each customer. And we built out our entire data labeling and annotation pipeline, as well as the team, at that point. So that was in a way a failure, because we then decided not to do that; we didn’t want to do services.

Otherwise, we would be building one model per customer. And the genesis of SpotDraft as it exists today came from there, because we wanted to capture the data as lawyers were using the technology they use anyway, which is where our Word plugin comes in, so we can actually capture what they’re doing. Our annotation team was also set up back then. And that’s how, today, we are able to give grounded answers using the customer’s own data, because of all the things we built back then.

Siddhika Nevrekar

That’s a good one. So now you’re on a path of never again making single models for each customer.

Madhav Bhargav

I mean, I hope we don’t have to go back there, and I think a lot of the models that have come out are enabling that. But that part, yes, not regretting it.

Siddhika Nevrekar

All right. Srini, last -minute addition, so thank you. I know that it’s difficult to get here. This is probably something that you’ll be able to share with us. Yes. What’s the special ingredient for successful AI adoption in India specifically?

Shreenivas Chetlapalli

Okay, that’s a tough question to ask. I think the most important thing is understanding the limitations of AI. It’s typically very easy to understand the advantages of doing AI, but if we can set the expectations right, that AI will augment their work to a certain extent, that will be one. Second, the complete misnomer that it is here to take away jobs has to be removed. I think these are the two things.

Siddhika Nevrekar

How do you feel about AI being trusted in India? Is it trusted enough? Is it adopted?

Shreenivas Chetlapalli

So if you look at the adaptability of AI, we are almost at the global level in terms of the enterprises that we are talking to. But the best part that I have seen is that a large number of public sector banks have taken to AI in a big way. Some of the banks have been our customers for both AI and emerging technologies. And we’ve also seen PSU units talking about AI. And I’ve also seen a lot of state governments, I had a chance to meet a lot of ministers today, ministerial delegations today, have set up AI centers. So we are in the game.

Siddhika Nevrekar

Yeah, good. Ritukar, this is an easy one. You probably think about this a lot. Cloud or on-device AI: which is more important? Which, where, and when?

Ritukar Vijay

So, in continuation of the previous question: just throwing a bunch of compute at a problem statement is not how AI is adopted in enterprise settings, because it’s very important to break down the big problem into smaller chunks, for what you want to use AI and for what you don’t want to use AI. And that’s exactly what we do in robotics. We break down what happens on the edge and what happens on the cloud. Right now, for us, we do orchestration on the cloud, which is for the fleets of robots, but we do all the autonomous navigation on the edge. And because we wanted more intelligent navigation, at this point it’s been almost one and a half years since we started running VLMs on the edge to understand the context. So I think that’s how you break down the overall problem, not just running everything on the edge or everything on the cloud, because that won’t solve the problem.

Yeah, that’s pretty much how we break it down into small chunks.

Siddhika Nevrekar

So you guys are very thoughtful and very quick with these answers to longer questions. So we’ll go to rapid fire, which is just picking one word. There’s no judgment here. You pick A or B, maybe with a couple of words about why. Not too long. So we’ll start with you, Ritukar. 6G or AI?

Ritukar Vijay

Sorry?

Siddhika Nevrekar

6G. Or AI.

Ritukar Vijay

So, okay, this is a long one; I can just share a good anecdote. We were running robots at Rio Tinto in Australia, in mining areas, right? There is no internet. Still, we want to use AI on the edge. So what we did was put satellite connectivity on each robot. So connectivity is very important. If it is 6G, it’s better. I’ll go for 6G, because that opens up many more possibilities.

Siddhika Nevrekar

That’s a good one. I thought you would pick AI, because that’s the buzzword anyway. Good answer. Srini: data center or local?

Shreenivas Chetlapalli

Local is the first option, and for India, local also makes business sense, because one of the key products that we have built, called Orion, which is an AI platform, has been built for on-prem. We also see that a large number of requirements that come to us are about how to process things on one’s own premises rather than making an API call or taking it to the cloud. I know you asked about India, but I have seen this happening in the Middle East also, where one of the world’s largest companies asked whether their execs could solve things on their own desktops, or locally.

So local.

Siddhika Nevrekar

Local for you, okay. For you, I’m looking through because I want to ask a specific one. Madhav, artificial or human?

Madhav Bhargav

I mean, when you deal with lawyers at the end of the day, I have to go with human, even though I know you expected me to pick artificial. You can’t hold an AI model’s neck, but you will hold a lawyer’s neck. So for us, it’s important to give the lawyer the capability to do their job better and faster, with more thorough research. But at the end of the day, it has to be them taking that decision, because a lot of times it’s not black and white; those are the easy scenarios. It’s the gray areas where the lawyers come in and really guide their clients on what to do and what not to do.

Siddhika Nevrekar

That’s a great answer. I think we still want AI to be human, right? But there is no judgment; you could have said otherwise. Praveer: regulate or innovate?

Praveer Kochhar

No, 100 % innovate. I don’t see any reason. Anyways, regulation in the age of AI is always going to play catch up because technology, the speed at which it’s growing, it’s very difficult to regulate it before it goes because we don’t even know the social implications of what we are building. And as we build them and as it goes into public and people start using it, these tools are very intelligent. They’re getting intelligent by the week. So I think it will always be innovation at the side of caution, but I don’t think this is an industry that you can regulate first and then expect it to grow.

Siddhika Nevrekar

Your very first answer, about, I wouldn’t say illegal, but unauthorized usage, was pretty much in line with this, and it still was saving time. So I think that’s a good answer. For the next ones, you don’t have to say why. You can pick whichever answer you want; again, no judgment. Agentic AI or robotics?

Ritukar Vijay

Robots are the agents.

Siddhika Nevrekar

You have to pick one.

Ritukar Vijay

So agents, yeah.

Siddhika Nevrekar

Okay. LLM or SLM?

Shreenivas Chetlapalli

SLM, all the time.

Siddhika Nevrekar

Integrations or automation?

Madhav Bhargav

You can’t do automation without integrations, so I would have to go with integrations.

Siddhika Nevrekar

Build a chip or buy a chip? This is just a selfish question, but, you know.

Shreenivas Chetlapalli

I would sell a chip, but then build a chip always.

Siddhika Nevrekar

Wow, that’s an interesting answer. I don’t know how much time is left. Okay. All right, we’ll do a few extra questions. You can take longer now to answer; just moderate the time accordingly. So, what’s the one hardware constraint that keeps you up at night?

Ritukar Vijay

So one of the biggest hardware constraints is when the entirety of the system is without any connectivity and you are restricted so that you cannot access it remotely. If you cannot access robots remotely in any way, be it for scheduled maintenance, predictive maintenance, or anything of that sort, and even in emergency situations. Like the Waymos which are running in San Francisco right now: they are monitored from the Philippines, right? So that part is very important, that everything should be connected at all times. That’s what keeps us awake: the robots should not go into silos or isolation where we cannot reach them, and we then have to physically make sure somebody is around to manage the fleet.

Siddhika Nevrekar

You talked about local. So I’m going to ask this question which seems apt for you. What’s more dangerous? Too much data leaving the device or too little?

Shreenivas Chetlapalli

Too much data leaving the device. I think too much data leaving the device.

Siddhika Nevrekar

How do you train? I was saying how do you train if it doesn’t?

Shreenivas Chetlapalli

See, I think the focus for us has also been how to train with less data and make it much better. The moment we’re talking about more and more data leaving, we’re actually talking about more issues happening, more breaches happening. So if we can train with a smaller amount of data, or create synthetic data sets and work with those, that’s the best way for an LLM to be trained, rather than waiting for a large data set to come and then, like you said, waiting for it to leave.

Ritukar Vijay

If I may, it depends. If it is enterprise, then less data going up is always better. If it is B2C, then everybody wants to learn from that data, because that is free data. So in a way, it depends on the situation.

Siddhika Nevrekar

Yeah. Okay. This is probably going to be interesting; you get to tell another story. What was the last thing that made you go, wow, about AI? And don’t pitch your company; we know it’s fantastic.

Madhav Bhargav

I’ll try not to. This sort of goes back to the last question, in a way. A lot of companies have so much data sitting in people’s heads, in people’s inboxes, in random SharePoints and drives. And historically, as we onboard customers, they say, oh, I have a playbook, which is a policy of what contracts we will and won’t sign, but we also know it is out of date. We’ve been working on techniques to really infer that from older data, and one of the things that really blew my mind was when we ran one of the early prototypes of that on our internal data; we run SpotDraft on SpotDraft. Some of the things it threw up, when I was talking to our internal legal team, I expected them to say, no, this is absolutely wrong. And the guy said, actually, I want to know where this came from, because I have been trying to track down why certain contracts have certain clauses and others don’t. So it’s that ability to do knowledge work which otherwise would not be done at all, and to have this always up-to-date, always-learning knowledge base that truly captures what your company and organization policies are. No one wants to spend, you know, 100k on lawyers to create that, but if you have an agentic way of doing it, then suddenly that becomes the one thing everyone cares about, because that is now your onboarding; that is now what you compare your new contracts with.

And on the coding side, we’ve already started seeing a lot of this, where things like Claude Code and Codex are able to go in, learn from your code base, and give you insights which earlier would take a new engineer maybe a month or a quarter of onboarding. Now they’ve started shipping code within days because of this, and that is going to start happening across all kinds of knowledge work. And for us, the wow moment was when the lawyer who doesn’t trust AI suddenly said, no, I need to see this.

Siddhika Nevrekar

So I’m going to spin that to Madhav, not the CTO this time: a consumer AI feature that just wowed you in recent times, any you can think of?

Madhav Bhargav

I’m sure everyone has been talking about OpenClaw: the ability for me to have almost a personal assistant, when I, of course, can’t afford one. For that to really sit and start doing a lot of these things for me; and I’m sure it’s going to come to everyone’s devices very soon, hopefully with Qualcomm chips. That is where I was really wowed, because I deployed it on my WhatsApp and it started sending messages to people. It was a little bit scary, but it also saved me a bunch of time. So that was where I was like, okay, this is something that was not at all possible before.

Siddhika Nevrekar

All great, responses on WhatsApp. I had to switch it off very quickly, because there’s just too much data in there. But that is the next challenge, right? How do we control these autonomous agents, especially when they’re sitting on your personal data? Given you’re a rebel, we’re going to ask you: what are you most scared of?

Praveer Kochhar

No, there’s a lot of fear, because I think we don’t know the societal impact of this technology yet, and that’s probably the largest fear. Up till now, we were engaging with algorithms that were trained to derive attention from us; now we are dealing with intelligent algorithms that can self-adapt and become far more personalized. Now, with the ability to generate content at will, I think it will be very difficult to keep attention away from a device when you have a hyper-intelligent system on the other side that’s changing itself based on you. It will become extremely addictive. So I think that’s the biggest fear.

Siddhika Nevrekar

Yes, but then we are pleasure-seeking beings; we will go after that until it gives us some guardrails, and then we’ll have apps that will lock themselves up for two days and we won’t use them. It’s possible that we’ll all be on vacation and the robots will be interacting with each other.

Praveer Kochhar

Yeah, and then imagine what we’ll be doing: we’ll be interacting with these attention-seeking agents, right? I just want to take the last question also, because I saw a reel recently: they got a Unitree robot in Bangalore and sent it out to beg. It was the first robotic beggar that somebody started, and was there more empathy? Probably there was more empathy; I don’t know. But I still think there are a lot of tangential use cases of AI that can come out of all this. And yeah, that’s something that got me, and it also told me that you can think very, very differently about this technology, and not just replicate what we do.

There are a lot of tangential things that might come out of this.

Siddhika Nevrekar

I asked why there was more empathy because I was recently driving on a two-lane road. One lane was completely blocked, and everybody was trying to squeeze into the other lane. And when you passed by, you saw a Waymo that was not operational. And everybody would just go, oh, you know; nobody was upset, nobody was screaming. I’m like, just because it’s a robot, you’re more empathetic. But they were. So it changes your psychology somehow.

Praveer Kochhar

Yes, yes. And we are still not interacting with robots on a day -to -day basis. And I think that that will be another kind of mystery thing added to our societal weave.

Siddhika Nevrekar

True. Thanks for taking the second question too, which was interesting. All right, we’ll get into closing so we can wrap up. You’ll all get to pitch your companies, so that’s very exciting. We’ll start with complete-the-sentence in one word, so you have to say just one word. Edge AI in 2030 will be blank. You can repeat the sentence.

Ritukar Vijay

Edge AI in 2030 will be, I mean, not so sophisticated; it will be taken for granted. Just like you take connectivity for granted, that’s how edge AI will be. It will be almost everywhere by default, like the AI pins, the Humane pins and everything we talked about in the keynote. So: taken for granted.

Siddhika Nevrekar

Will you still complete it with one word? Sorry. Okay, “taken for granted” is one word: granted. So it will be business as usual, or taken for granted. That’s it; nobody will mind that. Edge AI in 2030 will be?

Madhav Bhargav

Ubiquitous. I think there will not be anything that does not have AI. A lot of Hollywood sci-fi has demonstrated this, but we will probably be talking to tables or screens or walls, to the degree that anything that can have a chip inside it will also have AI inside it.

Praveer Kochhar

Edge AI in 2030 will be emergent. We will start seeing signs of it. What OpenClaw just did was a very small trick in the play, but it added a little bit of emergent behavior into an LLM, giving it autonomy to create its own files. That’s all OpenClaw did, and that’s the magic behind it. I think that’s going to come to the edge, and with that emergent behavior you’re actually giving a model the ability to create its own learning. That’s why I say emergent.

Siddhika Nevrekar

That’s a good answer. One last thing you want the audience to remember. This is also the cue for a pitch, if you like.

Ritukar Vijay

As I said earlier, robots are agents, and I think part of us will be agentic as well, because we’ll have some AI in us too. There’s a lot of work going on with Neuralink, and airports are tracking the brain waves of how you react to a particular situation. So both robots and people will be agentic in some fashion, and I think that’s how things will be. You need some orchestration where everything can talk to everything else; that’s what we are looking forward to doing.

Shreenivas Chetlapalli

I think one thing that we should all remember is that there is a lot of work that Tech Mahindra and Qualcomm are doing together in detecting fraud calls. That research will see a lot of action as we go ahead, because the number of fraud calls we are getting is increasing every day. So I think that’s an area where we will see a lot of action, and both our companies are geared for it.

Madhav Bhargav

I think it was mentioned in the keynote: one of the takeaways for me would be that how we think about interacting with technology today is going to change entirely. UIs, phones, screens, all of these going away and everything becoming very, very generative, whether it is slides being generated for you on the fly based on the conversation you’re having, or even entire apps and UIs being generated for each specific scenario and use case. Everything is going to move away from being SaaS that people have to learn, toward it actually caring about you as an individual persona. And that opens up a lot more, specifically in the Indian context, where people might not have to go through so much training and learning; they can just start using it, because the platform can understand your needs, as opposed to you having to understand the platform.

Siddhika Nevrekar

Can you just repeat the question once for me, please? One thing you want the audience to remember. Whatever you want them to remember.

Praveer Kochhar

So: remember how we used to work, and plan for how we are going to work, because very soon we’ll have a lot of time available to us. A lot of the systems we manage will be intelligent and autonomous, and we’ll only have to take decisions. So what we do with that time is going to be a critical question everyone will ask themselves. And I think all of us are also going to be builders, because we’ll have very intelligent tools to build things, run them, and manage multiple systems at the same time. So I see that future, and I think we should all look around at how we manage things today and how we are going to do it in the future.

Siddhika Nevrekar

Great. This is a chance to actually pitch your company, but it's okay, it's pitched. I will give you a more specific one to pitch: there are a lot of people in the audience, maybe some customers. If they were to find you, where should they find you, and what should they come and talk to you about? Okay. What specifically, and in what industry?

Ritukar Vijay

So, I mean, we are Ottonomy, so you can always find us at Ottonomy. That's where you can find us. Yeah, we are the brand, I think, and we are proud of it. And the most important thing is, just like AI, there's a lot of emphasis on physical AI. And it's not something which is going to come; it's there. It's just the adoption curve which is happening now. So think of more ways of adopting the technology. And if enterprise customers are looking forward to adopting more and more robots, not only in dull and dirty scenarios, but also in different walks of life, that is where you should talk to us, and we can help.

Even if they are not our robots, we can help them set up orchestration across a variety of things, while still keeping some level of control. Yeah. Thank you.

Siddhika Nevrekar

Thank you.

Durga Malladi

Speech speed

214 words per minute

Speech length

3564 words

Speech time

997 seconds

Model size reduction while quality improves

Explanation

Durga explains that recent AI models are becoming smaller in parameter count while delivering higher quality outputs. This trend enables more efficient deployment on a variety of devices.


Evidence

“Model sizes are coming down quite dramatically, while the model quality continues to increase.” [1]. “And keep in mind, in the beginning, we talked about the fact that the model sizes continues to actually come down while the quality continues to improve.” [2]. “And if you take a look at what the model sizes today look like, they’re more like 7 to 8 billion parameters, but they actually outperformed that original model by quite a bit.” [6].


Major discussion point

AI Model Trends and Edge Feasibility


Topics

Artificial intelligence


Modern devices can run multi‑billion‑parameter models on‑device

Explanation

Durga points out that today’s premium smartphones, AR glasses, and PCs are capable of running models with billions of parameters locally, eliminating the need for constant cloud access.


Evidence

“if you can get any of the premium smartphones where you can easily run a 10 billion parameter model without breaking a sweat, or glasses which have up to a billion to 2 billion parameter models which you can easily run, PCs with up to 30 billion parameter models and so on.” [4]. “These are devices that you and I use very frequently, at least the PCs and the smartphones with more people adopting AR glasses as well.” [16].


Major discussion point

AI Model Trends and Edge Feasibility


Topics

Artificial intelligence | Environmental impacts


On‑device AI experience invariant to connectivity

Explanation

Durga notes that running AI inference directly on devices ensures consistent user experience regardless of network quality, preserving privacy and responsiveness.


Evidence

“But one thing that’s nice about running on‑device AI or AI inference that’s running directly on devices is the quality of the AI experience is invariant to the quality of connectivity that those devices had to have to the back end of the network.” [19]. “I don’t want to keep going back and forth between a regular experience and an AI experience just because I don’t have internet connectivity.” [29].


Major discussion point

Benefits and Use‑Cases of On‑Device AI


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


AI agents become the new universal UI

Explanation

Durga describes AI agents that consolidate information across apps, acting as a voice‑driven universal interface for devices.


Evidence

“that agent distills all the information that you’re saying encapsulates it maps it to apps that are running somewhere in the behind the models are actually they only provide a means towards an end goal they perform a job but that’s not the end job by itself so the agent actually picks one or two from a bouquet of models and then also accesses some of the personal attributes that could be sitting right there we call it the personal knowledge graph together when you put it all together you end up seeing a glimpse into how ai can then become the new ui to all the devices around us and this is a very powerful concept” [26].


Major discussion point

Benefits and Use‑Cases of On‑Device AI


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Qualcomm AI Hub onboarding and testing

Explanation

Durga outlines how the Qualcomm AI Hub lets developers import models, test them on a cloud‑native device farm, and deploy without physically owning the hardware.


Evidence

“But suffice it to say, if you go to the Qualcomm AI Hub, it’s a place where any developer can pick a model, bring a model.” [52]. “Once you do that, we’ll give you free cloud native access to device farm, which exists somewhere.” [53]. “You have the ability to test it without once having the device actually in your hand.” [56].


Major discussion point

Qualcomm Developer Ecosystem


Topics

Artificial intelligence | The enabling environment for digital development


Energy‑efficient high‑performance computing for inference

Explanation

Durga emphasizes that inference workloads require different, more energy‑efficient processors than those used for training, highlighting a shift in hardware design.


Evidence

“the processors that are designed for training are not necessarily the best processors that are intended for inference.” [65]. “So we call it as energy efficient, high performance computing.” [66].


Major discussion point

Data‑Center AI Architecture & Efficiency


Topics

Artificial intelligence | Environmental impacts


AI‑250 innovative memory architecture

Explanation

Durga explains the AI‑250 solution’s memory‑centric design that balances compute‑bound pre‑fill with memory‑bound decode stages for efficient LLM inference.


Evidence

“we focused on an innovative memory architecture as it turns out and it’s a little more of a subtle argument here but as it turns out that when we talk of inference the pre‑fill stage is extremely compute bound … however the decode stage is fully memory bandwidth bound … so we innovated on that putting it together for our ai 250 solution” [69].


Major discussion point

Data‑Center AI Architecture & Efficiency


Topics

Artificial intelligence | Environmental impacts


6G will unlock AI’s full potential

Explanation

Durga projects that 6G networks will enable new AI capabilities, with trials leading to deployments around 2029, because cellular and AI share the same device ecosystem.


Evidence

“So we have a view in terms of how 6G can unlock a full potential of AI.” [83]. “And there’ll be technology trials at that point in time culminating into the first set of deployments that we are driving towards in 2029.” [90]. “But the time has come to actually put both of those together because cellular technology at the end of the day does involve the very same devices that we just talked about.” [87].


Major discussion point

AI Integration with Future Cellular (6G)


Topics

Artificial intelligence | The enabling environment for digital development


Siddhika Nevrekar

Speech speed

130 words per minute

Speech length

1174 words

Speech time

539 seconds

Hub simplifies access to optimized models and accelerates development

Explanation

Siddhika emphasizes that the Qualcomm AI Hub streamlines the process for developers to obtain, test, and deploy optimized on‑device AI models, reducing time‑to‑market.


Evidence

“Through the Qualcomm AI Hub, we are simplifying how developers access optimized models, test and deploy high‑performance on‑device AI from edge to cloud.” [27].


Major discussion point

Qualcomm Developer Ecosystem


Topics

Artificial intelligence | The enabling environment for digital development


Concern about shadow AI and data governance

Explanation

Siddhika raises the question of what is more dangerous—too much data leaving the device or too little—highlighting data‑governance challenges.


Evidence

“Too much data leaving the device.” [106]. “I think too much data leaving the device.” [107]. “The moment we’re talking about more data and more data leaving, we’re actually talking about more issues happening, more breaches happening.” [108].


Major discussion point

Enterprise AI Challenges – Shadow AI & Data Governance


Topics

Data governance | Building confidence and security in the use of ICTs


Shreenivas Chetlapalli

Speech speed

158 words per minute

Speech length

559 words

Speech time

211 seconds

Shadow AI and data exfiltration risk

Explanation

Shreenivas points out that excessive data leaving devices poses a major security risk, emphasizing the need to minimize exfiltration.


Evidence

“Too much data leaving the device.” [106]. “I think too much data leaving the device.” [107]. “The moment we’re talking about more data and more data leaving, we’re actually talking about more issues happening, more breaches happening.” [108].


Major discussion point

Enterprise AI Challenges – Shadow AI & Data Governance


Topics

Data governance | Building confidence and security in the use of ICTs


Set realistic expectations for AI in India

Explanation

Shreenivas stresses that AI should be presented as an augmentative tool rather than a job‑replacer, which helps build trust among Indian enterprises.


Evidence

“But if we can set the expectations right, that AI will augment their work to a certain extent, that will be one.” [131].


Major discussion point

AI Adoption in India – Expectations, Trust, Local Processing


Topics

Social and economic development | Human rights and the ethical dimensions of the information society


Demand for on‑premise AI platforms (Orion)

Explanation

Shreenivas notes strong demand in India for on‑premise AI solutions that keep data within the enterprise, citing the Orion platform as an example.


Evidence

“local is the first option but for India data center makes business local because one of the key products that we have built called Orion which is an AI platform has been built for on prem and we also see that a large number of requirements that have come to us is how do I process things in my own premises rather than doing an API call or taking it to the cloud…” [137].


Major discussion point

AI Adoption in India – Expectations, Trust, Local Processing


Topics

Artificial intelligence | The enabling environment for digital development


Madhav Bhargav

Speech speed

180 words per minute

Speech length

1182 words

Speech time

392 seconds

Early mistake: separate model per client

Explanation

Madhav recounts that initially they attempted to train a distinct model for each customer, which proved unsustainable, leading them to adopt a single grounded model approach.


Evidence

“Otherwise, we would be building models one per customer.” [7]. “We spent a bunch of time with enterprise customers trying to deploy AI and realize that we would have to train a model for each customer.” [116].


Major discussion point

Lessons from Building AI in the Legal Domain


Topics

Artificial intelligence | Capacity development


Leveraging customer‑generated data for personalized contract analysis

Explanation

Madhav describes how using internal usage data via plugins enabled a single model to deliver accurate, up‑to‑date contract insights without per‑client training.


Evidence

“we actually ran one of the early prototypes of that on our internal data we run SpotDraft on SpotDraft and some of the things it threw up when I was talking to our internal legal team…” [121]. “The genesis of SpotDraft as it exists today came from there because we wanted to capture the data as lawyers were using the technology that they anyways use, which is where our word plugin comes in.” [122].


Major discussion point

Lessons from Building AI in the Legal Domain


Topics

Artificial intelligence | Capacity development


AGI will be embedded in any chip‑enabled device

Explanation

Madhav predicts that by 2030 artificial general intelligence will be commonplace, residing in any device that contains a processor.


Evidence

“be ubiquitous I think there will not be anything that does not have AI and I think there is a lot of Hollywood sci‑fi that has demonstrated this but we will probably be trying to talk to tables or screens or walls to that degree where anything that can have a chip inside it the chip will also have AGI inside it AGI in 2030 will be I think AGI” [91].


Major discussion point

Future Outlook – Ubiquitous Edge AI, AGI, and Emergent Behavior


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Wow moment: SpotDraft internal data insights

Explanation

Madhav shares a surprise when SpotDraft, run on its own data, uncovered unexpected contract clauses, demonstrating self‑learning knowledge work.


Evidence

“we actually ran one of the early prototypes of that on our internal data we run SpotDraft on SpotDraft and some of the things it threw up when I was talking to our internal legal team…” [121].


Major discussion point

Wow Moments & Surprising AI Capabilities


Topics

Artificial intelligence | Social and economic development


Wow moment: WhatsApp personal assistant automation

Explanation

Madhav describes deploying an AI personal assistant on WhatsApp that automatically messaged contacts, a capability previously impossible.


Evidence

“And that, I think, is where I was really wowed by it because I deployed it on my WhatsApp and it started sending messages to people.” [169].


Major discussion point

Wow Moments & Surprising AI Capabilities


Topics

Artificial intelligence | Social and economic development


Praveer Kochhar

Speech speed

171 words per minute

Speech length

974 words

Speech time

341 seconds

Shadow AI is an underrated pain point

Explanation

Praveer highlights that many enterprises use unauthorized AI tools, creating shadow AI that jeopardizes data security.


Evidence

“So there’s a concept called shadow AI.” [78]. “Shadow AI is a lot of people who work in companies sharing critical enterprise data on the cloud while using unauthorized AI tools like OpenAI or Claude.” [100]. “So 78% of enterprise users use shadow AI.” [103].


Major discussion point

Enterprise AI Challenges – Shadow AI & Data Governance


Topics

Data governance | Building confidence and security in the use of ICTs


Emergent behavior in large models enables autonomous file creation

Explanation

Praveer notes that emergent capabilities in large language models now allow them to autonomously generate files and exhibit new behaviors.


Evidence

“And I think that’s going to come to the edge and with that emergent behavior you’re actually giving a model the ability to create its own learning.” [12]. “in 2030 will be emergent we will start seeing signs of what OpenClaw just did was a very small trick in the play but it added a little bit of emergent behavior into LLM giving it autonomy to be able to create its own files.” [160].


Major discussion point

Future Outlook – Ubiquitous Edge AI, AGI, and Emergent Behavior


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Hyper‑intelligent AI could become addictive and socially disruptive

Explanation

Praveer expresses concern that self‑adapting, highly intelligent AI systems may capture user attention excessively, leading to addiction and societal disruption.


Evidence

“No there’s a lot of fear because I think we don’t know the societal impact of this technology yet… it will become extremely addictive so I think that that’s the biggest fear” [174].


Major discussion point

Wow Moments & Surprising AI Capabilities


Topics

Human rights and the ethical dimensions of the information society | Social and economic development


Ritukar Vijay

Speech speed

168 words per minute

Speech length

851 words

Speech time

303 seconds

Edge vs. cloud orchestration in robotics

Explanation

Ritukar explains that fleet orchestration runs in the cloud while autonomous navigation runs on the edge, splitting the problem into manageable parts.


Evidence

“we do orchestration on the cloud which is for the fleets of robots but … autonomous navigation on the edge part of it…” [99].


Major discussion point

Edge vs. Cloud Orchestration in Robotics


Topics

Artificial intelligence | Capacity development


Continuous connectivity essential for remote robot monitoring

Explanation

Ritukar stresses that robots must stay connected at all times to allow remote maintenance, predictive upkeep, and emergency interventions.


Evidence

“If you cannot access robots remotely in any which way, be it for scheduled maintenances or predictive maintenance or anything of that sort… they are monitored from Philippines… So I think that part is something which is very important, that everything should be connected at all times.” [145].


Major discussion point

Hardware Constraints for Autonomous Systems


Topics

Artificial intelligence | Environmental impacts


Edge AI will be taken for granted by 2030

Explanation

Ritukar predicts that by 2030 edge AI will be ubiquitous and assumed, embedded in everyday objects.


Evidence

“Edge AI in 2030, it will be, it will be, I mean, it will be very not so sophisticated.” [94]. “It will be everywhere almost.” [157]. “I think it will be taken for granted.” [154].


Major discussion point

Future Outlook – Ubiquitous Edge AI, AGI, and Emergent Behavior


Topics

Artificial intelligence | Social and economic development


Moderator

Speech speed

135 words per minute

Speech length

156 words

Speech time

69 seconds

Qualcomm AI Hub simplifies developer workflow

Explanation

The Moderator introduces the Qualcomm AI Hub as a platform that streamlines access to optimized models, testing, and deployment for developers.


Evidence

“Through the Qualcomm AI Hub, we are simplifying how developers access optimized models, test and deploy high‑performance on‑device AI from edge to cloud.” [27]. “As we talk about inclusive AI at scale, enabling developers is critical.” [60].


Major discussion point

Qualcomm Developer Ecosystem


Topics

Artificial intelligence | The enabling environment for digital development


Agreements

Agreement points

Hybrid AI architecture with distributed processing across edge, cloud, and on-premises is optimal

Speakers

– Durga Malladi
– Ritukar Vijay

Arguments

AI processing should be distributed across edge, on-premises servers, and cloud based on use case requirements


Breaking down problems into smaller chunks determines what runs on edge versus cloud


Summary

Both speakers advocate for thoughtful distribution of AI processing across different infrastructure layers based on specific use case requirements rather than a one-size-fits-all approach


Topics

Artificial intelligence | The enabling environment for digital development


Local/on-premises AI processing is preferred for enterprise security and data sovereignty

Speakers

– Shreenivas Chetlapalli
– Praveer Kochhar

Arguments

Local processing is preferred for enterprise security and compliance requirements


Bringing agents closer to enterprise data rather than moving data to agents ensures sovereignty


Summary

Both speakers emphasize the importance of keeping enterprise data local rather than transferring it to cloud-based AI services, prioritizing data sovereignty and security


Topics

Artificial intelligence | Data governance | Building confidence and security in the use of ICTs


AI should augment human capabilities rather than replace humans entirely

Speakers

– Shreenivas Chetlapalli
– Madhav Bhargav

Arguments

Setting proper expectations about AI limitations is crucial for successful adoption


AI should augment human work rather than replace it, especially in legal and knowledge work


Summary

Both speakers agree that AI’s role should be to enhance human capabilities and productivity while maintaining human oversight and decision-making authority


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Social and economic development


Edge AI will become ubiquitous and taken for granted by 2030

Speakers

– Durga Malladi
– Ritukar Vijay
– Madhav Bhargav

Arguments

Premium smartphones can run 10 billion parameter models, PCs can handle 30 billion parameters


Edge AI will be taken for granted by 2030, becoming ubiquitous by default


Everything will become generative, moving away from traditional SaaS platforms


Summary

All three speakers envision a future where edge AI capabilities become so commonplace and integrated into devices that users won’t consciously think about them, similar to how connectivity is currently taken for granted


Topics

Artificial intelligence | Information and communication technologies for development


Similar viewpoints

Both speakers envision a fundamental shift away from traditional user interfaces toward more natural, AI-mediated interactions that eliminate the need for users to learn complex software platforms

Speakers

– Durga Malladi
– Madhav Bhargav

Arguments

AI agents will replace traditional app-based interfaces with voice-first interaction


Everything will become generative, moving away from traditional SaaS platforms


Topics

Artificial intelligence | Information and communication technologies for development


Both speakers are concerned about data security risks when enterprise data leaves local control, whether through excessive data transfer or unauthorized AI tool usage

Speakers

– Shreenivas Chetlapalli
– Praveer Kochhar

Arguments

Too much data leaving devices poses greater security risks than too little data


Shadow AI (78% of enterprise users sharing data on unauthorized cloud AI tools) is an underrated but critical security concern


Topics

Artificial intelligence | Data governance | Building confidence and security in the use of ICTs


Both speakers advocate for processing sensitive or real-time data locally while using cloud resources for coordination and less sensitive tasks

Speakers

– Durga Malladi
– Ritukar Vijay

Arguments

Personal data processing locally addresses privacy concerns


Orchestration happens on cloud while autonomous navigation runs on edge for robotics


Topics

Artificial intelligence | Data governance | Human rights and the ethical dimensions of the information society


Unexpected consensus

Innovation should proceed faster than regulation in AI development

Speakers

– Praveer Kochhar
– Durga Malladi

Arguments

Innovation should proceed faster than regulation in AI development


6G and AI convergence will unlock new possibilities, with 2028 Olympics as showcase target


Explanation

While Kochhar explicitly advocates for innovation over regulation, Malladi’s focus on rapid technological deployment and integration (6G-AI convergence with specific timeline targets) implicitly supports a similar innovation-first approach. This consensus is unexpected given the growing calls for AI regulation globally


Topics

Artificial intelligence | The enabling environment for digital development


Connectivity and infrastructure remain critical constraints for AI deployment

Speakers

– Ritukar Vijay
– Durga Malladi

Arguments

Connectivity is crucial for robot fleet management and remote access capabilities


Edge AI provides consistent experience regardless of connectivity quality


Explanation

Despite advocating for edge AI, both speakers acknowledge that connectivity remains a fundamental requirement – Vijay for robot management and Malladi for ensuring consistent AI experiences. This creates an unexpected consensus that even ‘edge-first’ approaches still depend heavily on network infrastructure


Topics

Artificial intelligence | Information and communication technologies for development | Internet governance


Overall assessment

Summary

The speakers demonstrate strong consensus on hybrid AI architectures, the importance of data sovereignty, AI as human augmentation rather than replacement, and the inevitability of ubiquitous edge AI. There’s also unexpected agreement on prioritizing innovation over regulation and the continued importance of connectivity infrastructure.


Consensus level

High level of consensus with complementary rather than conflicting viewpoints. The agreement spans technical architecture decisions, business strategy, and philosophical approaches to AI deployment. This consensus suggests a mature understanding of AI implementation challenges and a shared vision for distributed, human-centric AI systems that prioritize security and user experience.


Differences

Different viewpoints

Data processing location preferences

Speakers

– Shreenivas Chetlapalli
– Ritukar Vijay

Arguments

Local processing is preferred for enterprise security and compliance requirements


Breaking down problems into smaller chunks determines what runs on edge versus cloud


Summary

Chetlapalli strongly advocates for local processing citing security concerns and enterprise requirements, while Vijay takes a more nuanced approach suggesting different processing locations based on specific use case requirements


Topics

Data governance | Building confidence and security in the use of ICTs


Data sharing and collection approaches

Speakers

– Shreenivas Chetlapalli
– Ritukar Vijay

Arguments

Too much data leaving devices poses greater security risks than too little data


Connectivity is crucial for robot fleet management and remote access capabilities


Summary

Chetlapalli emphasizes minimizing data transfer for security reasons, while Vijay argues that connectivity and data sharing are essential for effective robot operations and fleet management


Topics

Data governance | Building confidence and security in the use of ICTs


Regulation versus innovation priority

Speakers

– Praveer Kochhar
– Shreenivas Chetlapalli

Arguments

Innovation should proceed faster than regulation in AI development


Setting proper expectations about AI limitations is crucial for successful adoption


Summary

Kochhar advocates for innovation-first approach with minimal regulation, while Chetlapalli emphasizes the importance of setting proper expectations and limitations, implying a more cautious, structured approach


Topics

The enabling environment for digital development | Human rights and the ethical dimensions of the information society


Unexpected differences

Approach to AI safety and control

Speakers

– Praveer Kochhar
– Siddhika Nevrekar

Arguments

Societal impact of hyper-intelligent, self-adapting systems creating addiction concerns


Autonomous AI agents accessing personal data raise significant control and privacy challenges


Explanation

Both speakers express concerns about AI control and safety, but from different perspectives – Kochhar focuses on societal addiction risks while Nevrekar emphasizes practical control challenges. This disagreement is unexpected because both work in AI development but have different primary concerns


Topics

Human rights and the ethical dimensions of the information society | Data governance


Enterprise data security priorities

Speakers

– Praveer Kochhar
– Shreenivas Chetlapalli

Arguments

Shadow AI (78% of enterprise users sharing data on unauthorized cloud AI tools) is an underrated but critical security concern


Too much data leaving devices poses greater security risks than too little data


Explanation

Both speakers are concerned about enterprise data security, but Kochhar focuses on unauthorized usage patterns while Chetlapalli focuses on data transfer volumes. This creates an unexpected tension between addressing user behavior versus technical architecture


Topics

Building confidence and security in the use of ICTs | Data governance


Overall assessment

Summary

The main areas of disagreement center around data processing location preferences, the balance between innovation and regulation, and different approaches to AI safety and security


Disagreement level

Moderate disagreement level with significant implications for AI deployment strategies. The disagreements reflect different priorities – security versus functionality, innovation speed versus cautious adoption, and local versus distributed processing. These differences could impact how organizations approach AI implementation and policy development


Partial agreements

Partial agreements

All speakers agree that hybrid AI processing is important, but they disagree on the optimal distribution – Malladi advocates for flexible use-case-based distribution, Vijay focuses on problem decomposition, and Chetlapalli strongly prefers local processing for security

Speakers

– Durga Malladi
– Ritukar Vijay
– Shreenivas Chetlapalli

Arguments

AI processing should be distributed across edge, on-premises servers, and cloud based on use case requirements


Breaking down problems into smaller chunks determines what runs on edge versus cloud


Local processing is preferred for enterprise security and compliance requirements


Topics

Artificial intelligence | Data governance | The enabling environment for digital development


Both speakers recognize the importance of human oversight in AI systems, but Bhargav focuses on augmentation and accountability while Kochhar emphasizes security risks of unsupervised AI usage

Speakers

– Madhav Bhargav
– Praveer Kochhar

Arguments

AI should augment human work rather than replace it, especially in legal and knowledge work


Shadow AI (78% of enterprise users sharing data on unauthorized cloud AI tools) is an underrated but critical security concern


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs



Takeaways

Key takeaways

Edge AI is becoming viable due to dramatically smaller model sizes (7-8B parameters vs 175B) while maintaining superior quality, enabling sophisticated AI processing on consumer devices


AI agents will fundamentally transform user interfaces, replacing traditional app-based interactions with voice-first, personalized experiences that generate content and interfaces on-demand


Hybrid AI architecture is essential – processing should be distributed across edge, on-premises, and cloud based on specific use case requirements rather than a one-size-fits-all approach


Shadow AI (unauthorized use of cloud AI tools with enterprise data) affects 78% of enterprise users and represents a critical but underrated security risk


Local AI processing is preferred for enterprises due to security, privacy, and compliance requirements, with training on smaller datasets and synthetic data being more secure than large-scale data collection


AI should augment human capabilities rather than replace them, particularly in knowledge work like legal services where human judgment remains crucial for complex decisions


6G and AI convergence will enable new possibilities, with 2028 Olympics targeted as a showcase for next-generation capabilities


Edge AI will become ubiquitous and taken for granted by 2030, similar to how connectivity is viewed today
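
The hybrid distribution described in the takeaways above — routing inference across edge, on-premises, and cloud by use case — can be sketched as a simple routing heuristic. The tier names, workload attributes, and thresholds below are illustrative assumptions for this report, not values specified by Qualcomm or the panelists.

```python
from dataclasses import dataclass

# Illustrative workload description; the attributes and thresholds
# are assumptions, not figures from the session.
@dataclass
class Workload:
    contains_personal_data: bool   # privacy-sensitive input
    latency_budget_ms: int         # real-time requirement
    model_params_b: float          # model size in billions of parameters

def route(w: Workload) -> str:
    """Pick a processing tier for one inference request."""
    if w.contains_personal_data and w.model_params_b <= 8:
        return "edge"      # keep personal data on-device (7-8B models fit)
    if w.latency_budget_ms < 50:
        return "edge"      # real-time loops, e.g. autonomous navigation
    if w.contains_personal_data:
        return "on-prem"   # too large for the device, but data stays local
    return "cloud"         # orchestration and non-sensitive batch work

print(route(Workload(True, 200, 7)))    # -> edge
print(route(Workload(False, 500, 70)))  # -> cloud
```

This mirrors the panel's split in spirit: sensitive or latency-critical work stays local, while coordination and large non-sensitive workloads move to shared infrastructure.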


Resolutions and action items

Qualcomm AI Hub provides developers with free cloud-native access to device farms for testing and deploying AI applications without requiring physical devices


Continued collaboration between Tech Mahindra and Qualcomm on fraud-call detection using edge LLMs


Development roadmap for Qualcomm’s AI data center solutions with annual cadence, including AI 250 currently rolling out and AI 300 in planning


Target timeline of 2029 for first 6G deployments with technology trials at 2028 Summer Olympics


Unresolved issues

How to effectively regulate AI innovation without stifling technological progress, given the rapid pace of development


Managing the societal impact of hyper-intelligent, self-adapting AI systems that could become highly addictive


Determining optimal balance between data privacy and AI model training effectiveness


Addressing the challenge of maintaining human agency and decision-making as AI systems become more autonomous


Establishing standards for AI agent behavior and control mechanisms, especially when handling personal data


Managing the transition period as traditional user interfaces are replaced by AI agents


Suggested compromises

Erring on the side of innovation – proceeding with AI development while being mindful of potential risks rather than waiting for comprehensive regulation


Hybrid approach to AI processing that leverages both edge and cloud capabilities based on specific use case requirements rather than choosing one exclusively


Augmentation rather than replacement philosophy for AI in enterprise settings, particularly in knowledge work where human expertise remains valuable


Training AI models with less data and synthetic datasets to balance performance needs with privacy and security concerns


Thought provoking comments

There’s a concept called shadow AI. I don’t know how many of you know about shadow AI. Shadow AI is when a lot of people who work in companies share critical enterprise data on the cloud while using unauthorized AI tools like OpenAI or Claude. So 78% of enterprise users use shadow AI.

Speaker

Praveer Kochhar


Reason

This comment introduced a critical but underexplored security concern in AI adoption. The statistic that 78% of enterprise users engage in shadow AI reveals a massive gap between official AI policies and actual user behavior, highlighting the tension between productivity gains and data security.


Impact

This comment immediately established the discussion’s focus on practical, real-world AI challenges rather than theoretical benefits. It set a tone of addressing uncomfortable truths about AI adoption and influenced subsequent discussions about regulation vs. innovation, with Praveer later advocating for innovation over regulation despite acknowledging these risks.


Model sizes are coming down quite dramatically, while the model quality continues to increase. This is the equivalent of an AI law that seems to be emerging as far as models themselves are concerned.

Speaker

Durga Malladi


Reason

This observation challenges the common assumption that bigger models are always better and introduces the concept of an ‘AI law’ similar to Moore’s Law. It fundamentally reframes the edge vs. cloud AI debate by suggesting that powerful AI doesn’t necessarily require massive computational resources.


Impact

This comment provided the foundational argument for the entire edge AI discussion that followed. It justified why on-device AI is not just feasible but inevitable, influencing how panelists later discussed the balance between edge and cloud processing. It shifted the conversation from ‘whether’ edge AI is viable to ‘how’ to implement it effectively.


We don’t know the societal impact of this technology yet… now we are dealing with intelligent algorithms that can self-adapt and become far more personalized… it will become extremely addictive, so I think that’s the biggest fear

Speaker

Praveer Kochhar


Reason

This comment introduced a sobering perspective on AI’s potential negative societal impacts, moving beyond technical capabilities to psychological and social consequences. It highlighted the unprecedented nature of self-adapting, personalized AI systems and their potential for manipulation.


Impact

This shifted the discussion from purely optimistic technical achievements to a more balanced view that included serious concerns. It prompted deeper reflection on responsible AI development and influenced the conversation about the need for guardrails and ethical considerations in AI deployment.


Understanding the limitations of AI… if we can set the expectations right, that AI will augment their work to a certain extent, that will be one. Second, the complete misnomer that it is here to take away jobs has to be removed.

Speaker

Shreenivas Chetlapalli


Reason

This comment addressed one of the most fundamental barriers to AI adoption – unrealistic expectations and job displacement fears. It emphasized the importance of proper expectation management and reframing AI as augmentation rather than replacement.


Impact

This comment grounded the discussion in practical adoption challenges, moving beyond technical capabilities to human factors. It influenced subsequent discussions about how AI tools should be positioned and marketed to users, emphasizing collaboration rather than replacement.


We are probably the only ones in the industry that work on everything from doorbells to data centers… We actually work ground up from everywhere over there.

Speaker

Durga Malladi


Reason

This comment highlighted Qualcomm’s unique position in the AI ecosystem and introduced the concept of end-to-end AI solutions. It emphasized the importance of having a holistic view of AI deployment across different scales and use cases.


Impact

This comment reinforced the theme of distributed AI processing and validated the need for seamless integration between edge and cloud. It positioned the subsequent panel discussion within the context of comprehensive AI ecosystem thinking rather than siloed solutions.


We’ll have a lot of time that will be available to us because a lot of systems that we are going to manage will be intelligent and autonomous, and we’ll only have to take decisions. So what we do with that time is going to be a critical question everyone’s going to ask themselves.

Speaker

Praveer Kochhar


Reason

This comment reframed the AI impact discussion from job displacement to time liberation, suggesting a fundamental shift in how humans will spend their time when AI handles routine tasks. It raised profound questions about human purpose and productivity in an AI-augmented world.


Impact

This comment provided a thought-provoking conclusion that elevated the discussion beyond technical implementation to philosophical implications. It left the audience with a forward-looking perspective on how AI might fundamentally change human work and life patterns.


Overall assessment

These key comments shaped the discussion by moving it through several important phases: from technical feasibility (model size reduction enabling edge AI) to practical implementation challenges (shadow AI, expectation management), then to broader societal implications (addiction, time liberation). The comments created a balanced narrative that acknowledged both the tremendous potential and serious concerns around AI adoption. Praveer Kochhar’s contributions were particularly impactful in introducing uncomfortable truths and philosophical depth, while Durga Malladi’s technical insights provided the foundation for understanding why distributed AI is both possible and necessary. The discussion evolved from a typical tech presentation format into a more nuanced exploration of AI’s real-world implications, with each insightful comment building upon previous ones to create a comprehensive view of the AI landscape from technical, business, and societal perspectives.


Follow-up questions

How can we effectively manage and control autonomous AI agents, especially when they have access to personal data?

Speaker

Siddhika Nevrekar


Explanation

This emerged from discussion about OpenAI’s autonomous features and the challenge of controlling agents that can send messages and access personal information autonomously


What will be the societal impact of hyper-intelligent, self-adapting AI systems that become extremely addictive?

Speaker

Praveer Kochhar


Explanation

This addresses concerns about AI systems that can generate content at will and adapt based on individual users, potentially creating unprecedented levels of digital addiction


How do we prepare for and manage the significant amount of free time that will become available when AI systems handle most routine tasks?

Speaker

Praveer Kochhar


Explanation

This explores the fundamental question of how society will adapt when AI handles most operational work, leaving humans primarily to make decisions


What are the implications and potential applications of emergent AI behavior at the edge?

Speaker

Praveer Kochhar


Explanation

This builds on the observation that giving AI models autonomy to create their own files and learning represents a significant shift toward emergent behavior


How can we better understand and address the psychological effects of human-robot interactions?

Speaker

Siddhika Nevrekar


Explanation

This arose from observations about people showing more empathy toward non-operational robots than humans in similar situations


What are the broader implications of tangential AI use cases that go beyond replicating human tasks?

Speaker

Praveer Kochhar


Explanation

This was prompted by the example of robotic beggars in Bangalore, suggesting AI applications that are entirely novel rather than human task replacements


How can enterprises effectively balance edge AI processing with cloud capabilities for optimal performance and security?

Speaker

Ritukar Vijay


Explanation

This addresses the need for better frameworks to determine what AI processing should happen locally versus in the cloud, especially for enterprise applications


What training methodologies can be developed to create effective AI models with minimal data while maintaining privacy?

Speaker

Shreenivas Chetlapalli


Explanation

This explores the challenge of training AI systems without compromising data security, particularly relevant for enterprise applications


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.