Keynote: Lars Reger
Summary
The session opened with a ministerial welcome and the introduction of Lars Reger, Executive Vice President and CTO of NXP Semiconductors, who would discuss the role of chips in artificial intelligence [1-4]. Reger argued that current AI discourse focuses on large data-center deployments, but the real question is what AI is intended to do for everyday users [9-12]. He linked this shift to long-standing megatrends such as demographic change, infrastructure upgrades, supply-chain pressures and energy constraints, which together drive a world that “anticipates and automates” [18-24]. Looking ahead twenty years, he envisioned barrier-free homes, autonomous manufacturing, and vehicles that function as rolling living rooms, all coordinated by intelligent robots [25-33][43-48].
Despite the diversity of form factors, he said every robot shares four ingredients: sensing the environment, thinking with AI, connecting to the cloud, and acting on actuators [55-64]. Trust, he emphasized, is essential; functional safety and security must guarantee that devices never fail or get hacked, otherwise users will revert to manual control [65-66]. For semiconductor makers, volume matters, and NXP is developing modular “Lego-brick” blocks that can be scaled from tiny sensors to larger edge processors [95-98][140-149].
He showcased a Kinara AI accelerator acquired by NXP that runs a 10-billion-parameter language model at only 7 watts, enabling edge applications such as smart fridges and medical imaging [100-104]. Reger highlighted ultra-wideband communication, car-to-car links, and standardized smart-device languages as infrastructure that lets these low-power AI units cooperate across homes, factories and transport [118-128]. He warned that without ultra-low-power, secure architectures, the projected 50 billion connected devices would exceed the planet’s energy capacity [141-144].
Consequently, NXP’s roadmap focuses on safe, energy-efficient silicon that can run highly efficient, task-specific AI models at the edge, aligning with political calls to democratize AI [155-158]. The discussion concluded that delivering trustworthy, low-power edge AI through scalable semiconductor solutions is the key to realizing the anticipated, automated future envisioned by the panel [156-158].
Keypoints
Major discussion points
– A future “anticipating and automating” world – Lars paints a picture of barrier-free homes that monitor health, wealth and security, and of transportation that becomes a “rolling cocoon” or living room. He asks how this vision will look 20 years ahead and stresses that it is driven by megatrends such as demographics, infrastructure and energy constraints. [25-33][44-48]
– Universal functional architecture: sense → think → connect → act, with trust as the foundation – Every smart robot, regardless of form-factor, must first sense its environment, then process data (AI), connect to the cloud, and finally act on actuators. Trust is achieved through functional safety, security against hacking and ultra-low power operation, otherwise users will revert to manual control. [55-64][65-66][68-69]
– Edge-centric AI and semiconductor “Lego-block” solutions – The speaker argues that 80 % of AI tasks will run on tiny, efficient models at the edge, not in massive data centres. NXP’s strategy is to provide modular AI accelerators (e.g., the Kinara-based 10-billion-parameter chip) that can be plugged into devices ranging from drones to refrigerators, delivering large-language-model capability at only a few watts. [93-100][101-104][149-154]
– Scalability challenges: energy, standards and billions of devices – Deploying 50 billion connected robots demands three times the planet’s current energy capacity unless devices are ultra-energy-efficient. A common “meta-standard” language is needed so that all devices (home gateways, blinds, solar managers, etc.) can interoperate. [142-144][125-128][141-148]
– Biological inspiration for safe, deterministic control – Rather than scaling up monolithic AI, the talk suggests copying nature’s layered architecture (spine for reflexes, cerebellum for stability) to build real-time, safety-critical systems. This approach informs the design of autonomous vehicles and other robots. [74-82][88-92]
Overall purpose / goal
The discussion is a strategic briefing aimed at policymakers and industry leaders, illustrating why AI must move from data-center-centric models to secure, ultra-low-power edge computing. It positions NXP’s semiconductor portfolio as the enabler that can democratize AI across 50 billion devices worldwide, supporting the broader governmental ambition to “bring AI to everyone.” [8][156-158]
Overall tone
The tone is initially formal and visionary, using rhetorical questions and futuristic imagery to engage the audience. It then shifts to a more technical, explanatory mode when detailing architecture, safety, and hardware specifics. Throughout, the speaker maintains an optimistic, persuasive tone, concluding with an inspirational call to action that aligns technology with policy goals. The progression moves from broad, aspirational statements to concrete technical solutions, ending on a hopeful, rally-cry note. [8][18-20][134-136][155-158]
Speakers
– Speaker 1
– Role/Title: Moderator / Event host (introduces speakers) [S1][S3]
– Area of Expertise:
– Lars Reger
– Role/Title: Executive Vice President and Chief Technology Officer, NXP Semiconductors; Keynote speaker [S4][S5]
– Area of Expertise: Semiconductor technology, edge AI, functional safety
Additional speakers:
– (none)
The session opened with a formal welcome from the ministerial host, who thanked the panel of ministers and introduced the keynote speaker, Lars Reger (CTO of NXP Semiconductors) – note: the transcript spells his name both as “Recher” and “Reger”. The host highlighted that “artificial intelligence runs on chips” and positioned NXP at the forefront of designing the semiconductors that will power the next generation of edge AI for applications ranging from cars to medical devices and industrial systems [1-7].
Lars Reger greeted the audience and immediately questioned the prevailing focus on feeding ever-larger AI models into massive data-centres. He asked “What is this AI for?” and “What is this AI actually doing?” – urging a shift from infrastructure-centric hype to user-centric purpose [8-12]. He illustrated his point with a personal narrative, describing his own journey from an analog world in the 1970s through the digitisation that turned a laptop into a smartphone, and the resulting on-demand services such as ordering pizza, hailing an Uber or remotely controlling home climate [13-17].
From this perspective he painted a vision of a world that “anticipates and automates”. He linked this vision to long-standing megatrends – demographic shifts, infrastructure upgrades, supply-chain constraints, renewable-energy integration, and overall energy limits – that have been shaping society for the past fifteen years [18-24]. Looking twenty years ahead, he envisaged barrier-free homes that continuously monitor health, wealth and security, allowing occupants to live without touching anything while still enjoying maximum safety [25-33]. In manufacturing, manual tasks would largely disappear, leaving humans as highly skilled equipment operators, much as today’s pilots operate intelligent flying robots rather than mechanical aircraft [34-41]. In transport, cars would become “rolling cocoons” or mobile living rooms, a trend already evident during the COVID-19 pandemic when people used vehicles as extensions of their homes [42-49].
Reger argued that every smart robot will share the same four-step functional loop: sense the environment, think using AI, connect to the cloud for data, and act on actuators [55-58]. He warned that this loop is meaningless without trust; functional safety (e.g., brakes that never fail) and robust security (preventing hacking) are the non-negotiable foundations that keep users from reverting to manual control [59-60]. He added that, for a semiconductor maker, volume matters – the ability to produce chips at scale is essential for realising billions of such devices [68-69].
He noted that the autonomous-vehicle fatalities of the past decade were traced not to bugs in the AI “brain” architecture but to sensing that was more short-sighted than a human driver’s, underscoring the need for superhuman perception alongside biologically-inspired, safety-first designs [159-161]. He also used vivid analogies – from telepathic car-to-car communication to “X-ray-vision” sensors and Dumbledore-like magic – to illustrate how far-reaching the desired capabilities of future robots could become [162-165]. Finally, he framed edge AI as a way to avoid a massive increase in global energy generation, asking rhetorically how many new nuclear power plants would be required if all AI stayed in data-centres [166-168].
To meet these requirements NXP is pursuing a modular “Lego-brick” strategy that can be scaled from tiny sensors to larger edge processors. The company showcases a complete drone-control unit as an example of a small-form-factor device and highlights its acquisition of an India-made Kinara AI accelerator that can run a 10-billion-parameter language model at only 7 watts [95-104][149-154]. Such ultra-low-power edge AI enables applications ranging from a CT scanner that writes its own report to a refrigerator that autonomously orders missing milk, consuming power only when needed [101-104].
Reger suggested that the most reliable architecture should be inspired by biology. He compared a robot’s “spine” to human spinal reflexes that provide deterministic, real-time safety, and the “cerebellum” to the subsystem that maintains stability – both operating without large-scale AI [74-82]. He argued that, like insects with only 100,000 neurons, most edge tasks can be handled by tiny, task-specific models rather than massive LLMs, echoing the view that 80 % of AI work will run on efficient, bespoke models at the edge [88-95].
Scalability, however, faces two major hurdles. First, the projected 50 billion connected robots would require roughly three times the energy currently available on Earth unless each device is ultra-energy-efficient [141-144]. Second, interoperability demands a common “meta-standard” language so that devices such as home gateways, window blinds and solar-cell managers can seamlessly communicate [125-128]. Reger pointed to concrete enablers – ultra-wideband technology that unlocks car-to-car communication in milliseconds, and long-range radar that can detect vulnerable road users in adverse weather – as steps toward this unified ecosystem [118-124].
The current industry focus, according to Reger, is therefore on delivering safe, secure, ultra-low-power architectures while pushing the limits of physics to improve sensing beyond human capability [141-148]. By building reusable “Lego-brick” blocks, NXP aims to provide a scalable hardware foundation for any form-factor, from drones to building-control systems [95-104][149-154]. As AI models become smaller and more efficient, they can be deployed at the edge, fulfilling the promise of democratised AI.
Finally, Reger linked the technical roadmap to global policy ambitions. He noted that leaders such as Prime Minister Modi call for AI to reach everyone, and argued that the answer lies in edge AI embedded in end-devices; data-centres will still exist, but the primary answer is edge AI [155-158]. The talk concluded with an optimistic call to action: by combining secure semiconductor innovation, bio-inspired architectures and common standards, the vision of an anticipatory, automated world can become a reality.
Ladies and gentlemen, I thank our elite panelists who were a part of this ministerial conversation. Her Excellency, Ms. Togo, His Excellency, Nezar Patria, His Excellency, Rafat Hindi, Honorable Ministers from Togo, from Indonesia, and from Egypt, and I thank Ms. Debjani Ghosh for moderating this ministerial conversation. And now I would like to invite Mr. Lars Recher, Executive Vice President and Chief Technology Officer, NXP Semiconductors. As we all know, artificial intelligence runs on chips, and Lars Recher is at the frontier of designing the semiconductors that will power the next generation of edge AI. In cars, in medical devices, in industrial systems. NXP’s work on secure, efficient, real-world AI hardware is essential to everything on the stage.
Ladies and gentlemen, please welcome the Chief Technology Officer of NXP Semiconductors, Mr. Lars Reger.
Namaste. Hello everyone and thanks for having me here. When we are talking about AI, at the moment there is a lot of talk about how do we pump AI in big data centers, how are we energizing these big data centers, but very honestly, there is a lot of questions. What is this AI for? What is this AI at all doing? And if I’m looking at my own lifespan, I’m coming from an analog world, was born in the 1970s. Then there was some heavy digitization in there over the last 20 years, when someone stuffed a laptop into a mobile phone and they called it smartphone. So we had a data display device. We could run topics that were on demand.
So on demand, I need a pizza, I need an Uber, I need to switch on the climate control in my house. And now my Marcom people would say, Lars, we are entering a phase of the world that anticipates and automates. And this little world that anticipates and automates is driving megatrends around us. And these megatrends are unchanged over the last 15 years. We have demographics changes. We have infrastructure upgrades. We have supply chain constraints. We have renewable and we have energy constraints. So out of all of these drivers, what is this modern world that anticipates and automates able to do for us? Well, jumping forward maybe 20 years, how is the cocoon that I’m living in going to look like?
I will have a shelter. I will have my house, and that house is totally barrier-free. That house will check about my health, my wealth, will protect me. I can enter and I can live without touching anything. No one else can do the same and my property is protected very seamlessly. No barriers for me, but maximum safety and security. How will my manufacturing landscape look like? Well, most of the manual tasks are gone. I need better education and I may be the most advanced equipment operator in the world. Look at airplane pilots 70 years ago. They were guys my size. These type of muscles and arms were flying in thunderstorms, real heroes, mechanical pilots.
Today we have more pilots, but they are all genders, shapes and sizes because they are operating flying intelligent robots. So when I come from Germany here to India, a pilot (I’m not a pilot) mechanically has to work for 30 seconds at the end of the runway, pull up the plane, and all the rest is happening already today autonomously. And that’s going to get better in the industrial world. And of course, also in the transportation world. How are cars going to look like in 20 years? Well, they are rolling cocoons, rolling robots, and these cars are rolling living rooms. You have seen this during the COVID pandemics in China, for example. A lot of people use their cars as their office extensions.
Too many people in the house at home, the kids were too noisy. You go to these type of places, so you have a rolling cocoon again that is anticipating and automating what you want to do, what you want to achieve. And what does this all have in common? I mean, most of the people are asking me now, okay, Lars, nice. You are predicting that there is 50 billion of these smart connected robots out there in 10 years from now. But they have so different form factors. What does that mean? Well, simple answer. They have all the same ingredients. Each of these little robots has to sense its environment. So what’s happening around me? Has to connect to the cloud to get the data.
Lars wants to drive from here to Mumbai, how is the traffic situation? Getting the information from the web. And then you start thinking of a smart advice. This is where AI comes into the play. At that moment, you have to think of what is my best advice to the arms and legs of my robot. And whether these arms and legs are an automotive powertrain and a steering wheel, is a manufacturing arm, or is the wireless connection to my climate control from my smart thermostat, I don’t care. Sense, think, connect, act are the ingredients for every of these 50 billion robots. Now, the only thing is, that all is nothing if you cannot trust. Because if your fridge starts ordering 500 liters of milk for the next weekend, you go shopping alone. If your car does erratic driving, you start driving manually again. And if your thermostat sets your house on 50 degrees centigrade and your flowers are dried out and your cat is dead, you go organizing it all manually again. So trust is the essence. And how does a nerd like me define trust? That’s very simple. This is functional safety, like in automotive: make sure that your braking system never ever fails, and make sure that your connected device, your car or whatsoever, is never ever being hacked. Then you can trust your device; you can be sure that it doesn’t turn against you. So these underlying levels: make sure that you are energy efficient, because otherwise you cannot be battery powered; make sure that you are trustworthy, so safe and secure; and then make sure that you can sense, think, connect, and act.
And you can build every robot in the world. And that is, of course, interesting for a semiconductor maker because for us, volume matters in these semiconductor chips. Now, you will ask me, but Lars, we have so long already these discussions on autonomous vehicles. In 2018, the entire press community thought, in 2020, my kids are going to the kindergarten without a steering wheel and without me, autonomously. That did not happen. Why? Because we have designed these robots wrong. And how do you design the robots right? Well, try to copy from nature. That normally works. And here on stage is a 90-kilo bag of water with a couple of bones, or in other words, a biological robot. And that robot has a certain architecture.
That robot has different layers. That robot has a real-time system, highly functional safety, and that is my spine. And if I stumble, the reflexes in my spine tell me already: straighten your leg. In real time, highly undisturbed, very, very fast. No AI, not big AI, very deterministic system. Then I have in green my cerebellum that is working also in a highly functional, safe environment for heartbeat, stomach control, stability control. I can stand here and stand in a stable way because only the blue part is trying to find out what is the next sentence that I’m firing towards you. And green and orange are working to manage the infrastructure in a functional, safe way that is standing here in front of you.
So why don’t we copy these approaches into vehicles, into cars, into houses, into planes again? Well, there are simple architectural constraints and building mechanisms. There are building blocks that we need and we need to scale. But how big does the AI really have to be? So that AI in these systems can be comparatively tiny. If you’re talking about transportation robots and how these transportation robots should look like, well, look at intelligent transportation robots, insects, for example. These insects have 100,000 neurons and an ant is already a very, very nice, very sexy transportation device. It’s not as intelligent as a human being with 90 billion neurons, but for most of the tasks, it is also not needed in this way.
And Ashwini Vaishnaw said it very nicely in Davos. 80 % of the AI tasks around us will be on very tiny, efficient, and very, very tailor-made models at the famous edge, so in the end devices. And this is what we are designing for. So in other words, NXP is trying to build all these Lego blocks where you can start scaling small, medium, and large devices. You have these devices here. This is, for example, sorry, very small devices. This is a complete drone control unit that also flies with AI, artificial intelligence, and reaches targets, not only remote control, but is operating the entire drone and is finding via the camera its way.
What I have here is an India-made AI accelerator from Kinara in Hyderabad that NXP has acquired. This is carrying 10 billion parameters in a large language model. So it is not as big as ChatGPT. But the combination of those two systems carries a large language model and operates an intelligent system at the edge for a power consumption of 7 watts. So in other words, you can build these type of plug-in combinations and you have a system, for example, at a computer tomograph that is taking my entire X-ray pictures and is writing the doctor’s report, that is operating at my fridge and tells me how many bottles of milk are missing. You do not have to have it always on and always operational, so these seven watts are only consumed the moment where the fridge tries to find out what is missing, and then you can go to sleep again. That is the answer for this global quest of how many nuclear power plants do we need when we send one question to ChatGPT, and that is what the edge is going to solve for all of us. But beyond all of that, we are always talking about AI and the brain structure of these robots. Most of the cars that have created fatalities in the last 10 years, these autonomous vehicles, didn’t create these fatalities on the roads because they had a bug in the brain structure. They created these issues because they were more short-sighted than I. So wouldn’t it be great to have these robots with superhero senses? Wouldn’t it be great if these robots out there would have telepathic capabilities?
I do not need to touch anything, but the stuff around me is arranging, like Dumbledore. One move of the magic wand, everything is arranged. Wouldn’t it be great to know what is ahead of your line of sight, like Yoda, telepathy? Wouldn’t it be great to have X-ray vision like Superman? You look in rain, in snow, and in fog what is around you. Wouldn’t it be great for the very old ones amongst us to be like in The Hitchhiker’s Guide to the Galaxy? You have one Babel fish that you plug into your ear and you understand the entire universe, every language that is spoken. A German can understand Hindi without a big barrier in between. And wouldn’t it be great if our robots would have better ears than Daredevil or an owl in real life and would be able to hear what is being spoken out there in the outer ranks?
If we would have that, then the driving robot that replaces Lars is way better than Lars the driver himself. But I am the entry ticket for driving 250 kilometers an hour on the left lane of a German highway with my car. Now, you think we cannot have that for our robots next to this little bit of AI that we need? Well, let me tell you, we have it already. We have ultra wideband technology that is opening gates and car keys from my watch to everything around me. I have car to car communication over more than one mile of distance in three milliseconds. I can immediately tell the device there is an ambulance rushing into the crossroads, switch the traffic lights to green for that ambulance and to red for me.
Telepathy. We have radar systems over 300 meters that see two persons sitting like you next to each other. And we can detect them in rain and in snow and in fog. We have meta standards. So the English for smart connected devices. There is a common language in place and all devices are talking to each other. The home gateway is talking to the window blinds, is talking to the solar cell management. This is the entry ticket for this democratization of AI functionality and for the entrance of these tiny devices here with a little bit of AI, a lot of functional, safe and secure architectures to building the right devices. And what we have done with a little bit of AI and a couple of microphones in cars, we can take the in-car microphones and tune the sound in a way that we hear a bicycle bell behind the cars.
And we can easily detect whether there is vulnerable road users, for example, behind the cars. We can do this in any other settings as well. But automotive is there a very nice one. So in other words, where are we at? At the moment, when I’m talking to my fellow nerds and the semiconductor researchers, it is not about AI alone. It is how you can build systems that you absolutely can trust, how can you go low power, and then the key question: how big does the brain have to be? And the answer is somewhere between a hundred thousand and a hundred billion neurons. Beyond all of that, there is very, very interesting questions that we have to solve, and where India is deeply with the Europeans in research and in the activities: how do we make the wiring harnesses, how do we battery operate all of that, how we are sensing in the right way, how do we think. So all of these separate topics. And to not make it too nerdy and too complex: all of these silicons here are driving then these form factors. A lot of people are only talking about humanoids, and sorry to say, humanoids are the tiny fraction of robots. Because why should a robot look like a human being? I mean, that only makes sense in a very human environment, climbing stairs or whatsoever.
Otherwise, you have robots that are looking like ultrasonic devices, that are looking like infant monitor devices on neonatology stations in hospitals. There is no need to look like me. But all of this, we are equipping already with silicon in the hundred thousands today, and the ingredients are always the same. And just to get this pitch here down on the runway, to say it in the drone language, what do you need to do? What do we need to work on? What does the industry do at the moment? Well, in a very simple way, we are working on safe and secure architectures that are ultra low power, ultra energy efficient. Again, otherwise, this dream of 50 billion smart connected devices will not work, because these 50 billion smart connected devices need three times the energy that Mother Earth can provide. So that is the absolute must for these markets to come into play. Then what we need to work on is we have to push the boundaries, the envelope of physics, and we are doing. We are sensing better than human beings in the meantime. And then what we need to do is just a simple game that semiconductors have done since 50 years now. We need to scale in the right way. So we need to build these little Lego bricks and say, okay, here is a complete drone control unit that you can fly autonomously. You want to fly with large language models and very, very smart AI slalom between the trees.
Plug this little dongle in, and you have everything on board that you can do. And the same you can do for building control systems with manuals. You can do this for any form factor that you like. And that is what we are doing at the moment, while the AI models are getting much, much more efficient, smaller, and we carry them here. So my pitch is, when PM Modi says he wants to bring AI to everyone, this is the answer. The answer is not data centers. They will exist. But the democratization of AI and equipping everyone in Togo, as we heard earlier, or in India, or in Germany, with the right levels of AI that creates the world that anticipates and automates, the answer lies at the famous edge in the end device.
Thank you.
“The keynote speaker is Lars Reger, CTO of NXP Semiconductors”
The knowledge base identifies the speaker as Lars Reger, confirming the name though it does not specify his CTO role or affiliation with NXP [S5].
“Lars Reger questioned the prevailing focus on feeding ever‑larger AI models into massive data‑centres, asking ‘What is this AI for?’ and ‘What is this AI actually doing?’”
The transcript excerpt shows Reger explicitly raising those questions about AI in big data-centres [S5].
“For a semiconductor maker, volume matters – the ability to produce chips at scale is essential for realising billions of such devices”
Reger states that “volume matters in these semiconductor chips,” directly supporting the claim [S4].
“NXP is positioned at the forefront of designing semiconductors that will power the next generation of edge AI for applications ranging from cars to medical devices and industrial systems”
While the knowledge base does not mention NXP specifically, it highlights a broader industry focus on delivering power-efficient computational capacity for diverse AI applications, which underpins the importance of edge-AI chip design [S32].
“Long‑standing megatrends such as renewable‑energy integration and overall energy limits are shaping society and AI deployment”
A separate source emphasizes the critical role of energy infrastructure in economic and technological progress, adding nuance to the reported megatrends [S39].
The discussion shows a clear convergence between the opening remarks and the keynote on the central role of semiconductor technology for AI, especially for edge deployment. Both speakers align on the necessity of secure, efficient chips to enable future autonomous homes, vehicles, and billions of connected robots. Beyond this hardware focus, there is limited overlap on broader social, policy, or ethical dimensions, indicating a moderate level of consensus that is primarily technical in nature.
Moderate consensus on the technical foundation (semiconductors, edge AI, security, energy efficiency) with limited agreement on wider societal or policy issues, suggesting that future dialogue should broaden to incorporate those aspects.
The discussion shows a high degree of alignment rather than conflict. Speaker 1’s introductory framing and Lars Reger’s extensive keynote converge on the same core messages: AI’s reliance on semiconductor technology, the need for ultra‑low‑power, secure edge chips, and the vision of billions of connected robots that must be trustworthy. No opposing viewpoints or substantive debates emerge between the participants.
Minimal – the speakers largely agree on goals and the technical pathway, implying a cohesive narrative that reinforces NXP’s strategic positioning and the broader agenda of democratizing AI at the edge.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
