Keynote-Surya Ganguli

19 Feb 2026 16:30h - 16:45h

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Speaker 1 introducing Professor Surya Ganguli of Stanford, whose research bridges AI, neuroscience, and physics to build theoretical foundations for intelligence [2-5]. Ganguli noted that while the past decade has produced transformative AI systems, our understanding of how they operate remains minimal, and the brain still outperforms machines on many fronts [14-16]. He outlined a unified science of intelligence that targets three pillars: data efficiency, energy efficiency, and integration of brains with machines [17-20].


Regarding data efficiency, he explained that modern AI requires orders of magnitude more language exposure than humans and follows a slow power-law scaling that his team recently derived from first principles, matching experimental results [20-23]. By identifying redundancy in large datasets and selecting non-redundant training examples, his group demonstrated a shift from the slow power law to a much faster exponential decay in error [24-28]. He also showed that evolutionary design of robot morphologies can accelerate learning, providing empirical support for the morphological Baldwin effect [29-36].


On energy efficiency, Ganguli contrasted AI’s megawatt consumption with the brain’s 20-watt operation, attributing the gap to reliance on fast, reliable digital bit flips versus biology’s use of slow, unreliable steps that co-design computation with physical laws [38-46]. His work identified fundamental limits for chemical sensing, revealing that optimal chemical computers resemble G-protein-coupled receptors and linking neuronal function to physical sensing mechanisms [47-56]. Further analysis indicated that the brain operates like a smart energy grid, predicting and delivering energy precisely where and when needed [60-64].


To bridge the gap, he proposed quantum neuromorphic computing, replacing neurons with atoms and synapses with photons, enabling quantum Hopfield memories and photonic optimizers with superior capacity and robustness [65-78]. He illustrated the potential of melding brains and machines through digital twins, citing a highly accurate retinal twin that reproduced decades of experiments in days, and mouse-brain models that could read and write visual perception, even inducing controlled hallucinations [78-86]. Extending this approach, his team built a digital twin of an epileptic brain, used explainable AI and control theory to modulate seizure amplitude, and launched a startup, Metamorphic, in partnership with Stanford’s Enigma project to scale twins to the primate brain [99-110].


Ganguli concluded that an open, interdisciplinary science of intelligence is essential for creating more efficient, explainable AI and for advancing treatments of brain disorders, urging greater public investment in academic research [111-115]. The discussion closed with appreciation for Professor Ganguli’s contributions and a reaffirmation of the importance of collaborative, transparent research in shaping the future of intelligence [116-117].


Keypoints


Major discussion points


A unified science of intelligence spanning brains and machines – Ganguli frames his talk around three pillars (data efficiency, energy efficiency, and brain-machine melding) and repeatedly stresses the need for a common theoretical framework that can explain both biological and artificial cognition [17-20][111-114].


Improving data efficiency in AI – He explains why modern AI systems are extremely data-hungry, describes neural scaling laws, presents his group’s first-principles theory that predicts their shallow power-law slope, and shows how selecting non-redundant training data can bend the curve toward a much faster exponential decay [20-28][29-36].


Closing the energy-efficiency gap – By contrasting the brain’s ~20 W power budget with AI’s megawatt consumption, he attributes the gap to digital bit-flipping, highlights how biology co-designs computation with physics (e.g., using Maxwell’s equations), and outlines recent work on fundamental limits of chemical sensing and quantum-neuromorphic hardware [38-49][50-66].


Melding brains and machines through digital twins – He showcases concrete examples: a high-fidelity digital twin of the retina, AI-driven decoding and “hallucination” in mice, and a controllable digital twin of epileptic brain dynamics that was used to modulate seizures in vivo; these efforts are being commercialized via the startup Metamorphic [78-86][99-108].


Call for open, academic-driven research and public investment – The talk concludes with a plea to expand public funding for an open, interdisciplinary science of intelligence, warning that most breakthroughs today occur behind corporate walls and urging academia to lead the next wave [112-116].


Overall purpose / goal


The presentation aims to persuade the audience that advancing AI responsibly requires a holistic, open scientific approach that unites insights from neuroscience, physics, and computer science. By highlighting recent theoretical and experimental breakthroughs in data and energy efficiency, as well as practical brain-machine integration, Ganguli argues for greater public and academic support to build a shared foundation for future intelligent systems.


Overall tone and its evolution


– The talk opens with a light-hearted, informal tone (“there’s going to be an exam at the end”) [11-13].


– It quickly shifts to a technical and authoritative tone, delivering dense scientific content on scaling laws, energy limits, and quantum neuromorphic concepts [20-66].


– As the discussion moves to brain-machine integration, the tone becomes optimistic and visionary, emphasizing transformative applications (digital twins, controlling perception, treating epilepsy) [78-106].


– The closing segment adopts a persuasive, advocacy-driven tone, urging open collaboration and increased public funding [112-116].


Overall, the tone remains enthusiastic and forward-looking, but it transitions from playful introduction to rigorous exposition, then to hopeful vision, and finally to a rallying call for collective action.


Speakers

Surya Ganguli


– Role/Title: Professor of AI, Neuroscience and Physics, Stanford University


– Areas of Expertise: Artificial Intelligence, Neuroscience, Physics, Unified Science of Intelligence


– Affiliation: Stanford University [S2]


Speaker 1


– Role/Title: Moderator / Host (introducing the keynote speaker)


– Areas of Expertise:


Additional speakers:


(none)


Full session report: Comprehensive analysis and detailed insights

Speaker 1 opened the session by thanking the audience and formally introducing Professor Surya Ganguli, a Stanford professor whose work sits at the intersection of artificial intelligence, neuroscience, and physics. Ganguli then set a playful tone, noting the change of pace from world leaders and VCs to a professor and quipping that, because he is a professor, “there’s going to be an exam at the end” [1-2].


Unified Science of Intelligence – Three Pillars


Professor Ganguli framed his presentation around a unified science of intelligence that simultaneously addresses biological brains and engineered machines. He identified three inter-related pillars (data efficiency, energy efficiency, and brain-machine melding) as the core challenges for creating more efficient, explainable, and powerful AI systems, and urged the community to pursue an open, interdisciplinary approach with long-term horizons and public support [13-20][111-114].


Data-efficiency. He highlighted the stark contrast between human and machine language exposure: humans acquire roughly 100 million words of language experience, whereas modern AI systems ingest about 10 trillion words, a volume that would take a human 240,000 years to read [14-15]. AI error rates decline only slowly with data, following a power-law scaling observed for over half a decade but lacking a solid theoretical basis [20-21]. Ganguli’s team recently derived a first-principles theory that predicts the shallow slope of this neural scaling law by linking it to the weak surface statistical structure of natural language; in their results, the theory closely matches experiments on modern large language models [22-23]. By recognizing that large random datasets contain extensive redundancy, they devised algorithms that select non-redundant training examples, each contributing novel information; this bends the original power-law decay into a much faster exponential drop in error [24-28]. In a separate line of inquiry, they showed that evolutionary design of robot morphologies, allowing bodies to evolve across generations, produces forms that are easier to control, thereby speeding up learning. This provides, in simulation, the first concrete demonstration of the long-conjectured morphological Baldwin effect [29-36].
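As an illustrative aside (an editorial addition, not material from the talk), the qualitative gap between power-law and exponential error scaling can be sketched numerically; the exponents below are arbitrary stand-ins, since the talk did not give specific constants:

```python
import math

# Illustrative sketch only: test error under a slow power law
# E(n) ~ n**(-alpha), typical of training on random data, versus the
# exponential decay E(n) ~ exp(-c*n) reported when redundant examples
# are pruned. alpha and c are made-up constants.

def power_law_error(n, alpha=0.1):
    """Slow power-law decay of error with dataset size n."""
    return n ** -alpha

def exponential_error(n, c=0.001):
    """Faster exponential decay from curated, non-redundant data."""
    return math.exp(-c * n)

for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: power-law {power_law_error(n):.4f}, "
          f"exponential {exponential_error(n):.2e}")
```

At large dataset sizes the exponential curve falls many orders of magnitude below the power law, which is why bending the scaling law matters so much for data budgets.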


Energy-efficiency. He contrasted the brain’s modest 20-watt power budget with modern AI systems that can require up to 10 million watts, attributing the gap to the reliance on fast, reliable digital bit-flips, which thermodynamics dictates must consume substantial energy [38-40][41-43]. Biology, by contrast, achieves efficiency through slow, unreliable intermediate steps and by co-designing computation with the underlying physics of the universe, for example using Maxwell’s equations directly for addition rather than energy-intensive transistor circuits [44-48]. He argued that bridging the energy gap demands a complete redesign of the technology stack, from electrons to algorithms, to match computational dynamics with physical dynamics [49-51]. His group recently solved the fundamental limits of chemical sensing under energy constraints, identifying a lower bound on achievable error for any chemical computer and characterising the family of optimal sensors that attain this bound; remarkably, these optimal chemical computers behave like G-protein-coupled receptors, linking neuronal function to optimal physical sensing mechanisms [52-56]. Further experiments measuring both neural activity and ATP consumption across the entire fly brain revealed that the brain operates like a smart energy grid, predicting future energy demand and delivering power precisely where and when needed [57-64].
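To make the thermodynamic argument concrete (an editorial sketch, not a calculation from the talk), Landauer’s principle sets the minimum energy for erasing one bit; comparing that floor to a rough, assumed figure for a practical digital bit flip shows the headroom the talk alludes to:

```python
import math

# Back-of-envelope check: Landauer's principle says erasing one bit at
# temperature T costs at least k_B * T * ln(2) joules. The ~1e-15 J
# figure for a practical digital bit flip below is an illustrative
# order-of-magnitude assumption, not a number from the talk.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_kelvin=300.0):
    """Minimum energy to erase one bit at the given temperature."""
    return K_B * temperature_kelvin * math.log(2)

floor = landauer_limit_joules()
print(f"Landauer floor at 300 K: {floor:.2e} J per bit")
print(f"Headroom vs an assumed ~1e-15 J/flip: ~{1e-15 / floor:.0e}x")
```

The point is not the exact numbers but the ratio: practical digital switching sits orders of magnitude above the thermodynamic floor, which is the slack that slow, unreliable, physics-matched computation could in principle exploit.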


Brain-machine melding. To move beyond the limits of evolution, Ganguli proposed quantum neuromorphic computing, wherein individual neurons are replaced by atoms whose firing states correspond to electronic excitations, and synapses are replaced by photons that mediate communication via emission and absorption [65-70]. This architecture enables the construction of a quantum Hopfield associative memory, a quantum analogue of the classic network that earned John Hopfield the Nobel Prize in Physics, offering superior capacity, robustness, and recall [71-75]. He also described photonic optimisers, fully optical computers that solve optimisation problems with novel energy-landscape dynamics. The convergence of neural algorithms with quantum hardware inaugurates a new field, quantum neuromorphic computing, that could surpass the capabilities of biologically evolved systems [76-78].
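For readers unfamiliar with Hopfield networks, here is a minimal classical (non-quantum) associative memory; this is an editorial sketch of the textbook model, not the atom-and-photon system described in the talk:

```python
import numpy as np

# Minimal classical Hopfield associative memory. Patterns are stored in
# a Hebbian weight matrix; a corrupted cue is iterated until it settles
# on a stored memory.

def train_hopfield(patterns):
    """Hebbian outer-product rule; patterns are +/-1 row vectors."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w

def recall(w, cue, steps=10):
    """Synchronous sign updates until the state stops changing."""
    state = cue.copy()
    for _ in range(steps):
        new = np.sign(w @ state)
        new[new == 0] = 1.0
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))  # 3 random memories
w = train_hopfield(patterns)
cue = patterns[0].copy()
cue[:8] *= -1                                     # corrupt 8 of 64 bits
print("recovered:", np.array_equal(recall(w, cue), patterns[0]))
```

The quantum version described in the talk replaces these binary neurons with atomic excitations and the weight matrix with photon-mediated couplings; this sketch only conveys the associative-recall behaviour being generalised.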


Melding Brains & Machines – Digital Twins


Ganguli illustrated practical potential through several digital-twin projects. A high-fidelity twin of the biological retina reproduced two decades of experimental results in a matter of days, dramatically accelerating neuroscience discovery [78-80]. In mice, AI decoded visual neural activity to reconstruct the animal’s perceived image at the resolution of its visual system, and, by injecting carefully designed neural patterns, induced specific perceptual hallucinations, effectively “writing” to the mouse’s mind [81-86]. Extending this approach to pathology, his team built a digital twin of an epileptic brain that faithfully reproduced seizure dynamics across the whole brain. Using explainable AI to pinpoint seizure origins and control theory to modulate amplitude, they successfully transferred the control signals from the twin to the living brain, thereby regulating seizure intensity in vivo [99-105]. These breakthroughs are being commercialised through a new startup, Metamorphic, which will work with Stanford’s Enigma project to scale digital twins from the visual cortex to the entire primate brain [106-110].
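The read-model-control loop behind these digital-twin results can be caricatured in a few lines (an editorial toy example with made-up linear dynamics; the actual work uses far richer models of neural activity):

```python
import numpy as np

# Toy "digital twin + control" loop. We pretend seizure-like activity
# follows unknown linear dynamics x_{t+1} = A x_t, fit A from recorded
# trajectories (the "twin"), design a damping feedback on the twin,
# then apply the same feedback to the "real" system. A_true is invented.

rng = np.random.default_rng(1)
A_true = np.array([[1.0, 0.2], [-0.2, 1.0]])  # slowly growing oscillation

# 1) "Record" a noisy trajectory from the real system.
X = [np.array([1.0, 0.0])]
for _ in range(50):
    X.append(A_true @ X[-1] + 0.01 * rng.standard_normal(2))
X = np.array(X)

# 2) Fit the twin: least-squares estimate of A from x_{t+1} ~ A x_t.
A_fit = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

# 3) Naive controller designed on the twin: choose feedback K so the
#    closed-loop dynamics A_fit - K = 0.5*I are stable (damped).
K = A_fit - 0.5 * np.eye(2)

# 4) Apply the same controller to the real system: x_{t+1} = (A - K) x_t.
x = np.array([1.0, 0.0])
for _ in range(50):
    x = (A_true - K) @ x
print("uncontrolled |x|:", np.linalg.norm(X[-1]))
print("controlled |x|:  ", np.linalg.norm(x))
```

The transfer step (4) is the crux: a controller designed entirely in silico on the fitted model still damps the real dynamics, because the twin is accurate enough to stand in for the system it models.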


Call for Open, Interdisciplinary Research


In his concluding remarks, Ganguli stressed that advancing AI responsibly requires a unified, open science of intelligence that spans both brains and machines. He argued that academic research, publicly funded and freely shared, is essential because past academic work underpins today’s AI breakthroughs and will shape tomorrow’s technologies, warning that most current advances occur behind corporate walls and that an open, interdisciplinary approach will maximise societal benefit [111-115].


Speaker 1 closed the session by expressing gratitude to Professor Ganguli for his contributions [117].


Session transcript: Complete transcript of the session
Speaker 1

also for contributing your expertise to this summit. Ladies and gentlemen, I now take this opportunity to invite Professor Surya Ganguli, Professor of AI, Neuroscience and Physics, Stanford University. Professor Ganguli’s research sits at one of the most intellectually fertile intersections in science today. Using the mathematics of physics and the insights of neuroscience to understand how intelligence, biological and artificial intelligence, actually works. His work is helping build the theoretical foundations that practice so urgently needs. Please welcome Professor Surya Ganguli from Stanford University.

Surya Ganguli

Thank you. Great, we got the slides. So we went from a world leader to a VC to now a professor now. So we have a little bit of a change of pace. It’s going to get a little bit more technical. And because I’m a professor, there’s going to be an exam at the end. All right, so pay attention. All right, so I’m going to talk about advancing the science and engineering of intelligence. So, the last decade of AI research has led to stunning advances in the engineering of intelligence, yielding AI systems that stand poised to transform our society. Yet, alarmingly, we understand almost nothing about how they work, and we desperately need to. At the same time, our brain is the product of 500 million years of vertebrate brain evolution, and it is still orders of magnitude better than AI along several axes, and we also need to understand why.

So, I work in a unified science of intelligence across both brains and machines that seeks to both understand biological and artificial intelligence and create more efficient, explainable, and powerful AI. Today, I’ll work on understanding and improving intelligence along three lines: data efficiency, energy efficiency, and melding brains and machines. First, data efficiency. So, um, AI is vastly more data hungry than humans. We get about 100 million words of language experience; AI gets 10 trillion. It would take us 240,000 years to read everything that AI read. So why is AI so data hungry? Well, in AI, error falls off as a power law. It falls off very slowly as a power law with the amount of data. This is an example of a famous neural scaling law, which captured the imagination of industry and motivated significant societal investments in data collection, compute, and energy. But despite the importance of these neural scaling laws, discovered over half a decade ago, we lack any scientific theory for why they exist for any modern large language model, and why they are so slow. Just last week, we posted the first theory to do so. From first principles, we could analytically predict the slope of these neural scaling laws and reconnected their shallow slope to the weak surface statistical structure of natural language itself.

The black line is our theory and the colored lines are experiments in modern LLMs. You can see there’s a good match. But can we make the scaling laws better? We actually can. We actually showed, both in theory and practice, that we can bend the slow power law down to a much faster exponential drop. The key idea is that large random data sets are extremely redundant. If you already have a billion random sentences, it’s unlikely that the next sentence is going to tell you very much that’s new. But what if you could find a non-redundant training set in which each new data point is carefully chosen to tell you something new compared to all the other data points?

We developed theory and algorithms to do just this, and that’s what got us the better exponential. In a completely different line of work, we asked if the process of evolution itself could speed up learning. And we showed it actually can. We evolved robot morphologies, shapes of bodies, from generation to generation. And we showed that successive generations could learn faster. They did so by designing the body to be easier to learn to control. This is an example of something called the morphological Baldwin effect. It’s an effect that has long been conjectured in evolutionary theory, but hard to test in the real world. We demonstrated it for the first time in our simulations. Okay, let’s go on to energy efficiency.

AI is vastly more energy hungry than humans. Our brain only spends 20 watts of power, but modern AI can consume 10 million watts. So why is AI so energy hungry? Well, the fault lies in the choice of digital computation itself, where we use very fast and reliable bit flips at every intermediate step of the computation. Now the laws of thermodynamics demand that every fast and reliable bit flip must consume a lot of energy. Biology chose a very different route. It gets the right answer just in time using the slowest, most unreliable intermediate steps possible. Biology does not rev its engine any more than it needs to. It also co-designs computation and physics much better. For example, it directly uses Maxwell’s equations of electromagnetism to do addition, instead of using complex energy-hungry transistor circuits.

So biology matches its computation directly to the native physics of the universe. So to bridge the vast energy gap between brains and machines, we need to rethink our entire technology stack, from electrons to algorithms, and optimally match computational dynamics to physical dynamics. For example, given a particular computation, what are the fundamental limits on its speed and accuracy under energy constraints? We recently solved this question for the computation of sensing, which every cell has to do. We found fundamental limits on the lowest achievable error achieved by any chemical computer whatsoever. That’s the red curve. And we also found the family of optimal computers that hug this curve. And we showed, remarkably, that these optimal chemical computers behave a lot like something called G-protein coupled receptors, which hide in every single cell, and they do sensing.

So this yields a connection between what neurons do and what optimal physical sensors would do. Popping up a level, in neuroscience, we can now measure not only neural activity, but also energy consumption in the form of ATP usage, the fundamental chemical fuel that powers all life’s processes. We can do this across the entire fly brain. So by analyzing the coupled dynamics of neural computation and energy consumption, we discovered that the brain actually works like a smart energy grid, remarkably. The brain can predict where and when energy will be needed in the future, and it produces just the right amount of energy at just the right time, at just the right location.

So in summary, we still have a lot to learn from evolution in our quest to build more energy-efficient AI, but we don’t have to be limited by evolution. We can go beyond evolution to instantiate neural algorithms in quantum hardware that evolution could not discover. For example, we can replace individual neurons with individual atoms. Neurons in different states of firing correspond to atoms in different excited electronic states. We can also replace individual synapses between neurons with photons, quanta of light. Just as synapses allow two neurons to communicate, photons allow electronic states of atoms to communicate through photon emission and absorption. So what can we build with this?

As one example, we could build a Hopfield associative memory network. This is the same network that recently won John Hopfield the Nobel Prize in physics. But this is a quantum version this time that can be built with atoms and photons. And we can show that the quantum dynamics endows the memory with superior capacity, robustness, and recall. We can also go beyond this to build quantum optimizers made entirely out of photons. These photonic computers solve optimization problems in interesting new ways, and we can analyze their energy landscape. So the marriage of neurons and neural algorithms with quantum hardware leads to an entirely new field that I like to call quantum neuromorphic computing. Okay, now returning to the brain. The marriage of neuroscience and AI enables a powerful new path forward by melding minds and machines, as follows. Imagine a scenario where we read lots and lots of neural activity from the brain. Then we use AI to build a model, or a digital twin, of brain circuits. Then we can do rapid in silico experiments on the digital twin and use explainable AI to understand how it works. But we don’t have to stop there. We can control the brain too. We can use control theory to learn specific neural patterns that we can write into the digital twin to control it. Then we can transfer these same neural patterns into the actual brain to write into the brain and control the brain. In essence, we can learn the language of the brain and then speak directly back to it in its own neural language. So, as one example of this program, we recently developed the world’s most accurate digital twin of the biological retina, and we used explainable AI to understand it.

And in silico, we could reproduce two decades’ worth of experiments in a matter of days. So this shows a general path forward to dramatically accelerating neuroscience discovery using AI. We also carried out this program in mice, where we were able to use AI to read the mind of a mouse. We could look directly at neural activity in the brain of a mouse, and we could decode what it was seeing at the lower level of resolution that mice can see. This shows that we can learn the native language of the visual brain. But we can go further than that to write to the mind of a mouse. By writing in carefully designed neural activity patterns, we could make the mouse hallucinate a particular percept.

In fact, we could control the mouse brain’s soul. We could even tell it to do this. So in essence, we could control what the mouse saw by writing directly into the brain using the native language of its brain itself.

We also applied this to epilepsy. Sorry, we also carried out this program in epilepsy where we built a digital twin of the epileptic brain. Our twin could reproduce actual epileptic seizure dynamics across the entire brain. We then used explainable AI to understand how these seizures were starting. Then we used control theory to be able to control the seizure amplitude in the digital twin. Then we injected these same control signals into the actual brain and controlled seizure amplitude in the actual brain. This shows how to meld brains and machines to control epilepsy. Building on all this, we’re actually creating a new startup called Metamorphic.

It will work closely with the Enigma project at Stanford University, and together Enigma and Metamorphic will scale up the construction of digital twins to encompass the entire primate brain, starting with the visual brain. Such scaled-up digital twins offer a powerful path forward to building robust biohybrid AI systems that are taught directly by brain data and to treat brain disease in new AI-driven ways. More generally, the possibilities of melding brains and machines are limitless, both to advance AI and to understand, cure, and augment the brain. To close, what I think we really need is a unified science of intelligence that spans both brains and machines to help us understand both biological and artificial intelligence and create more efficient, explainable, and powerful AI.

Importantly, this pursuit must be done out in the open and shared with the world, and it must be done with a long time horizon. This makes academia an ideal place to pursue a science of intelligence, and I believe it’s imperative to expand public investment in the academic study of intelligence, because the academic studies of yesterday laid the strong foundation for today’s AI technology, and it will be the academic studies of today that lay the foundation for tomorrow’s technology, enabling us to go beyond large language models and diffusion models and so forth. Despite the huge and exciting advances happening now, increasingly, unfortunately, behind closed doors at companies, I’m extremely excited about what the science of intelligence can achieve out in the open for the public benefit of all.

Thank you.

Speaker 1

Thank you so much, Professor Ganguli.

Related Resources: Knowledge base sources related to the discussion topics (30)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high confidence)

“Speaker 1 introduced Professor Surya Ganguly, a Stanford professor, and provided only an introductory opening before Ganguli’s presentation.”

The knowledge base notes that Speaker 1 only provides an introduction while Professor Ganguli presents his research, confirming the introductory role described in the report [S1].

Confirmed (high confidence)

“The brain’s power budget is about 20 watts, whereas modern AI systems can require up to 10 million watts, a gap attributed to the use of fast, reliable digital bit‑flips which consume substantial energy.”

The source explicitly states that the brain uses ~20 W and modern AI can consume ~10 million W, and attributes the high consumption to the choice of fast, reliable digital computation [S2].

Additional Context (medium confidence)

“The transcript represents a single academic presentation by Professor Surya Ganguli rather than a multi‑speaker discussion or debate.”

The knowledge base clarifies that the document is a single-speaker keynote, providing context that the report’s format is a presentation, not a multi-person dialogue [S1].

External Sources (71)
S1
Keynote-Surya Ganguli — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be a moderator or host introducing t…
S2
https://dig.watch/event/india-ai-impact-summit-2026/keynote-surya-ganguli — also for contributing your expertise to this summit. Ladies and gentlemen, I now take this opportunity to invite Profess…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
UN Human Rights Council: High level discussion on AI and human rights — So I think when Doreen has spoken so eloquently, speaks about the digital divide, we need to be aware that it’s not just…
S7
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — to be with us, so thank you. We are here because we believe in AI’s transformative potential, and I’m certain you’ve hea…
S8
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — AI is compressing discovery timelines and reducing development risk. And therefore, I believe that the next frontier is …
S9
The Foundation of AI Democratizing Compute Data Infrastructure — Okay, so first of all, I think the computing requirements for training modern AI systems is temporary. It’s temporary be…
S10
Is AI the key to nuclear renaissance? — There is a direct correlation between the exponential increase in model parameters and the increase in the computational…
S11
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S12
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Helbig suggested that current discussions about massive, power-hungry data centres might represent a similar blind spot….
S13
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — Antonia Gawel:I mean, I think very much a focus on decarbonization of the power sector is a critical input and a signifi…
S14
Part 7: ‘Converging realities: Embedding governance through digital twins’ — These are not hypothetical questions; they point to a growing gap between how governance is implemented through technolo…
S15
Top digital policy developments in 2019: A year in review — At the intersection of technology and biology,Neuralink’s work on brain-machine interfacessparked the imagination of man…
S16
Part 1: An introduction to digital twins — The mystery is kept alive through the way we talk about AI. Wereawaken ancient patterns of storytelling. We speak of sys…
S17
Bridging the Digital Divide: Achieving Universal and Meaningful Connectivity (ITU) — The analysis argues for a multi-stakeholder approach in policy-making to effectively address these issues. It is suggest…
S18
Democratizing AI: Open foundations and shared resources for global impact — Hartley specifically noted the challenge of competing with well-funded private sector initiatives, emphasising the need …
S19
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S20
Global AI Policy Framework: International Cooperation and Historical Perspectives — Despite coming from different backgrounds (diplomatic/legal vs academic), both speakers advocate for patience and carefu…
S21
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S22
Policy Network on Artificial Intelligence | IGF 2023 — Both of these disciplines, both of these empirical starting points need to be able to talk to each other in a meaningful…
S23
Keynote-Surya Ganguli — Ganguly concluded with passionate advocacy for maintaining intelligence research within the academic sphere, emphasizing…
S24
Artificial intelligence (AI) – UN Security Council — The global focus on Artificial Intelligence (AI) capacity-building efforts has been a significant topic of discussion am…
S25
Keynote-Surya Ganguli — Professor Surya Ganguly from Stanford University presented his research on advancing the science and engineering of inte…
S26
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — AI is compressing discovery timelines and reducing development risk. And therefore, I believe that the next frontier is …
S27
https://dig.watch/event/india-ai-impact-summit-2026/keynote-surya-ganguli — So, I work in a unified science of intelligence across both brains and machines that seeks to both understand biological…
S28
The Foundation of AI Democratizing Compute Data Infrastructure — Okay, so first of all, I think the computing requirements for training modern AI systems is temporary. It’s temporary be…
S29
DatologyAI aims to revolutionize dataset curation for enhanced AI model training — Ari Morcos, an industry veteran with nearly a decade of experience in the AI sector, has founded DatologyAI to revolutio…
S30
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S31
Open Forum: Empowering Bytes / DAVOS 2025 — Bilel argues that while AI and data centers consume significant energy, they can also help optimize energy use in other …
S32
Brain-inspired networks boost AI performance and cut energy use — Researchers at the University of Surrey have developed a new method to enhance AI by imitating how the human brain connect…
S33
Global Enterprises Show How to Scale Responsible AI — no and unfortunately even though you said it’s the lightweight question i have to answer it using big words so i think t…
S34
Digital twins gain momentum through AI — AI is accelerating the creation of digital twins by reducing the time and labour required to build complex models. Consult…
S35
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Eliezer Manor:If you regard human intelligence as IQ, yes, the computer is much better than we are. But if you regard th…
S36
What Proliferation of Artificial Intelligence Means for Information Integrity? — Peggy Hicks: Yes, in short order. No, I mean, I think there is a lot that’s happening to address these issues. So, you k…
S37
Democratizing AI: Open foundations and shared resources for global impact — The discussion showcased concrete applications, including medical AI models like Meditron for healthcare applications an…
S38
Any other business /Adoption of the report/ Closure of the session — An acknowledgment followed concerning the multitude of discussions in informal settings, illustrating the commitment of …
S39
Opening of the session — Belarus voices disappointment at an oversight in addressing juvenile crime and ICT crime prevention while the committee …
S40
Opening Ceremony — The transcript reveals surprisingly few direct disagreements among speakers, with most conflicts being implicit or repre…
S41
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S42
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S43
How Humans Sense / Davos 2025 — These key comments shaped the discussion by guiding it from an introduction to the complexity of touch, through detailed…
S44
Discussion Report: Sovereign AI in Defence and National Security — The tone is academic and policy-focused, delivered as an expert briefing with urgency underlying the technical discussio…
S45
Advancing Scientific AI with Safety Ethics and Responsibility — The discussion maintained a collaborative and constructive tone throughout, characterized by technical expertise and pol…
S46
From brainwaves to breakthroughs: The future with brain-machine interfaces — **Major Discussion Points:** The tone is consistently inspirational and optimistic throughout, characterized by enthusi…
S47
AI Meets Agriculture Building Food Security and Climate Resilien — The discussion maintained an optimistic and collaborative tone throughout, characterized by visionary leadership and pra…
S48
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — The discussion maintained an optimistic and collaborative tone throughout, with speakers consistently emphasizing human …
S49
Keynote-Lars Reger — The tone is enthusiastic and visionary throughout, with Reger maintaining an optimistic, forward-looking perspective. He…
S50
AI in education: Leveraging technology for human potential — The tone is consistently optimistic and inspirational throughout, with Mills maintaining an enthusiastic and visionary a…
S51
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S52
Closing remarks – Charting the path forward — The tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looki…
S53
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S54
Using AI to tackle our planet’s most urgent problems — The tone is passionate and advocacy-driven throughout, with the speaker maintaining an urgent, morally-charged perspecti…
S55
Conversation: 01 — Artificial intelligence
S56
LANGUAGE AND DIPLOMACY — Another aspect of the contrast between dialectic and creativity, or between standard arguing and serious joking is that …
S57
TO JOKE OR NOT TO JOKE: A DIPLOMATIC DILEMMA IN THE AGE OF INTERNET — Another aspect of the contrast between dialectic and creativity, or between standard arguing and serious joking is that …
S58
Rhetoric — Professor Peter Serracino Inglott, former rector at the University of Malta and lecturer in philosophy, suggests that hu…
S59
Networking Session #232 Bringing Safety Communities Together a Fishbowl Style Event — Tom Orrell: Thank you. Thank you both. And thank you all for listening to us, and now it’s your turn to also get involve…
S60
Keynote-Jeet Adani — She rises to stabilize, she rises to anchor a world searching for balance and she rises to build systems that are inclus…
S61
MahaAI Building Safe Secure & Smart Governance — Praveen Pardeshi from MITRA provided detailed insights into practical challenges and opportunities of AI implementation …
S62
Regional Leaders Discuss AI-Ready Digital Infrastructure — Arndt Husar emphasizes that digital infrastructure must be addressed through three inter‑linked pillars – Solutions, Sta…
S63
Welcome remarks | 30 May — Disparities exist in access to data, algorithms, computing power, and expertise.
S64
Limits of Rule-Based AI: Learning from the legacy of Douglas Lenat — On 31 August 2023, Douglas Lenat died. For 40 years, he was a prominent promoter of a rule-based approach to AI. Over t…
S65
Beyond the imitation game: GPT-4.5, the Turing Test, and what comes next — In March 2024, OpenAI released GPT-4.5, the latest iteration in its series of large language models (LLMs), pushing the b…
S66
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Tawfik Jelassi: Thank you, Davide. Ladies and gentlemen, colleagues, good morning to all of you and thank you for joinin…
S67
Re-evaluating the scaling hypothesis: The AI industry’s shift towards innovative strategies — In recent years, the AI industry has heavily invested in the ‘scaling hypothesis,’ which posited that by expanding data se…
S68
AI agents offer major value but trust and data gaps remain — AI agents could drive up to $450 billion in economic value by 2028, according to new research by Capgemini. The gains wou…
S69
WS #219 Generative AI Llms in Content Moderation Rights Risks — Marlene Owizniak: And before I open it up to the floor, I just wanted to highlight a few of the key risks that we found,…
S70
Fireside Conversation: 02 — “But LLMs, to some extent, except for a few domains, are mostly information retrieval systems.”[42]. “So what’s, I mean,…
S71
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Another concern addressed was the inherent biases and limitations of large language models trained on skewed web data. T…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Surya Ganguly
16 arguments · 163 words per minute · 2114 words · 775 seconds
Argument 1
AI’s extreme data hunger compared to humans (Surya Ganguly)
EXPLANATION
Ganguly points out that artificial intelligence systems require vastly more language data than humans, citing a disparity of 100 million words of human experience versus 10 trillion words processed by AI, which would take humans 240 000 years to read.
EVIDENCE
He states that AI is “vastly more data hungry than humans” and quantifies the difference by noting humans acquire about 100 million words of language experience while AI consumes roughly 10 trillion, a volume that would require 240 000 years for a human to read [20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote notes that AI consumes roughly 10 trillion words of language data versus about 100 million words acquired by humans, illustrating the massive data hunger of current models [S1].
MAJOR DISCUSSION POINT
Data efficiency
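The arithmetic behind the 240 000-year claim can be checked directly. The sketch below is a back-of-envelope reconstruction; the continuous reading rate of ~80 words per minute is an assumption chosen to make the comparison concrete, not a number from the talk.

```python
# Back-of-envelope check of the data-hunger comparison quoted in the keynote.
WORDS_AI = 10e12        # ~10 trillion words in modern LLM training corpora
WORDS_HUMAN = 100e6     # ~100 million words of lifetime language exposure
READ_RATE_WPM = 80      # assumed continuous reading rate (words/minute), my estimate

minutes_per_year = 60 * 24 * 365
years_to_read = WORDS_AI / (READ_RATE_WPM * minutes_per_year)

print(f"AI/human data ratio: {WORDS_AI / WORDS_HUMAN:,.0f}x")   # 100,000x
print(f"Years to read the AI corpus: {years_to_read:,.0f}")     # ~238,000
```

With this assumed reading rate the answer lands near the 240 000-year figure quoted in the talk, so the claim is internally consistent.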
Argument 2
Theory predicting neural scaling law slope linked to language statistics (Surya Ganguly)
EXPLANATION
He reports that his team derived a first‑principles theory that accurately predicts the shallow slope of neural scaling laws for large language models, linking it to the weak surface statistical structure of natural language.
EVIDENCE
Ganguly explains that they posted “the first theory… to analytically predict the slope of these neural scaling laws and reconnected their shallow slope to the weak surface statistical structure of natural language itself” and shows a good match between theory (black line) and experiments (colored lines) in modern LLMs [20-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ganguly’s first-principles theory that analytically predicts the shallow slope of neural scaling laws for large language models is described in the keynote presentation [S1].
MAJOR DISCUSSION POINT
Data efficiency
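To make the scaling-law claim concrete, here is a minimal toy (my illustration, not the first-principles derivation from the talk): if test error falls as err(N) = a·N^(−α), the curve is a straight line with slope −α on log-log axes, which is what "predicting the slope" of a neural scaling law refers to. The constants a and α below are arbitrary stand-ins.

```python
import math

# Toy scaling law: test error falls as a power law in dataset size N.
a, alpha = 2.0, 0.1                              # small alpha = a "shallow slope"
sizes = [10**k for k in range(6, 13)]            # 1e6 .. 1e12 tokens
errors = [a * n**(-alpha) for n in sizes]

# Recover the slope by least squares on (log N, log err); for a pure power
# law the log-log relationship is exactly linear with slope -alpha.
xs = [math.log(n) for n in sizes]
ys = [math.log(e) for e in errors]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx)**2 for x in xs)
print(f"fitted log-log slope: {slope:.3f}")      # -0.100, i.e. -alpha
```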
Argument 3
Non‑redundant training sets can turn slow power‑law decay into fast exponential improvement (Surya Ganguly)
EXPLANATION
He argues that because large random datasets contain substantial redundancy, selecting non‑redundant, information‑rich examples reshapes the scaling curve from a slow power‑law decay into a much faster exponential decay in error.
EVIDENCE
He describes that “large random data sets are extremely redundant” and that by constructing a non-redundant training set where each new data point adds new information, they achieved a faster exponential drop in error, supported by theory and algorithms they developed [25-28].
MAJOR DISCUSSION POINT
Data efficiency
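The qualitative gap between the two decay regimes is easy to see numerically. The functional forms and constants below are illustrative assumptions only (a power law with exponent 0.1 against an exponential with a rate of 1e-6 per example), not the curves from the group's actual theory.

```python
import math

# Assumed toy forms: slow power-law decay for random data vs. exponential
# decay for a non-redundant training set.
def power_law(n, a=1.0, alpha=0.1):
    return a * n**(-alpha)

def exponential(n, a=1.0, c=1e-6):
    return a * math.exp(-c * n)

for n in (10**6, 10**7, 10**8):
    print(f"N={n:>9}: power-law {power_law(n):.4f}  exponential {exponential(n):.2e}")
```

Even though the exponential curve starts in the same range, it falls below the power law within an order of magnitude of data and then wins by an enormous margin, which is the point of the non-redundant-data argument.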
Argument 4
Evolutionary design of robot morphologies (morphological Baldwin effect) speeds up learning (Surya Ganguly)
EXPLANATION
Ganguly shows that evolving robot bodies across generations can make subsequent generations learn faster, demonstrating the morphological Baldwin effect, a long‑standing hypothesis in evolutionary theory now validated in simulation.
EVIDENCE
He reports evolving robot morphologies generation-to-generation, observing that successive generations learned faster because the bodies were designed to be easier to control, thereby providing the first simulation evidence of the morphological Baldwin effect [29-36].
MAJOR DISCUSSION POINT
Data efficiency
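A cartoon of the morphological Baldwin effect can be run in a few lines. Everything here is a stand-in assumption: a single scalar "morphology" c sets the conditioning of the control-learning problem, which is nothing like the robot simulator in the actual work. It only illustrates the logic that selecting bodies for learnability makes later generations learn faster.

```python
import random

random.seed(1)

def learning_steps(c, tol=1e-3):
    """Gradient descent on f(w) = 0.5*(w1**2 + c*w2**2), with the learning rate
    limited by the stiffest direction (lr = 0.5/c): an ill-conditioned 'body'
    (large c) forces small steps, so its controller learns slowly."""
    lr = 0.5 / c
    w1, w2, steps = 1.0, 1.0, 0
    while w1 * w1 + c * w2 * w2 > tol and steps < 100000:
        w1, w2 = w1 - lr * w1, w2 - lr * c * w2
        steps += 1
    return steps

pop = [random.uniform(5, 50) for _ in range(20)]       # generation-0 morphologies
first_best = min(learning_steps(c) for c in pop)
for gen in range(10):
    pop.sort(key=learning_steps)                       # fitness = faster learning
    parents = pop[:10]                                 # keep the 10 fastest learners
    pop = parents + [max(1.0, p + random.gauss(0, 2)) for p in parents]  # mutate
final_best = min(learning_steps(c) for c in pop)
print(f"fastest learner: gen 0 needs {first_best} steps, gen 10 needs {final_best}")
```

Because selection keeps the parents each generation, the best learner can only improve, mirroring the claim that evolved bodies make subsequent learning easier.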
Argument 5
AI consumes orders of magnitude more power than the brain due to digital bit‑flip architecture (Surya Ganguly)
EXPLANATION
He contrasts the brain’s modest 20 W power consumption with modern AI systems that can draw up to 10 million watts, attributing the gap to the reliance on fast, reliable digital bit flips which are thermodynamically expensive.
EVIDENCE
Ganguly notes that “our brain only spends 20 watts of power, but modern AI can consume 10 million watts” and explains that the fault lies in using “very fast and reliable bit flips at every intermediate step of the computation,” which thermodynamics forces to consume a lot of energy [38-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The talk highlights the power gap (≈20 W for the brain vs. ≈10 MW for modern AI) and attributes it to the reliance on fast, reliable digital bit flips, which are thermodynamically costly [S1].
MAJOR DISCUSSION POINT
Energy efficiency
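The quoted power figures imply a gap of roughly five orders of magnitude, which is easy to verify. The Landauer-limit computation is a standard physics reference point for why reliable bit operations carry an irreducible thermodynamic cost; it is my addition for scale, not a number from the talk.

```python
import math

# Power figures quoted in the talk.
BRAIN_W = 20           # ~20 W for the human brain
AI_W = 10e6            # ~10 MW for a large AI training system
print(f"AI/brain power ratio: {AI_W / BRAIN_W:,.0f}x")     # 500,000x

# Reference point (standard physics, not from the talk): the Landauer limit,
# k_B * T * ln(2), is the minimum energy needed to erase one bit at temperature T.
# Real digital logic spends many orders of magnitude more than this per bit flip.
k_B, T = 1.380649e-23, 300.0
landauer_joules = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer_joules:.2e} J per bit")   # 2.87e-21
```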
Argument 6
Biology achieves efficiency by using slow, unreliable steps and co‑designing computation with physical laws (e.g., Maxwell’s equations) (Surya Ganguly)
EXPLANATION
He highlights that biological systems obtain correct answers using slow, unreliable intermediate processes and by directly leveraging physical laws such as Maxwell’s equations for computation, thereby avoiding the energy waste of digital circuits.
EVIDENCE
He explains that biology “gets the right answer just in time using the slowest, most unreliable intermediate steps possible” and “directly uses Maxwell’s equations of electromagnetism to do addition, instead of using complex energy-hungry transistor circuits,” illustrating a co-design of computation and physics [43-48].
MAJOR DISCUSSION POINT
Energy efficiency
Argument 7
Fundamental limits on sensing computation reveal optimal chemical computers that resemble GPCRs (Surya Ganguly)
EXPLANATION
Ganguly describes solving for the theoretical limits of sensing accuracy under energy constraints, deriving a bound (plotted as a red curve in his slides) on the lowest achievable error, and showing that the family of optimal chemical computers closely matches the behavior of G‑protein‑coupled receptors found in cells.
EVIDENCE
He states that they “found fundamental limits on the lowest achievable error achieved by any chemical computer whatsoever” (the red curve) and identified a family of optimal computers that “behave a lot like something called G-protein coupled receptors” which perform sensing in every cell [50-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote reports that theoretical limits on sensing accuracy lead to a family of optimal chemical computers that behave similarly to G-protein-coupled receptors found in cells [S1].
MAJOR DISCUSSION POINT
Energy efficiency
Argument 8
The brain operates like a smart energy grid, predicting and delivering energy where and when needed (Surya Ganguly)
EXPLANATION
He reports that measurements of neural activity together with ATP consumption across the fly brain reveal that the brain anticipates future energy demands and supplies just the right amount of energy at the right place and time, functioning as an intelligent energy distribution system.
EVIDENCE
Using simultaneous recordings of neural dynamics and ATP usage across the entire fly brain, they discovered that “the brain actually works like a smart energy grid… can predict where and when energy will be needed in the future, and it produces just the right amount of energy at just the right time, at just the right location” [60-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Simultaneous recordings of neural activity and ATP consumption in the fly brain showed predictive, location-specific energy delivery, described as a “smart energy grid” in the presentation [S1].
MAJOR DISCUSSION POINT
Energy efficiency
Argument 9
Building accurate digital twins of brain circuits enables rapid in‑silico experiments (Surya Ganguly)
EXPLANATION
He proposes that by recording extensive neural activity and constructing digital replicas of brain circuits, researchers can conduct fast, simulated experiments, accelerating discovery without the constraints of live animal work.
EVIDENCE
Ganguly outlines a scenario where “we read lots and lots of neural activity from the brain then we use AI to build a model or a digital twin of brain circuits then we can do rapid in silico experiments on the digital twin” [78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ganguly proposes constructing AI-driven digital twins of neural circuits to run fast in-silico experiments, accelerating discovery beyond live-animal constraints [S1].
MAJOR DISCUSSION POINT
Neuroscience‑AI integration
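The digital-twin workflow described here can be caricatured in a few lines. The sketch below is illustrative only: the actual twins are large deep networks fit to massive neural recordings, not a one-parameter linear model. It shows the three steps in miniature: record stimulus/response pairs from a "real" circuit, fit a model to them, then query the fitted model instead of the animal.

```python
import random

# Minimal caricature of the digital-twin workflow (illustrative assumption).
random.seed(0)
true_gain, true_bias = 2.5, 0.3              # hidden parameters of the "real" circuit

def real_circuit(stimulus):
    """Stand-in for the biological system, with recording noise."""
    return true_gain * stimulus + true_bias + random.gauss(0, 0.05)

# (1) Record neural activity: stimulus/response pairs.
stims = [i / 50 for i in range(100)]
resps = [real_circuit(s) for s in stims]

# (2) Fit the "twin" to the recordings by ordinary least squares.
n = len(stims)
mx, my = sum(stims) / n, sum(resps) / n
gain = sum((x - mx) * (y - my) for x, y in zip(stims, resps)) / sum((x - mx)**2 for x in stims)
bias = my - gain * mx

# (3) In-silico experiment: query the twin at a stimulus the circuit never saw.
print(f"twin prediction at stimulus 5.0: {gain * 5.0 + bias:.2f}")
```

The payoff in the real setting is step (3): once the twin is accurate, experiments that would take years on the animal run in simulation at computer speed.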
Argument 10
Digital twin of the retina reproduced two decades of experiments in days, demonstrating accelerated neuroscience discovery (Surya Ganguly)
EXPLANATION
He cites the creation of the world’s most accurate digital twin of the biological retina, which was able to replicate twenty years of experimental results within a matter of days, showcasing the speed gains possible with AI‑driven simulation.
EVIDENCE
He states that the digital twin of the retina “could reproduce two decades’ worth of experiments in a matter of days,” illustrating a dramatic acceleration of neuroscience research [79-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The world’s most accurate digital twin of the retina replicated twenty years of experimental results within days, showcasing massive speed-ups [S1].
MAJOR DISCUSSION POINT
Neuroscience‑AI integration
Argument 11
AI decoding of mouse visual activity and injection of designed neural patterns can induce specific perceptual hallucinations (Surya Ganguly)
EXPLANATION
He describes using AI to read a mouse’s visual cortex activity, decode what the mouse is seeing, and then write carefully crafted neural patterns back into the brain to make the mouse experience a targeted hallucination, effectively speaking the brain’s native language.
EVIDENCE
Ganguly reports that they “could look directly at neural activity in the brain of a mouse, and we could decode what it was seeing… By writing in carefully designed neural activity patterns, we could make the mouse hallucinate a particular percept” and even “control the mouse brain’s soul” [81-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The team read mouse visual cortex activity, decoded perceived images, and wrote tailored neural patterns back to induce targeted hallucinations, effectively “speaking the brain’s native language” [S1].
MAJOR DISCUSSION POINT
Neuroscience‑AI integration
Argument 12
Digital twin of an epileptic brain allowed control of seizure amplitude both in simulation and in the living brain (Surya Ganguly)
EXPLANATION
He explains that a digital replica of an epileptic brain was built, used to understand seizure initiation, and then control signals derived from the twin were applied to the actual patient’s brain, successfully modulating seizure amplitude.
EVIDENCE
He notes that they “built a digital twin of the epileptic brain… could reproduce actual epileptic seizure dynamics… used explainable AI to understand how these seizures were starting… then we used control theory to be able to control the seizure amplitude in the digital twin… injected these same control signals into the actual brain and controlled seizure amplitude in the actual brain” [99-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A digital twin of an epileptic brain was used to understand seizure initiation and generate control signals that successfully modulated seizure amplitude in both the model and the patient’s brain [S1].
MAJOR DISCUSSION POINT
Neuroscience‑AI integration
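As a toy analogue of the control step (purely illustrative, with no connection to the actual seizure model or controller), velocity feedback added to an undamped oscillator shows how a control signal computed from the system's state can shrink oscillation amplitude.

```python
import math

# Plant: undamped harmonic oscillator x'' = -w^2 * x, integrated with
# semi-implicit Euler steps (stable for oscillatory systems).
# Controller: velocity feedback u = -k*v, which adds damping.
def simulate(k, steps=20000, dt=1e-3, w=2 * math.pi):
    x, v = 1.0, 0.0
    peak = 0.0
    for i in range(steps):
        u = -k * v                       # feedback control input (k=0: no control)
        a = -(w**2) * x + u
        v += a * dt                      # semi-implicit Euler: update v, then x
        x += v * dt
        if i > steps // 2:               # measure amplitude over the late half
            peak = max(peak, abs(x))
    return peak

print(f"late amplitude, no control:   {simulate(0.0):.3f}")
print(f"late amplitude, with control: {simulate(1.0):.3f}")
```

With feedback on, the late-time amplitude collapses by orders of magnitude, which is the basic idea behind computing control signals on a model and then injecting them into the real system.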
Argument 13
Startup Metamorphic (with Stanford’s Enigma project) aims to scale digital twins to the primate brain for bio‑hybrid AI and therapeutic applications (Surya Ganguly)
EXPLANATION
He announces the formation of a new company, Metamorphic, which will collaborate with Stanford’s Enigma project to expand digital twin technology to whole primate brains, enabling robust bio‑hybrid AI systems and novel treatments for brain disorders.
EVIDENCE
He says, “we’re actually creating a new startup called Metamorphic… will work closely with the Enigma project… together … will scale up the construction of digital twins to encompass the entire primate brain, starting with the visual brain… such scaled-up digital twins offer a powerful path forward to building robust bio-hybrid AI systems… and to treat brain disease in new AI-driven ways” [106-110].
MAJOR DISCUSSION POINT
Neuroscience‑AI integration
Argument 14
A unified science spanning brains and machines is needed to create more efficient, explainable, and powerful AI (Surya Ganguly)
EXPLANATION
He calls for a comprehensive science of intelligence that integrates insights from neuroscience and artificial intelligence to produce AI that is more efficient, transparent, and capable.
EVIDENCE
He concludes, “what I think we really need is a unified science of intelligence that spans both brains and machines to help us understand both biological and artificial intelligence and create more efficient, explainable, and powerful AI” [111-113].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote concludes with a call for a unified science of intelligence that bridges neuroscience and AI to achieve greater efficiency, transparency, and capability [S1].
MAJOR DISCUSSION POINT
Unified science of intelligence
AGREED WITH
Speaker 1
Argument 15
Academic research, publicly funded and openly shared, is essential because past academic work underpins today’s AI and will shape tomorrow’s breakthroughs (Surya Ganguly)
EXPLANATION
He argues that academia provides the foundational research that fuels current AI advances and that continued public investment is crucial for future progress.
EVIDENCE
He states that “the academic studies of yesterday laid the strong foundation for today’s AI technology, and it will be the academic studies of today that lay the foundation for tomorrow’s technology” and urges expanded public investment in academic intelligence research [114-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ganguly emphasizes that yesterday’s academic studies built the foundation for current AI and that continued public investment in academic research is crucial for future advances [S1].
MAJOR DISCUSSION POINT
Unified science of intelligence
Argument 16
Opening the pursuit of intelligence research to the public benefits society more than closed‑door corporate efforts (Surya Ganguly)
EXPLANATION
He emphasizes that intelligence research should be conducted openly, arguing that secretive corporate work limits societal benefit, whereas open science maximizes public good.
EVIDENCE
He notes that “despite the huge and exciting advances happening now increasingly, behind closed doors at companies, I’m extremely excited about what the science of intelligence can achieve out in the open for the public benefit of all” [115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He argues that open, publicly accessible intelligence research maximizes societal benefit compared with secretive corporate work, as highlighted in the keynote remarks [S1].
MAJOR DISCUSSION POINT
Unified science of intelligence
Speaker 1
1 argument · 118 words per minute · 91 words · 46 seconds
Argument 1
Recognition of Professor Ganguly’s interdisciplinary expertise and invitation to share his insights (Speaker 1)
EXPLANATION
Speaker 1 thanks the audience, introduces Professor Surya Ganguly, highlights his interdisciplinary work across AI, neuroscience, and physics, and invites him to present his insights at the summit.
EVIDENCE
The host says, “Ladies and gentlemen, I now take this opportunity to invite Professor Surya Ganguly, Professor of AI, Neuroscience and Physics, Stanford University… Professor Ganguly’s research sits at one of the most intellectually fertile intersections in science today… Using the mathematics of physics and the insights of neuroscience to understand how intelligence… Please welcome Professor Surya Ganguly from Stanford University” [1-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The introductory remarks inviting Professor Ganguly to the summit are echoed in the event transcript, confirming the speaker’s role and the invitation wording [S2].
MAJOR DISCUSSION POINT
Speaker introduction
AGREED WITH
Surya Ganguly
Agreements
Agreement Points
Both speakers emphasize the importance of an interdisciplinary, unified approach that bridges AI, neuroscience, and physics to advance intelligence research.
Speakers: Speaker 1, Surya Ganguly
Recognition of Professor Ganguly’s interdisciplinary expertise and invitation to share his insights (Speaker 1)
A unified science spanning brains and machines is needed to create more efficient, explainable, and powerful AI (Surya Ganguly)
Speaker 1 introduces Professor Ganguly by highlighting his work at the intersection of AI, neuroscience and physics [1-6], and Professor Ganguly later calls for a unified science of intelligence that spans brains and machines to build better AI [111-113]. Both points stress that progress requires integrating multiple disciplines.
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the IGF 2023 policy network’s call for different scientific disciplines to communicate meaningfully and for inclusive AI decision-making [S22], and echoes Surya Ganguli’s advocacy for a unified, open science of intelligence that spans AI, neuroscience and physics within academia [S23].
Similar Viewpoints
All of Professor Ganguly’s arguments consistently advocate for improving AI by learning from biological principles, enhancing data and energy efficiency, leveraging digital twins, and promoting open, publicly funded research.
Speakers: Surya Ganguly
AI’s extreme data hunger compared to humans (Surya Ganguly)
Theory predicting neural scaling law slope linked to language statistics (Surya Ganguly)
Non‑redundant training sets can turn slow power‑law decay into fast exponential improvement (Surya Ganguly)
Evolutionary design of robot morphologies (morphological Baldwin effect) speeds up learning (Surya Ganguly)
AI consumes orders of magnitude more power than the brain due to digital bit‑flip architecture (Surya Ganguly)
Biology achieves efficiency by using slow, unreliable steps and co‑designing computation with physical laws (Surya Ganguly)
Fundamental limits on sensing computation reveal optimal chemical computers that resemble GPCRs (Surya Ganguly)
The brain operates like a smart energy grid, predicting and delivering energy where and when needed (Surya Ganguly)
Building accurate digital twins of brain circuits enables rapid in‑silico experiments (Surya Ganguly)
Digital twin of the retina reproduced two decades of experiments in days (Surya Ganguly)
AI decoding of mouse visual activity and injection of designed neural patterns can induce specific perceptual hallucinations (Surya Ganguly)
Digital twin of an epileptic brain allowed control of seizure amplitude both in simulation and in the living brain (Surya Ganguly)
Startup Metamorphic aims to scale digital twins to the primate brain for bio‑hybrid AI and therapeutic applications (Surya Ganguly)
Academic research, publicly funded and openly shared, is essential because past academic work underpins today’s AI and will shape tomorrow’s breakthroughs (Surya Ganguly)
Opening the pursuit of intelligence research to the public benefits society more than closed‑door corporate efforts (Surya Ganguly)
Unexpected Consensus
Both speakers, despite their different roles, converge on the necessity of open, interdisciplinary research to advance intelligence technologies.
Speakers: Speaker 1, Surya Ganguly
Recognition of Professor Ganguly’s interdisciplinary expertise and invitation to share his insights (Speaker 1)
Opening the pursuit of intelligence research to the public benefits society more than closed‑door corporate efforts (Surya Ganguly)
While Speaker 1’s remarks are limited to an introductory endorsement, they implicitly support the same open, cross‑disciplinary collaboration that Professor Ganguly explicitly calls for later in his talk, which is an unexpected alignment given the brevity of the host’s comments.
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on openness and interdisciplinary collaboration reflects the IGF 2023 recommendation that empirical fields must interact to shape robust AI regulation [S22] and mirrors Ganguli’s keynote urging that intelligence research remain open and interdisciplinary in the academic sphere [S23].
Overall Assessment

The transcript shows limited direct interaction between speakers, with the primary point of agreement centered on the value of interdisciplinary, open research linking AI, neuroscience, and physics. Professor Ganguly expands this theme across many detailed arguments, but no other participant directly echoes his specific technical claims.

Low to moderate consensus: there is clear agreement on the overarching principle of a unified, open science of intelligence, but little substantive overlap on specific technical or policy arguments. This suggests that while the summit’s framing aligns participants around interdisciplinary collaboration, detailed policy or technical consensus remains to be built.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains an introductory remark by Speaker 1 and an extensive presentation by Professor Surya Ganguly. No opposing statements, counter‑arguments, or conflicting viewpoints are presented by either speaker. Consequently, there are no identifiable disagreement points, no instances where speakers share a goal but propose different means, and no surprising areas of conflict.

Minimal – the discussion is essentially a one‑sided exposition of Professor Ganguly’s perspective, with no evident contention. This implies that, for the topics covered (data efficiency, energy efficiency, neuroscience‑AI integration, and calls for open, academic research), the dialogue does not reveal any internal debate that would affect consensus building or policy formulation.

Takeaways
Key takeaways
AI systems are far more data‑hungry than humans; current scaling laws show a slow power‑law improvement with data.
A new theory links the shallow slope of neural scaling laws to the weak statistical structure of natural language and predicts the slope analytically.
Using non‑redundant, information‑rich training sets can transform the slow power‑law decay into a much faster exponential improvement.
Evolutionary design of robot morphologies (morphological Baldwin effect) can accelerate learning by shaping bodies that are easier to control.
Modern AI consumes orders of magnitude more power than the brain because digital computation relies on fast, reliable bit flips, which are thermodynamically costly.
Biology achieves energy efficiency by employing slow, unreliable intermediate steps and by co‑designing computation with physical laws (e.g., using Maxwell’s equations directly).
Fundamental limits on chemical sensing reveal optimal chemical computers that resemble G‑protein‑coupled receptors, linking cellular sensing to optimal physical computation.
The brain functions like a smart energy grid, predicting where and when energy will be needed and delivering it precisely.
Building accurate digital twins of brain circuits enables rapid in‑silico experiments, dramatically accelerating neuroscience discovery.
Digital twins of the retina reproduced decades of experiments in days; digital twins of mouse visual cortex allowed decoding and injection of neural patterns to induce specific perceptual hallucinations.
A digital twin of an epileptic brain, combined with explainable AI and control theory, enabled control of seizure amplitude both in simulation and in the living brain.
A new startup, Metamorphic, in partnership with Stanford’s Enigma project, aims to scale digital twins to the primate brain for bio‑hybrid AI and therapeutic applications.
A unified, open science of intelligence that spans brains and machines is essential for creating more efficient, explainable, and powerful AI, and public academic investment is crucial.
Open, publicly shared research is advocated over closed‑door corporate efforts to maximize societal benefit.
Resolutions and action items
Proposal to create the startup Metamorphic to develop and commercialize brain-machine digital twin technologies.
Planned collaboration between Metamorphic and Stanford's Enigma project to scale digital twins to the primate visual brain.
Call for increased public and academic investment in an open, unified science of intelligence.
Unresolved issues
Lack of a comprehensive scientific theory explaining why neural scaling laws have the observed shallow power-law form for modern large language models.
How to systematically construct non-redundant training datasets at the scale required for commercial AI systems.
Practical pathways to redesign the entire AI technology stack (from hardware to algorithms) to achieve brain-level energy efficiency.
Methods to translate the theoretical limits of chemical sensing into scalable, engineered hardware beyond biological analogues.
Technical and ethical challenges of deploying digital twins for direct brain control in humans, including safety, consent, and long-term effects.
Scalability of quantum neuromorphic computing architectures and their integration with existing AI frameworks.
Funding mechanisms and policy frameworks needed to sustain open, interdisciplinary research on intelligence.
Suggested compromises
None identified
Thought Provoking Comments
Follow-up Questions
Develop a comprehensive scientific theory explaining why neural scaling laws for large language models exist and why their error reduction follows a slow power law
Understanding the underlying principles of scaling laws is essential to improve data efficiency and predict model performance as data scales.
Speaker: Surya Ganguli
Design and construct non‑redundant training datasets that enable exponential error decay rather than the observed slow power‑law decay
Identifying methods to eliminate redundancy in data could dramatically reduce the amount of data required to train high‑performing AI systems.
Speaker: Surya Ganguli
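One generic way to make "non-redundant" operational is greedy farthest-point selection over example embeddings: repeatedly keep the example most dissimilar to everything already kept. This is an illustrative heuristic with invented toy data, not the specific pruning metric used by Ganguli's group:

```python
import numpy as np

def farthest_point_subset(embeddings, k, seed=0):
    """Greedily pick k mutually dissimilar points: start from one example,
    then repeatedly add the point farthest from the chosen set.
    A simple stand-in for redundancy-aware data pruning."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(embeddings)))]
    # Distance of every point to the nearest already-chosen point.
    dists = np.linalg.norm(embeddings - embeddings[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))  # most novel remaining example
        chosen.append(nxt)
        dists = np.minimum(dists,
                           np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return chosen

# Toy data: two tight clusters of near-duplicates plus one outlier.
rng = np.random.default_rng(1)
cluster_a = rng.normal(0.0, 0.01, size=(50, 2))
cluster_b = rng.normal(5.0, 0.01, size=(50, 2))
outlier = np.array([[10.0, -10.0]])
data = np.vstack([cluster_a, cluster_b, outlier])

picked = farthest_point_subset(data, k=3)
print("selected indices:", picked)  # spans both clusters and the outlier
```

With only three picks the selection covers both clusters and the lone outlier, rather than wasting the budget on near-duplicates.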
Investigate the morphological Baldwin effect in physical robots to determine how evolved body designs can accelerate learning in real‑world settings
Demonstrating this effect beyond simulations would validate evolutionary strategies for improving robot learning efficiency.
Speaker: Surya Ganguli
Determine the fundamental limits on speed and accuracy of arbitrary computations under strict energy constraints, extending beyond the sensing case already solved
Knowing these limits would guide the redesign of hardware and algorithms to achieve energy‑efficient AI across diverse tasks.
Speaker: Surya Ganguli
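A back-of-envelope calculation makes the thermodynamic framing concrete. The Landauer bound (kT ln 2 per irreversible bit erasure) is the textbook limit; the per-flip energy for real digital logic below is an assumed round number for illustration, not a figure from the talk:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Landauer bound: minimum energy to erase one bit reliably at temperature T.
landauer_joules = k_B * T * math.log(2)
print(f"Landauer minimum per bit: {landauer_joules:.2e} J")  # ~2.87e-21 J

# Real digital switching spends far more than the bound to be fast and
# reliable; assume an illustrative ~1 fJ per logic-level bit flip.
assumed_flip_joules = 1e-15
print(f"overhead vs Landauer: ~{assumed_flip_joules / landauer_joules:.0e}x")

# The brain's whole budget, for comparison (figure cited in the talk):
brain_watts = 20.0
flips_per_sec_at_bound = brain_watts / landauer_joules
print(f"bit erasures/s a 20 W budget allows at the bound: "
      f"{flips_per_sec_at_bound:.1e}")
```

The point of the sketch: a 20 W budget is astronomically generous at the thermodynamic limit, so the efficiency gap lies in how far above that limit fast, reliable digital hardware operates.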
Explore the design of optimal chemical computers for a variety of sensing and computational tasks, building on the connection to G‑protein‑coupled receptors
Linking biological sensing mechanisms to engineered chemical computers could inspire ultra‑low‑power AI hardware.
Speaker: Surya Ganguli
Develop quantum neuromorphic computing architectures that implement neural algorithms using atoms for neurons and photons for synapses
Quantum hardware could provide capabilities beyond what evolution produced, offering higher capacity, robustness, and new computational paradigms.
Speaker: Surya Ganguli
Create and evaluate a quantum Hopfield associative memory built from atoms and photons, assessing its capacity, robustness, and recall performance
A quantum version of Hopfield networks may surpass classical limits, opening new applications for memory storage and retrieval.
Speaker: Surya Ganguli
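For reference, the classical Hopfield associative memory that a quantum atom-and-photon version would generalize can be sketched in a few lines. This is the standard Hebbian construction with orthogonal stored patterns; the quantum construction from the talk is not reproduced here:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2);
    its rows give mutually orthogonal +/-1 patterns."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def train_hopfield(patterns):
    """Hebbian outer-product rule: W = (1/N) sum_p x_p x_p^T, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, max_steps=10):
    """Synchronous sign updates until a fixed point (or the step budget)."""
    for _ in range(max_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1.0
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store three orthogonal 64-neuron patterns, well under the ~0.14*N
# capacity limit of the classical network.
patterns = hadamard(64)[[1, 2, 4]]
W = train_hopfield(patterns)

# Corrupt 8 of 64 bits of the first pattern, then recall it.
noisy = patterns[0].copy()
noisy[:8] *= -1.0
restored = recall(W, noisy)
print("pattern recovered:", bool(np.array_equal(restored, patterns[0])))  # True
```

Because the stored patterns are orthogonal, the interference term is bounded and the corrupted pattern is restored in a single update sweep; the talk's claim is that an atom-photon implementation can exceed the capacity and robustness of this classical baseline.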
Scale digital twin technology to model the entire primate brain, beginning with the visual system, to enable robust bio‑hybrid AI and advanced neuroscience research
Comprehensive brain twins would allow rapid in‑silico experimentation and could serve as a foundation for brain‑inspired AI systems.
Speaker: Surya Ganguli
Translate the digital‑twin‑based seizure‑control approach from mice to human epilepsy treatment, investigating safety and efficacy
Successful human application would demonstrate a powerful clinical use of AI‑driven brain modeling and control.
Speaker: Surya Ganguli
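The control-theoretic idea can be illustrated on a toy model: a slightly unstable linear oscillator standing in for runaway seizure dynamics, with simple proportional feedback damping its amplitude. Everything here is a generic textbook sketch, not the digital-twin-based controller from the talk:

```python
import numpy as np

def simulate(steps=200, gain=0.0):
    """Toy seizure-like dynamics: a 2-D rotation scaled by 1.01, so the
    oscillation amplitude grows without intervention. A proportional
    feedback term (an illustrative stand-in for a model-based controller)
    subtracts gain * state each step. Returns the peak amplitude."""
    theta = 0.3
    A = 1.01 * np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    x = np.array([1.0, 0.0])
    amps = []
    for _ in range(steps):
        x = A @ x - gain * x  # feedback proportional to the observed state
        amps.append(np.linalg.norm(x))
    return max(amps)

print("uncontrolled peak amplitude:", round(simulate(gain=0.0), 2))
print("controlled peak amplitude:  ", round(simulate(gain=0.05), 2))
```

A modest gain pulls the system's eigenvalues inside the unit circle, so the oscillation decays instead of growing; the digital-twin approach described in the talk plays the role of supplying an accurate model on which such a controller can be designed.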
Develop bio‑hybrid AI systems that are directly taught by large‑scale brain data, leveraging digital twins and explainable AI
Such systems could combine the adaptability of biology with the scalability of engineering, leading to more efficient and explainable AI.
Speaker: Surya Ganguli
Expand public investment and open‑science initiatives for the unified study of intelligence across biology and artificial systems
Open, well‑funded research is needed to build the foundational knowledge that will drive future breakthroughs in both neuroscience and AI.
Speaker: Surya Ganguli
Identify and characterize the fundamental limits on the lowest achievable error for chemical computers in computational domains other than sensing
Extending the error‑limit analysis beyond sensing will inform the design of chemical computing substrates for broader AI tasks.
Speaker: Surya Ganguli
Co‑design computational algorithms and physical hardware to optimally match computational dynamics with underlying physical dynamics across the technology stack
A holistic redesign could close the energy efficiency gap between brains and machines by aligning algorithmic operations with the physics of the substrate.
Speaker: Surya Ganguli

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.