Keynote-Surya Ganguli
19 Feb 2026 16:30h - 16:45h
Summary
The session opened with Speaker 1 introducing Professor Surya Ganguli of Stanford, whose research bridges AI, neuroscience, and physics to build theoretical foundations for intelligence [2-5]. Ganguli noted that while the past decade has produced transformative AI systems, our understanding of how they operate remains minimal, and the brain still outperforms machines on many fronts [14-16]. He outlined a unified science of intelligence that targets three pillars: data efficiency, energy efficiency, and integration of brains with machines [17-20].
Regarding data efficiency, he explained that modern AI requires orders of magnitude more language exposure than humans and follows a slow power-law scaling that his team recently derived from first principles, matching experimental results [20-23]. By identifying redundancy in large datasets and selecting non-redundant training examples, his group demonstrated a shift from the slow power law to a much faster exponential decay in error [24-28]. He also showed that evolutionary design of robot morphologies can accelerate learning, providing empirical support for the morphological Baldwin effect [29-36].
On energy efficiency, Ganguli contrasted AI’s megawatt consumption with the brain’s 20-watt operation, attributing the gap to reliance on fast, reliable digital bit flips versus biology’s use of slow, unreliable steps that co-design computation with physical laws [38-46]. His work identified fundamental limits for chemical sensing, revealing that optimal chemical computers resemble G-protein-coupled receptors and linking neuronal function to physical sensing mechanisms [47-56]. Further analysis indicated that the brain operates like a smart energy grid, predicting and delivering energy precisely where and when needed [60-64].
To bridge the gap, he proposed quantum neuromorphic computing, replacing neurons with atoms and synapses with photons, enabling quantum Hopfield memories and photonic optimizers with superior capacity and robustness [65-78]. He illustrated the potential of melding brains and machines through digital twins, citing a highly accurate retinal twin that reproduced decades of experiments in days, and mouse-brain models that could read and write visual perception, even inducing controlled hallucinations [78-86]. Extending this approach, his team built a digital twin of an epileptic brain, used explainable AI and control theory to modulate seizure amplitude, and launched a startup, Metamorphic, in partnership with Stanford’s Enigma project to scale twins to the primate brain [99-110].
Ganguli concluded that an open, interdisciplinary science of intelligence is essential for creating more efficient, explainable AI and for advancing treatments of brain disorders, urging greater public investment in academic research [111-115]. The discussion closed with appreciation for Professor Ganguli’s contributions and a reaffirmation of the importance of collaborative, transparent research in shaping the future of intelligence [116-117].
Keypoints
Major discussion points
– A unified science of intelligence spanning brains and machines – Ganguli frames his talk around three pillars (data efficiency, energy efficiency, and brain-machine melding) and repeatedly stresses the need for a common theoretical framework that can explain both biological and artificial cognition [17-20][111-114].
– Improving data efficiency in AI – He explains why modern AI systems are extremely data-hungry, describes neural scaling laws, presents his group’s first-principles theory that predicts their shallow power-law slope, and shows how selecting non-redundant training data can bend the curve toward a much faster exponential decay [20-28][29-36].
– Closing the energy-efficiency gap – By contrasting the brain’s ~20 W power budget with AI’s megawatt consumption, he attributes the gap to digital bit-flipping, highlights how biology co-designs computation with physics (e.g., using Maxwell’s equations), and outlines recent work on fundamental limits of chemical sensing and quantum-neuromorphic hardware [38-49][50-66].
– Melding brains and machines through digital twins – He showcases concrete examples: a high-fidelity digital twin of the retina, AI-driven decoding and “hallucination” in mice, and a controllable digital twin of epileptic brain dynamics that was used to modulate seizures in vivo; these efforts are being commercialized via the startup Metamorphic [78-86][99-108].
– Call for open, academic-driven research and public investment – The talk concludes with a plea to expand public funding for an open, interdisciplinary science of intelligence, warning that most breakthroughs today occur behind corporate walls and urging academia to lead the next wave [112-116].
Overall purpose / goal
The presentation aims to persuade the audience that advancing AI responsibly requires a holistic, open scientific approach that unites insights from neuroscience, physics, and computer science. By highlighting recent theoretical and experimental breakthroughs in data and energy efficiency, as well as practical brain-machine integration, Ganguli argues for greater public and academic support to build a shared foundation for future intelligent systems.
Overall tone and its evolution
– The talk opens with a light-hearted, informal tone (“there’s going to be an exam at the end”) [11-13].
– It quickly shifts to a technical and authoritative tone, delivering dense scientific content on scaling laws, energy limits, and quantum neuromorphic concepts [20-66].
– As the discussion moves to brain-machine integration, the tone becomes optimistic and visionary, emphasizing transformative applications (digital twins, controlling perception, treating epilepsy) [78-106].
– The closing segment adopts a persuasive, advocacy-driven tone, urging open collaboration and increased public funding [112-116].
Overall, the tone remains enthusiastic and forward-looking, but it transitions from playful introduction to rigorous exposition, then to hopeful vision, and finally to a rallying call for collective action.
Speakers
– Surya Ganguli
– Role/Title: Professor of AI, Neuroscience and Physics, Stanford University
– Areas of Expertise: Artificial Intelligence, Neuroscience, Physics, Unified Science of Intelligence
– Affiliation: Stanford University [S2]
– Speaker 1
– Role/Title: Moderator / Host (introducing the keynote speaker)
– Areas of Expertise:
Additional speakers:
– (none)
Speaker 1 opened the session by thanking the audience and formally introducing Professor Surya Ganguli, a Stanford professor whose work sits at the intersection of artificial intelligence, neuroscience, and physics. Ganguli then light-heartedly noted the change of pace, warning that the talk would get more technical and that, because he is a professor, “there’s going to be an exam at the end,” setting a playful tone for the talk [1-2].
Unified Science of Intelligence – Three Pillars
Professor Ganguli framed his presentation around a unified science of intelligence that simultaneously addresses biological brains and engineered machines. He identified three inter-related pillars (data efficiency, energy efficiency, and brain-machine melding) as the core challenges for creating more efficient, explainable, and powerful AI systems, and urged the community to pursue an open, interdisciplinary approach with long-term horizons and public support [13-20][111-114].
Data-efficiency. He highlighted the stark contrast between human and machine language exposure: humans acquire roughly 100 million words of language experience, whereas modern AI systems ingest about 10 trillion words, a volume that would take a human 240,000 years to read [14-15]. AI error rates decline only slowly with data, following a power-law scaling observed for over half a decade but lacking a solid theoretical basis [20-21]. Ganguli’s team recently derived a first-principles theory that predicts the shallow slope of this neural scaling law by linking it to the weak surface statistical structure of natural language; in the slide’s plot, the theory (black line) matched experimental results (colored lines) from modern large language models [22-23]. By recognizing that large random datasets contain extensive redundancy, they devised algorithms that select non-redundant training examples, each contributing novel information; this bends the original power-law decay into a much faster exponential drop in error [24-28]. In a separate line of inquiry, they showed that evolutionary design of robot morphologies, allowing bodies to evolve across generations, produces forms that are easier to control, thereby speeding up learning. This provided the first concrete, in-simulation demonstration of the morphological Baldwin effect, a long-standing evolutionary hypothesis [29-36].
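To make the data-selection idea concrete, here is a minimal Python sketch of a generic greedy “farthest-point” heuristic. It is an illustrative stand-in, not the group’s published algorithm: each example is picked to be maximally distant from everything already selected, so that every addition contributes novel information relative to the growing subset.

```python
# Hypothetical illustration of non-redundant data selection (not the
# talk's actual method): greedy max-min ("farthest-point") sampling.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 32))  # toy dataset of embedded examples

def select_non_redundant(X, k):
    """Greedily pick k points, each maximizing distance to the chosen set."""
    chosen = [0]                               # seed with an arbitrary point
    dist = np.linalg.norm(X - X[0], axis=1)    # distance of each point to the set
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))             # most novel remaining point
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)

subset_idx = select_non_redundant(X, 100)
print(f"kept {subset_idx.size} of {len(X)} examples as a non-redundant subset")
```

A subset chosen this way covers the data distribution far more evenly than a random subsample of equal size, which is the intuition behind bending the scaling curve.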
Energy-efficiency. He contrasted the brain’s modest 20-watt power budget with modern AI systems that can require up to 10 million watts, attributing the gap to the reliance on fast, reliable digital bit-flips, which thermodynamics dictates must consume substantial energy [38-40][41-43]. Biology, by contrast, achieves efficiency through slow, unreliable intermediate steps and by co-designing computation with the underlying physics of the universe, for example using Maxwell’s equations directly for addition rather than energy-intensive transistor circuits [44-48]. He argued that bridging the energy gap demands a complete redesign of the technology stack, from electrons to algorithms, to match computational dynamics with physical dynamics [49-51]. His group recently solved the fundamental limits of chemical sensing under energy constraints, identifying a lower bound on achievable error for any chemical computer (shown as a red curve on his slide) and characterising the family of optimal sensors that attain this bound; remarkably, these optimal chemical computers behave like G-protein-coupled receptors, linking neuronal function to optimal physical sensing mechanisms [52-56]. Further experiments measuring both neural activity and ATP consumption across the entire fly brain revealed that the brain operates like a smart energy grid, predicting future energy demand and delivering power precisely where and when needed [57-64].
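As a back-of-envelope check on the thermodynamic point, the snippet below computes the Landauer bound (the minimum energy an irreversible bit flip must dissipate) at body temperature, and the power ratio implied by the figures quoted in the talk. The ~20 W and ~10 MW numbers come from the talk; everything else is textbook physics.

```python
# Back-of-envelope arithmetic for the energy-efficiency argument.
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 310.0                        # body temperature, K
landauer = k_B * T * math.log(2) # minimum energy per irreversible bit flip
print(f"Landauer limit at 310 K: {landauer:.2e} J per bit")

# Figures quoted in the talk; real digital logic pays many orders of
# magnitude more than the Landauer bound per fast, reliable bit flip.
brain_w, ai_w = 20.0, 1e7
print(f"power ratio AI/brain: {ai_w / brain_w:,.0f}x")
```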
Brain-machine melding. To move beyond the limits of evolution, Ganguli proposed quantum neuromorphic computing, wherein individual neurons are replaced by atoms whose firing states correspond to electronic excitations, and synapses are replaced by photons that mediate communication via emission and absorption [65-70]. This architecture enables the construction of a quantum Hopfield associative memory, a quantum analogue of the classical network that earned John Hopfield the Nobel Prize in Physics, offering superior capacity, robustness, and recall [71-75]. He also described photonic optimisers, fully optical computers that solve optimisation problems with novel energy-landscape dynamics. The convergence of neural algorithms with quantum hardware inaugurates a new field, quantum neuromorphic computing, that could surpass the capabilities of biologically evolved systems [76-78].
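The quantum version is beyond a short example, but the classical Hopfield network it generalizes is compact enough to sketch. The following is a standard textbook implementation, not code from the talk: patterns are stored in Hebbian weights, and asynchronous updates descend an energy landscape to recall a memory from a corrupted probe.

```python
# Minimal classical Hopfield associative memory (the quantum variant in
# the talk replaces neurons with atoms and synapses with photons; this
# sketch shows only the classical recall dynamics it generalizes).
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))   # stored memories

W = (patterns.T @ patterns) / N               # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                      # no self-connections

def recall(state, sweeps=10):
    """Asynchronous updates descend the network's energy landscape."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

probe = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)  # corrupt 15% of the bits
probe[flip] *= -1
print("overlap with stored memory after recall:",
      (recall(probe) @ patterns[0]) / N)      # ~1.0 means perfect recall
```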
Melding Brains & Machines – Digital Twins
Ganguli illustrated practical potential through several digital-twin projects. A high-fidelity twin of the biological retina reproduced two decades of experimental results in a matter of days, dramatically accelerating neuroscience discovery [78-80]. In mice, AI decoded visual neural activity to reconstruct the animal’s perceived image at the resolution of its visual system, and, by injecting carefully designed neural patterns, induced specific perceptual hallucinations, effectively “writing” to the mouse’s mind [81-86]. Extending this approach to pathology, his team built a digital twin of an epileptic brain that faithfully reproduced seizure dynamics across the whole brain. Using explainable AI to pinpoint seizure origins and control theory to modulate amplitude, they successfully transferred the control signals from the twin to the living brain, thereby regulating seizure intensity in vivo [99-105]. These breakthroughs are being commercialised through a new startup, Metamorphic, which will work with Stanford’s Enigma project to scale digital twins from the visual cortex to the entire primate brain [106-110].
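For readers unfamiliar with the twin-then-control workflow, here is a deliberately toy, hypothetical sketch: fit a linear “digital twin” to recordings of an oscillatory system, verify a damping feedback gain on the twin, then transfer the same controller to the “true” system. It illustrates only the workflow, not the actual models or control laws used in the epilepsy work.

```python
# Toy twin-then-control loop (hypothetical stand-in for the epilepsy work).
import numpy as np

rng = np.random.default_rng(2)
A_true = np.array([[0.99, -0.12],
                   [0.12,  0.99]])            # lightly damped 2-D oscillator

def simulate(A, K=None, steps=300):
    x, traj = np.array([1.0, 0.0]), []
    for _ in range(steps):
        u = 0.0 if K is None else -float(K @ x)          # feedback input
        x = A @ x + np.array([u, 0.0]) + rng.normal(scale=1e-3, size=2)
        traj.append(x.copy())
    return np.array(traj)

# 1) "Record" the uncontrolled system and fit the twin by least squares.
X = simulate(A_true)
A_twin = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

# 2) Check on the twin that gain K damps the closed loop (all |eigs| < 1)...
K = np.array([0.3, 0.0])
closed = A_twin - np.outer([1.0, 0.0], K)
assert np.all(np.abs(np.linalg.eigvals(closed)) < 1.0)

# 3) ...then transfer the same controller to the "true" system.
amp = lambda traj: np.abs(traj[:, 0]).max()
print(f"peak amplitude: free={amp(simulate(A_true)):.2f}, "
      f"controlled={amp(simulate(A_true, K)):.2f}")
```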
Call for Open, Interdisciplinary Research
In his concluding remarks, Ganguli stressed that advancing AI responsibly requires a unified, open science of intelligence that spans both brains and machines. He argued that academic research, publicly funded and freely shared, is essential because past academic work underpins today’s AI breakthroughs and will shape tomorrow’s technologies, warning that most current advances occur behind corporate walls and that an open, interdisciplinary approach will maximise societal benefit [111-115].
Speaker 1 closed the session by expressing gratitude to Professor Ganguli for his contributions [117].
also for contributing your expertise to this summit. Ladies and gentlemen, I now take this opportunity to invite Professor Surya Ganguli, Professor of AI, Neuroscience and Physics, Stanford University. Professor Ganguli’s research sits at one of the most intellectually fertile intersections in science today, using the mathematics of physics and the insights of neuroscience to understand how intelligence, biological and artificial, actually works. His work is helping build the theoretical foundations that practice so urgently needs. Please welcome Professor Surya Ganguli from Stanford University.
Thank you. Great, we got the slides. So we went from a world leader to a VC to now a professor. So we have a little bit of a change of pace. It’s going to get a little bit more technical. And because I’m a professor, there’s going to be an exam at the end. All right, so pay attention. All right, so I’m going to talk about advancing the science and engineering of intelligence. So, the last decade of AI research has led to stunning advances in the engineering of intelligence, yielding AI systems that stand poised to transform our society. Yet, alarmingly, we understand almost nothing about how they work, and we desperately need to. At the same time, our brain is the product of 500 million years of vertebrate brain evolution, and it is still orders of magnitude better than AI along several axes, and we also need to understand why.
So, I work in a unified science of intelligence across both brains and machines that seeks to both understand biological and artificial intelligence and create more efficient, explainable, and powerful AI. Today, I’ll work on understanding and improving intelligence along three lines: data efficiency, energy efficiency, and melding brains and machines. First, data efficiency. AI is vastly more data hungry than humans. We get about 100 million words of language experience; AI gets 10 trillion. It would take us 240,000 years to read everything that AI read. So why is AI so data hungry? Well, in AI, error falls off as a power law. It falls off very slowly, as a power law, with the amount of data. This is an example of a famous neural scaling law, which captured the imagination of industry and motivated significant societal investments in data collection, compute, and energy. But despite the importance of these neural scaling laws, discovered over half a decade ago, we lack any scientific theory for why they exist for any modern large language model, and why they are so slow. Just last week we posted the first theory to do so. From first principles, we could analytically predict the slope of these neural scaling laws and reconnected their shallow slope to the weak surface statistical structure of natural language itself.
The black line is our theory and the colored lines are experiments in modern LLMs. You can see there’s a good match. But can we make the scaling laws better? We actually can. We actually showed, both in theory and practice, that we can bend the slow power law down to a much faster exponential drop. The key idea is that large random data sets are extremely redundant. If you already have a billion random sentences, it’s unlikely that the next sentence is going to tell you very much that’s new. But what if you could find a non-redundant training set in which each new data point is carefully chosen to tell you something new compared to all the other data points?
We developed theory and algorithms to do just this, and that’s what got us the better exponential. In a completely different line of work, we asked if the process of evolution itself could speed up learning. And we showed it actually can. We evolved robot morphologies, shapes of bodies, from generation to generation. And we showed that successive generations could learn faster. They did so by designing the body to be easier to learn to control. This is an example of something called the morphological Baldwin effect. It’s an effect that has long been conjectured in evolutionary theory, but hard to test in the real world. We demonstrated it for the first time in our simulations. Okay, let’s go on to energy efficiency.
AI is vastly more energy hungry than humans. Our brain only spends 20 watts of power, but modern AI can consume 10 million watts. So why is AI so energy hungry? Well, the fault lies in the choice of digital computation itself, where we use very fast and reliable bit flips at every intermediate step of the computation. Now the laws of thermodynamics demand that every fast and reliable bit flip must consume a lot of energy. Biology chose a very different route. It gets the right answer just in time using the slowest, most unreliable intermediate steps possible. Biology does not rev its engine any more than it needs to. It also co-designs computation and physics much better. For example, it directly uses Maxwell’s equations of electromagnetism to do addition, instead of using complex energy-hungry transistor circuits.
So biology matches its computation directly to the native physics of the universe. So to bridge the vast energy gap between brains and machines, we need to rethink our entire technology stack, from electrons to algorithms, and optimally match computational dynamics to physical dynamics. For example, given a particular computation, what are the fundamental limits on its speed and accuracy under energy constraints? We recently solved this question for the computation of sensing, which every cell has to do. We found fundamental limits on the lowest achievable error achieved by any chemical computer whatsoever. That’s the red curve. And we also found the family of optimal computers that hug this curve. And we showed, remarkably, that these optimal chemical computers behave a lot like something called G-protein coupled receptors, which hide in every single cell, and they do sensing.
So this yields a connection between what neurons do and what optimal physical sensors would do. Popping up a level, in neuroscience, we can now measure not only neural activity, but also energy consumption in the form of ATP usage, the fundamental chemical fuel that powers all life’s processes. We can do this across the entire fly brain. So by analyzing the coupled dynamics of neural computation and energy consumption, we discovered that the brain actually works like a smart energy grid, remarkably. The brain can predict where and when energy will be needed in the future, and it produces just the right amount of energy at just the right time, at just the right location.
So in summary, we still have a lot to learn from evolution in our quest to build more energy-efficient AI, but we don’t have to be limited by evolution. We can go beyond evolution to instantiate neural algorithms in quantum hardware that evolution could not discover. For example, we can replace individual neurons with individual atoms: a neuron in different states of firing corresponds to an atom in different excited electronic states. We can also replace individual synapses between neurons with photons, quanta of light. Just as synapses allow two neurons to communicate, photons allow electronic states of atoms to communicate through photon emission and absorption. So what can we build with this?
As one example, we could build a Hopfield associative memory network. This is the same network that recently won John Hopfield the Nobel Prize in Physics, but this is a quantum version this time that can be built with atoms and photons. And we can show that the quantum dynamics endows the memory with superior capacity, robustness, and recall. We can also go beyond this to build quantum optimizers made entirely out of photons. These photonic computers solve optimization problems in interesting new ways, and we can analyze their energy landscape. So the marriage of neurons and neural algorithms with quantum hardware leads to an entirely new field that I like to call quantum neuromorphic computing. Okay, now returning to the brain: the marriage of neuroscience and AI enables a powerful new path forward by melding minds and machines, as follows. Imagine a scenario where we read lots and lots of neural activity from the brain. Then we use AI to build a model, or a digital twin, of brain circuits. Then we can do rapid in silico experiments on the digital twin and use explainable AI to understand how it works. But we don’t have to stop there; we can control the brain too. We can use control theory to learn specific neural patterns that we can write into the digital twin to control it. Then we can transfer these same neural patterns into the actual brain, to write into the brain and control the brain. In essence, we can learn the language of the brain and then speak directly back to it in its own neural language. So, as one example of this program, we recently developed the world’s most accurate digital twin of the biological retina, and we used explainable AI to understand it.
And in silico, we could reproduce two decades’ worth of experiments in a matter of days. So this shows a general path forward to dramatically accelerating neuroscience discovery using AI. We also carried out this program in mice, where we were able to use AI to read the mind of a mouse. We could look directly at neural activity in the brain of a mouse, and we could decode what it was seeing, at the lower level of resolution that mice can see. This shows that we can learn the native language of the visual brain. But we can go further than that, to write to the mind of a mouse. By writing in carefully designed neural activity patterns, we could make the mouse hallucinate a particular percept.
So in essence, we could control what the mouse saw by writing directly into the brain, using the native language of the brain itself.
We also carried out this program in epilepsy, where we built a digital twin of the epileptic brain. Our twin could reproduce actual epileptic seizure dynamics across the entire brain. We then used explainable AI to understand how these seizures were starting. Then we used control theory to be able to control the seizure amplitude in the digital twin. Then we injected these same control signals into the actual brain and controlled seizure amplitude in the actual brain. This shows how to meld brains and machines to control epilepsy. Building on all this, we’re actually creating a new startup called Metamorphic.
It will work closely with the Enigma project at Stanford University, and together, Enigma and Metamorphic will scale up the construction of digital twins to encompass the entire primate brain, starting with the visual brain. Such scaled-up digital twins offer a powerful path forward to building robust biohybrid AI systems that are taught directly by brain data and to treat brain disease in new AI-driven ways. More generally, the possibilities of melding brains and machines are limitless, both to advance AI and to understand, cure, and augment the brain. To close, what I think we really need is a unified science of intelligence that spans both brains and machines, to help us understand both biological and artificial intelligence and create more efficient, explainable, and powerful AI.
Importantly, this pursuit must be done out in the open and shared with the world, and it must be done with a long time horizon. This makes academia an ideal place to pursue a science of intelligence, and I believe it’s imperative to expand public investment in the academic study of intelligence, because the academic studies of yesterday laid the strong foundation for today’s AI technology, and it will be the academic studies of today that lay the foundation for tomorrow’s technology, enabling us to go beyond large language models and diffusion models and so forth. Despite the huge and exciting advances happening now, increasingly, unfortunately, behind closed doors at companies, I’m extremely excited about what the science of intelligence can achieve out in the open for the public benefit of all.
Thank you.
Thank you so much, Professor Ganguli.
“Speaker 1 introduced Professor Surya Ganguli, a Stanford professor, and provided only an introductory opening before Ganguli’s presentation.”
The knowledge base notes that Speaker 1 only provides an introduction while Professor Ganguli presents his research, confirming the introductory role described in the report [S1].
“The brain’s power budget is about 20 watts, whereas modern AI systems can require up to 10 million watts, a gap attributed to the use of fast, reliable digital bit‑flips which consume substantial energy.”
The source explicitly states that the brain uses ~20 W and modern AI can consume ~10 million W, and attributes the high consumption to the choice of fast, reliable digital computation [S2].
“The transcript represents a single academic presentation by Professor Surya Ganguli rather than a multi‑speaker discussion or debate.”
The knowledge base clarifies that the document is a single-speaker keynote, providing context that the report’s format is a presentation, not a multi-person dialogue [S1].
The transcript shows limited direct interaction between speakers, with the primary point of agreement centered on the value of interdisciplinary, open research linking AI, neuroscience, and physics. Professor Ganguli expands this theme across many detailed arguments, but no other participant directly echoes his specific technical claims.
Low to moderate consensus: there is clear agreement on the overarching principle of a unified, open science of intelligence, but little substantive overlap on specific technical or policy arguments. This suggests that while the summit’s framing aligns participants around interdisciplinary collaboration, detailed policy or technical consensus remains to be built.
The transcript contains an introductory remark by Speaker 1 and an extensive presentation by Professor Surya Ganguli. No opposing statements, counter-arguments, or conflicting viewpoints are presented by either speaker. Consequently, there are no identifiable disagreement points, no instances where speakers share a goal but propose different means, and no surprising areas of conflict.
Minimal – the discussion is essentially a one-sided exposition of Professor Ganguli’s perspective, with no evident contention. This implies that, for the topics covered (data efficiency, energy efficiency, neuroscience-AI integration, and calls for open, academic research), the dialogue does not reveal any internal debate that would affect consensus building or policy formulation.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.