Keynote: Surya Ganguli

19 Feb 2026 16:30h - 16:45h

Session at a glance

Summary

Professor Surya Ganguli from Stanford University presented his research on advancing the science and engineering of intelligence, focusing on understanding both biological and artificial intelligence systems. He highlighted a critical paradox: while AI has achieved stunning advances in the past decade, we understand almost nothing about how these systems work, and the human brain remains orders of magnitude more efficient than AI in several key areas. Ganguli’s research addresses three main challenges in intelligence: data efficiency, energy efficiency, and melding brains and machines.


Regarding data efficiency, he explained that AI systems require vastly more data than humans: AI processes 10 trillion words compared to humans’ 100 million words of language experience. His team developed the first scientific theory to explain neural scaling laws in large language models and demonstrated methods to improve learning efficiency by using non-redundant training sets and evolutionary approaches. On energy efficiency, Ganguli noted that human brains consume only 20 watts while modern AI can use 10 million watts, attributing this gap to AI’s reliance on fast, reliable digital computation versus biology’s approach of using slow, unreliable intermediate steps that arrive at correct answers just in time.


His most ambitious work involves creating “digital twins” of brain circuits using AI, which can accelerate neuroscience discovery and enable direct brain control. His team successfully demonstrated reading from and writing to mouse brains, controlling visual perception, and managing epileptic seizures through this approach. Ganguli concluded by emphasizing the need for a unified science of intelligence pursued openly in academic settings, arguing that public investment in academic research is crucial for developing the foundation of tomorrow’s AI technology beyond current large language models.


Keypoints

Major Discussion Points:


Data Efficiency Gap Between AI and Humans: AI requires vastly more data than humans (10 trillion words vs 100 million), with Professor Ganguli presenting the first theoretical explanation for neural scaling laws and demonstrating methods to improve learning efficiency through non-redundant training sets and evolutionary approaches.


Energy Efficiency Challenges in AI: Modern AI systems consume millions of times more energy than the human brain (10 million watts vs 20 watts), leading to research into biological computation methods that match computational dynamics to physical dynamics and exploration of quantum neuromorphic computing.


Brain-Machine Integration and Digital Twins: Development of AI-powered digital twins of brain circuits that can read neural activity, conduct rapid experiments, and write back to the brain in its native neural language, with applications demonstrated in retinal modeling, mouse vision control, and epilepsy treatment.


Quantum Neuromorphic Computing: A new field combining neural algorithms with quantum hardware, replacing neurons with atoms and synapses with photons to create superior memory networks and optimization systems that go beyond what biological evolution could achieve.


The Need for Open Academic Research: Advocacy for expanded public investment in academic intelligence research, emphasizing that foundational studies should be conducted openly rather than behind closed corporate doors to benefit all of humanity.


Overall Purpose:


The discussion aims to present a unified science of intelligence that bridges biological and artificial systems, addressing critical limitations in current AI (data hunger, energy consumption, lack of understanding) while demonstrating how neuroscience-AI collaboration can advance both fields and lead to practical applications in brain treatment and more efficient AI systems.


Overall Tone:


The tone is academic and technical but accessible, maintaining an enthusiastic and optimistic outlook throughout. Professor Ganguli begins with a light, humorous note about the change of pace and a joke about giving an exam, then transitions to serious scientific content while consistently emphasizing exciting possibilities and breakthroughs. The tone becomes more urgent when discussing the need for open research and public investment, but remains fundamentally hopeful about the future of intelligence research.


Speakers

Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be a moderator or host introducing the speaker)


Surya Ganguli: Role/Title: Professor of AI, Neuroscience and Physics at Stanford University, Area of expertise: AI, Neuroscience, Physics, intelligence research (both biological and artificial), quantum neuromorphic computing, digital twins of brain circuits


Additional speakers:


None identified beyond those in the speakers list.


Full session report

Professor Surya Ganguli from Stanford University delivered a comprehensive presentation on advancing the science of intelligence, opening with humor about his transition “from a world leader to a VC to now a professor” and joking about giving an exam at the end. His research sits “at one of the most intellectually fertile intersections in science today,” addressing one of the most pressing paradoxes in modern artificial intelligence: whilst AI has achieved remarkable breakthroughs over the past decade, our understanding of how these systems actually function remains alarmingly limited. His work tackles this challenge through a unified approach that bridges biological and artificial intelligence, focusing on three critical areas where current AI systems fall dramatically short of biological efficiency.


The Data Efficiency Crisis and Theoretical Breakthroughs


Ganguli began by highlighting a stark disparity in learning efficiency between humans and AI systems. Whilst humans acquire language competency through approximately 100 million words of experience, modern AI systems require 10 trillion words, an amount that would take humans 240,000 years of reading time to process. This inefficiency is captured by neural scaling laws, which show that AI error rates decrease only slowly, as a power law in the volume of training data, rather than improving rapidly.
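
To make the power-law point concrete, the sketch below plugs numbers into a generic scaling law of the form error = c · N^(−alpha). The constants c and alpha are invented for illustration and are not figures from the talk.

```python
import numpy as np

# Generic neural scaling law: error(N) = c * N**(-alpha).
# A small exponent alpha means error falls very slowly with data.
# The constants below are illustrative, not values from the talk.
c, alpha = 5.0, 0.1

for N in [1e8, 1e10, 1e12]:  # dataset sizes in words
    print(f"N = {N:.0e} words -> error ~ {c * N ** (-alpha):.3f}")

# A 10,000x increase in data (1e8 -> 1e12) cuts error only by a factor
# of 10000**0.1 ~ 2.5 under this power law, which is why power-law
# scaling is so data hungry.
```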


Despite the fundamental importance of these scaling laws, which were discovered over half a decade ago, the scientific community has lacked any theoretical framework to explain why they exist or why they exhibit such sluggish improvement rates. Ganguli’s team has addressed this critical knowledge gap by developing the first theory to predict neural scaling law behavior from first principles, in work posted “just last week.” Their theoretical predictions closely match the observed performance of modern large language models across multiple systems.


More significantly, Ganguli demonstrated that these scaling laws need not represent fundamental limits. His team showed, both in theory and in practice, that the slow power-law decline can be transformed into a much faster exponential improvement. The key insight centers on the extreme redundancy of large random datasets: once a system has processed billions of random sentences, each additional sentence provides diminishing marginal value. However, by carefully curating non-redundant training sets in which each data point contributes novel information, they achieved superior exponential scaling.
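
A minimal sketch of the curation idea follows: greedy farthest-point selection over example embeddings, so each chosen data point is maximally novel relative to everything already selected. This is a toy stand-in for non-redundant data selection, not the team’s actual algorithm.

```python
import numpy as np

def greedy_nonredundant_subset(X, k):
    """Repeatedly pick the example farthest from everything already
    chosen, so each addition is maximally novel. A toy illustration of
    non-redundant data curation, not the published method."""
    chosen = [0]                                   # arbitrary starting point
    d = np.linalg.norm(X - X[0], axis=1)           # distance to nearest chosen point
    for _ in range(k - 1):
        i = int(np.argmax(d))                      # most novel remaining point
        chosen.append(i)
        d = np.minimum(d, np.linalg.norm(X - X[i], axis=1))
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))                    # embeddings of candidate examples
print(greedy_nonredundant_subset(X, 10))
```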


In parallel research, Ganguli’s team explored whether evolutionary processes could accelerate learning. They demonstrated that robot morphologies could be evolved across generations to become progressively easier to control, with successive generations learning faster by virtue of their improved body designs. This work provided the first demonstration, in simulation, of the morphological Baldwin effect, a long-conjectured but difficult-to-test concept in evolutionary biology.
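
The loop below is a cartoon of this evolutionary setup, assuming only that body parameters are selected for how quickly they can be learned to control; the fitness function and population sizes are invented for illustration and bear no relation to the robot simulations themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

def learning_speed(morphology):
    # Hypothetical fitness: bodies nearer an "easy to control" optimum
    # learn faster. A cartoon, not the talk's robot simulations.
    return -np.sum((morphology - 1.0) ** 2)

pop = rng.normal(size=(20, 4))                     # population of body-shape parameters
for generation in range(50):
    fitness = np.array([learning_speed(m) for m in pop])
    parents = pop[np.argsort(fitness)[-10:]]       # keep the fastest learners
    pop = (parents[rng.integers(10, size=20)]      # clone parents ...
           + 0.1 * rng.normal(size=pop.shape))     # ... and mutate

print("mean fitness after evolution:",
      np.mean([learning_speed(m) for m in pop]))
```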


Energy Efficiency: Learning from Biological Computation


The energy consumption disparity between biological and artificial intelligence presents an even more dramatic challenge. Human brains operate on merely 20 watts of power, whilst modern AI systems can consume 10 million watts, a gap of roughly a factor of 500,000. Ganguli attributed this vast gap to fundamental differences in computational philosophy between digital systems and biological networks.


Digital computation relies on fast, reliable bit flips at every intermediate computational step, and the laws of thermodynamics dictate that such rapid and reliable operations necessarily consume substantial energy. Biology has evolved a radically different approach: it achieves correct final answers using the slowest and most unreliable intermediate steps possible, arriving at solutions “just in time” without expending unnecessary energy. As Ganguli described it, “Biology does not rev its engine any more than it needs to.”
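
One way to see the thermodynamic point is the Landauer bound, the textbook floor on the energy cost of an irreversible bit operation. The back-of-envelope below compares it with a rough, assumed figure for a CMOS switching event; the femtojoule number is an order-of-magnitude guess for illustration, not a figure from the talk.

```python
import math

k_B, T = 1.380649e-23, 300.0           # Boltzmann constant (J/K), room temperature (K)
landauer = k_B * T * math.log(2)       # minimum energy to erase one bit
print(f"Landauer limit: {landauer:.2e} J per bit")   # ~2.9e-21 J

cmos_per_op = 1e-15                    # assumed ~1 fJ per CMOS switching event
print(f"CMOS is roughly {cmos_per_op / landauer:.0e}x above the limit")
```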


Furthermore, biological systems co-design computation and physics far more elegantly than artificial systems. Rather than using complex, energy-hungry transistor circuits, biological systems directly harness Maxwell’s equations of electromagnetism to perform operations like addition. This represents a fundamental matching of computational dynamics to the native physics of the universe.


To bridge this energy gap, Ganguli argued for rethinking the entire technology stack, from electrons to algorithms. His team has begun this work by solving fundamental questions about the limits of speed and accuracy under energy constraints for specific computations. For sensing, a computation every cell must perform, they derived the theoretical minimum achievable error for any chemical computer and identified the family of optimal computers that achieve this limit. Remarkably, these optimal chemical computers closely resemble G-protein coupled receptors, which “hide in every single cell” and perform sensing, suggesting deep connections between optimal physical computation and evolved biological systems.


At a higher level, Ganguli’s team has leveraged new capabilities to measure both neural activity and energy consumption in the form of ATP usage, “the fundamental chemical fuel that powers all life’s processes,” across entire fly brains. This analysis revealed that brains function like sophisticated smart energy grids, predicting where and when energy will be needed and producing precisely the right amount at the optimal time and location.


Quantum Neuromorphic Computing: Transcending Biological Limitations


Whilst biological systems offer valuable lessons for energy efficiency, Ganguli emphasized that artificial systems “don’t have to be limited by evolution” and “can go beyond evolution.” A particularly ambitious line of work involves instantiating neural algorithms in quantum hardware that evolution could never discover. In this paradigm, individual neurons are replaced by individual atoms in different excited electronic states, whilst synapses are replaced by photons that enable communication between atomic states through emission and absorption processes.


This approach has enabled the construction of quantum versions of classical neural networks, such as Hopfield associative memory networks, the same type of network for which John Hopfield recently won the Nobel Prize in Physics. These quantum implementations demonstrate superior capacity, robustness, and recall compared to their classical counterparts. The team has also developed photonic computers that “solve optimization problems in interesting new ways,” creating what Ganguli terms “quantum neuromorphic computing,” an entirely new field emerging from the marriage of neural algorithms and quantum hardware.
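
For readers unfamiliar with the network type, here is a minimal classical Hopfield associative memory in a few lines; the quantum atom-photon version described in the talk is well beyond this sketch, which only shows what storing and recalling memories means in such a network.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))   # memories to store

W = (patterns.T @ patterns) / N               # Hebbian weight matrix
np.fill_diagonal(W, 0)                        # no self-connections

state = patterns[0].copy()                    # start from a corrupted memory
flip = rng.choice(N, size=15, replace=False)
state[flip] *= -1                             # flip 15% of the bits

for _ in range(10):                           # recall by iterating updates
    state = np.where(W @ state >= 0, 1, -1)

print("recovered stored pattern:", bool(np.array_equal(state, patterns[0])))
```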


Brain-Machine Integration and Digital Twins


Perhaps the most transformative aspect of Ganguli’s research involves creating AI-powered digital twins of brain circuits. This approach begins by recording extensive neural activity from biological brains, then using AI to construct detailed computational models that serve as digital twins of the original circuits. These digital twins enable rapid in silico experimentation and the application of explainable AI techniques to understand brain function.
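
As a minimal sketch of the record-then-model step, assume for simplicity that the circuit is well approximated by a linear dynamical system x_{t+1} ≈ A x_t (real digital twins use far richer nonlinear models); the twin can then be fit to the recordings by least squares. Everything below is synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                                    # number of recorded neurons
A_true = 0.95 * np.linalg.qr(rng.normal(size=(n, n)))[0]  # stable "true" dynamics

X = [rng.normal(size=n)]                                  # simulated "recordings"
for _ in range(500):
    X.append(A_true @ X[-1] + 0.05 * rng.normal(size=n))
X = np.asarray(X)

# Fit the twin: find A minimizing ||x_{t+1} - A x_t||^2 over the recording.
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_fit = B.T                                               # so x_{t+1} ~ A_fit @ x_t
print("twin fit error:", np.linalg.norm(A_fit - A_true))
```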


The process extends beyond mere observation to active control. Using control theory, researchers can identify specific neural patterns that influence the digital twin’s behavior, then transfer these same patterns to the actual brain, effectively learning to “speak” the brain’s native neural language.
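
A sketch of the control step, under the same hypothetical linear-twin assumption: given fitted dynamics x_{t+1} = A x_t + B u_t, a standard LQR controller yields stimulation patterns u_t that drive activity toward a target. Writing those patterns back into the real brain is the step the talk describes; the matrices here are random stand-ins, not fitted brain dynamics.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
n, m = 20, 4                                          # neurons, stimulation channels
A = 0.95 * np.linalg.qr(rng.normal(size=(n, n)))[0]   # twin dynamics (assumed fitted)
B = rng.normal(size=(n, m))                           # assumed stimulation coupling
Q, R = np.eye(n), 0.1 * np.eye(m)                     # penalize deviation and effort

P = solve_discrete_are(A, B, Q, R)                    # discrete-time Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # LQR feedback gain

x = rng.normal(size=n)                                # deviation from target pattern
for _ in range(50):
    u = -K @ x                                        # stimulation to write back
    x = A @ x + B @ u
print("residual deviation after control:", np.linalg.norm(x))
```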


Ganguli’s team has demonstrated this approach across multiple applications. They developed the world’s most accurate digital twin of the biological retina, using explainable AI to reproduce “two decades’ worth of experiments in a matter of days,” a dramatic acceleration of neuroscience discovery. In mouse studies, they successfully decoded visual perception directly from neural activity, then demonstrated the reverse capability by inducing specific visual experiences through carefully designed neural stimulation patterns.
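
The decoding direction can be illustrated with the simplest possible “mind reading”: a ridge-regression readout from neural responses back to stimulus features on synthetic data. The actual mouse decoding used far more sophisticated models; everything below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
stim = rng.normal(size=(400, 8))                       # stimulus features per trial
resp = (stim @ rng.normal(size=(8, 50))
        + 0.1 * rng.normal(size=(400, 50)))            # synthetic neural responses

lam = 1.0                                              # ridge penalty
W = np.linalg.solve(resp.T @ resp + lam * np.eye(50), resp.T @ stim)
decoded = resp @ W                                     # read the stimulus back out

corr = np.corrcoef(decoded.ravel(), stim.ravel())[0, 1]
print(f"decoding correlation: {corr:.2f}")
```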


The clinical applications have proven equally promising. Working with epileptic patients, the team created digital twins that could “reproduce actual epileptic seizure dynamics across the entire brain.” Using explainable AI to understand seizure initiation mechanisms, they developed control strategies that could modulate seizure amplitude in the digital twin, then successfully transferred these control signals to actual brains to manage real seizures.


Commercial Translation and Future Scaling


Building on these research successes, Ganguli announced a new startup called Metamorphic, which will work closely with Stanford University’s Enigma project. Together, these initiatives aim to scale digital twin construction to encompass entire primate brains, beginning with visual systems. Such scaled-up digital twins promise to enable robust biohybrid AI systems that learn directly from brain data whilst offering new AI-driven approaches to treating neurological diseases.


The Imperative for Open Academic Research


Ganguli concluded with passionate advocacy for maintaining intelligence research within the academic sphere, emphasizing his excitement about “what the science of intelligence can achieve out in the open.” He stressed that the pursuit of a unified science of intelligence must be conducted openly “for the public benefit of all,” with the long time horizons that academic institutions can provide.


His argument rests on historical precedent: the academic studies of previous decades laid the essential foundations for today’s AI breakthroughs, and current academic research will similarly enable tomorrow’s technological advances. Ganguli called for expanded public investment in academic intelligence research to ensure that future developments serve broad societal benefit.


The presentation ultimately advocates for moving “beyond large language models and diffusion models and so forth” through a fundamentally interdisciplinary approach that combines insights from neuroscience, physics, and computer science. Ganguli’s vision of a unified science of intelligence offers a roadmap for creating more efficient, explainable, and powerful AI systems whilst simultaneously advancing our understanding of biological intelligence and developing new treatments for neurological conditions.


This comprehensive approach, spanning from quantum hardware to neural networks and from theoretical physics to clinical applications, represents a paradigm shift in how we approach intelligence research. Rather than pursuing incremental improvements to existing AI architectures, Ganguli’s work suggests that the next major breakthroughs will emerge from fundamental scientific understanding that bridges the artificial and biological domains, conducted openly for the benefit of humanity.


Session transcript

Speaker 1

also for contributing your expertise to this summit. Ladies and gentlemen, I now take this opportunity to invite Professor Surya Ganguli, Professor of AI, Neuroscience and Physics, Stanford University. Professor Ganguli’s research sits at one of the most intellectually fertile intersections in science today, using the mathematics of physics and the insights of neuroscience to understand how intelligence, biological and artificial, actually works. His work is helping build the theoretical foundations that practice so urgently needs. Please welcome Professor Surya Ganguli from Stanford University.

Surya Ganguli

Thank you. Great, we got the slides. So we went from a world leader, to a VC, to now a professor. So we have a little bit of a change of pace. It’s going to get a little bit more technical. And because I’m a professor, there’s going to be an exam at the end. All right, so pay attention. All right, so I’m going to talk about advancing the science and engineering of intelligence. So, the last decade of AI research has led to stunning advances in the engineering of intelligence, yielding AI systems that stand poised to transform our society. Yet, alarmingly, we understand almost nothing about how they work, and we desperately need to. At the same time, our brain is the product of 500 million years of vertebrate brain evolution, and it is still orders of magnitude better than AI along several axes, and we also need to understand why.

So, I work in a unified science of intelligence across both brains and machines that seeks to both understand biological and artificial intelligence and create more efficient, explainable, and powerful AI. Today, I’ll work on understanding and improving intelligence along three lines: data efficiency, energy efficiency, and melding brains and machines. First, data efficiency. AI is vastly more data hungry than humans. We get about 100 million words of language experience; AI gets 10 trillion. It would take us 240,000 years to read everything that AI read. So why is AI so data hungry? Well, in AI, error falls off as a power law; it falls off very slowly with the amount of data. This is an example of a famous neural scaling law, which captured the imagination of industry and motivated significant societal investments in data collection, compute, and energy. But despite the importance of these neural scaling laws, discovered over half a decade ago, we lack any scientific theory for why they exist for any modern large language model, and why they are so slow. Just last week we posted the first theory to do so. From first principles, we could analytically predict the slope of these neural scaling laws, and we reconnected their shallow slope to the weak surface statistical structure of natural language itself.

The black line is our theory and the colored lines are experiments in modern LLMs. You can see there’s a good match. But can we make the scaling laws better? We actually can. We actually showed, both in theory and practice, that we can bend the slow power law down to a much faster exponential drop. The key idea is that large random data sets are extremely redundant. If you already have a billion random sentences, it’s unlikely that the next sentence is going to tell you very much that’s new. But what if you could find a non-redundant training set in which each new data point is carefully chosen to tell you something new compared to all the other data points?

We developed theory and algorithms to do just this, and that’s what got us the better exponential. In a completely different line of work, we asked if the process of evolution itself could speed up learning. And we showed it actually can. We evolved robot morphologies, shapes of bodies, from generation to generation. And we showed that successive generations could learn faster. They did so by designing the body to be easier to learn to control. This is an example of something called the morphological Baldwin effect. It’s an effect that has long been conjectured in evolutionary theory, but hard to test in the real world. We demonstrated it for the first time in our simulations. Okay, let’s go on to energy efficiency.

AI is vastly more energy hungry than humans. Our brain only spends 20 watts of power, but modern AI can consume 10 million watts. So why is AI so energy hungry? Well, the fault lies in the choice of digital computation itself, where we use very fast and reliable bit flips at every intermediate step of the computation. Now the laws of thermodynamics demand that every fast and reliable bit flip must consume a lot of energy. Biology chose a very different route. It gets the right answer just in time using the slowest, most unreliable intermediate steps possible. Biology does not rev its engine any more than it needs to. It also co-designs computation and physics much better.

For example, it directly uses Maxwell’s equations of electromagnetism to do addition, instead of using complex energy-hungry transistor circuits. So biology matches its computation directly to the native physics of the universe. So to bridge the vast energy gap between brains and machines, we need to rethink our entire technology stack, from electrons to algorithms, and optimally match computational dynamics to physical dynamics. For example, given a particular computation, what are the fundamental limits on its speed and accuracy under energy constraints? We recently solved this question for the computation of sensing, which every cell has to do. We found fundamental limits on the lowest achievable error achieved by any chemical computer whatsoever. That’s the red curve. And we also found the family of optimal computers that hug this curve.

And we showed, remarkably, that these optimal chemical computers behave a lot like something called G-protein coupled receptors, which hide in every single cell, and they do sensing. So this yields a connection between what neurons do and what optimal physical sensors would do. Popping up a level, in neuroscience, we can now measure not only neural activity, but also energy consumption in the form of ATP usage, the fundamental chemical fuel that powers all life’s processes. We can do this across the entire fly brain. So by analyzing the coupled dynamics of neural computation and energy consumption, we discovered that the brain actually works like a smart energy grid, remarkably.

The brain can predict where and when energy will be needed in the future, and it produces just the right amount of energy at just the right time, at just the right location. So in summary, we still have a lot to learn from evolution in our quest to build more energy-efficient AI, but we don’t have to be limited by evolution. We can go beyond evolution to instantiate neural algorithms in quantum hardware that evolution could not discover. For example, we can replace individual neurons with individual atoms. A neuron’s different states of firing correspond to an atom’s different excited electronic states. We can also replace individual synapses between neurons with photons, quanta of light.

Just as synapses allow two neurons to communicate, photons allow electronic states of atoms to communicate through photon emission and absorption. So what can we build with this? As one example, we could build a Hopfield associative memory network. This is the same network that recently won John Hopfield the Nobel Prize in physics. But this is a quantum version this time that can be built with atoms and photons. And we can show that the quantum dynamics endows the memory with superior capacity, robustness, and recall. We can also go beyond this to build quantum optimizers made entirely out of photons. These photonic computers solve optimization problems in interesting new ways, and we can analyze their energy landscape. So the marriage of neural algorithms with quantum hardware leads to an entirely new field that I like to call quantum neuromorphic computing.

Okay, now returning to the brain. The marriage of neuroscience and AI enables a powerful new path forward by melding minds and machines, as follows. Imagine a scenario where we read lots and lots of neural activity from the brain. Then we use AI to build a model, or a digital twin, of brain circuits. Then we can do rapid in silico experiments on the digital twin and use explainable AI to understand how it works. But we don’t have to stop there. We can control the brain too. We can use control theory to learn specific neural patterns that we can write into the digital twin to control it. Then we can transfer these same neural patterns into the actual brain to write into the brain and control the brain. In essence, we can learn the language of the brain and then speak directly back to it in its own neural language. So, as one example of this program, we recently developed the world’s most accurate digital twin of the biological retina, and we used explainable AI to understand it.

And in silico, we could reproduce two decades’ worth of experiments in a matter of days. So this shows a general path forward to dramatically accelerating neuroscience discovery using AI. We also carried out this program in mice, where we were able to use AI to read the mind of a mouse. We could look directly at neural activity in the brain of a mouse, and we could decode what it was seeing at the lower level of resolution that mice can see. This shows that we can learn the native language of the visual brain. But we can go further than that to write to the mind of a mouse. By writing in carefully designed neural activity patterns, we could make the mouse hallucinate a particular percept.

In fact, we could control the mouse brain’s soul. We could even tell it to do this. So in essence, we could control what the mouse saw by writing directly into the brain using the native language of its brain itself.

We also applied this to epilepsy. Sorry, we also carried out this program in epilepsy, where we built a digital twin of the epileptic brain. Our twin could reproduce actual epileptic seizure dynamics across the entire brain. We then used explainable AI to understand how these seizures were starting. Then we used control theory to be able to control the seizure amplitude in the digital twin. Then we injected these same control signals into the actual brain and controlled seizure amplitude in the actual brain. This shows how to meld brains and machines to control epilepsy. Building on all this, we’re actually creating a new startup called Metamorphic.

It will work closely with the Enigma project at Stanford University, and together Enigma and Metamorphic will scale up the construction of digital twins to encompass the entire primate brain, starting with the visual brain. Such scaled-up digital twins offer a powerful path forward to building robust biohybrid AI systems that are taught directly by brain data and to treat brain disease in new AI-driven ways. More generally, the possibilities of melding brains and machines are limitless, both to advance AI and to understand, cure, and augment the brain. To close, what I think we really need is a unified science of intelligence that spans both brains and machines to help us understand both biological and artificial intelligence and create more efficient, explainable, and powerful AI.

Importantly, this pursuit must be done out in the open and shared with the world, and it must be done in a way that is both biological and artificial. It must be done with a long time horizon. This makes academia an ideal place to pursue a science of intelligence, and I believe it’s imperative to expand public investment in the academic study of intelligence, because the academic studies of yesterday laid the strong foundation for today’s AI technology, and it will be the academic studies of today that lay the foundation for tomorrow’s technology, enabling us to go beyond large language models and diffusion models and so forth. Despite the huge and exciting advances happening now increasingly, unfortunately, behind closed doors at companies, I’m extremely excited about what the science of intelligence can achieve out in the open for the public benefit of all.

Thank you.

Speaker 1

Thank you so much, Professor Ganguli.


Speaker 1

Speech speed

118 words per minute

Speech length

91 words

Speech time

46 seconds

Academic expertise for the summit

Explanation

Speaker 1 stresses that bringing academic knowledge, especially from physics and neuroscience, is essential for the summit and for fostering interdisciplinary dialogue on intelligence. The speaker highlights the need for contributors to share their expertise at the event.


Evidence

“Using the mathematics of physics and the insights of neuroscience to understand how intelligence, biological and artificial intelligence, actually works.” [5]. “also for contributing your expertise to this summit.” [16].


Major discussion point

Unified Science of Intelligence (Brains & Machines) – Importance of academic expertise


Topics

Artificial intelligence | Capacity development | The enabling environment for digital development



Surya Ganguli

Speech speed

163 words per minute

Speech length

2114 words

Speech time

775 seconds

Unified science of intelligence

Explanation

Ganguli advocates for a unified science that bridges biological and artificial intelligence to produce AI that is more efficient, explainable, and powerful. This vision underpins the need for interdisciplinary research across brains and machines.


Evidence

“So, I work in a unified science of intelligence across both brains and machines that seeks to both understand biological and artificial intelligence and create more efficient, explainable, and powerful AI.” [1]. “To close, what I think we really need is a unified science of intelligence that spans both brains and machines to help us understand both biological and artificial intelligence and create more efficient, explainable, and powerful AI.” [2].


Major discussion point

Unified Science of Intelligence (Brains & Machines) – Need for unified science


Topics

Artificial intelligence | Capacity development | The enabling environment for digital development


AI data hunger and scaling laws

Explanation

Ganguli points out that AI consumes vastly more data than humans and that existing neural scaling laws are slow and lack a solid theoretical basis. He proposes a first-principles theory to predict scaling-law slopes and improve data efficiency.


Evidence

“So, AI is vastly more data hungry than humans. We get about 100 million words of language experience; AI gets 10 trillion. It would take us 240,000 years to read everything that AI read. So why is AI so data hungry? … Despite the importance of these neural scaling laws … we lack any scientific theory for why they exist … Just last week we posted the first theory to do so. From first principles, we could analytically predict the slope of these neural scaling laws, and we reconnected their shallow slope to the weak surface statistical structure of natural language itself.” [35]. “But what if you could find a non-redundant training set in which each new data point is carefully chosen to tell you something new compared to all the other data points?” [44]. “And we showed that successive generations could learn faster.” [45].


Major discussion point

Data Efficiency in AI – Data hunger and scaling theory


Topics

Artificial intelligence


Energy consumption gap

Explanation

Ganguli highlights that modern AI consumes orders of magnitude more power than the brain, largely because digital computation relies on fast, reliable bit flips that are energy-intensive. He calls for rethinking the technology stack to close this gap.


Evidence

“Our brain only spends 20 watts of power, but modern AI can consume 10 million watts.” [38]. “the fault lies in the choice of digital computation itself, where we use very fast and reliable bit flips at every intermediate step of the computation.” [51]. “Now the laws of thermodynamics demand that every fast and reliable bit flip must consume a lot of energy.” [52].


Major discussion point

Energy Efficiency and Bio‑Inspired Computation – Power gap


Topics

Environmental impacts | Artificial intelligence


Biological energy‑efficient computation

Explanation

Ganguli explains that biology achieves energy efficiency by using slow, unreliable intermediate steps and by matching computation directly to the physics of the substrate, offering a model for future AI hardware design.


Evidence

“It gets the right answer just in time using the slowest, most unreliable intermediate steps possible.” [57]. “So biology matches its computation directly to the native physics of the universe.” [58].


Major discussion point

Energy Efficiency and Bio‑Inspired Computation – Biological model


Topics

Environmental impacts | Artificial intelligence


Quantum neuromorphic computing

Explanation

Ganguli proposes replacing biological components with quantum hardware: neurons become atoms and synapses become photons, enabling quantum Hopfield memories and photonic optimizers with superior capacity, robustness, and recall.


Evidence

“For example, we can replace individual neurons with individual atoms.” [61]. “We can also replace individual synapses between neurons with photons, quanta of light.” [60]. “And we can show that the quantum dynamics endows the memory with superior capacity, robustness, and recall.” [65]. “So the marriage of neurons… and neural algorithms with quantum hardware leads to an entirely new field that I like to call quantum neuromorphic computing.” [66].


Major discussion point

Quantum Neuromorphic Computing


Topics

Artificial intelligence | The enabling environment for digital development


Digital twins for brain‑machine melding

Explanation

Ganguli describes building highly accurate digital twins of the retina, mouse visual cortex, and epileptic brain, enabling rapid in-silico experiments, decoding of neural activity, and direct control of perception and seizures.


Evidence

“Imagine a scenario where we read lots and lots of neural activity from the brain. Then we use AI to build a model, or a digital twin, of brain circuits. Then we can do rapid in silico experiments on the digital twin and use explainable AI to understand how it works … We can control the brain too. We can use control theory to learn specific neural patterns that we can write into the digital twin to control it. Then we can transfer these same neural patterns into the actual brain …” [28]. “Our twin could reproduce actual epileptic seizure dynamics across the entire brain.” [70]. “Sorry, we also carried out this program in epilepsy, where we built a digital twin of the epileptic brain.” [71]. “We could look directly at neural activity in the brain of a mouse, and we could decode what it was seeing at the lower level of resolution that mice can see.” [75].


Major discussion point

Brain‑Machine Melding via Digital Twins


Topics

Artificial intelligence | Capacity development


Reading and writing mouse perception

Explanation

Using the digital twin, Ganguli’s team could decode what a mouse sees and induce specific hallucinations by writing designed neural patterns, demonstrating precise control over perception.


Evidence

“By writing in carefully designed neural activity patterns, we could make the mouse hallucinate a particular percept.” [10]. “We could look directly at neural activity in the brain of a mouse, and we could decode what it was seeing at the lower level of resolution that mice can see.” [75]. “So in essence, we could control what the mouse saw by writing directly into the brain using the native language of its brain itself.” [77].


Major discussion point

Brain‑Machine Melding via Digital Twins – Perception control


Topics

Artificial intelligence | Capacity development


Controlling seizures with AI‑derived signals

Explanation

Ganguli’s work shows that AI can generate control signals in a digital twin of an epileptic brain and then transfer those signals to the real brain to modulate seizure amplitude.


Evidence

“Then we used control theory to be able to control the seizure amplitude in the digital twin.” [73]. “Then we injected these same control signals into the actual brain and controlled seizure amplitude in the actual brain.” [78].


Major discussion point

Brain‑Machine Melding via Digital Twins – Seizure control


Topics

Artificial intelligence | Capacity development


Open, public‑funded research

Explanation

Ganguli calls for expanding public investment and open-access research in the science of intelligence, arguing that today’s academic work underpins tomorrow’s breakthroughs beyond proprietary AI models.


Evidence

“Despite the huge and exciting advances happening now increasingly, unfortunately, behind closed doors at companies, I’m extremely excited about what the science of intelligence can achieve out in the open for the public benefit of all.” [11]. “This makes academia an ideal place to pursue a science of intelligence, and I believe it’s imperative to expand public investment in the academic study of intelligence, because the academic studies of yesterday laid the strong foundation for today’s AI technology, and it will be the academic studies of today that lay the foundation for tomorrow’s technology, enabling us to go beyond large language models and diffusion models and so forth.” [17]. “Importantly, this pursuit must be done out in the open and shared with the world, and it must be done in a way that is both biological and artificial.” [25].


Major discussion point

Advocacy for Open, Academic Research Investment


Topics

The enabling environment for digital development | Capacity development | Artificial intelligence


Agreements

Agreement points

Similar viewpoints

Unexpected consensus

Overall assessment

Summary

This transcript represents a single academic presentation by Professor Surya Ganguli rather than a multi-speaker discussion or debate. Speaker 1 only provides an introduction, while Professor Ganguli presents his research on the science of intelligence across biological and artificial systems. There are no opposing viewpoints, disagreements, or multiple perspectives presented that would allow for analysis of consensus or agreement points.


Consensus level

Not applicable – this is a monologue presentation format rather than a discussion requiring consensus analysis. The content represents one expert’s comprehensive overview of his research spanning data efficiency, energy efficiency, quantum neuromorphic computing, brain-machine interfaces, and the need for unified intelligence science. Any assessment of agreement would require multiple speakers presenting different viewpoints on these topics.


Differences

Different viewpoints

Unexpected differences

Overall assessment

Summary

No disagreements identified – this is a single-speaker academic presentation


Disagreement level

No disagreement present. This transcript contains a solo academic presentation by Professor Surya Ganguli about advancing the science and engineering of intelligence, with only brief introductory remarks by a moderator. The professor presents his research findings and proposals without any opposing viewpoints, counterarguments, or debate from other speakers. The format is that of a conference presentation rather than a discussion or debate among multiple parties with differing perspectives.


Partial agreements


Takeaways

Key takeaways

AI systems are vastly less efficient than biological intelligence in both data and energy consumption, requiring 10 trillion words vs humans’ 100 million words and consuming 10 million watts vs the brain’s 20 watts


The first scientific theory explaining neural scaling laws in large language models has been developed, showing that slow power law scaling is connected to the weak statistical structure of natural language


AI efficiency can be dramatically improved by using non-redundant training datasets and evolutionary approaches, potentially changing power law scaling to exponential improvement


Biology achieves superior energy efficiency by using the slowest, most unreliable intermediate steps possible and co-designing computation directly with physics rather than using energy-hungry digital computation


Quantum neuromorphic computing represents a new field that can surpass biological limitations by replacing neurons with atoms and synapses with photons, creating superior memory networks and optimizers


Brain-machine interfaces can create digital twins of brain circuits that enable both reading neural activity (decoding what mice see) and writing to brains (inducing specific hallucinations and controlling seizures)


A unified science of intelligence spanning both biological and artificial systems is essential for creating more efficient, explainable, and powerful AI


Academic research conducted openly with long time horizons is crucial for foundational AI advances, requiring increased public investment as today’s academic work will enable tomorrow’s breakthroughs


Resolutions and action items

Launch of new startup called Metamorphic to work with Stanford’s Enigma project to scale digital twin construction to entire primate brains, starting with the visual brain


Development of biohybrid AI systems taught directly by brain data for treating brain diseases using AI-driven approaches


Expansion of public investment in academic intelligence research to maintain open, long-term foundational work


Unresolved issues

How to bridge the vast energy gap between biological and artificial intelligence across the entire technology stack from electrons to algorithms


Scaling quantum neuromorphic computing beyond proof-of-concept demonstrations to practical applications


Ethical implications and safety considerations of brain-machine interfaces that can read and write neural patterns


Timeline and specific mechanisms for transitioning from current AI models to more biologically-inspired efficient systems


Regulatory and safety frameworks needed for brain-computer interface technologies and digital brain twins


Suggested compromises

None identified


Thought provoking comments

Despite the importance of these neural scaling laws, discovered over half a decade ago, we lack any scientific theory for why they exist for any modern large language model, and why they are so slow. Just last week we posted the first theory to do so from first principles.

Speaker

Surya Ganguli


Reason

This comment is profoundly insightful because it exposes a fundamental gap in our understanding of AI systems that have already transformed society. Despite billions invested based on neural scaling laws, we’ve been operating without theoretical foundations. Ganguli’s claim of developing the first principled theory represents a potential breakthrough in making AI more scientifically grounded rather than purely empirical.


Impact

This comment establishes the central tension of the presentation – the paradox of having powerful AI systems we don’t understand. It sets up the entire framework for why we need a ‘unified science of intelligence’ and transitions the discussion from celebrating AI achievements to examining their fundamental limitations.


Biology chose a very different route. It gets the right answer just in time using the slowest, most unreliable intermediate steps possible. Biology does not rev its engine any more than it needs to.

Speaker

Surya Ganguli


Reason

This observation fundamentally reframes how we think about computational efficiency. It challenges the tech industry’s assumption that faster and more reliable is always better, suggesting instead that biology’s ‘lazy’ approach might be superior. This insight has profound implications for rethinking our entire computational paradigm.


Impact

This comment pivots the discussion from data efficiency to energy efficiency and introduces a completely different philosophy of computation. It challenges conventional wisdom and opens up new avenues for AI development that could be orders of magnitude more efficient.


We can learn the language of the brain and then speak directly back to it in its own neural language

Speaker

Surya Ganguli


Reason

This metaphor is deeply thought-provoking because it reframes brain-computer interfaces not as crude electrical stimulation, but as sophisticated communication systems. The idea of ‘learning’ and ‘speaking’ the brain’s language suggests a level of precision and understanding that could revolutionize neuroscience and medicine.


Impact

This comment transitions the discussion into the most futuristic and potentially transformative territory – direct brain-machine communication. It elevates the conversation from theoretical improvements to practical applications that could treat disease and augment human capabilities.


This pursuit must be done out in the open and shared with the world… I believe it’s imperative to expand public investment in the academic study of intelligence, because the academic studies of yesterday laid the strong foundation for today’s AI technology

Speaker

Surya Ganguli


Reason

This comment is particularly insightful because it addresses the growing concern about AI development happening behind closed doors at private companies. Ganguli makes a compelling case for open science and public investment, connecting historical academic contributions to current AI breakthroughs and arguing for the same approach for future developments.


Impact

This closing comment shifts the discussion from technical achievements to policy and societal implications. It introduces urgency around ensuring AI development serves public benefit rather than private interests, effectively calling for a fundamental change in how we approach AI research funding and openness.


We can go beyond evolution to instantiate neural algorithms in quantum hardware that evolution could not discover… leading to an entirely new field that I like to call quantum neuromorphic computing

Speaker

Surya Ganguli


Reason

This comment is revolutionary because it suggests we can transcend biological limitations by combining insights from neuroscience with quantum physics. The coining of ‘quantum neuromorphic computing’ represents the birth of an entirely new field that could lead to computational capabilities far beyond both current AI and biological intelligence.


Impact

This comment introduces the most speculative but potentially transformative direction, showing how the marriage of multiple disciplines (neuroscience, AI, quantum physics) could create entirely new technological paradigms. It demonstrates the far-reaching implications of interdisciplinary approaches to intelligence.


Overall assessment

Professor Ganguli’s presentation fundamentally reframes the AI discussion from celebrating current achievements to exposing critical knowledge gaps and proposing radically new directions. His comments create a compelling narrative arc: first revealing our lack of theoretical understanding of current AI, then showing how biological principles could lead to vastly more efficient systems, and finally demonstrating how brain-machine interfaces could create unprecedented capabilities. The presentation’s power lies in its interdisciplinary approach, connecting physics, neuroscience, and AI to propose solutions that transcend the limitations of each field individually. His closing call for open, publicly funded research adds crucial urgency around ensuring these transformative technologies serve humanity broadly rather than narrow commercial interests. The discussion effectively argues that the next breakthrough in AI won’t come from scaling current approaches, but from fundamentally new scientific understanding that bridges biological and artificial intelligence.


Follow-up questions

How can we scale up the construction of digital twins to encompass the entire primate brain beyond just the visual brain?

Speaker

Surya Ganguli


Explanation

This represents a major technical challenge that needs to be addressed to fully realize the potential of brain-machine interfaces and digital twin technology for understanding and treating neurological conditions


What are the practical implementation challenges of quantum neuromorphic computing using atoms and photons?

Speaker

Surya Ganguli


Explanation

While the theoretical framework was presented, the practical engineering challenges of building quantum neural networks with atoms as neurons and photons as synapses require further investigation


How can we go beyond current AI models like large language models and diffusion models using insights from the unified science of intelligence?

Speaker

Surya Ganguli


Explanation

This represents a fundamental research direction for developing next-generation AI systems that could be more efficient and powerful than current approaches


What are the fundamental limits on speed and accuracy under energy constraints for computations beyond sensing?

Speaker

Surya Ganguli


Explanation

The speaker solved this for sensing computation but indicated this approach could be extended to other types of computations, which would be valuable for energy-efficient AI design


How can we develop robust biohybrid AI systems that are taught directly by brain data?

Speaker

Surya Ganguli


Explanation

This represents a new paradigm for AI development that could lead to more biologically-inspired and efficient artificial intelligence systems


What are the broader applications of brain-machine control beyond epilepsy treatment?

Speaker

Surya Ganguli


Explanation

The speaker demonstrated success with epilepsy control but suggested the approach could be applied to other brain diseases and conditions, requiring further research


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.