Keynote-Sam Altman
19 Feb 2026 12:45h - 13:00h
Summary
The event featured Sam Altman, CEO of OpenAI, who was introduced as a leading figure in bringing artificial general intelligence into public discourse [2-4]. Altman highlighted rapid advances since his last visit to India, noting that AI systems have moved from struggling with high-school math to performing research-level mathematics and generating novel theoretical-physics results [10]. He emphasized India’s significant AI adoption, with over 100 million weekly ChatGPT users, a third of whom are students, and rapid growth of the Codex coding tool in the country [13-15]. Altman argued that India’s position as the world’s largest democracy makes it well-suited to both build and shape AI’s future, urging swift action [16-18].
He warned that true superintelligence could emerge within a few years, potentially concentrating most of the world’s intellectual capacity in data centres by the end of 2028, though he acknowledged uncertainty [19-22]. Altman outlined three guiding beliefs: first, that democratizing AI is essential for safety and human flourishing, while centralizing power risks ruin [24-28]. Second, AI resilience requires a broader societal safety approach beyond technical alignment, including defenses against misuse such as open-source biomodels that could create pathogens [33-41]. Third, the development of AI will be unpredictable, so many stakeholders must have a say in shaping outcomes, and iterative deployment of increasingly capable systems is a key strategic insight [42-53]. He noted that this iterative approach has so far allowed society to integrate new capabilities while preparing for future surprises [53-55].
Altman projected that AI will drive cheaper production, faster economic growth, and improved access to healthcare and education, but also disrupt many current jobs as machines outperform human effort across a wide range of tasks [56-61]. He suggested that while technology inevitably displaces work, new opportunities will arise, and it is a moral imperative to ensure future generations retain agency and fulfillment [65-71]. For a democratic AI future, Altman stressed that providing tools alone is insufficient; people need agency and collective power, and global coordination mechanisms akin to the IAEA may be required to manage AI risks [76-78]. The discussion concluded with thanks to Altman for his remarks, underscoring the significance of his vision for AI’s societal impact [79].
Keypoints
Major discussion points
– Rapid AI progress and the approaching possibility of superintelligence – Altman highlights how AI has moved from struggling with high-school math to performing research-level mathematics and even generating novel theoretical-physics results, and warns that “we may be only a couple of years away from early versions of true superintelligence” with a majority of the world’s intellectual capacity potentially residing in data centres by 2028 [10][19-21].
– Democratization of AI as a safeguard against concentration of power – He argues that “democratization of AI is the only fair and safe path forward” and that centralising the technology could lead to ruin; a democratic AI future must give people agency and power, not just tools or wealth [25-28][30-32][76-78].
– AI safety must include societal resilience, not just technical alignment – Altman stresses that “AI resilience is a core safety strategy,” calling for broader societal measures to defend against threats such as open-source biomodels that could be misused to create pathogens [33-41].
– Economic transformation and job disruption – He notes that AI will drive massive cost reductions and faster economic growth, making products cheaper and automating supply chains, while also acknowledging that many current jobs will be displaced as “it’ll be very hard to outwork a GPU” [55-61][62-68].
– Iterative deployment and global governance are essential – Altman promotes “iterative deployment” as a way for society to adapt to each new AI capability, and calls for international coordination mechanisms (e.g., an “IAEA-like” body) to manage AI’s rapid evolution and prevent power concentration [53-55][76-78].
Overall purpose / goal of the discussion
The talk aims to inform and inspire the Indian audience about the extraordinary advances in AI, while urging policymakers, industry leaders, and the public to adopt a democratic, safety-first approach. Altman seeks to rally support for responsible, inclusive AI development, highlight the economic opportunities and challenges, and call for coordinated global governance to steer AI toward a beneficial future.
Overall tone
The tone begins with enthusiasm and pride in AI’s rapid breakthroughs and India’s role [6-10][13-16]. It then shifts to a more sober, cautionary tone when addressing risks, safety, and the need for democratic safeguards [25-32][33-41]. Toward the end, it becomes persuasive and hopeful, emphasizing collective agency, iterative deployment, and the possibility of a flourishing, equitable future [53-55][76-78]. Throughout, Altman balances optimism about AI’s potential with a serious call to action on governance and safety.
Speakers
– Sam Altman
– Role/Title: CEO of OpenAI [S1]
– Area of Expertise: Artificial intelligence, artificial general intelligence development, technology leadership [S1][S3]
– Speaker 1
– Role/Title: Event moderator / host introducing the main speaker[S4]
– Area of Expertise: (not specified)
Additional speakers:
– (none identified beyond the listed speakers)
The event opened with Speaker 1 introducing Sam Altman as a pivotal figure who has brought artificial general intelligence from science-fiction speculation into mainstream discourse and launched ChatGPT [1-4].
Altman began by thanking the audience, noting that he was last in India a little over a year ago [1], and highlighted the rapid progress of AI, from systems that struggled with high-school mathematics to ones capable of research-level mathematics and novel theoretical-physics results [2-4].
He then turned to India’s AI trajectory, stating that more than 100 million Indians use ChatGPT each week, with over a third of them being students [5-7], and that India is the fastest-growing market for OpenAI’s Codex coding assistant [8-9]. He argued that, as the world’s largest democracy, India is uniquely positioned to both build AI and shape its future, urging swift action [10-13].
Altman projected that early versions of true superintelligence could appear within a few years and that, if his estimate holds, the majority of the world’s intellectual capacity might reside in data centres by the end of 2028. He acknowledged that the claim is extraordinary and could be wrong, but said it deserves serious consideration [10-13].
Altman outlined three guiding principles for OpenAI’s approach [14-20]:
1. Democratization of AI – the only fair and safe path forward; concentrating AI power in a single company or nation would be ruinous, and societies must avoid “effective totalitarianism in exchange for a cure for cancer” [14-18][19-20].
2. AI resilience – safety must extend beyond technical alignment to a societal-wide strategy, including defenses against risks such as open-source biomodels that could be misused to create pathogens [21-26].
3. Unpredictability of AI’s trajectory – many stakeholders must shape outcomes, and iterative deployment (releasing increasingly capable systems while giving society time to integrate, understand, and decide on each step) has been working surprisingly well so far [27-30].
He described the economic impact of AI as a driver of substantial cost reductions, faster growth, improved access to high-quality healthcare and education, and automation of supply chains that will make physical goods cheaper [31-34]. He warned that many current occupations will be disrupted because “it will be very hard to out-work a GPU,” yet noted humanity’s historical ability to create new, more fulfilling roles after technological upheavals [31-38].
Altman emphasized that each generation builds on the previous one, creating an ever-taller “external lattice” of tools that enables achievements unimaginable to earlier generations [39-42].
He concluded with a moral imperative: future generations must retain agency and fulfillment, which requires not only tools and wealth but also genuine power; sharing control entails accepting some failures to avoid a single catastrophic concentration of authority [43-46].
Altman called for an international coordination body, akin to the IAEA, to oversee AI safety and provide rapid response to emerging risks [47-49].
Finally, he noted that the next few years will test global society, presenting a choice between empowering people or concentrating power [50-52]. Speaker 1 thanked Altman for his compelling remarks [52].
level to change the lives of human beings. Ladies and gentlemen, few individuals have done more to bring artificial general intelligence from the realm of science fiction into boardrooms, into parliaments and living rooms than our next speaker, Sam Altman, CEO, OpenAI. Under his leadership, OpenAI launched ChatGPT and forced the world to re-evaluate its relationship with artificial intelligence. So ladies and gentlemen, please welcome CEO of OpenAI, Mr. Sam Altman.
Thank you so much. It’s really a treat to be here in India, and it’s incredible to see the country’s leadership in advanced AI. I was last here a little over a year ago, and I’m here today to talk to you about the future of AI and how much progress has happened since then. We’ve gone from AI systems that struggled with high-school-level math to systems that can do research-level mathematics now and derive novel results in theoretical physics. It’s also striking how much progress India has made in its mission to put AI to work for more people in more parts of the country.
And India’s leadership in sovereign AI, building on infrastructure, SLMs, and much more has been great to watch. More than 100 million people in India use ChatGPT every week. More than a third of them are students. India is also the fastest growing market now for Codex, our coding agent that works to help people develop software faster and better. India, the world’s largest democracy, is well positioned to lead in AI, not just to build it, but to shape it and decide what our future is going to look like. And it’s important to move quickly. On our current trajectory, we believe we may be only a couple of years away from early versions of true superintelligence. If we are right, by the end of 2028, more of the world’s intellectual capacity could reside inside of data centers than outside of them.
This is an extraordinary statement to make, and of course we could be wrong. But I think it really bears serious consideration. A superintelligence, at some point on its development curve, would be capable of doing a better job being the CEO of a major company than any executive, certainly me, or doing better research than our best scientists. As we prepare for this possibility, we are guided by three core beliefs. Number one, we believe that democratization of AI is the only fair and safe path forward. Democratization of AI is the best way to ensure that humanity flourishes. On the other hand, centralization of this technology in one company or country could lead to ruin. The desirable future a couple of decades from now has got to look like a world of liberty, democracy, widespread flourishing, and an increase in human agency.
Some people want effective totalitarianism in exchange for a cure for cancer. I don’t think we should accept that trade-off, nor do I think we need to. AI should extend individual human will. We’ll probably need superintelligence to help us figure out the new governance mechanisms to ensure that this happens fairly at scale, and to avoid problems like extremely unbalanced compute, access, or something else. Second, we believe that AI resilience is a core safety strategy. We don’t mean that this is the only safety strategy. We will continue to need to build safe systems and solve difficult technical alignment challenges. But increasingly, we need to start broadening how we think about safety to include societal resilience. No AI lives in a world where we don’t have to worry about safety.
We need to build a system where we can do that. No AI system can deliver a good future on its own. For an obvious example, there’ll be extremely capable biomodels available open source that could help people create new pathogens. We need a society-wide approach to how we’re going to defend against this. And third, the future of AI is not going to unfold exactly like anyone predicts. And we believe that many people need to have a stake in shaping the outcome. The development of AI has already held many surprises, and I assume there are bigger ones to come. We understand that with technology this powerful, people want answers. But it’s important to be humble about what we don’t know, and always remember that sometimes our best guesses are wrong.
Most of the important discoveries happen when technology and society meet, sometimes have some friction, and co-evolve. For example, we don’t yet know how to think about some superhuman problems. We don’t know how to think about superintelligence being aligned with dictators in totalitarian countries. We don’t know how to think about countries using AI to fight new kinds of war with each other. We don’t know how to think about when and whether countries are going to have to think about new forms of social contracts. But we think it’s important to have more understanding and society-wide debate before we’re all surprised. Of special note, and related to all three points, we continue to believe that iterative deployment is a key strategic insight, and that society needs to contend with and use each successive new level of AI capability, have time to integrate it, understand it, and decide how to move forward.
This has been working surprisingly well so far. If we are right, and systems continue to improve at this pace, it’s going to change the economics of a lot of things. A really great thing about AI progress is that it looks like many things are going to get much cheaper and have much faster economic growth. We’re already seeing what AI is doing for access to high-quality healthcare, education, and more. In the coming years, we expect to see robots make many products and physical goods cheaper as supply chains get automated. The limit to how far this cost reduction can go may only be government policy. But the other side of this coin is that current jobs are going to get disrupted, as AI can do more and more of the things that drive our economy today.
It’ll be very hard to outwork a GPU in many ways. It’ll be easy in some other ways. For example, we really seem hardwired to care about other people much more than we care about machines. We’re somewhat less concerned about the long-term future. Technology always disrupts jobs. We always find new and better things to do. The people of 500 years ago would have thought that our current jobs often look silly, like ways to entertain ourselves or to create stress. And the people 500 years from now will hopefully look at us like impossibly rich people playing games, trying to find ways to pass their time. But we should all hope that they feel much more fulfilled.
than we do today. I’m confident we will keep being driven to be useful to each other, to express our creativity, to gain status, to compete, and much more. But the specifics of what we do day to day will probably look very different. Each generation has built on the work of the generations before, and with new tools, the scaffolding gets a little taller. This collective external lattice, the set of tools that we have built up around ourselves, is remarkable, and we are capable of doing things that our great-great-grandparents couldn’t have dreamed possible. It is a moral imperative to make sure that our great-great-grandchildren can say the same, and technology, and especially AI, is how we’re going to get there.
For a democratic AI future, it is not enough to just give people tools and wealth. We also need to give them agency and power. The visions that AI companies lay out fundamentally reduce to either unilateral control or decentralized power. Sharing control means accepting that some things are going to go wrong, in exchange for not having one thing go mega wrong: cemented totalitarian control. This is a fundamental trade-off of democracy, and it is one that we believe in very strongly as the way to give everyone collective agency over the future. Of course, this is not to suggest that we won’t need any regulation or safeguards. We obviously do, urgently, like we have for other powerful technologies. In particular, we expect the world may need something like the IAEA for international coordination of AI, and especially for it to have the ability to rapidly respond to changing circumstances. The next few years will test global society as this technology continues to improve at a rapid pace. We can choose to either empower people or concentrate power. Thank you very much.
Thank you, Mr. Sam Altman, for your very interesting and compelling remarks.
We may be only a couple of years away from early versions of true superintelligence
Event“Speaker 1 introduced Sam Altman as a pivotal figure who has brought artificial general intelligence from science‑fiction speculation into mainstream discourse and launched ChatGPT”
The knowledge base explicitly states that Sam Altman has brought artificial general intelligence from science fiction into mainstream discussion [S1].
“More than 100 million Indians use ChatGPT each week, with over a third of them being students”
S67 reports that over 100 million people in India use ChatGPT weekly and more than a third are students, confirming the claim [S67].
“India is the fastest‑growing market for OpenAI’s Codex coding assistant”
S67 also notes that India is the fastest-growing market for Codex, corroborating the statement [S67].
“Altman outlined three guiding principles for OpenAI’s approach: democratization of AI, AI resilience, and iterative deployment to manage unpredictability”
S71 describes three guiding principles or “sutras” presented by Altman, providing additional detail on the themes of people, safety, and deployment, which aligns with but expands on the report’s summary [S71].
The transcript shows limited substantive interaction, with the only clear point of agreement being mutual recognition of the importance of AI developments. Sam Altman presents multiple arguments about AI breakthroughs, democratization, safety, and governance, while Speaker 1 provides a brief expression of appreciation.
Low consensus: agreement is confined to a general acknowledgment of the speech’s relevance, with no substantive debate or alignment on specific policy proposals. This suggests that, within this short exchange, there is minimal convergence on detailed AI governance or regulatory positions, limiting the immediate impact on broader policy discussions.
The transcript consists of an introductory welcome by Speaker 1 and an extended presentation by Sam Altman. No other speaker offered a contrasting viewpoint, and Speaker 1 only expressed gratitude after the remarks. Consequently, there are no identifiable points of disagreement, either explicit or implicit, among the participants.
Minimal – the discussion was essentially a one‑sided exposition. The lack of dissent means that the presented arguments face no immediate contestation within this session, limiting the need for negotiation or compromise on the topics addressed.
Sam Altman’s remarks introduced a series of forward-looking, high-stakes ideas: imminent superintelligence, the moral imperative of democratization, societal resilience, geopolitical uncertainty, iterative deployment, and a concrete proposal for an IAEA-like AI body. Each of these comments acted as a pivot, shifting the conversation from a celebratory overview of AI progress to a nuanced debate about safety, governance, and societal impact. By repeatedly reframing technical advances as societal challenges, Altman steered the audience toward recognizing the urgency of policy action and the need for broad, democratic participation in shaping AI’s future.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.