Keynote-Sam Altman

19 Feb 2026 12:45h - 13:00h

Session at a glance
Summary, keypoints, and speakers overview

Summary

The event featured Sam Altman, CEO of OpenAI, who was introduced as a leading figure in bringing artificial general intelligence into public discourse [2-4]. Altman highlighted rapid advances since his last visit to India, noting that AI systems have moved from struggling with high-school math to performing research-level mathematics and generating novel theoretical-physics results [10]. He emphasized India’s significant AI adoption, with over 100 million weekly ChatGPT users, a third of whom are students, and rapid growth of the Codex coding tool in the country [13-15]. Altman argued that India’s position as the world’s largest democracy makes it well-suited to both build and shape AI’s future, urging swift action [16-18].


He warned that true superintelligence could emerge within a few years, potentially concentrating most of the world’s intellectual capacity in data centres by the end of 2028, though he acknowledged uncertainty [19-22]. Altman outlined three guiding beliefs: first, that democratizing AI is essential for safety and human flourishing, while centralizing power risks ruin [24-28]. Second, AI resilience requires a broader societal safety approach beyond technical alignment, including defenses against misuse such as open-source biomodels that could create pathogens [33-41]. Third, the development of AI will be unpredictable, so many stakeholders must have a say in shaping outcomes, and iterative deployment of increasingly capable systems is a key strategic insight [42-53]. He noted that this iterative approach has so far allowed society to integrate new capabilities while preparing for future surprises [53-55].


Altman projected that AI will drive cheaper production, faster economic growth, and improved access to healthcare and education, but also disrupt many current jobs as machines outperform human effort across a wide range of tasks [56-61]. He suggested that while technology inevitably displaces work, new opportunities will arise, and it is a moral imperative to ensure future generations retain agency and fulfillment [65-71]. For a democratic AI future, Altman stressed that providing tools alone is insufficient; people need agency and collective power, and global coordination mechanisms akin to the IAEA may be required to manage AI risks [76-78]. The discussion concluded with thanks to Altman for his remarks, underscoring the significance of his vision for AI’s societal impact [79].


Keypoints


Major discussion points


Rapid AI progress and the approaching possibility of superintelligence – Altman highlights how AI has moved from struggling with high-school math to performing research-level mathematics and even generating novel theoretical-physics results, and warns that “we may be only a couple of years away from early versions of true superintelligence” with a majority of the world’s intellectual capacity potentially residing in data centres by 2028 [10][19-21].


Democratization of AI as a safeguard against concentration of power – He argues that “democratization of AI is the only fair and safe path forward” and that centralizing the technology could lead to ruin; a democratic AI future must give people agency and power, not just tools or wealth [25-28][30-32][76-78].


AI safety must include societal resilience, not just technical alignment – Altman stresses that “AI resilience is a core safety strategy,” calling for broader societal measures to defend against threats such as open-source biomodels that could be misused to create pathogens [33-41].


Economic transformation and job disruption – He notes that AI will drive massive cost reductions and faster economic growth, making products cheaper and automating supply chains, while also acknowledging that many current jobs will be displaced as “it’ll be very hard to outwork a GPU” [55-61][62-68].


Iterative deployment and global governance are essential – Altman promotes “iterative deployment” as a way for society to adapt to each new AI capability, and calls for international coordination mechanisms (e.g., an “IAEA-like” body) to manage AI’s rapid evolution and prevent power concentration [53-55][76-78].


Overall purpose / goal of the discussion


The talk aims to inform and inspire the Indian audience about the extraordinary advances in AI, while urging policymakers, industry leaders, and the public to adopt a democratic, safety-first approach. Altman seeks to rally support for responsible, inclusive AI development, highlight the economic opportunities and challenges, and call for coordinated global governance to steer AI toward a beneficial future.


Overall tone


The tone begins with enthusiasm and pride in AI’s rapid breakthroughs and India’s role [6-10][13-16]. It then shifts to a more sober, cautionary tone when addressing risks, safety, and the need for democratic safeguards [25-32][33-41]. Toward the end, it becomes persuasive and hopeful, emphasizing collective agency, iterative deployment, and the possibility of a flourishing, equitable future [53-55][76-78]. Throughout, Altman balances optimism about AI’s potential with a serious call to action on governance and safety.


Speakers

Sam Altman


– Role/Title: CEO of OpenAI [S1]


– Area of Expertise: Artificial intelligence, artificial general intelligence development, technology leadership [S1][S3]


Speaker 1


– Role/Title: Event moderator / host introducing the main speaker [S4]


– Area of Expertise: (not specified)


Additional speakers:


(none identified beyond the listed speakers)


Full session report
Comprehensive analysis and detailed insights

The event opened with Speaker 1 introducing Sam Altman as a pivotal figure who has brought artificial general intelligence from science-fiction speculation into mainstream discourse and launched ChatGPT [1-4].


Altman began by thanking the audience, noting that he was last in India a little over a year ago [1], and highlighted the rapid progress of AI, from systems that struggled with high-school mathematics to ones capable of research-level mathematics and novel theoretical-physics results [2-4].


He then turned to India’s AI trajectory, stating that more than 100 million Indians use ChatGPT each week, with over a third of them being students [5-7], and that India is the fastest-growing market for OpenAI’s Codex coding assistant [8-9]. He argued that, as the world’s largest democracy, India is uniquely positioned to both build AI and shape its future, urging swift action [10-13].


Altman projected that early versions of true superintelligence could appear within a few years and that, if his estimate holds, the majority of the world’s intellectual capacity might reside in data centres by the end of 2028. He acknowledged that the claim is extraordinary and could be wrong, but said it deserves serious consideration [10-13].


Altman outlined three guiding principles for OpenAI’s approach [14-20]:


1. Democratization of AI – the only fair and safe path forward; concentrating AI power in a single company or nation would be ruinous, and societies must avoid “effective totalitarianism in exchange for a cure for cancer” [14-18][19-20].


2. AI resilience – safety must extend beyond technical alignment to a societal-wide strategy, including defenses against risks such as open-source biomodels that could be misused to create pathogens [21-26].


3. Unpredictability of AI’s trajectory – many stakeholders must shape outcomes, and iterative deployment (releasing increasingly capable systems while giving society time to integrate, understand, and decide on each step) has been working surprisingly well so far [27-30].


He described the economic impact of AI as a driver of substantial cost reductions, faster growth, improved access to high-quality healthcare and education, and automation of supply chains that will make physical goods cheaper [31-34]. He warned that many current occupations will be disrupted because “it’ll be very hard to outwork a GPU,” yet noted humanity’s historical ability to create new, more fulfilling roles after technological upheavals [31-38].


Altman emphasized that each generation builds on the previous one, creating an ever-taller “external lattice” of tools that enables achievements unimaginable to earlier generations [39-42].


He concluded with a moral imperative: future generations must retain agency and fulfillment, which requires not only tools and wealth but also genuine power; sharing control entails accepting some failures to avoid a single catastrophic concentration of authority [43-46].


Altman called for an international coordination body, akin to the IAEA, to oversee AI safety and provide rapid response to emerging risks [47-49].


Finally, he noted that the next few years will test global society, presenting a choice between empowering people and concentrating power [50-52]. Speaker 1 thanked Altman for his compelling remarks [52].


Session transcript
Complete transcript of the session
Speaker 1

level to change the lives of human beings. Ladies and gentlemen, few individuals have done more to bring artificial general intelligence from the realm of science fiction into boardrooms, into parliaments and living rooms than our next speaker, Sam Altman, CEO, OpenAI. Under his leadership, OpenAI launched ChatGPT and forced the world to re-evaluate its relationship with artificial intelligence. So ladies and gentlemen, please welcome CEO of OpenAI, Mr. Sam Altman.

Sam Altman

Thank you so much. It’s really a treat to be here in India, and it’s incredible to see the country’s leadership in advanced AI. I was last here a little over a year ago, and it’s striking that I’m here today. I’m here to talk to you about the future of AI. I’m here to talk to you about how much progress has happened since then. We’ve gone from AI systems that struggled with high school level math to systems that can do research level mathematics now and derive novel results in theoretical physics. It’s also striking how much progress India has made in its mission to put AI to work for more people in more parts of the country.

And India’s leadership in sovereign AI, building on infrastructure, SLMs, and much more has been great to watch. More than 100 million people in India use ChatGPT every week. More than a third of them are students. India is also the fastest growing market now for Codex, our coding agent that works to help people develop software faster and better. India, the world’s largest democracy, is well positioned to lead in AI, not just to build it, but to shape it and decide what our future is going to look like. And it’s important to move quickly. On our current trajectory, we believe we may be only a couple of years away from early versions of true superintelligence. If we are right, by the end of 2028, more of the world’s intellectual capacity could reside inside of data centers than outside of them.

This is an extraordinary statement to make, and of course we could be wrong. But I think it really bears serious consideration. A superintelligence, at some point on its development curve, would be capable of doing a better job being the CEO of a major company than any executive, certainly me, or doing better research than our best scientists. As we prepare for this possibility, we are guided by three core beliefs. Number one, we believe that democratization of AI is the only fair and safe path forward. Democratization of AI is the best way to ensure that humanity flourishes. On the other hand, centralization of this technology in one company or country could lead to ruin. The desirable future a couple of decades from now has got to look like a world of liberty, democracy, widespread flourishing, and an increase in human agency.

Some people want effective totalitarianism in exchange for a cure for cancer. I don’t think we should accept that trade-off, nor do I think we need to. AI should extend individual human will. We’ll probably need superintelligence to help us figure out the new governance mechanisms to ensure that this happens fairly at scale, and to avoid problems like extremely unbalanced compute, access, or something else. Second, we believe that AI resilience is a core safety strategy. We don’t mean that this is the only safety strategy. We will continue to need to build safe systems and solve difficult technical alignment challenges. But increasingly, we need to start broadening how we think about safety to include societal resilience. No AI lives in a world where we don’t have to worry about safety.

We need to build a system where we can do that. No AI system can deliver a good future on its own. For an obvious example, there’ll be extremely capable biomodels available open source that could help people create new pathogens. We need a society-wide approach about how we’re going to defend against this. And third, the future of AI is not going to unfold exactly like anyone predicts. And we believe that many people need to have a stake in shaping the outcome. The development of AI has already held many surprises, and I assume there are bigger ones to come. We understand that with technology this powerful, people want answers. But it’s important to be humble about what we don’t know, and always remember that sometimes our best guesses are wrong.

Most of the important discoveries happen when technology and society meet, sometimes have some friction, and co-evolve. For example, we don’t yet know how to think about some superhuman problem. We don’t know how to think about superintelligence being aligned with dictators in totalitarian countries. We don’t know how to think about countries using AI to fight new kinds of war with each other. We don’t know how to think about when and whether countries are going to have to think about new forms of social contracts. But we think it’s important to have more understanding and society-wide debate before we’re all surprised. Of special note, and related to all three points, we continue to believe that iterative deployment is a key strategic insight, and that society needs to contend with and use each successive new level of AI capability, have time to integrate it, understand it, and decide how to move forward.

This has been working surprisingly well so far. If we are right, and systems continue to improve at this pace, it’s going to change the economics of a lot of things. A really great thing about AI progress is that it looks like many things are going to get much cheaper and have much faster economic growth. We’re already seeing what AI is doing for access to high-quality healthcare, education, and more. In the coming years, we expect to see robots make many products and physical goods cheaper as supply chains get automated. The limit to how far this cost reduction can go may only be government policy. But the other side of this coin is that current jobs are going to get disrupted, as AI can do more and more of the things that drive our economy today.

It’ll be very hard to outwork a GPU in many ways. It’ll be easy in some other ways. For example, we really seem hardwired to care about other people much more than we care about machines. We’re somewhat less concerned about the long-term future. Technology always disrupts jobs. We always find new and better things to do. The people of 500 years ago would have thought that our current jobs often look silly, like ways to entertain ourselves, create stress. And the people 500 years from now hopefully will look at us, hopefully look to us, like impossibly rich people playing games, trying to find ways to pass their times.

We should all hope that they feel much more fulfilled than we do today. I’m confident we will keep being driven to be useful to each other, to express our creativity, to gain status, to compete, and much more. But the specifics of what we do day to day will probably look very different. Each generation has built on the work of the generations before, and with new tools, the scaffolding gets a little taller. This collective external lattice, the set of tools that we have built up around ourselves, is remarkable, and we are capable of doing things that our great-great-grandparents couldn’t have dreamed possible. It is a moral imperative to make sure that our great-great-grandchildren can say the same, and technology, and especially AI, is how we’re going to get there.

For a democratic AI future, it is not enough to just give people tools and wealth. We also need to give them agency and power. The visions that AI companies lay out fundamentally reduce to either unilateral control or decentralized power. Sharing control means accepting that some things are going to go wrong, in exchange for not having one thing go mega wrong: cemented totalitarian control. This is a fundamental trade-off of democracy, and it is one that we believe in very strongly as the way to give everyone collective agency over the future. Of course, this is not to suggest that we won’t need any regulation or safeguards; we obviously do, urgently, like we have for other powerful technologies. In particular, we expect the world may need something like the IAEA for international coordination of AI, and especially for it to have the ability to rapidly respond to changes in circumstances. The next few years will test global society as this technology continues to improve at a rapid pace. We can choose to either empower people or concentrate power. Thank you very much.

Speaker 1

Thank you, Mr. Sam Altman, for your very interesting and compelling remarks.

Related Resources
Knowledge base sources related to the discussion topics (18)

Factual Notes
Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Speaker 1 introduced Sam Altman as a pivotal figure who has brought artificial general intelligence from science‑fiction speculation into mainstream discourse and launched ChatGPT”

The knowledge base explicitly states that Sam Altman has brought artificial general intelligence from science fiction into mainstream discussion [S1].

Confirmed (high)

“More than 100 million Indians use ChatGPT each week, with over a third of them being students”

S67 reports that over 100 million people in India use ChatGPT weekly and more than a third are students, confirming the claim [S67].

Confirmed (medium)

“India is the fastest‑growing market for OpenAI’s Codex coding assistant”

S67 also notes that India is the fastest-growing market for Codex, corroborating the statement [S67].

Additional Context (low)

“Altman outlined three guiding principles for OpenAI’s approach: democratization of AI, AI resilience, and iterative deployment to manage unpredictability”

S71 describes three guiding principles or “sutras” presented by Altman, providing additional detail on the themes of people, safety, and deployment, which aligns with but expands on the report’s summary [S71].

Additional Context (low)

“Altman highlighted India’s status as the world’s largest democracy as a unique position to shape AI’s future”

S64 and S65 discuss India’s position as the world’s largest democracy and its potential role in AI governance, adding context to the claim [S64] and [S65].

External Sources (75)
S1
Keynote-Sam Altman — -Moderator: Role/Title: Event moderator; Area of expertise: Not mentioned -Sam Altman: Role/Title: CEO of OpenAI; Area …
S2
Oversight of AI: Hearing of the US Senate Judiciary Subcommittee — “GPT-4 Is OpenAI’s Most Advanced System, Producing Safer and More Useful Responses.” OpenAI, https://openai.com/produc…
S3
The potential of AI and recent breakthroughs in technology — Sam Altman, the founder of OpenAI and chair of Oklo. Recently, he has been busy working on a very exciting cryptocurrenc…
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
Keynote interview with Sam Altman (remote) and Nick Thompson (in-person) — Sam Altman:Great question. We don’t expect that we’re near an asymptote. But, you know, this is like a debate in the wor…
S8
AI industry faces recalibration as Altman delays AGI — OpenAI CEO Sam Altman has again adjusted his timeline for achieving artificial general intelligence (AGI). After earlier f…
S9
UNSC meeting: Artificial intelligence, peace and security — Yi Zeng:My name is Yi Zeng and I would like to take this opportunity to share with distinguished representatives my pers…
S10
Multistakeholder Partnerships for Thriving AI Ecosystems — I used this one in the meantime. Thank you for the question and thank you for having me also as a representative of the …
S11
Sam Altman: AI regulations should evolve in step with tech-society co-evolution | AI For Good Global Summit 2024 — In a riveting conversation during the AI for Good Global Summit 2024, Nicholas Thompson, the CEO of The Atlantic and Sam Al…
S12
OpenAI announces major reorganisation to bolster AI safety measures — OpenAI’s AI safety leader, Aleksander Madry, is now working on a new significant research project, according to CEO Sam A…
S13
Sam Altman says US is misjudging China’s AI rise — OpenAI chief Sam Altman has warned that the US may be underestimating China’s rapid advancement in AI. Speaking to CNBC, Alt…
S14
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — It is changing how people get jobs and how they get hired for jobs. So an example of that is entrepreneurs often now are…
S15
Altman urges urgent AI regulation — OpenAI chief Sam Altman has called for urgent global regulation of AI, speaking at the AI Impact Summit in New Delhi. Addr…
S16
Chinese leading AI expert argues for AI governance by the UN — The rapid development of AI technology has outpaced existing regulatory frameworks, creating challenges in areas such as…
S17
The CEO of OpenAI advocated for worldwide regulation of AI — Sam Altman, the CEO of OpenAI, called for global regulation of AI during his visit to India. While corporations typicall…
S18
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S19
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — Despite significant progress, artificial intelligence development is still in its early stages. While agentic AI represe…
S20
Laying the foundations for AI governance — Despite AI and internet technologies being designed to decentralize power, Papandreou observes that power has actually b…
S21
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — This comment introduced a crucial tension between the massive scale of change and the need for distributed, democratic a…
S22
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — However, there is significant apprehension surrounding the perceived industrial domination in the AI policymaking proces…
S23
State of Play: AI Governance / DAVOS 2025 — Mensch advocates for a decentralized approach to AI development, where multiple actors have access to AI technology. He …
S24
From Technical Safety to Societal Impact Rethinking AI Governanc — Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate m…
S25
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — These technological disparities will coincide with massive job displacement and economic disruption across all sectors s…
S26
Building Trustworthy AI Foundations and Practical Pathways — “But similarly now, econ of maybe writing novels is gone.”[20]. “The movie industry is worried.”[21]. “That entire econo…
S27
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S28
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S29
S30
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — While disagreeing that governance is dead, Curioni acknowledges that governance and regulation must evolve significantly…
S31
Keynote-Sam Altman — -Sam Altman: Role/Title: CEO of OpenAI; Area of expertise: Artificial intelligence, artificial general intelligence deve…
S32
AI Policy Summit Opening Remarks: Discussion Report — The tone is consistently optimistic and collaborative throughout both speeches. Both speakers maintain an encouraging, f…
S33
Keynote interview with Sam Altman (remote) and Nick Thompson (in-person) — Sam Altman:Great question. We don’t expect that we’re near an asymptote. But, you know, this is like a debate in the wor…
S34
Sam Altman: AI regulations should evolve in step with tech-society co-evolution | AI For Good Global Summit 2024 — In a riveting conversation during the AI for Good Global Summit 2024, Nicholas Thompson, the CEO of The Atlantic and Sam Al…
S35
OpenAI CEO emphasises democratic control in the future of AI — Sam Altman, co-founder and CEO of OpenAI, raises a critical question: ‘Who will control the future of AI?’. He frames it a…
S36
Sam Altman says US is misjudging China’s AI rise — OpenAI chief Sam Altman has warned that the US may be underestimating China’s rapid advancement in AI. Speaking to CNBC, Alt…
S37
Altman warns of harmful AI use after model backlash — OpenAI chief executive Sam Altman has warned that many ChatGPT users are engaging with AI in self-destructive ways. His c…
S38
Keynote-Sam Altman — We may be only a couple of years away from early versions of true superintelligence
S39
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S40
9821st meeting — Secretary-General – Antonio Guterres:Mr. President, Excellencies, I thank the United States for convening the Meeting on…
S41
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — Despite significant progress, artificial intelligence development is still in its early stages. While agentic AI represe…
S42
Laying the foundations for AI governance — Despite AI and internet technologies being designed to decentralize power, Papandreou observes that power has actually b…
S43
State of Play: AI Governance / DAVOS 2025 — Mensch advocates for a decentralized approach to AI development, where multiple actors have access to AI technology. He …
S44
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Global governance of AI is a precursor for a democratic development and evolution. And we need to continue to develop an…
S45
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — However, there is significant apprehension surrounding the perceived industrial domination in the AI policymaking proces…
S46
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — This comment introduced a crucial tension between the massive scale of change and the need for distributed, democratic a…
S47
From Technical Safety to Societal Impact Rethinking AI Governanc — Safety should focus on protection of people, not just systems, requiring continuous human oversight and institutional ac…
S48
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — These technological disparities will coincide with massive job displacement and economic disruption across all sectors s…
S49
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — It is changing how people get jobs and how they get hired for jobs. So an example of that is entrepreneurs often now are…
S50
How AI Drives Innovation and Economic Growth — AI also creates a number of challenges. One of them is there will be some job losses, particularly sort of entry-level j…
S51
AI Governance Dialogue: Steering the future of AI — The discussion aims to advocate for comprehensive, inclusive AI governance that ensures the benefits of AI are shared gl…
S52
Chinese leading AI expert argues for AI governance by the UN — The rapid development of AI technology has outpaced existing regulatory frameworks, creating challenges in areas such as…
S53
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — While disagreeing that governance is dead, Curioni acknowledges that governance and regulation must evolve significantly…
S54
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S55
Global AI Governance: Reimagining IGF’s Role & Impact — This comment introduces the concept of ‘intelligent divide’ as distinct from the digital divide, recognizing that AI cre…
S56
Conversation: 01 — Artificial intelligence
S57
Sam Altman praises rapid AI adoption in India — OpenAI’s new GPT‑5 model has been unveiled, and the company offers it free to all users. Three model versions, gpt‑5, gpt…
S58
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Okay, good. Thank you. Thank you all for joining and I appreciate it. I am being pitched against my boss, so I’m going t…
S59
Sam Altman’s AI cricket post fuels India speculation — A seemingly light-hearted social media post by OpenAI CEO Sam Altman has stirred a wave of curiosity and scepticism in Ind…
S60
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sam Altman
11 arguments · 176 words per minute · 1413 words · 480 seconds
Argument 1
Breakthrough from basic to research‑level AI (Sam Altman)
EXPLANATION
Sam notes that AI has advanced from struggling with high‑school level mathematics to performing research‑level mathematics and generating novel theoretical physics results. This illustrates a rapid leap in AI capability.
EVIDENCE
He states, “We’ve gone from AI systems that struggled with high school level math to systems that can do research level mathematics now and derive novel results in theoretical physics” [10].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Altman’s keynote highlighted the rapid leap from high-school math to research-level mathematics and novel physics results, confirming the breakthrough claim [S1].
MAJOR DISCUSSION POINT
AI capability leap
AGREED WITH
Speaker 1
Argument 2
Forecast of early superintelligence and dominance of data‑center intellect by 2028 (Sam Altman)
EXPLANATION
Sam predicts that true superintelligence could appear within a few years and that by the end of 2028 most of the world’s intellectual capacity may reside inside data centres. He acknowledges uncertainty but stresses the seriousness of the projection.
EVIDENCE
He says, “We believe we may be only a couple of years away from early versions of true superintelligence” [19] and “by the end of 2028, more of the world’s intellectual capacity could reside inside of data centers than outside of them” [20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote projected that by 2028 most intellectual output could reside in data centres, and later commentary notes shifting timelines for AGI, supporting the early superintelligence forecast [S1][S8].
MAJOR DISCUSSION POINT
Timeline to superintelligence
Argument 3
Democratization as the only fair and safe path; centralization risks ruin (Sam Altman)
EXPLANATION
Sam argues that spreading AI access widely is the only equitable and safe approach, whereas concentrating AI power in a single company or country could lead to catastrophic outcomes. Democratization is presented as essential for humanity’s flourishing.
EVIDENCE
He declares, “we believe that democratization of AI is the only fair and safe path forward” [25] and adds, “On the other hand, centralization of this technology in one company or country could lead to ruin” [27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Altman’s remarks on democratization and the dangers of centralization are recorded in the keynote and reiterated in his calls for global regulation [S1][S15].
MAJOR DISCUSSION POINT
Fair distribution of AI
Argument 4
AI should augment individual will, not enable totalitarian trade‑offs (Sam Altman)
EXPLANATION
Sam emphasizes that AI must extend individual human agency, rejecting any totalitarian bargain that trades freedom for a cure. He stresses that AI should serve personal will rather than enable oppression.
EVIDENCE
He remarks, “Some people want effective totalitarianism in exchange for a cure for cancer. I don’t think we should accept that trade-off” [29-30] and follows with, “AI should extend individual human will” [31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote uses the cure-for-cancer vs totalitarianism example to illustrate that AI should extend individual will, matching this argument [S1].
MAJOR DISCUSSION POINT
AI and individual agency
Argument 5
Broad societal stake needed to shape AI outcomes (Sam Altman)
EXPLANATION
Sam states that many people need to have a stake in guiding AI’s future because its trajectory is uncertain and will produce unexpected developments. He calls for widespread understanding and debate before society is surprised.
EVIDENCE
He says, “we believe that many people need to have a stake in shaping the outcome” [42-44] and later, “it’s important to have more understanding and society-wide debate before we’re all surprised” [52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions of multistakeholder partnerships and the need for broad societal involvement appear in sources on AI ecosystems and regulation [S10][S11].
MAJOR DISCUSSION POINT
Inclusive governance of AI
Argument 6
AI resilience as a core safety strategy, including defenses against open‑source biomodel threats (Sam Altman)
EXPLANATION
Sam identifies societal resilience as a key component of AI safety, noting that beyond technical alignment we must prepare for threats such as open‑source biomodels that could be misused to create pathogens. He calls for a society‑wide defensive approach.
EVIDENCE
He asserts, “we believe that AI resilience is a core safety strategy” [33] and gives the example, “there’ll be extremely capable biomodels available open source that could help people create new pathogens. We need a society-wide approach about how we’re going to defend against this” [40-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
OpenAI’s reorganisation to strengthen safety teams and focus on resilience against misuse, including open-source models, is described in the safety announcement [S12].
MAJOR DISCUSSION POINT
Societal safety measures for AI
Argument 7
Iterative deployment lets society adapt and integrate new AI capabilities safely (Sam Altman)
EXPLANATION
Sam promotes iterative deployment as a strategic insight that allows society to engage with each new AI capability, integrate it, and decide on next steps, noting that this approach has worked well so far. It supports safe adoption of increasingly powerful systems.
EVIDENCE
He explains, “iterative deployment is a key strategic insight, and that society needs to contend with and use each successive new level of AI capability, have time to integrate it, understand it, and decide how to move forward” [53] and adds, “This has been working surprisingly well so far” [54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The iterative deployment strategy is outlined in Altman’s keynote as a key insight for safe rollout [S1][S7].
MAJOR DISCUSSION POINT
Gradual rollout of AI
Argument 8
AI drives cost reductions, faster growth, and advances in healthcare, education, and supply chains (Sam Altman)
EXPLANATION
Sam highlights that AI is making many products cheaper, accelerating economic growth, and improving access to high‑quality healthcare, education, and more efficient supply chains, with further reductions limited mainly by policy choices. This underscores AI’s broad economic benefits.
EVIDENCE
He observes, “many things are going to get much cheaper and have much faster economic growth” [56]; “We’re already seeing what AI is doing, for access to high-quality healthcare, education, and more” [57]; “we expect to see robots make many products and physical goods cheaper as supply chains get automated” [58]; and notes, “The limit to how far this cost reduction can go may only be government policy” [59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Economic benefits such as cheaper goods, faster growth, and improved healthcare and education are detailed in the keynote and AI-for-Good summit remarks [S1][S11].
MAJOR DISCUSSION POINT
Economic benefits of AI
AGREED WITH
Speaker 1
Argument 9
AI will disrupt current jobs but will create new roles; future work will look very different (Sam Altman)
EXPLANATION
Sam acknowledges that AI will displace existing jobs as machines outperform humans in many tasks, but asserts that technology historically creates new opportunities, leading to a future where work looks very different and potentially more fulfilling.
EVIDENCE
He states, “current jobs are going to get disrupted, as AI can do more and more of the things that drive our economy today” [60] and adds, “Technology always disrupts jobs. We always find new and better things to do” [65-66]; later he notes, “the specifics of what we do day to day will probably look very different” [72-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI’s impact on employment and emerging job categories are discussed in the World Economic Forum panel on AI-era jobs [S14].
MAJOR DISCUSSION POINT
Future of work
AGREED WITH
Speaker 1
Argument 10
Need for new governance mechanisms, possibly an IAEA‑style international AI body (Sam Altman)
EXPLANATION
Sam suggests that global coordination mechanisms similar to the IAEA may be required for AI to ensure rapid response to emerging risks and to manage international cooperation, indicating a need for new governance structures.
EVIDENCE
In his concluding remarks he says, “we expect the world may need something like the IAEA for international coordination of AI and especially for it to have the ability to rapidly respond to change in circumstances” [78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Altman explicitly suggested an IAEA-like international body for AI coordination in his concluding remarks and in his regulation advocacy in India [S15].
MAJOR DISCUSSION POINT
International AI governance
Argument 11
Choice between empowering people or concentrating power; urgent regulation and safeguards required (Sam Altman)
EXPLANATION
Sam warns that the coming years will test society, presenting a choice to either empower individuals with AI or allow power to concentrate, and stresses the need for urgent regulation and safeguards comparable to those for other powerful technologies.
EVIDENCE
He remarks, “the next few years will test global society as this technology continues to improve at a rapid pace; we can choose to either empower people or concentrate power,” and adds that safeguards are needed “urgently, like we have for other powerful technologies” [78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The urgency of regulation to prevent power concentration is emphasized in Altman’s statements at the AI Impact Summit and in coverage of his global regulation push [S15][S17].
MAJOR DISCUSSION POINT
Power distribution and regulation
Speaker 1
1 argument · 104 words per minute · 84 words · 48 seconds
Argument 1
Expression of gratitude for the remarks (Speaker 1)
EXPLANATION
Speaker 1 thanks Sam Altman for his interesting and compelling remarks, expressing appreciation for the presentation.
EVIDENCE
He says, “thank you mr. Sam Altman for your very interesting and compelling remarks” [79].
MAJOR DISCUSSION POINT
Appreciation
Agreements
Agreement Points
Both speakers acknowledge the significance and impact of Sam Altman’s remarks on AI progress and its societal implications.
Speakers: Speaker 1, Sam Altman
Breakthrough from basic to research‑level AI (Sam Altman)
AI drives cost reductions, faster growth, and advances in healthcare, education, and supply chains (Sam Altman)
AI will disrupt current jobs but will create new roles; future work will look very different (Sam Altman)
Speaker 1 thanks Sam Altman for his “very interesting and compelling remarks” [79], while Sam Altman emphasizes rapid AI breakthroughs, economic benefits, and future work transformations [10-16][56-59][60-66]. Both highlight the importance of the AI developments discussed.
POLICY CONTEXT (KNOWLEDGE BASE)
This shared acknowledgment mirrors the tone of the AI Policy Summit, where leaders highlighted both optimism and serious challenges in AI governance [S32], and reflects the broader discussion of AI’s societal impact presented at the AI for Good Global Summit 2024 [S34]. It also aligns with calls for democratic control of AI futures [S35].
Similar Viewpoints
Speaker 1’s expression of gratitude reflects an implicit endorsement of Sam Altman’s message about the rapid AI breakthroughs and their relevance, indicating a shared positive stance toward the presented AI advancements [79][10].
Speakers: Speaker 1, Sam Altman
Expression of gratitude for the remarks (Speaker 1)
Breakthrough from basic to research‑level AI (Sam Altman)
Unexpected Consensus
Appreciation of AI progress despite Sam Altman’s warnings about future risks
Speakers: Speaker 1, Sam Altman
Expression of gratitude for the remarks (Speaker 1)
Forecast of early superintelligence and dominance of data‑center intellect by 2028 (Sam Altman)
Choice between empowering people or concentrating power; urgent regulation and safeguards required (Sam Altman)
While Sam Altman cautions about potential superintelligence risks and the need for regulation [19-20][78], Speaker 1 nonetheless thanks him for his remarks, showing an unexpected consensus that the discussion itself is valuable regardless of the challenges highlighted [79][19-20][78].
POLICY CONTEXT (KNOWLEDGE BASE)
The appreciation of rapid AI advances is echoed in Sam Altman’s own remarks about technology-society co-evolution and economic impact [S31], while his cautions about harmful use and model backlash provide the risk perspective referenced here [S37]; together they reflect the balanced narrative promoted in policy forums such as the AI for Good Summit [S34].
Overall Assessment

The transcript shows limited substantive interaction, with the only clear point of agreement being mutual recognition of the importance of AI developments. Sam Altman presents multiple arguments about AI breakthroughs, democratization, safety, and governance, while Speaker 1 provides a brief expression of appreciation.

Low consensus: agreement is confined to a general acknowledgment of the speech’s relevance, with no substantive debate or alignment on specific policy proposals. This suggests that, within this short exchange, there is minimal convergence on detailed AI governance or regulatory positions, limiting the immediate impact on broader policy discussions.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript consists of an introductory welcome by Speaker 1 and an extended presentation by Sam Altman. No other speaker offered a contrasting viewpoint, and Speaker 1 only expressed gratitude after the remarks. Consequently, there are no identifiable points of disagreement, either explicit or implicit, among the participants.

Minimal – the discussion was essentially a one‑sided exposition. The lack of dissent means that the presented arguments face no immediate contestation within this session, limiting the need for negotiation or compromise on the topics addressed.

Takeaways
Key takeaways
AI capabilities have advanced from basic tasks to research‑level mathematics and theoretical physics within a short period.
OpenAI forecasts that early versions of superintelligence could appear within a few years, potentially making data‑center intellect surpass human intellectual capacity by 2028.
Democratization of AI is presented as the only fair and safe path, while centralization of power in a single entity or nation is seen as a risk of ruin.
AI should augment individual human will rather than enable totalitarian trade‑offs; broad societal participation is needed to shape AI outcomes.
AI resilience, including societal defenses against threats such as open‑source biomodels, is a core safety strategy alongside technical alignment work.
Iterative deployment of increasingly capable AI systems allows society time to integrate, understand, and govern new capabilities safely.
AI is expected to drive significant cost reductions, faster economic growth, and improvements in healthcare, education, and supply chains, but will also disrupt many current jobs.
Future work will look very different; new roles will emerge as older ones are automated, and society must prepare for this transition.
New governance mechanisms are required, potentially an international body modeled on the IAEA, to coordinate AI safety and rapid response to emerging risks.
The world faces a choice between empowering people with AI or concentrating power; urgent regulation and safeguards are necessary.
Resolutions and action items
Adopt an iterative deployment approach for future AI systems to give society time to adapt and govern each new capability.
Pursue the creation of an international coordination entity (e.g., an IAEA‑style AI body) to manage global AI safety and rapid response.
Prioritize policies and initiatives that promote the democratization of AI access and prevent excessive centralization of power.
Develop society‑wide resilience measures, including strategies to mitigate misuse of open‑source biomodels and other dual‑use technologies.
Unresolved issues
How to align superintelligent AI systems with democratic values in the presence of authoritarian regimes.
Specific mechanisms for preventing AI‑enabled creation of harmful biological agents.
Design of new social contracts and governance frameworks that can keep pace with rapid AI advances.
Concrete regulatory frameworks and enforcement mechanisms needed for AI safety.
Methods to address compute and access imbalances that could lead to power concentration.
Detailed plans for managing large‑scale job displacement and ensuring equitable economic transition.
Suggested compromises
Accept a trade‑off where some failures are tolerated in exchange for avoiding a single, catastrophic, totalitarian control over AI.
Share control of AI development and deployment across multiple stakeholders rather than concentrating it in one company or nation.
Balance rapid AI innovation with safety by using iterative deployment as a middle ground between full release and overly restrictive bans.
Thought Provoking Comments
We may be only a couple of years away from early versions of true superintelligence. If we are right, by the end of 2028 more of the world’s intellectual capacity could reside inside of data centers than outside of them.
This bold timeline pushes the audience to treat superintelligence as an imminent reality rather than a distant speculation, creating urgency for safety and governance discussions.
It shifts the tone from descriptive to urgent, prompting the subsequent focus on democratization, resilience, and governance as immediate priorities rather than long‑term concerns.
Speaker: Sam Altman
Democratization of AI is the only fair and safe path forward. Centralization of this technology in one company or country could lead to ruin.
It frames the distribution of AI power as a moral choice, challenging any narrative that favors concentration of AI capabilities for efficiency or national advantage.
Sets up the later arguments about societal resilience, global coordination, and the need for decentralized control, steering the conversation toward policy and ethical dimensions.
Speaker: Sam Altman
AI resilience is a core safety strategy. We need to broaden safety to include societal resilience, not just technical alignment, because open‑source biomodels could enable creation of new pathogens.
Expands the concept of AI safety beyond algorithmic alignment to encompass societal preparedness and bio‑security, introducing a novel, interdisciplinary risk vector.
Introduces a new topic—societal‑level defenses—and broadens the discussion from purely technical solutions to public‑policy and health‑security considerations.
Speaker: Sam Altman
We don’t yet know how to think about superintelligence being aligned with dictators in totalitarian countries, or how countries will use AI for new kinds of war, or new social contracts.
Raises geopolitical and ethical uncertainties that have not been widely debated, highlighting gaps in current governance frameworks.
Creates a turning point that moves the conversation from internal company strategy to global geopolitical risk, paving the way for the later proposal of an international AI regulatory body.
Speaker: Sam Altman
Iterative deployment is a key strategic insight: society needs to contend with each successive new level of AI capability, have time to integrate it, understand it, and decide how to move forward.
Proposes a concrete rollout philosophy that balances rapid innovation with societal learning, challenging the notion of a single, decisive launch.
Guides the narrative toward a pragmatic, step‑by‑step approach, influencing later remarks about policy timing and the need for ongoing public debate.
Speaker: Sam Altman
AI will make many products and physical goods cheaper, but it will also disrupt current jobs; technology always disrupts jobs, and we will find new and better things to do.
Acknowledges both the economic upside and the labor displacement risk, offering a balanced view that counters overly optimistic or dystopian extremes.
Adds nuance to the discussion, prompting listeners to consider both growth and social safety‑net implications, and reinforcing the moral imperative mentioned later.
Speaker: Sam Altman
For a democratic AI future it is not enough to just give people tools and wealth; we also need to give them agency and power.
Distinguishes between material provision and genuine empowerment, deepening the conversation about what true democratization entails.
Strengthens the argument for decentralized control and sets up the later call for an international coordination mechanism.
Speaker: Sam Altman
The world may need something like the IAEA for international coordination of AI, with the ability to rapidly respond to changing circumstances.
Provides a concrete institutional analogy, moving from abstract principles to a tangible governance proposal, which is rare in high‑level AI talks.
Serves as a culminating turning point, translating earlier concerns about centralization and geopolitical risk into a specific policy recommendation, likely shaping any subsequent policy dialogue.
Speaker: Sam Altman
Overall Assessment

Sam Altman’s remarks introduced a series of forward‑looking, high‑stakes ideas—imminent superintelligence, the moral imperative of democratization, societal resilience, geopolitical uncertainty, iterative deployment, and a concrete proposal for an IAEA‑like AI body. Each of these comments acted as a pivot, shifting the conversation from a celebratory overview of AI progress to a nuanced debate about safety, governance, and societal impact. By repeatedly reframing technical advances as societal challenges, Altman steered the audience toward recognizing the urgency of policy action and the need for broad, democratic participation in shaping AI’s future.

Follow-up Questions
How can superintelligence be aligned with dictators or totalitarian regimes?
Understanding alignment in authoritarian contexts is crucial to prevent misuse of powerful AI that could entrench oppressive power structures.
Speaker: Sam Altman
In what ways might countries employ AI for new forms of warfare, and how can this be mitigated?
AI-driven weapons could destabilize global security; research is needed to anticipate and regulate such uses.
Speaker: Sam Altman
What new forms of social contracts will be required as AI reshapes economies and societies?
Existing legal and societal frameworks may be insufficient; new contracts could ensure fairness and rights in an AI‑augmented world.
Speaker: Sam Altman
What governance mechanisms are needed to ensure AI extends individual human will rather than enabling totalitarian control?
Designing institutions that preserve democratic agency is essential to avoid concentration of AI power.
Speaker: Sam Altman
How can societies develop a wide‑scale approach to defend against malicious uses of open‑source biomodels that could create new pathogens?
Open AI tools could be weaponized for bioterrorism; coordinated safeguards are required for public health security.
Speaker: Sam Altman
What technical alignment challenges must be solved to build safe AI systems?
Ensuring AI behaves as intended is a core safety prerequisite before broader societal deployment.
Speaker: Sam Altman
How can AI safety strategies be broadened to include societal resilience, not just technical safeguards?
Societal resilience addresses systemic risks (e.g., misinformation, economic disruption) that technical fixes alone cannot solve.
Speaker: Sam Altman
What would an international coordination body for AI—analogous to the IAEA—look like, and how could it respond rapidly to changing circumstances?
A global institution could facilitate cooperation, set standards, and manage emergent risks across borders.
Speaker: Sam Altman
How can the democratization of AI be ensured to prevent centralization of power in a single company or country?
Broad access reduces the risk of monopoly control and promotes equitable benefits from AI advancements.
Speaker: Sam Altman
What are the economic impacts of AI‑driven cost reductions and job disruption, and what policies can mitigate negative effects on workers?
Understanding and managing labor market shifts is vital to maintain social stability and shared prosperity.
Speaker: Sam Altman
What governance frameworks are needed to manage iterative deployment of increasingly capable AI systems into society?
Iterative deployment requires mechanisms to assess, integrate, and regulate each new capability responsibly.
Speaker: Sam Altman

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.