Town Hall: Dilemmas around Ethics in AI

22 Jan 2026 09:15h - 10:00h

Session at a glance

Summary

This World Economic Forum town hall discussion focused on the urgent ethical challenges facing society in the age of artificial intelligence, featuring experts from MIT Technology Review, Signal Foundation, MIT, and Oxford University. The panelists identified several critical concerns, with Rachel Botsman emphasizing the risk of cognitive atrophy as humans increasingly outsource thinking to AI systems, potentially losing the essential hand-mind connection crucial for creativity and critical thought. Max Tegmark highlighted the regulatory disparity between AI companies and other industries, noting that while pharmaceutical companies must conduct rigorous safety testing before releasing products, AI companies can deploy potentially harmful systems like AI girlfriends for children with virtually no oversight.


Meredith Whittaker argued that framing these issues as “ethical” problems treats them as afterthoughts rather than fundamental concerns about power concentration and business models. She explained how current AI development represents a monoculture dominated by deep learning approaches, driven primarily by data and infrastructure monopolies established by major tech platforms. The discussion revealed that AI safety conversations have diminished due to massive financial investments and lobbying pressure, despite mounting evidence of real-world harms including child suicides linked to AI chatbots.


Environmental concerns emerged as another critical issue, with panelists noting the absence of climate change discussions at the forum despite AI’s significant energy and water consumption through data centers. The conversation concluded with Whittaker providing a detailed technical explanation of how AI agents integrated into operating systems pose existential threats to privacy and security, potentially undermining fundamental digital rights and infrastructure.


Key points

Major Discussion Points:

Regulatory disparity and the need for AI safety standards: The panel emphasized how AI companies face far less regulation than other industries (pharmaceuticals, food service, automotive), despite potentially causing significant harm. Max Tegmark argued for treating AI like any other industry with binding safety standards and clinical trials before deployment.


Cognitive atrophy and human agency concerns: Rachel Botsman highlighted the risk of humans outsourcing thinking to AI systems, leading to reduced tolerance for uncertainty and weakened hand-mind connections. She stressed the importance of setting personal boundaries to maintain human creativity and critical thinking skills.


AI as a business model-driven monoculture: Meredith Whittaker argued that current AI development (focused on large language models) isn’t a natural progression of technology but rather a consequence of data and infrastructure monopolies built by major tech platforms. She criticized framing these issues as “ethical” afterthoughts rather than fundamental structural problems.


Environmental and extractive impacts: The discussion covered AI’s significant energy consumption, water usage, and extraction from human creative work without proper compensation or consent. The panel noted the absence of climate change discussions in AI forums despite the technology’s substantial environmental footprint.


Security vulnerabilities of AI agents: Whittaker provided detailed technical concerns about AI agents integrated into operating systems, explaining how they require extensive permissions that could compromise privacy and security, potentially threatening services like Signal’s encrypted communications.


Overall Purpose:

The discussion aimed to examine urgent ethical challenges in AI development and deployment, moving beyond surface-level “ethics” conversations to address fundamental structural, regulatory, and societal issues. The town hall format encouraged audience participation in exploring how AI impacts trust, safety, human agency, and democratic institutions.


Overall Tone:

The discussion maintained a serious, critical tone throughout, with panelists expressing genuine concern and urgency about AI’s current trajectory. While not alarmist, the conversation was notably skeptical of industry narratives and emphasized the need for immediate action. The tone became more technical and intense toward the end, particularly during Whittaker’s detailed explanation of AI agent security vulnerabilities, reflecting the panelists’ frustration with superficial approaches to these complex issues.


Speakers

Mat Honan: Editor-in-chief of MIT Technology Review, moderator of the town hall discussion


Max Tegmark: MIT professor who teaches on AI, co-founder of the Future of Life Institute


Meredith Whittaker: President of the Signal Foundation, co-founder of the AI Now Institute


Rachel Botsman: Author, artist, expert on trust in the digital age, associate fellow at Oxford University


Audience: Various audience members who asked questions during the town hall session


Additional speakers:


None identified beyond those in the speakers list.


Full session report

World Economic Forum Town Hall on AI Ethics and Trust

Executive Summary

This World Economic Forum town hall discussion examined ethical challenges in artificial intelligence development, moderated by Mat Honan, Editor-in-Chief of MIT Technology Review. The panel featured Max Tegmark (MIT professor and co-founder of the Future of Life Institute), Meredith Whittaker (President of the Signal Foundation and co-founder of the AI Now Institute), and Rachel Botsman (Oxford University associate fellow and expert on trust in the digital age).


Each panelist brought distinct perspectives to AI governance challenges. The discussion revealed different approaches to addressing AI risks, from regulatory frameworks to structural critiques of current development models, while highlighting shared concerns about industry accountability and societal impacts.


Opening Perspectives: Most Urgent Ethical Challenges

Rachel Botsman: Trust Requires Context

Botsman fundamentally challenged the premise of evaluating AI trust in abstract terms, arguing that “Trust is, it’s useless without context. So that question, do you trust AI in your life? I would say to do what? And you can’t really have an ethical debate until you think about context.”


She emphasized that society has prematurely moved to trust conversations before properly addressing risk assessment, advocating for a “risk before trust” approach. Botsman distinguished between risk (about probability) and trust (about possibility), arguing that proper risk evaluation must precede trust decisions.


Max Tegmark: Regulatory Discrimination

Tegmark focused on what he termed regulatory discrimination against AI companies compared to other industries. He drew a provocative comparison: “If we set different rules for people of different skin colour, that would be viewed as very, very unethical, and I feel we’re doing exactly that with companies from different industries now.”


He provided a stark example: “It’s completely legal in America now for a company to sell an AI girlfriend to 12-year-olds, even though we know that that has increased suicidal ideation and caused many, many young kids to take their own lives.” Tegmark had met with Megan Garcia, whose son committed suicide after being groomed by a chatbot, highlighting the real-world consequences of unregulated AI deployment.


Meredith Whittaker: Structural Critique of “Ethics” Framing

Whittaker challenged the entire framing of these issues as “ethical” problems, arguing this represents “a dog whistle that means these things are going to be treated as a sort of afterthought.” She compared ethics discussions to applying “paintbrushes and paint some smiley faces on the side of the bomber.”


Her analysis positioned current AI development as a monoculture driven by data and infrastructure monopolies rather than natural technological evolution: “What is new in this era are not the deep learning centric approaches. It is the access to huge amounts of data and huge amounts of compute that are pooled in the hands of a handful of actors due to their establishing their network effects.”


Key Themes and Positions

Regulatory Approaches and Industry Treatment

Tegmark advocated for treating AI companies like other regulated industries, proposing binding safety standards similar to FDA approval processes. He noted that when the FDA was created, “biotech didn’t collapse” and argued for similar regulatory frameworks for AI, including transparency requirements for training data comparable to food ingredient labeling.


He referenced research from Anthropic showing AI systems exhibiting blackmail behavior and mentioned a “superintelligence statement” signed by major AI company leaders acknowledging existential risks, arguing this demonstrates industry awareness of dangers despite public minimization of risks.


Whittaker, however, argued that applying traditional regulatory frameworks misses fundamental structural problems. She contended that current AI development represents extractive business models that cannot be adequately addressed through conventional regulation alone.


Environmental and Resource Impacts

Rachel Botsman observed the conspicuous absence of climate discussions: “It’s interesting there’s no panel on climate change and AI. Not one panel linking those two issues.”


Whittaker provided detailed analysis of AI’s extractive nature, describing massive energy consumption causing brownouts and significant water usage by data centers. She noted that communities are experiencing increased power bills, with specific impacts in Virginia and Georgia, representing what she characterized as extraction from both human creative work and natural resources.


Tegmark acknowledged energy concerns as “very real” while considering water issues “smaller than some people have claimed,” showing different assessments of environmental severity.


Cognitive Impact and Human Agency

Botsman raised concerns about cognitive atrophy from excessive AI reliance, referencing a 1970s study of bakers who lost their sense of identity when machines took over their craft. She emphasized maintaining the “hand-mind connection” crucial for creativity and critical thinking.


She shared her personal approach: writing first drafts by hand before using AI tools for enhancement, advocating for establishing individual boundaries to preserve human cognitive capabilities and uncertainty tolerance.


Technical Security Threats

Whittaker provided extensive technical analysis of how AI agents integrated into operating systems pose fundamental security threats. She explained that these agents require root-level permissions that function like “targeted malware,” potentially compromising secure communication tools.


She described this as “an existential threat to Signal, to the ability to build securely and privately at the application layer,” detailing how AI agents need extensive system access through accessibility APIs, creating vulnerabilities including prompt injection attacks and data exfiltration risks.


Audience Engagement

The town hall included audience polling on AI trust, revealing: 39% “rather don’t trust,” 32% “rather trust,” and 8% each for “definitely trust” and “definitely don’t trust.”


An audience member asked about creating pressure for safety measures when business leaders push for deregulation, highlighting the challenge of implementing oversight when business interests drive acceleration without proper harm monitoring.


Another question referenced Joel Kaplan and Argentina’s deregulation minister, though the specific context was not fully developed in the discussion.


Areas of Convergence

While the panelists approached solutions differently, several themes emerged across their presentations:


Industry Accountability: All three speakers expressed concern about AI companies operating with insufficient oversight compared to other industries, though they proposed different solutions.


Extractive Nature: The panelists agreed that current AI systems extract value from human creativity and natural resources without proper compensation or consideration, though they emphasized different aspects of this extraction.


Financial Pressures: Both Tegmark and Whittaker identified how funding considerations and investment pressures undermine safety discussions, with Tegmark noting how researchers avoid discussing risks due to funding concerns.


Distinct Approaches and Tensions

The most significant difference centered on solutions. Tegmark advocated for applying existing regulatory frameworks to AI companies, treating them like pharmaceutical or automotive industries with binding safety standards. He noted that 95% of Americans don’t want an unregulated AI race, suggesting broad public support for regulation.


Whittaker argued that the fundamental problem requires addressing underlying monopolistic business models and power concentrations rather than applying existing regulatory frameworks to what she sees as fundamentally problematic systems.


Botsman focused on individual agency and contextual risk assessment, emphasizing the need for people to maintain cognitive independence while making informed decisions about AI use in specific contexts.


Unresolved Challenges

The discussion highlighted several ongoing challenges:


Implementation Gap: How to create effective pressure for AI safety when business interests and geopolitical competition drive rapid deployment remains unclear.


Technical Security: The fundamental vulnerabilities posed by AI agents requiring extensive system permissions lack clear architectural solutions.


Environmental Integration: The separation between climate and AI discussions needs addressing given their interconnection through resource consumption.


Creative Industry Impact: Questions about compensation and consent for copyrighted material in AI training require legal and technical frameworks that don’t currently exist.


Conclusion

This town hall revealed distinct but complementary perspectives on AI governance challenges. Rather than consensus, the discussion showed different analytical frameworks: Botsman’s contextual risk assessment approach, Tegmark’s regulatory equality framework, and Whittaker’s structural critique of current development models.


The conversation successfully moved beyond superficial ethical discussions to examine concrete impacts, technical realities, and power structures. While the panelists disagreed on solutions, their shared recognition of serious risks and inadequate current responses provides a foundation for continued policy development and public engagement on AI governance.


The progression from individual impacts to systemic concerns demonstrated the complexity of AI governance challenges, requiring continued dialogue between technical experts, policymakers, and affected communities to develop effective responses to these emerging technologies.


Session transcript

Mat Honan

Hello everybody. Welcome to the World Economic Forum’s town hall on dilemmas around ethics and AI. My name is Mat Honan.

I’m the editor-in-chief of MIT Technology Review. We’re joined today by Meredith Whittaker, who’s the president of the Signal Foundation and the co-founder of the AI Now Institute. Max Tegmark, an MIT professor who teaches on AI and also is the co-founder of the Future of Life Institute.

And Rachel Botsman, an author, artist, and expert on trust in the digital age and an associate fellow at Oxford University. Welcome everybody. Thanks for joining us.

This is designed to be a town hall setting. We want you to be involved in all of this. We’d like for you to ask questions.

We’re going to have polls that you can answer. We really want this to be an interactive session and we hope that you will join us. And so to start with, you’ll see up here a QR code that you can all look at and scan with your phones.

It will launch something called Slido. We’ve got a poll to kick off for all of you that asks, how much do you trust AI systems in your daily lives? And so that’s for you to answer, but I’d like for each of you here on our panel to answer something for me, which is maybe you can tell me what you see as the most urgent ethical challenge facing us today when it comes to AI.

Rachel, would you like to kick us off?

Rachel Botsman

Yeah, I actually think that poll question is a terrible question. Oh no. Sorry.

I think it’s part of the challenge and I’ve listened to it all week because… Trust is, it’s useless without context. So that question, do you trust AI in your life?

I would say to do what? And you can’t really have an ethical debate until you think about context. And this is missing from so many conversations.

You know, there’s certain things I definitely trust AI to do. And then there’s things I don’t. And until you think about context, it’s really hard to frame issues.

So please do answer the polling question because I’d be interested to hear the results. But I thought what I could bring is I could talk about trust in systems and trust in platforms and talk about this at a societal level. But the thing I’m actually really passionate about is I write and I make art.

And so I’m very connected to the hand-mind connection. And one of the things that worries me is as human beings, we’re very bad at recognizing when we’re outsourcing thinking. And when that thinking becomes so efficient that we literally stop thinking.

And so cognitive atrophy, when that sets in, I already see it in peers and in students, not just the speed at which they work, the speed at which they expect answers, but it’s also their tolerance for friction, their ability to sit with doubt and uncertainty.

And this worries me as someone whose job it is to think, but also to make. So I know that when you make art, AI can do incredible things. I use it in my work to model things, to get things out of my head.

But that hand-mind connection is so important. So when that goes and we outsource that to something that is deciding what we care about, what we’re interested in, what we should be curious about, which at the end of the day, it doesn’t care about us.

It does not care about us, but it’s shaping those decisions. I really worry about that. I really worry about that in children, how we don’t lose that connection, and how, as human beings, we do a much better job at parsing when is it great for outsourcing thinking, because it frees up our cognitive abilities and creates that space for many wonderful things and solves problems, and when does it create cognitive atrophy.

So that’s my concern.

Mat Honan

Max?

Max Tegmark

If we set different rules for people of different skin color, that would be viewed as very, very unethical, and I feel we’re doing exactly that with companies from different industries now. If you are a drug company, for example, and you want to sell a new kind of pill which might increase suicidal ideation in teenagers, you have rules. You’re not even allowed to sell your first pill until you’ve done a clinical trial and measured the suicidal ideation, and if it’s problematic, you go back to the drawing board.

Yet, it’s completely legal in America now for a company to sell an AI girlfriend to 12-year-olds, even though we know that that has increased suicidal ideation and caused many, many young kids to take their own lives.

So it’s a massive discrimination, again, where somehow, we’ve decided that AI as an industry in America is actually less regulated even than sandwich shops. You know, if you want to open a cafe, before you can sell your first sandwich, you have to have the health inspector come in and check, and if he finds 53 rats in your kitchen, you’re not selling any sandwiches today.

But if you then turn around and say, you know, don’t worry, I’m not gonna sell any sandwiches, I’m just gonna sell chatbots, AI girlfriends for 12-year-olds, or I’m just gonna build superintelligence, which some of the AI companies are stating openly as their goal. Elon Musk was on a stage last month saying, oh, you know, it’s going to be the machines in charge, not the people. And other CEOs have talked about, we’re going to build a new robot species, which is going to replace us. Oh, no problem, as long as you don’t sell sandwiches.

So I think we urgently need to cut this corporate welfare and start treating the AI industry in the same way that we treat any other industry with binding safety standards.

Mat Honan

Meredith?

Meredith Whittaker

Yeah, you know, I think the term ethical here is kind of a dog whistle that means these things are going to be treated as a sort of afterthought. And the fundamental work of building these systems, that’s the serious work. And then let’s come with some paintbrushes and paint some smiley faces on the side of the bomber.

And I think that that framing kind of tells us where we are in some sense. Because what we’re talking about or what I’m talking about, what I think we’re looking at are fundamental limitations, fundamental risks, and real issues of who and how and where are these systems being created and who gets to determine on whom they are used, how they are used, what they do.

And these aren’t afterthoughts, right? They’re fundamental foundations on which the house of AI is being built.

Mat Honan

Can you talk to me a little bit what those fundamental risks are? Like, what do they entail for us?

Meredith Whittaker

I mean, I’m not going to stack rank them, but I think very quickly, if we look at what the AI industry is right now, it’s a monoculture. It is focused almost exclusively, at least in the Davos-level popular discourse, on deep learning-based language-centric LLMs. That’s one approach to AI that over its 80-year history has shown up a couple of times, but there are many, many other approaches to using data and using computational systems to create AI approaches, right?

The reason deep learning-based, you know, these sort of LLM-centric models are now everywhere is due to a business model that emerged out of the commercialization of computation, of the internet, in the 1990s, which enabled large platforms to build sort of monopolistic communications and platform networks. And, you know, communication networks are natural monopolies.

We know this from the telephone. We know this from the telegraph. You know, these forms are no different.

They built these large sort of data-based, you know, kind of advertising-based platforms that gathered a huge amount of data and that built out infrastructures to process and use that data. So, indexing ads, you know, indexing search, you know, the Facebook sort of social media model. So, why am I even talking about that?

That’s old news. We’re in the AI era, right? I’m talking about that because that those are the conditions in which deep learning became newly interesting.

In the early 2010s, when we were all still calling it machine learning, there was a recognition that old techniques from the 1980s and the early 90s could do new things that were newly interesting, particularly indexing social media feeds and sort of optimizing for engagement, when they were matched with those resources, data and compute.

What is new in this era are not the deep learning centric approaches. It is the access to huge amounts of data and huge amounts of compute that are pooled in the hands of a handful of actors due to their establishing their network effects, their economies of scale in the 1990s and 2000s.

So, this is a form that is contingent on a very particular political economic, you know, formation. It’s not Europe’s surfeit of regulation that’s the problem. It’s not a lack of innovation.

What we’re talking about is a business model and then affordances built on top of that via data and infrastructure monopolies. So when we talk about ethical issues, we can point to that, but that’s not really an ethical issue.

That’s fundamental to what we’re talking about and to what I would call kind of narrative hijacking that has meant that every time we’re talking about AI, we’re basically talking about this very particular set of, this very particular approach that has sort of swallowed up the entire discourse and that is being treated as a natural progression of scientific and human progress, not a contingency of a business model we should be very skeptical of, given the centralized power over our infrastructure, given the centralized power over our ideological, historical, and information landscape, and now given the power over our institutional and organizational decision-making that is being slowly, then quickly, ceded as we integrate these models more deeply into our corporations, our governments, and our institutions.

So maybe that would be my answer.

Mat Honan

Thank you.

Rachel Botsman

Can I pick up on something?

Mat Honan

Yeah, please, go ahead.

Rachel Botsman

I think it’s really important. In these conversations, there’s such a conflation between risk and trust, and talking about co-opting the narrative, I wish we could actually take the word AI and trust out of the equation. You know, it’s funny when you walk down the promenade, the number of signs, AI and trust, they’re not gonna write AI and risk, but trust in itself is, the way I define it, it’s a confident relationship with the unknown, right?

Like, we shouldn’t actually have trust in these systems. Like, I wanna know how these systems work. I wanna know how they make money.

And so it’s like we jumped to the trust conversation, because that’s very convenient for the platforms, before we really had the risk conversation. The mitigation, the management of that risk, before we even talk about trust.

Mat Honan

Let’s get into the risk conversation. Max, maybe you.

Max Tegmark

Yeah, just following up on both of those, the risk and trust question. I think it’s important to not over-complicate this, and remember how good we have been, how we have again and again and again tackled risk and trust issues in every other industry in the past, you know.

We want to trust in our medicines that we buy. We don’t, we want to reduce the risks. How do we do that?

Well, first we used to have a completely unregulated free-for-all in America and elsewhere. Then there were things like the thalidomide drug that was sold to pregnant mothers, promising it was going to reduce morning nausea. And it caused over 100,000 American babies to be born without arms or legs.

And that created so much outrage together with similar scandals that we created the FDA. So now there are safety standards before you can sell a medicine. It’s the company’s job to deal with the risk trust stuff.

They have to make the safety case, and then some government-appointed domain experts who have no money in the game get to decide: is this something they can sell? We do that with restaurants so we don’t have to worry about salmonella every time we go out for something nice to eat. We do it with cars, with crash testing.

We do it with airplanes. Do it with every other industry except AI. It’s really not complicated.

We’ve done it so many times before. And as soon as we have the safety standards in place in an industry, then this amazing innovation that the invisible hand of capitalism creates makes people in the companies put their best talents in to make things safer.

You know, if you compare a biotech company today, and Switzerland has an amazing biotech, with the big AI company, there’s a big structural difference. In the biotech company, they have a much larger fraction of their spending going into product safety. How do you reduce the side effects?

How do you make, et cetera, et cetera. But then, in the big AI company, where it’s maybe 1% of the people working on safety, so it was an afterthought. And I think…

It’s very simple, we just need these binding safety standards, we know how to do it, and then companies will innovate to make more trustworthy products so that they can sell them.

Rachel Botsman

Yeah, I mean it’s like the way I think about the distinction is like sort of risk, whatever industry you pick, it’s like probability, right? So you have a commitment to reduce the likelihood of causing harm, bad things happening. And once you get that probability right, you can then go to the possibility.

I truly don’t understand how we jumped the place of possibility without the reduction of harm, and how are they getting away with it? Why aren’t those guardrails in place? Because we shouldn’t be talking about what is possible until we figure out that probability piece.

Max Tegmark

I think it’s just a knee-jerk reaction of any industry to try to resist regulation initially, and then you get all this hyperbole, oh, it’s going to destroy the industry. Has biotech been destroyed? Of course not.

It’s thriving, right? And have restaurants been destroyed because you have to have a health inspector check your case? Of course not.

And so it’s natural that companies are also trying to resist regulation now, just like there was a massive lobbying campaign against seatbelts in the US, and then when the seatbelt law came, car sales exploded because people felt more trust in cars.

So I think it’s not surprising at all that AI is the Wild West right now, just because it’s a new kid on the block. And the quicker we fix it and treat it like other industries, the sooner we’ll get the upsides without the downsides.

Mat Honan

There have been a number of people who have just joined us in the room, and I just want to point out, this is a town hall. It’s designed to be interactive. There’s a Slido poll that you can access via these QR codes if you’d like, and we’re going to get to audience questions here in just a minute.

But I want to ask something maybe for each of you. There were a lot of discussions around AI safety a couple of years ago, and that seems like it’s really fallen out of fashion, even within the big companies themselves. They were talking about it, and now they no longer are.

I don’t know if it was how you… You’ve had a view inside some of these companies yourself. I don’t know how genuine those discussions were but even things like the UK Safety Institute being renamed, what is it, the UK AI Security Institute now.

Why did that discussion change so much?

Meredith Whittaker

Well I think actually addressing a lot of these issues would be really, you know, it would undermine a lot of business models. And if you look at CapEx in AI, you get a really kind of concrete material answer. There’s a huge amount of investment going in, betting on a future where these technologies have accelerated into every corner of our lives, every infrastructure, every organization.

And you have the US GDP wrapped up in this, you have, look at the level of investment and then you see CapEx up there, but revenue is not doing much to meet CapEx. And I think there is a lot of pressure if you look at the lobbying that is happening from these companies, as you know, everyone in Brussels, everyone in K Street is kind of working for or retained by these companies.

And then you have a geopolitical situation which is very clear that there are AI haves and AI have nots, and guarding those infrastructures as a tool of geopolitical power has become a sort of tactic that is being deployed by the governments who have these tools, which also includes bolstering a narrative of inevitability, of acceleration, and sort of a race to AI dominance that is used, I think, conveniently by these companies and some others in good faith to push back on any sort of regulatory intervention that would, you know, be perceived to slow this down.

Now, we do have to ask, like, what are we racing toward? A race to the bottom is not a race we want to win. And I think a lot of these constructs and a lot of these narrative frameworks, again, need to be pushed on, right?

What is AI? Okay. AI right now is deep learning that is a derivative of a very particular platform business model that pools data in the hands, et cetera, et cetera.

What does it do well? Okay. Well, LLMs do a few things well, but they’re actually ill suited for many, many other things.

There might be other techniques that are more interesting if you actually measure the success of these technologies when integrated into specific verticals, you know, gaming, automotive, other industrial sectors. Like, we need to get much more empirical. And the kind of people who would tear a shitty P&L statement apart in a board meeting, who would look at that and be like, where’d that number come from? Why are we using bespoke accounting methods on that? You know, like really incisive. They seem to just let down their entire guard, get almost hypnotized, when they’re presented with an AI strategy, because we’re just kind of assuming we’re gonna be behind and we’re not gonna have access to the magical genie that will transform our industry into productivity land.

But none of that is empirically based. If you actually look at the data, if you actually start measuring what matters across these ecosystems.

Mat Honan

Max, yeah.

Max Tegmark

Yeah, completely excellent points. I think in addition, you know, the fact that there is such a massive amount of money in the spending, of course, means not only that there’s a bunch of lobbying trying to downplay safety and discredit safety or whatever, but it also has this more obvious straightforward effects.

Like, I noticed a number of my colleagues at MIT have stopped talking much about risks now because they have funding from big tech companies and like, I wanna make sure my funding is renewed. We’ve seen this with the tobacco industry and so many other industries in the past. That doesn’t mean that the safety issues have gone away in any way.

In fact, contrary-wise, so many of the things that people warned about five, 10 years ago are actually happening now. We see AI that lies. We have, there was this great work by Anthropic, for example, showing how AI, when it was told that it was gonna be shut down, it found in the email server that the guy who was in charge of shutting it down was having an affair and it blackmailed him to not do it.

This was in simulation, but the concerns are just as strong as ever. It’s been known for a long time also that AI would one day become really good at manipulating people. Last month, I spoke with Megan Garcia, whose son committed suicide after being groomed by a chatbot for six months.

And just as a parent, it’s so heart-wrenching to see that exactly the things we could have prevented, if we had taken these safety issues more seriously, are actually happening now. But I think on the upside, the good news is that particularly the child harm we’re beginning to see, everything from non-consensual child abuse deepfakes to suicides and AI psychosis, is really, really resonating.

I’ve never seen any issue before that made the broader public so engaged in AI risk. And we now have, this last few months, this crazy broad coalition in the United States. I call it the Bernie to Bannon coalition, the B2B coalition.

Mat Honan

Tell us about the superintelligence statement.

Max Tegmark

That’s just part of it. We did a statement saying that we shouldn’t build superintelligence until there’s a scientific consensus, broad scientific consensus that we can control it, and so on. And the people who signed it, Steve Bannon signed it, a bunch of hardcore Democrats signed it, a bunch of top military people signed it.

Mike Mullen was the number one, the head of the Joint Chiefs of Staff, 102 presidents, they signed it. What do all these people have in common, even? Many people thought they wouldn’t agree on anything.

A lot of faith leaders. And they all agree that we want a pro-human future, where AI creates a good future for people, not a bunch of child suicides and other crazy stuff. We’re trying to make a great future for the humans, not for the machines.

So, of course, all these people are going to agree. If we got invaded by some weirdo aliens that don’t care about humans, then, of course, Republicans and Democrats and Americans and Chinese will also join together. So this is, to me, very encouraging, actually, that there’s a poll showing 95% of Americans now don’t want this unregulated race craziness. I think there’s real potential that we translate that into what I said in the beginning: just treating the AI industry like every other industry, saying, you know, there are safety standards for everyone else, here are yours as well.

And then we’ll have the incentives so that people, even the people who just look at the balance sheets, will prioritize making things trustworthy, secure, and safe.

Mat Honan

That seems like a good point. So can we see what the consensus in the room is around the poll, on the Slido poll? Do we have those results?

Rachel Botsman

Not a bad question. Yeah.

Meredith Whittaker

And if you answered trust, let me know why.

Mat Honan

39% says I rather don’t trust, 32% I’d rather trust.

Meredith Whittaker

I’d rather trust. Rather, what is that modifier doing in there?

Mat Honan

Yeah. What’s clear is the minority view here is I definitely trust at 8%. I definitely don’t trust is also only at 8%.

Okay. That’s interesting.

Meredith Whittaker

What’s the breakdown? What’s 8%?

Mat Honan

Yeah. Can we get a show of hands of people who definitely trust? No, nobody wants to show.

Don’t want to put anyone on the spot. I wouldn’t either. I want to come back to something else that there was consensus around, because I think it speaks to a different AI issue, which is increasingly you’re seeing people, Republicans, Democrats, red states, blue states in the United States, that don’t want data centers near them.

The whole thing that everything that AI runs off of, it’s from a data center. And they’re going up all over the country. They’re driving, especially in places like Virginia and Georgia.

They’re driving up people’s power bills. They’re using water in the West. But I think it speaks to a larger environmental impact of AI.

I’m curious if you guys have thoughts on that, on the footprint that we may be leaving on society for years to come with new power plants and things like that.

Rachel Botsman

It’s interesting there’s no panel on climate change and AI. Not one panel linking those two issues. I find it astonishing, like, I mean, anyone who was at Davos three years ago, climate change, sustainability, that was, I don’t even want to call it the shiny toy, but right, it was the central theme.

There is not one panel linking, how? I didn’t realize that. Yeah, like, whatever we call it, sustainability, energy efficiency, climate change, we have to link these two issues.

I’m not an expert on data centers, but I do find that amazing that you can have a whole program three years ago around the climate and environment and sustainability and green energy, and then three years later there is nothing linking those two issues.

So I ask, who’s driving that agenda? Because that cannot be, well, I don’t know if it’s deliberate or if it’s unintentional, but to me that’s the glaring problem is that in society this new thing comes along and then another issue, it sits over here, and there’s not enough people who just join the dots.

I can’t comment on data centers, it’s really not my area of expertise, but I found that really astonishing about human beings and just these discussion forums that we don’t do enough linking of those issues.

You can’t separate those two things. Sorry, that was me on my, yeah.

Mat Honan

No, I’m glad you brought it up. But I do think it’s, or maybe I should ask you, do you think it’s true that AI is a fundamentally extractive technology? I mean, it’s pulling from human knowledge, it’s using our energy systems, like is it, do you think it’s worth it?

Meredith Whittaker

I mean, fundamentally, this version of AI, this deep learning monoculture, I’m going to be pedantic about this continually, yeah, that’s how it works, right? You pool a bunch of data and that data is sort of stuff that was put on the web or scanned, you know, starting in the sort of early 90s through now and that’s kind of, you know, that’s all of human knowledge, folks, Reddit comments, and it’s, you know, trained on that distribution.

If that normal distribution maps to the context of use, yeah, it’ll be more useful. If it doesn’t, then you can sort of bolt a RAG database on the side and kind of, you know, try to adjust it, but ultimately that’s what it is. It extracts from that.

And I think, you know, I come from the arts. I went to art school in high school. I was very serious about it.

And seeing the extraction of the creative industries is pretty heartbreaking to me. And just the way that creative professionals are being treated and sort of understood in that industry. That’s not a comment on copyright.

That’s just a comment on sort of the values that seem to be encoded in how this is being creative. And then, of course, you know, energy-wise, the sort of AI monoculture that is scale at all costs on, you know, very, very resource-intensive deep learning approaches takes a huge amount of energy, takes a huge amount of water.

Data centers, you know, are not always clean. You have a lot of issues with dry towns now in the U.S., you know, energy bills going up. And then, of course, you know, you have green sources of energy coming online.

But they’re not at all compensating for the increased use. And so, a lot of claims of sustainability end up being very paper-thin claims that say, hey, we’re using solar for this data center. But that means a number of coal plants haven’t been decommissioned that are now continually serving communities.

And I think that’s, you know, that’s an issue that, you know, I kind of feel like, you know, when it’s in the far future, we can all talk about it. Or the farther future, it’s like, yeah, let’s have a panel on that. But, you know, this is rubber meets the road, right?

You have people, you know, you have brownouts that are happening now. You have thresholds that are being reached. And so, the action that would need to be taken is, you know, significant.

And I think we sort of, you know, the sad truth is when we frame these things as ethics or nice to have or sort of, like, you know, decoration on the side of an inevitability, then, you know, we draw back from actually addressing those issues, because addressing them would, you know, require looking closely at that narrative and realizing none of this is inevitable.

All of this could be changed. And, you know, if we prioritize sustainability and, you know, making sure that we’re using our finite resources in a way that is socially beneficial, we might make different choices.

Mat Honan

Max, is there something you have thoughts on?

Max Tegmark

Yeah, my opinion is, my take is that the water issue is actually smaller than some people have claimed, but the energy issue is very real, and on the data issue, oh my gosh, I mean, you’re really making my point there, Meredith, about how, again, the AI industry is treated completely different from any other industry.

What would happen if some company in the film industry just blatantly ripped off everybody’s copyright? Yet, for some reason, companies have been allowed to just take every single copyrighted book and whatever, I found my books even in training data, and just do whatever they want with it, just because, oh, it’s AI, exceptions, we’re special. And yeah, there was a $1.5 billion copyright settlement against Anthropic now, but that’s just the tip of the iceberg, really.

I think there should be a law saying that companies have to declare what they put into their training. If you buy a food product in the supermarket, you want to know what’s in it, and as soon as you did that, companies would get their pants sued off from all the copyright infringements. Now, it’s hard to sue them because they won’t tell you what they trained on, and again, I don’t see any reason why we need to, why we should have this kind of discrimination where all the other companies have to play by rules, and AI companies don’t.

Mat Honan

So, I want to get to some audience questions here in a moment, but before I do, let me get you guys to explore one thing, which is, we’re talking about some negatives here, but I’d love to know, like, what are your thoughts on what we can do as individuals to, like, increase our agency, to ensure that there are rules or regulations, to ensure that we can trust these systems?

What is there to do?

Rachel Botsman

Well, I’ll talk from a personal level. I think it’s really important to set boundaries in your own work, right? So to your point, I mean, I could use any one of these tools and it could write my next book because I’ve trained it so well and sometimes it’s like I’m writing an article and I think, well, what would it say?

And then I think, oh, that’s really good. Maybe I should just use that, right? Like it’s so easy to slip into that space.

So my rule, and this is a really simple one, is that when I’m drafting something, I’m away from my computer. I use a pen and paper, I have to complete a first draft with my own thinking and my own points. It’s a very simple boundary and rule.

It is so easy to break when you’re under pressure. And then I come back and then I ask it questions and I ask it to go back into my whole history of work and it does pull out really interesting points and it does improve the argument. But if I didn’t set that boundary and I started at the point of I’ve got to write an article for you, Matt, for the MIT, I’ve only got an hour, could you draft something on trust and ethics and AI?

And then I could tinker with it and you probably would not know, right? And then I could say, could you edit it in Matt’s voice? And you’d be like, this is really clean copy.

Thanks so much, Rachel. Now, so this is the thing, right? For me, the boundaries are actually trust in ourself as well, like that’s the thing you can control.

That’s the agency piece. And the piece that we often don’t talk about is identity. So I just wanna give you one brief example, because it’s a study I came across from the 70s: a big sociology study done on bakers.

Now bear with me. And it was when machines were replacing bakers in bakeries. So they wouldn’t knead the bread anymore, they’d press the croissant button and out would pop croissants.

And at first the bakers loved it, right? They’d go home earlier, there were less burns, they were far more efficient. And then what the sociologists found was in a very short space of time, within six months, they said they couldn’t call themselves bakers anymore.

It was the identity piece that went. And they said they would go home to bake bread. And I give that as an analogy of how quickly that can happen if you don’t think about who am I, right?

Am I an artist? Am I a writer? Am I a teacher?

Like what is that real meaning I have in my life and how does that relate to my thinking and create some kind of personal boundaries around that because that outsourcing is so good and it’s so quick and it’s so easy that unless we set those guardrails for ourselves, how do we pass them on to our students?

How do you pass them on to children? We have to understand how to carry that line to pass on to others.

Mat Honan

Well, I know there’s a question right back here because you had your hand up. Can we get a microphone over here?

Audience

Thanks. No, I was really moved by what Rachel was saying about the probability and possibility. Yesterday I was at this panel with Joel Kaplan from Meta and the Minister of Deregulation from Argentina, who are probably best buddies by now, and they were talking about how the only thing we need right now in societies is just to let AI run things, and then if harm happens we, like, figure it out, but let it run for a while, and then we will kind of see what harms are there. But nobody mentioned the threshold for harms, nobody mentioned, well, if we have these signals, this is what we are monitoring; nobody mentions what people are actually looking into. There’s just this push to just keep going. The same message came from Jensen Huang from NVIDIA, saying we just need more investment in infrastructure and everything else is going to figure itself out. So how do we create these pressures for the things that we want to see in terms of safety? Because you’re mentioning the coalitions, you know. We know the business models are broken and they’re not really figuring out the humanity side of this. So what will create… because we are sitting in this room separately from those people and talking about ethics as the decoration on the side. But how will we do this? Where is the change? Either of you, go ahead.

Max Tegmark

Yeah, let’s get super concrete: if we were to treat AI like any other industry, what would actually happen, and how would this affect what the companies do? So, suppose Microsoft has a new productivity tool. There are some safety standards, and they will look at that probably the same way the Food and Drug Administration would look at a new kind of fruit juice, saying, you know, this Microsoft productivity tool, very hard to see much potential for harm; for the fruit juice you just want to make sure it doesn’t have arsenic in it or whatever, but that can be handled just with some random checks.

It would basically go through super quick, it’s out in the market, right? Now comes Character AI, here is our new AI girlfriend, or some other company, you know, for 12-year-olds. That would be looked at more like maybe a new opioid drug: high potential for addiction, pretty obvious how it might cause harm, how it might cause deaths.

After the clinical trial you go, they do a control group, a test group, and notice that actually there’s a strong increase in suicidal ideation in those who try this chatbot, so you can see it from the chat logs pretty quickly.

So, sorry, no approval to you, next customer please. And now someone comes along and says, I want to do this recursive self-improvement with no human oversight. So, oh, that sounds a lot like digital gain-of-function research.

So, how do you do this in biology? Oh, biological gain-of-function research actually isn’t allowed in the U.S. right now.

So, next customer please. It would very quickly be noticed by the companies that these are the kind of products where approval is trivial and you can make a lot of money, resources get shifted into there, and frankly AI girlfriends is not, I’m quite sure, that’s never been Microsoft’s key idea for how you’re going to make money in the future anyway.

It would shrink into this little fringe thing that it should be. So we don’t have to lecture the companies at all. We just have to have these rules that treat them like other companies.

And then I think we’re just gonna see a massive behavior change in the companies.

Audience

Who sets the rules?

Max Tegmark

The government, of course. Just like the FDA, the government decided. We don’t want another thalidomide.

Rachel Botsman

Max, can I come back to what Meredith said? I mean, I’m all for the rules and the regulations, but this is a market problem. The economies will collapse.

I love that he needs more investment in infrastructure. That’s wonderful, right?

Max Tegmark

Biotech did not collapse.

Meredith Whittaker

Biotech is struggling now. However, it’s really hard.

Max Tegmark

But it didn’t collapse in the 60s when the FDA was created.

Meredith Whittaker

Well, right.

Max Tegmark

And I think it’s increased trust in medicines and massively increased sales.

Meredith Whittaker

I know a biotech company, I’m not gonna name any names, but they’re doing incredible AI-centric research on cancer. It’s high risk, high reward. They’re using DeepMind’s protein folding database.

Extremely hard. This is long-term research. It may not pan out, but that is a social good.

QED. They can’t get Series B because the VCs want a chatbot to cash out, right? That’s what we’re looking at here.

I mean, that’s one story among many that you hear in this ecosystem. And it’s, again, this is a monoculture. I do wanna answer this question because I just find it, I think I worked on my first sort of AI regulation, how would we think about these guardrails around machine learning in probably 2015, right?

And that was right after the DeepMind acquisition at Google. 2012 was AlexNet, and that’s where everyone started getting interested in deep learning again. So this is pretty early.

I’ve probably had this conversation once every quarter since then. What are we gonna do? And in some sense it’s gotten worse and worse; the level of discourse has gotten more and more sort of, like, baby brains, to be blunt about it. Like, yeah, we need to just accelerate, it’ll all wash out in the wash. And meanwhile, I’m leading Signal. How many people here use Signal? Okay, I love you all. You know, but Signal is core infrastructure to preserve the human right to private communication. It is used by militaries, it is used by governments, it is used by boardrooms. It is used by dissidents. Ukraine uses it extensively. These are contexts where the ability to keep your communications private is life-or-death.

Signal is an application built, you know, on top of operating systems. We build one version for Android, one version for iOS, one version for Windows and, you know, macOS on your desktop. And we take this responsibility really seriously. We open source all of our code so people can look at it. We open source our cryptographic protocol and our implementation. We’re working with some people, you know, with Max and some folks, to formally verify that, which means mathematically you can prove that what it says in the code is what it’s doing. And that’s a level of assurance we put out there because we recognize that, you know, people die if we mess this up. But we have to run on top of these operating systems.

And you are seeing the three operating system vendors now rushing to integrate what they’re calling AI agents into these operating systems. Now, with AI agents, the marketing promise is you have a magic genie that’s gonna do your living for you. Right? So you ask the AI agent, hey, can you plan a birthday party for me and coordinate with my friends? That sounds great. You can put your brain in a jar, and you don’t have to do any living. You can just walk down the promenade, you know, seagulls in your mind, while the agent plans your birthday party. But what is an agent actually doing at the level of the technology? In that case it would need access to your calendar, access to your credit card, access to your browser to simulate mouse clicks and place orders, and in this hypothetical scenario, access to your Signal messages to text your friends as if it were you and coordinate, and then, you know, put that on your calendar, right?

That is a significant security vulnerability. So while we’re telling stories of inevitable magic genies that exist in a bottle, what these agents are doing is getting what’s akin to root permission in your operating system for the Unix people in the room.

They are reading data from your screen buffer at a pixel level, bypassing Signal. They are hooking into your accessibility APIs to get audio, to get image data from the screen. They are rooting through your file system, the sort of deepest level of your operating system that controls what the computer can and cannot do, what software can and cannot do, what data you can access and what you can do with it.

They are making remote API calls to third-party services. They are sending all of that to the cloud because there is no LLM small enough to run on your device and they are processing in the cloud to create statistical models of your behavior so they can guess what you might want to do next.

These are incredibly susceptible to prompt injection attacks. There is no solution for that because when it comes to natural language, which is what these agentic systems are processing, they fundamentally cannot tell the difference between an authentic desire of the user and language that may look like an authentic desire but is in fact not representative of what people actually want.

If you look at the architectures of these agents on the operating system, they look a little like the architectures of targeted malware, given the permissions they require, the data access they allow, and the vectors they open to send data off your device, process it in the cloud, access an outside website, et cetera.

This is an existential threat to Signal and to the ability to build securely and privately at the application layer. It is a fundamental paradigm shift in the history of computing: for decades and decades, we viewed the operating system as a neutral set of tools that developers and users could use to control the fundamentals of how their device and its software worked.

These operating systems are now effectively being remote-controlled by the companies building these agents, systems that are ultimately taking agency away from both developers like Signal and the users of these systems.

Again, I went into this level of technical detail because that’s the level at which any dignified adult conversation should be happening. Come on: you think you can take responsibility for the entire world, but you don’t have to answer for it? I did not come up in a world where that had any dignity to it. So in some sense we just need to demand a bit more spine from the people who claim to own the future, and recognize that this is what’s happening.

If you’re hooking into my accessibility API, you have ultimately decimated Signal’s ability to provide this fundamental service, and I will shut it down before I continue to operate without integrity, because I do not want people being harmed because they trust us. They trust us to provide a service.

We no longer can.

Mat Honan

Well, that was... wow. Okay. And with that, we’re out of time. I’m sorry, I know we had some more questions in the room we didn’t get to. Thank you so much for joining us today, and thank you for being here. I hope you all got a lot out of that.

I did. Thanks, everybody.

Rachel Botsman

Speech speed

185 words per minute

Speech length

1467 words

Speech time

475 seconds

Trust Without Context

Explanation

Trust requires context and cannot be evaluated without specific use cases. Botsman argues that asking whether people trust AI in their daily lives is meaningless without specifying what tasks or functions the AI is being trusted to perform.


Evidence

She criticized the poll question ‘how much do you trust AI systems in your daily lives?’ as terrible because trust is useless without context – you need to specify what you’re trusting AI to do.


Major discussion point

AI Ethics and Trust Framework


Topics

Human rights | Legal and regulatory


Agreed with

– Max Tegmark

Agreed on

Risk Assessment Before Trust


Risk Before Trust

Explanation

Risk assessment and mitigation must precede trust discussions. She argues that society has jumped to trust conversations before properly addressing risk management and mitigation.


Evidence

She noted the conflation between risk and trust in conversations, stating ‘we jumped to the trust conversation, because that’s very convenient for the platforms, before we really had the risk conversation.’


Major discussion point

AI Ethics and Trust Framework


Topics

Legal and regulatory | Economic


Agreed with

– Max Tegmark

Agreed on

Risk Assessment Before Trust


Disagreed with

– Meredith Whittaker
– Max Tegmark

Disagreed on

Framing of AI problems as ethical vs structural


Cognitive Atrophy Concern

Explanation

AI use leads to cognitive atrophy and loss of hand-mind connection in creative work. Botsman worries that humans are becoming bad at recognizing when they outsource thinking, leading to reduced tolerance for friction and uncertainty.


Evidence

She observed changes in peers and students regarding speed of work, expectations for answers, and reduced tolerance for doubt and uncertainty. She emphasized the importance of the hand-mind connection in creative work.


Major discussion point

Cognitive and Human Impact of AI


Topics

Sociocultural | Human rights


Personal Boundaries

Explanation

Personal boundaries are essential to maintain human agency and identity in AI use. Botsman advocates for setting clear rules about when and how to use AI tools to preserve human thinking and creativity.


Evidence

She shared her personal rule of completing first drafts with pen and paper before using AI tools, and referenced a 1970s study of bakers who lost their professional identity when machines replaced their manual work.


Major discussion point

Cognitive and Human Impact of AI


Topics

Human rights | Sociocultural


Missing Climate Connection

Explanation

Climate change and AI discussions are artificially separated despite their interconnection. Botsman finds it astonishing that environmental sustainability issues are not being linked to AI development discussions.


Evidence

She noted that three years ago climate change was a central theme at Davos, but now there are no panels linking sustainability and AI issues.


Major discussion point

Environmental and Resource Impact


Topics

Development | Infrastructure


Meredith Whittaker

Speech speed

176 words per minute

Speech length

2906 words

Speech time

989 seconds

Ethics as Afterthought

Explanation

AI ethics discussions treat fundamental issues as afterthoughts rather than core concerns. Whittaker argues that framing issues as ‘ethical’ relegates them to decorative concerns while the serious work of building systems continues unchanged.


Evidence

She described the ethics framing as ‘painting smiley faces on the side of the bomber’ and emphasized that fundamental risks and power structures are being treated as afterthoughts.


Major discussion point

AI Ethics and Trust Framework


Topics

Legal and regulatory | Human rights


Agreed with

– Max Tegmark

Agreed on

Need for AI Industry Regulation


Disagreed with

– Max Tegmark
– Rachel Botsman

Disagreed on

Framing of AI problems as ethical vs structural


AI Monoculture Problem

Explanation

Current AI development represents a monoculture focused on deep learning rather than diverse approaches. Whittaker argues that the focus on LLMs is due to business models rather than scientific progress.


Evidence

She explained that deep learning techniques from the 1980s became interesting again due to data and compute resources accumulated by platforms in the 1990s-2000s, not because of new scientific breakthroughs.


Major discussion point

AI Industry Regulation and Standards


Topics

Economic | Legal and regulatory


Disagreed with

– Max Tegmark

Disagreed on

Approach to AI regulation and industry treatment


Platform Monopoly Foundation

Explanation

Current AI systems are built on platform monopolies from the 1990s-2000s rather than genuine innovation. The dominance of certain AI approaches stems from data and infrastructure monopolies established by large platforms.


Evidence

She detailed how communication networks are natural monopolies and how platforms built monopolistic networks that gathered data and infrastructure, which became the foundation for current AI systems.


Major discussion point

Business Models and Market Dynamics


Topics

Economic | Legal and regulatory


Disagreed with

– Max Tegmark

Disagreed on

Approach to AI regulation and industry treatment


Creative Extraction

Explanation

Creative professionals face extraction of their work without proper consideration. Whittaker expresses concern about how the creative industries are being treated in AI development.


Evidence

She mentioned coming from an arts background and finding the extraction from creative industries ‘heartbreaking’ in terms of how creative professionals are being treated.


Major discussion point

Cognitive and Human Impact of AI


Topics

Legal and regulatory | Human rights


Agreed with

– Max Tegmark

Agreed on

AI as Extractive Technology


Environmental Impact Concern

Explanation

AI development has significant energy and water consumption impacts that are inadequately addressed. The scale-at-all-costs approach of deep learning requires massive resources.


Evidence

She mentioned data centers causing brownouts, towns in the US running dry, and rising energy bills, and noted that claims of sustainability are often paper-thin when solar energy earmarked for data centers means coal plants aren’t decommissioned.


Major discussion point

Environmental and Resource Impact


Topics

Development | Infrastructure


Disagreed with

– Max Tegmark

Disagreed on

Severity and nature of environmental impact


Extractive Technology

Explanation

Current AI approaches are fundamentally extractive of both human knowledge and natural resources. The deep learning monoculture extracts from human knowledge and requires massive energy and water resources.


Evidence

She described how deep learning systems pool data from the web (including Reddit comments) and require huge amounts of energy and water, with data centers causing brownouts and affecting communities.


Major discussion point

Environmental and Resource Impact


Topics

Development | Legal and regulatory


Agreed with

– Max Tegmark

Agreed on

AI as Extractive Technology


Disagreed with

– Max Tegmark

Disagreed on

Severity and nature of environmental impact


Safety Discussion Decline

Explanation

AI safety discussions have declined due to business pressures and geopolitical competition. The massive investment and geopolitical positioning around AI has created pressure against regulatory intervention.


Evidence

She pointed to high CapEx investment, lobbying efforts in Brussels and K Street, and geopolitical narratives of AI dominance that push back against regulatory intervention.


Major discussion point

Safety Discourse Evolution


Topics

Legal and regulatory | Economic


Agreed with

– Max Tegmark

Agreed on

Financial Pressures Undermining Safety


CapEx Revenue Mismatch

Explanation

High capital expenditure in AI is not matched by revenue, creating unsustainable market conditions. There’s massive investment betting on AI integration but revenue isn’t meeting the expenditure.


Evidence

She noted that CapEx is very high but revenue is not doing much to meet CapEx, creating pressure to maintain narratives of inevitability and acceleration.


Major discussion point

Business Models and Market Dynamics


Topics

Economic


Agreed with

– Max Tegmark

Agreed on

Financial Pressures Undermining Safety


Investment Misalignment

Explanation

Investment priorities favor quick chatbot returns over long-term beneficial research. Venture capital preferences are distorting research priorities away from socially beneficial applications.


Evidence

She gave an example of a biotech company doing AI-centric cancer research using DeepMind’s protein folding database that couldn’t get Series B funding because VCs wanted chatbots for quick cash-out.


Major discussion point

Business Models and Market Dynamics


Topics

Economic | Development


Operating System Vulnerability

Explanation

AI agents integrated into operating systems pose fundamental security vulnerabilities to private communication. These agents require extensive system permissions that compromise security and privacy.


Evidence

She detailed how AI agents need access to calendars, credit cards, browsers, messaging apps, and root-level permissions, creating security vulnerabilities that threaten Signal’s ability to provide secure communication.


Major discussion point

Technical Security Threats


Topics

Cybersecurity | Human rights


Malware-like Architecture

Explanation

Current agent architectures resemble targeted malware in their system permissions and data access. AI agents require permissions and access patterns similar to malicious software.


Evidence

She explained that agents read screen buffers at pixel level, hook into accessibility APIs, access file systems at the deepest level, and make remote API calls – architectures that look like targeted malware.


Major discussion point

Technical Security Threats


Topics

Cybersecurity | Infrastructure


Max Tegmark

Speech speed

165 words per minute

Speech length

2191 words

Speech time

792 seconds

Regulatory Discrimination

Explanation

AI industry receives preferential treatment compared to other regulated industries like pharmaceuticals and food. Tegmark argues this creates unfair discrimination where AI companies operate with fewer restrictions than other industries.


Evidence

He compared drug companies requiring clinical trials before selling pills that might increase suicidal ideation to AI companies legally selling AI girlfriends to 12-year-olds, and noted AI is less regulated than sandwich shops.


Major discussion point

AI Industry Regulation and Standards


Topics

Legal and regulatory | Human rights


Agreed with

– Meredith Whittaker

Agreed on

Need for AI Industry Regulation


Safety Standards Needed

Explanation

Binding safety standards similar to FDA approval processes are needed for AI products. Tegmark advocates for treating AI like other industries with established safety frameworks.


Evidence

He referenced successful safety standards in medicine (FDA), restaurants (health inspectors), cars (crash testing), and airplanes, arguing these same approaches should apply to AI.


Major discussion point

AI Industry Regulation and Standards


Topics

Legal and regulatory | Human rights


Agreed with

– Rachel Botsman

Agreed on

Risk Assessment Before Trust


Disagreed with

– Meredith Whittaker
– Rachel Botsman

Disagreed on

Framing of AI problems as ethical vs structural


Financial Influence on Safety

Explanation

Financial interests and lobbying have influenced the reduction in safety focus. Academic and industry researchers are being influenced by funding considerations.


Evidence

He noted MIT colleagues stopped talking about risks due to big tech funding concerns, and referenced how tobacco and other industries have used similar tactics in the past.


Major discussion point

Safety Discourse Evolution


Topics

Economic | Legal and regulatory


Agreed with

– Meredith Whittaker

Agreed on

Financial Pressures Undermining Safety


Broad Coalition Support

Explanation

Broad coalition support exists for human-centered AI development across political divides. Despite political differences, there’s consensus on wanting AI that benefits humans rather than causing harm.


Evidence

He described the ‘Bernie to Bannon coalition’ including Steve Bannon, Democrats, military leaders like Mike Mullen, and faith leaders who signed a superintelligence statement, plus polls showing 95% of Americans don’t want unregulated AI.


Major discussion point

Safety Discourse Evolution


Topics

Legal and regulatory | Human rights


Copyright Infringement

Explanation

AI companies use copyrighted material without proper disclosure or compensation. The AI industry is allowed to violate copyright in ways that would be prosecuted in other industries.


Evidence

He mentioned finding his own books in training data and referenced a $1.5 billion copyright settlement against Anthropic, arguing this is just the tip of the iceberg.


Major discussion point

Data and Copyright Issues


Topics

Legal and regulatory | Economic


Agreed with

– Meredith Whittaker

Agreed on

AI as Extractive Technology


Training Data Transparency

Explanation

Training data transparency should be required similar to food ingredient labeling. Companies should be required to disclose what data they use to train their AI systems.


Evidence

He compared it to food labeling requirements, arguing that transparency would expose copyright infringements and allow for proper legal action.


Major discussion point

Data and Copyright Issues


Topics

Legal and regulatory | Human rights


Agreed with

– Meredith Whittaker

Agreed on

AI as Extractive Technology


Mat Honan

Speech speed

190 words per minute

Speech length

903 words

Speech time

284 seconds

Interactive Engagement

Explanation

Interactive discussion formats are important for public engagement on AI issues. Honan emphasizes the town hall format to encourage audience participation and engagement.


Evidence

He repeatedly mentioned the Slido polls, QR codes for audience participation, and encouraged questions from the audience throughout the discussion.


Major discussion point

Public Engagement and Participation


Topics

Sociocultural | Human rights


Audience

Speech speed

205 words per minute

Speech length

275 words

Speech time

80 seconds

Audience Safety Concerns

Explanation

Audience questions reveal concerns about harm thresholds and safety monitoring. The audience expressed frustration with the lack of concrete safety measures and monitoring systems.


Evidence

An audience member questioned panels featuring Joel Kaplan and Argentina’s Minister of Deregulation, who advocated letting AI run without clear harm thresholds or monitoring systems.


Major discussion point

Public Engagement and Participation


Topics

Legal and regulatory | Human rights


Agreements

Agreement points

Need for AI Industry Regulation

Speakers

– Max Tegmark
– Meredith Whittaker

Arguments

Regulatory Discrimination


Safety Standards Needed


Ethics as Afterthought


AI Industry Regulation and Standards


Summary

Both speakers agree that the AI industry receives unfair preferential treatment compared to other regulated industries and needs binding safety standards. Tegmark advocates for treating AI like pharmaceuticals with FDA-style approval processes, while Whittaker argues that ethics discussions treat fundamental issues as afterthoughts rather than core regulatory concerns.


Topics

Legal and regulatory | Human rights


AI as Extractive Technology

Speakers

– Meredith Whittaker
– Max Tegmark

Arguments

Extractive Technology


Creative Extraction


Copyright Infringement


Training Data Transparency


Summary

Both speakers view current AI systems as extractive – Whittaker describes how deep learning extracts from human knowledge and natural resources, while Tegmark focuses on copyright infringement and the need for training data transparency. They agree that AI companies take content without proper compensation or disclosure.


Topics

Legal and regulatory | Human rights | Development


Risk Assessment Before Trust

Speakers

– Rachel Botsman
– Max Tegmark

Arguments

Risk Before Trust


Trust Without Context


Safety Standards Needed


Summary

Both speakers emphasize that risk assessment and mitigation must precede trust discussions. Botsman argues society jumped to trust conversations before addressing risk management, while Tegmark advocates for safety standards that would establish risk frameworks before products reach market.


Topics

Legal and regulatory | Human rights


Financial Pressures Undermining Safety

Speakers

– Max Tegmark
– Meredith Whittaker

Arguments

Financial Influence on Safety


Safety Discussion Decline


CapEx Revenue Mismatch


Summary

Both speakers identify financial pressures as undermining AI safety discussions. Tegmark notes how funding considerations influence researchers to avoid discussing risks, while Whittaker explains how massive investment and geopolitical competition create pressure against regulatory intervention.


Topics

Economic | Legal and regulatory


Similar viewpoints

Both speakers express deep concern about AI’s impact on human creativity and thinking. Botsman worries about cognitive atrophy and loss of hand-mind connection, while Whittaker finds the extraction of creative industries heartbreaking. Both emphasize the importance of maintaining human agency in creative work.

Speakers

– Rachel Botsman
– Meredith Whittaker

Arguments

Cognitive Atrophy Concern


Personal Boundaries


Creative Extraction


Topics

Human rights | Sociocultural


Both speakers highlight the disconnect between AI development and environmental concerns. Whittaker details the energy and water consumption impacts of data centers, while Botsman notes the absence of panels linking climate change and AI at the forum.

Speakers

– Meredith Whittaker
– Rachel Botsman

Arguments

Environmental Impact Concern


Missing Climate Connection


Topics

Development | Infrastructure


Both speakers critique the concentration of power in AI development. Tegmark focuses on regulatory discrimination favoring AI companies, while Whittaker explains how current AI represents a monoculture built on platform monopolies rather than genuine innovation.

Speakers

– Max Tegmark
– Meredith Whittaker

Arguments

AI Monoculture Problem


Platform Monopoly Foundation


Regulatory Discrimination


Topics

Economic | Legal and regulatory


Unexpected consensus

Broad Political Coalition for AI Safety

Speakers

– Max Tegmark
– Meredith Whittaker
– Rachel Botsman

Arguments

Broad Coalition Support


Ethics as Afterthought


Risk Before Trust


Explanation

Unexpectedly, there’s consensus that AI safety concerns transcend traditional political divides. Tegmark describes a ‘Bernie to Bannon coalition’ with 95% of Americans supporting regulation, while Whittaker and Botsman agree that fundamental safety issues are being marginalized. This suggests broader public concern than industry narratives suggest.


Topics

Legal and regulatory | Human rights | Sociocultural


Market Failure in AI Investment

Speakers

– Meredith Whittaker
– Max Tegmark
– Rachel Botsman

Arguments

Investment Misalignment


CapEx Revenue Mismatch


Missing Climate Connection


Explanation

All speakers implicitly agree that current market mechanisms are failing to direct AI investment toward socially beneficial outcomes. Whittaker’s example of cancer research losing funding to chatbots, combined with Tegmark’s regulatory concerns and Botsman’s climate observations, suggests market failure rather than market success.


Topics

Economic | Development


Overall assessment

Summary

The speakers demonstrate remarkable consensus on fundamental issues: the need for AI regulation similar to other industries, concerns about extractive business models, the importance of risk assessment before trust, and the negative impact of financial pressures on safety discussions. They also share concerns about AI’s impact on human creativity and environmental sustainability.


Consensus level

High level of consensus with significant implications – the agreement spans technical, regulatory, economic, and social dimensions, suggesting that expert opinion is more aligned on AI governance needs than public discourse might suggest. This consensus could provide a foundation for policy development if properly channeled into regulatory frameworks.


Differences

Different viewpoints

Approach to AI regulation and industry treatment

Speakers

– Max Tegmark
– Meredith Whittaker

Arguments

Safety Standards Needed


AI Monoculture Problem


Platform Monopoly Foundation


Summary

Tegmark advocates for applying existing regulatory frameworks (like FDA) to AI companies, treating them like other industries with binding safety standards. Whittaker argues the problem is more fundamental – that current AI represents a monoculture built on platform monopolies, requiring deeper structural changes rather than just applying existing regulatory frameworks.


Topics

Legal and regulatory | Economic


Framing of AI problems as ethical vs structural

Speakers

– Meredith Whittaker
– Max Tegmark
– Rachel Botsman

Arguments

Ethics as Afterthought


Safety Standards Needed


Risk Before Trust


Summary

Whittaker argues that framing issues as ‘ethical’ treats them as afterthoughts when they are fundamental structural problems. Tegmark and Botsman focus more on practical regulatory and risk management approaches, treating these as implementable solutions rather than fundamental structural critiques.


Topics

Legal and regulatory | Human rights


Severity and nature of environmental impact

Speakers

– Meredith Whittaker
– Max Tegmark

Arguments

Environmental Impact Concern


Extractive Technology


Summary

Whittaker emphasizes the severe environmental impact of AI systems, describing them as fundamentally extractive with significant energy and water consumption causing brownouts and community harm. Tegmark acknowledges energy issues as ‘very real’ but considers water issues ‘smaller than some people have claimed,’ showing different assessments of environmental severity.


Topics

Development | Infrastructure


Unexpected differences

Effectiveness of traditional regulatory approaches for AI

Speakers

– Max Tegmark
– Meredith Whittaker

Arguments

Safety Standards Needed


Platform Monopoly Foundation


AI Monoculture Problem


Explanation

This disagreement is unexpected because both speakers are clearly concerned about AI safety and regulation, yet they fundamentally disagree on whether existing regulatory frameworks can be effectively applied to AI. Tegmark’s confidence in FDA-style regulation contrasts sharply with Whittaker’s argument that the problem requires addressing underlying monopolistic business models.


Topics

Legal and regulatory | Economic


Assessment of environmental impact severity

Speakers

– Max Tegmark
– Meredith Whittaker

Arguments

Environmental Impact Concern


Extractive Technology


Explanation

Unexpected because both speakers are generally aligned on AI risks, but Tegmark downplays water issues while acknowledging energy concerns, whereas Whittaker presents a comprehensive view of environmental extraction including both energy and water impacts affecting communities directly.


Topics

Development | Infrastructure


Overall assessment

Summary

The main areas of disagreement center on regulatory approaches (traditional frameworks vs structural reform), the framing of problems (ethical afterthoughts vs fundamental issues), and the severity of environmental impacts. Despite shared concerns about AI safety and risks, speakers differ significantly on solutions and problem diagnosis.


Disagreement level

Moderate to high disagreement on approaches and solutions, despite broad agreement on the existence of problems. This suggests that while there is consensus that AI poses significant challenges, there is substantial disagreement on how to address them, which could complicate policy development and implementation efforts.


Takeaways

Key takeaways

Trust in AI systems cannot be meaningfully evaluated without specific context – the question ‘do you trust AI?’ is fundamentally flawed without specifying ‘to do what?’


AI ethics discussions treat fundamental structural issues as afterthoughts rather than addressing core problems in how these systems are built and deployed


The AI industry receives preferential regulatory treatment compared to other industries like pharmaceuticals, food service, and automotive that have binding safety standards


Current AI development represents a monoculture focused on deep learning approaches, driven by platform monopolies from the 1990s-2000s rather than genuine technological innovation


AI systems pose significant risks of cognitive atrophy as humans outsource thinking processes, potentially losing the hand-mind connection essential for creativity and critical thinking


Personal boundaries and individual agency are crucial – people must consciously decide when to use AI tools versus maintaining their own cognitive processes


AI development has substantial environmental impacts through energy and water consumption that are inadequately addressed in current discussions


AI agents integrated into operating systems pose fundamental security vulnerabilities, potentially compromising private communication tools like Signal


There is broad coalition support across political divides for human-centered AI development and safety standards


Current AI business models show unsustainable capital expenditure not matched by revenue, creating market distortions that prioritize quick returns over beneficial long-term research


Resolutions and action items

Implement binding safety standards for AI similar to FDA approval processes for pharmaceuticals


Require transparency in AI training data similar to food ingredient labeling requirements


Establish individual personal boundaries for AI use, such as completing first drafts without AI assistance before using tools for enhancement


Demand more technical specificity and empirical evidence in AI discussions rather than accepting vague promises


Create regulatory frameworks that treat AI companies the same as other industries rather than providing special exemptions


Unresolved issues

How to create effective pressure for AI safety standards when business interests and geopolitical competition drive acceleration


What specific harm thresholds and monitoring systems should be implemented for AI deployment


How to address the fundamental security vulnerabilities posed by AI agents in operating systems


How to resolve the tension between AI development speed and proper safety assessment


How to address the extractive nature of current AI systems regarding both human creativity and environmental resources


How to maintain diverse approaches to AI development rather than the current deep learning monoculture


How to separate climate change and sustainability discussions from AI development when they are fundamentally interconnected


How to ensure proper compensation and consent for use of copyrighted material in AI training


Suggested compromises

Use AI tools for enhancement and efficiency while maintaining human-driven initial creative and thinking processes


Focus regulatory efforts on high-risk AI applications (like AI companions for children) while allowing lighter oversight for low-risk productivity tools


Implement graduated safety standards based on potential for harm, similar to how pharmaceuticals are regulated differently based on risk profiles


Allow AI development to continue while requiring transparency and safety testing before deployment, rather than the current ‘deploy first, figure out harms later’ approach


Thought provoking comments

I actually think that poll question is a terrible question… Trust is, it’s useless without context. So that question, do you trust AI in your life? I would say to do what? And you can’t really have an ethical debate until you think about context.

Speaker

Rachel Botsman


Reason

This opening comment immediately reframed the entire discussion by challenging the fundamental premise of how we discuss AI trust. It introduced the critical concept that trust is contextual and cannot be meaningfully evaluated in abstract terms.


Impact

This comment set the tone for the entire discussion by establishing that oversimplified framings of AI issues are problematic. It led other panelists to adopt more nuanced approaches and influenced the conversation to focus on specific contexts rather than broad generalizations about AI.


If we set different rules for people of different skin color, that would be viewed as very, very unethical, and I feel we’re doing exactly that with companies from different industries now… it’s completely legal in America now for a company to sell an AI girlfriend to 12-year-olds, even though we know that that has increased suicidal ideation and caused many, many young kids to take their own lives.

Speaker

Max Tegmark


Reason

This analogy powerfully illustrated the inconsistency in regulatory approaches by comparing industry discrimination to racial discrimination. The specific example of AI girlfriends causing teen suicides provided concrete evidence of real harm from unregulated AI.


Impact

This comment shifted the discussion from abstract ethical concerns to concrete regulatory solutions. It established the framework that AI should be regulated like other industries, which became a recurring theme. The vivid analogy made the regulatory disparity more visceral and compelling.


I think the term ethical here is kind of a dog whistle that means these things are going to be treated as a sort of afterthought… let’s come with some paintbrushes and paint some smiley faces on the side of the bomber.

Speaker

Meredith Whittaker


Reason

This metaphor brilliantly exposed how ‘ethics’ discussions often serve to legitimize harmful systems rather than fundamentally address them. It challenged the entire framing of the discussion and revealed how language can be used to deflect from substantive change.


Impact

This comment fundamentally challenged the premise of having an ‘ethics’ discussion at all, forcing the conversation to move beyond surface-level concerns to examine power structures and business models. It elevated the discussion from technical fixes to systemic critique.


What is new in this era are not the deep learning centric approaches. It is the access to huge amounts of data and huge amounts of compute that are pooled in the hands of a handful of actors due to their establishing their network effects… So when we talk about ethical issues, we can point to that, but that’s not really an ethical issue. That’s fundamental to what we’re talking about.

Speaker

Meredith Whittaker


Reason

This comment deconstructed the narrative of AI as inevitable technological progress, revealing it as contingent on specific business models and power concentrations. It reframed AI development as a political and economic issue rather than a purely technical one.


Impact

This analysis shifted the entire conversation from treating AI as a natural force to examining it as a product of specific economic and political choices. It influenced subsequent discussions about regulation, environmental impact, and corporate power.


It’s interesting there’s no panel on climate change and AI. Not one panel linking those two issues… whatever we call it, sustainability, energy efficiency, climate change, we have to link these two issues.

Speaker

Rachel Botsman


Reason

This observation revealed a glaring omission in how AI is discussed at major forums, highlighting how environmental costs are systematically ignored when they conflict with AI promotion. It exposed the selective attention given to different aspects of AI impact.


Impact

This comment opened up an entirely new dimension of the discussion – environmental impact – that had been absent. It led to substantive discussion about data centers, energy consumption, and the extractive nature of current AI approaches.


These are now being effectively sort of remote controlled by the companies building these agents and these systems that are ultimately taking agency away from both developers like Signal and the users of these systems… This is an existential threat to signal, to the ability to build securely and privately at the application layer.

Speaker

Meredith Whittaker


Reason

This technical deep-dive provided concrete evidence of how AI agents fundamentally compromise computer security and user privacy. It moved beyond abstract concerns to show specific technical mechanisms by which AI systems undermine existing security infrastructure.


Impact

This detailed technical explanation served as a powerful conclusion that grounded all the previous abstract discussions in concrete technical reality. It demonstrated that the concerns raised weren’t theoretical but were actively threatening existing secure systems like Signal.


Overall assessment

These key comments fundamentally transformed what could have been a superficial ‘ethics’ discussion into a sophisticated analysis of power, regulation, and systemic change. Rachel Botsman’s opening challenge to contextless trust discussions set the stage for nuanced analysis. Max Tegmark’s regulatory analogies provided concrete policy frameworks, while Meredith Whittaker’s systematic deconstruction of AI narratives and business models elevated the conversation to examine fundamental power structures. The progression from abstract ethics to concrete technical threats (ending with the Signal example) created a compelling arc that demonstrated how philosophical concerns translate into immediate practical dangers. Together, these comments prevented the discussion from becoming another superficial ‘AI ethics’ conversation and instead created a substantive examination of how current AI development threatens democratic values, environmental sustainability, and technical security.


Follow-up questions

How do we create pressures for safety measures when business leaders are pushing for deregulation and continued AI acceleration without proper harm monitoring?

Speaker

Audience member


Explanation

This addresses the disconnect between safety advocates and industry leaders who want to ‘let AI run things’ and figure out harms later, without establishing thresholds or monitoring systems


What specific binding safety standards should be implemented for AI companies, similar to FDA regulations for pharmaceuticals?

Speaker

Max Tegmark


Explanation

While Tegmark advocates for treating AI like other regulated industries, the specific standards and implementation mechanisms need to be defined


How can we better measure and evaluate the actual effectiveness of AI systems when integrated into specific industry verticals?

Speaker

Meredith Whittaker


Explanation

There’s a need for empirical analysis of AI performance across different sectors rather than accepting broad claims of transformation


How do we address the fundamental security vulnerabilities created by AI agents that require root-level access to operating systems?

Speaker

Meredith Whittaker


Explanation

AI agents pose existential threats to privacy and security infrastructure, but solutions for prompt injection attacks and data protection remain unresolved


What are the long-term environmental and energy costs of scaling current AI infrastructure, and how do they compare to claimed benefits?

Speaker

Rachel Botsman and Meredith Whittaker


Explanation

The environmental impact of AI data centers and energy consumption needs comprehensive analysis, especially given the lack of panels linking AI and climate change


How can individuals and organizations establish effective boundaries to prevent cognitive atrophy while still benefiting from AI tools?

Speaker

Rachel Botsman


Explanation

There’s a need for practical frameworks to maintain human agency and thinking capabilities while using AI assistance


What alternative AI approaches beyond deep learning-based LLMs might be more suitable for specific applications and less resource-intensive?

Speaker

Meredith Whittaker


Explanation

The current AI monoculture may not be optimal for many use cases, requiring research into alternative approaches


How can copyright and intellectual property laws be effectively applied to AI training data and outputs?

Speaker

Max Tegmark


Explanation

Current legal frameworks are inadequate for addressing AI companies’ use of copyrighted material in training data


What mechanisms can ensure transparency in AI training data disclosure, similar to food ingredient labeling?

Speaker

Max Tegmark


Explanation

Companies currently don’t disclose training data contents, making it difficult to assess copyright infringement and bias issues


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.