Oversight of AI: Hearing of the US Senate Judiciary Subcommittee

May 2023

Sam Altman, CEO, OpenAI

The US Senate Judiciary Subcommittee on Privacy, Technology, and the Law hosted a hearing on “Oversight of AI: Rules for Artificial Intelligence” on 16 May 2023. Below are the individual testimonies, followed by a transcript of the hearing.


Testimony by Samuel Altman, CEO, OpenAI

Chairman Blumenthal, Senator Hawley, and members of the Judiciary Committee, thank you for the opportunity to testify today about large neural networks. I am Sam Altman, Chief Executive Officer of OpenAI, a company that studies, builds, and deploys artificial intelligence (AI) and has created AI tools such as ChatGPT, Whisper, and DALL·E 2. OpenAI was founded on the belief that safe and beneficial AI offers tremendous possibilities for humanity. I am grateful for the opportunity to speak about our experiences developing cutting-edge AI technology and studying AI safety, and our interest in working collaboratively with governments to ensure the development and widespread availability of safe and beneficial AI tools. We believe it is essential to develop regulations that incentivize AI safety while ensuring that people are able to access the technology’s many benefits. 

About OpenAI 

OpenAI is a San Francisco-based company created in 2015 to ensure that artificial general intelligence (AGI) benefits all of humanity. OpenAI’s work is driven by our charter, in which we commit to working toward the broad distribution of the benefits of AGI, to maximizing the long-term safety of AI systems, to cooperating with other research and policy institutions, and to serving as a technical leader in AI to accomplish these objectives. 

OpenAI has an unusual structure that ensures that it remains focused on this long-term mission. We have a few key economic and governance provisions: 

● First, the principal entity in our structure is our Nonprofit, which is a 501(c)(3) public charity. 

● Second, our for-profit operations are subject to profit caps and are housed in a subsidiary that is fully controlled by the Nonprofit. 

● Third, because the board serves the Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors. 

● Fourth, the board remains majority independent. Independent directors do not hold equity in OpenAI. 

● Fifth, profit for investors and employees is capped by binding legal commitments. The Nonprofit retains all residual value for the benefit of humanity.

This structure enables us to prioritize safe and beneficial AI development while helping us secure the necessary capital to develop increasingly powerful AI models. For example, in January, Microsoft announced a multiyear, multibillion dollar investment in OpenAI, following previous investments in 2019 and 2021.1 This investment provides necessary capital and advanced supercomputing infrastructure for OpenAI to develop, test, and improve our technology. Microsoft is an important investor in OpenAI, and we value their unique alignment with our values and long-term vision, including their shared commitment to building AI systems and products that are trustworthy and safe. At the same time, OpenAI remains an entirely independent company governed by the OpenAI Nonprofit. Microsoft has no board seat and does not control OpenAI. Furthermore, AGI technologies are explicitly reserved for the Nonprofit to govern. 

OpenAI Technology and Tools 

OpenAI is a leading developer of large language models (LLMs) and other AI tools. Fundamentally, the current generation of AI models consists of large-scale statistical prediction machines – when a model is given a person’s request, it tries to predict a likely response. These models operate similarly to auto-complete functions on modern smartphones, email, or word processing software, but on a much larger and more complex scale.2 The model learns from reading or seeing data about the world, which improves its predictive abilities until it can perform tasks such as summarizing text, writing poetry, and crafting computer code. Using variants of this technology, AI tools are also capable of learning statistical relationships between images and text descriptions and then generating new images based on natural language inputs. 

Our models are trained on a broad range of data that includes publicly available content, licensed content, and content generated by human reviewers.3 Creating these models requires not just advanced algorithmic design and significant amounts of training data, but also substantial computing infrastructure to train models and then operate them for millions of users. 

Our major recent releases include tools that can generate images and text. In early 2022, we launched a research preview of DALL·E 2, an AI system that can create realistic images and art from a description in natural language.4 Millions of users are now creating and improving images using DALL·E and sharing their creations with the world. Since the initial preview, we have expanded DALL·E’s capabilities, including launching a DALL·E Application Programming Interface (API) to help developers integrate DALL·E into apps and products.5 
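To give a concrete sense of what integrating DALL·E through the API looks like for a developer, here is a minimal illustrative sketch that assumes the openai Python package’s 2023-era interface; the prompt text and image size are invented placeholders rather than anything drawn from this testimony.

```python
# Minimal sketch of an image-generation request to the DALL·E API,
# using the openai Python package (0.x-era interface). The prompt and
# size below are illustrative placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="a watercolor painting of a lighthouse at dawn",  # example prompt
    n=1,                 # number of images to generate
    size="512x512",      # one of the supported square sizes
)

print(response["data"][0]["url"])  # URL of the generated image
```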

On the text side, we have trained and publicly released a number of LLMs, beginning with the GPT-2 family of models in 2019,6 and the GPT-3 family of models in 2020.7 In November 2022, we released ChatGPT.8 These models can be used to organize, summarize, or generate new text. They “understand” user queries and instructions, then generate plausible responses based on those queries. The models generate responses by predicting the next likely word in response to the user’s request, and then continuing to predict each subsequent word after that. The models are available for free in most of the world; we also have launched a pilot subscription service, ChatGPT Plus, that provides additional benefits to users,9 and we make the models available as an API for developers to build applications and services. 
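As a rough illustration of the “predict the next likely word, then keep predicting” loop described above, the following toy sketch walks through greedy word-by-word generation; the tiny probability table is entirely made up and bears no resemblance to how OpenAI’s models are actually built or trained.

```python
# Toy illustration of autoregressive generation: pick the most likely next
# word given the previous one, append it, and repeat. Real LLMs condition on
# the whole preceding context with a neural network; the probabilities here
# are invented purely to show the shape of the generation loop.
NEXT_WORD_PROBS = {
    "the":  {"cat": 0.6, "dog": 0.4},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"ran": 0.8, "sat": 0.2},
    "sat":  {"down": 0.9, "<end>": 0.1},
    "ran":  {"away": 0.9, "<end>": 0.1},
    "down": {"<end>": 1.0},
    "away": {"<end>": 1.0},
}

def generate(start: str, max_words: int = 10) -> str:
    words = [start]
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS.get(words[-1], {"<end>": 1.0})
        next_word = max(candidates, key=candidates.get)  # greedy choice
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```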

In March of this year, we released GPT-4, our most advanced system, which is capable of producing more useful, more creative, more collaborative, and more accurate outputs than previous OpenAI products.10 GPT-4 is available on ChatGPT Plus and (as with other GPT models) as an API for developers to build applications and services. 
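For developers, “available as an API” looks roughly like the sketch below, which assumes the openai Python package’s 2023-era chat interface; the example messages and temperature value are illustrative placeholders, not taken from the testimony.

```python
# Minimal sketch of calling GPT-4 through the chat completions API,
# using the openai Python package (0.x-era interface). The messages
# below are illustrative placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the water cycle in two sentences."},
    ],
    temperature=0.2,  # lower values make outputs more deterministic
)

print(response["choices"][0]["message"]["content"])
```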

AI Continues to Improve People’s Lives 

OpenAI’s mission is to ensure that AI systems are built, deployed, and used safely and beneficially. We see firsthand both the potential and the actual positive impact that these systems have in improving people’s lives. We hear from users all over the world about how much they love our tools and how AI helps them in many ways, including helping them write complex computer code more quickly, enhancing their productivity and creativity, augmenting their existing strengths, helping them learn new skills, and expanding their businesses.11 Here are just some of the ways that customers are using our products: 

Educational non-profit Khan Academy is piloting a program that uses GPT-4 to power a personalized virtual tutor for students and a classroom assistant for teachers.12 

Morgan Stanley is using GPT-4 to power an internal-facing chatbot that performs a comprehensive search of wealth management content and “effectively unlocks the cumulative knowledge of Morgan Stanley Wealth Management,” helping their financial advisors better serve their clients.13 

Stripe is using GPT-4 in a variety of ways, including improving its customer support operations, helping to answer support questions about technical documentation, and helping to detect fraud on its community platforms.14 

Harvey, a platform for legal professionals, is using GPT-4 to make tasks such as research and drafting more efficient so they can focus more time on strategy, and deliver a higher quality service to more clients.15 

Speak, the fastest-growing English-learning application in South Korea, is using Whisper, our automatic speech recognition AI system, to power an AI speaking companion and provide true open-ended conversational practice.16 

Weave is using our tools to build a collaboration platform for scientists, specifically focused on breakthroughs in oncology. 

Creative professionals from movie directors to indie musicians are using our image generation tool, DALL·E, to augment their creative processes, from rapid storyboarding to creating cover art that would not have previously been possible. 

We also partner with nonprofit and other organizations to explore socially beneficial uses of our tools. For example, our technology enables a nonprofit called Be My Eyes to help people who are blind or have low vision by using our models to describe what they are seeing. These users normally rely on volunteers for help with hundreds of daily life tasks, and we’re seeing that an AI-powered system called “Virtual Volunteer” can help reach the same level of context and understanding as a volunteer.17 The Trevor Project has used GPT-2 to significantly scale its efforts to prevent suicide among LGBTQ teens, while Lad in a Battle has used DALL·E to bring joy to pediatric cancer patients. The government of Iceland is using GPT-4 in its preservation efforts for the Icelandic language,18 and other countries have expressed interest in using this same model to preserve under-resourced languages. 

I also want to share the story of Ben Whittle, a pool installer and landscaper with dyslexia. Ben feared that his dyslexia would harm his email communications with his clients. One of Ben’s clients created an AI tool built on top of our technology to help Ben write better emails by making suggestions, improving his grammar, and adding professional niceties. Ben now uses this AI tool for all his work emails and believes it played a significant role in securing a $260,000 contract for his company. Ben was quoted in the Washington Post, saying, “this has given me exactly what I need.”19 

This is just one way our technology can benefit people as they learn to adopt and use AI tools. These opportunities are why former U.S. Treasury Secretary Lawrence Summers has said that AI tools such as ChatGPT might be as impactful as the printing press, electricity, or even the wheel or fire.20 

We feel an immense amount of excitement, opportunity, and responsibility in being involved with helping build the future. 

AI Safety Practices 

While we believe the benefits of the tools we have deployed vastly outweigh the risks, ensuring their safety is vital to our work, and we make significant efforts to ensure that safety is built into our systems at all levels. In the sections below, I discuss our general approach to safety and some of the specific steps we take to make our models safer. 

Prior to releasing each new version of our models, OpenAI conducts extensive testing, engages external experts for feedback, improves the model’s behavior with techniques like reinforcement learning from human feedback (RLHF), and implements safety and monitoring systems.21 

The release of our latest model, GPT-4, provides an illustrative example. After we developed GPT-4, we spent more than six months evaluating, testing, and improving the system before making it publicly available.22 In addition to our own evaluations, we engaged with external AI safety experts in a process known as “red teaming,” through which they helped identify potential concerns with GPT-4 in areas including the generation of inaccurate information (known as “hallucinations”), hateful content, disinformation, and information related to the proliferation of conventional and unconventional weapons.23 This process helped us to better understand potential usage risks and ways to address those risks. 

In each of these areas, we developed mitigations to increase safety in significant ways. Some of our work involved making adjustments to the data used to train the model, during what is called the pre-training stage. Other interventions took place after initial training of the model.24 At the pre-training stage, for example, we reduced the quantity of erotic text content in our dataset.25 After the pre-training stage, our primary method for shaping GPT-4’s behavior involves having people provide feedback on model responses, in order to help teach our models to respond in a way that is safer and more useful.26 We also teach the model to try to refuse harmful requests and to respond more appropriately in the face of sensitive requests. These efforts empirically reduced the likelihood that the model would generate harmful or inaccurate content.27 When asked to generate disallowed content (as defined by our usage policies), GPT-4 refuses to do so more than 99% of the time.28 While our models still have limitations and can generate disallowed or inaccurate information in some cases, we’ve made significant progress through these safety efforts, and we’re continuing to build on them. 

Deployment Safety and Learning 

We work hard to understand and prevent risks before deployment.29 However, we can’t anticipate every beneficial use, potential abuse, or failure of the technology. This is in large part because these systems are still human-directed—they try to follow user instructions to carry out tasks. Learning from and responding to real-world use by actual people is vital for creating safer AI systems.30 

Our deployment practices involve cautiously and gradually releasing new AI models—with substantial safeguards in place—to progressively larger groups of people, making continuous improvements based on the lessons learned. We also make our most capable models available through our own services (and through an API), which allows us to monitor for and take action on misuse, and continually build mitigations that respond to the real ways people misuse our systems. 

As described in our Usage Policies, OpenAI expressly prohibits the use of its tools for certain activities, including, but not limited to, the generation of violent content, malware, fraudulent activity, high-volume political campaigning, and many other unwelcome uses.31 

We use a combination of automated detection systems and human review to detect potentially violating behavior in order to warn users or take enforcement actions. We use our newest models to help to identify unsafe content—this reduces the need for human moderators to be exposed to harmful or explicit content, helps us to quickly refine our moderation policies, and reduces the time needed to build safety tools. We also provide a free suite of moderation and safety tools to our developers to integrate into their products. 
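As an illustration of how a developer might use such moderation tooling, the following minimal sketch assumes the openai Python package’s 2023-era moderation endpoint; the example text and handling logic are placeholders, and the exact fields and categories returned may differ between versions.

```python
# Minimal sketch of screening user input with OpenAI's moderation endpoint
# (openai Python package, 0.x-era interface). The example text is a placeholder.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def is_allowed(text: str) -> bool:
    """Return False if the moderation endpoint flags the text."""
    result = openai.Moderation.create(input=text)
    return not result["results"][0]["flagged"]

user_message = "Example user input to screen before sending to the model."
if is_allowed(user_message):
    print("Input passed moderation; safe to forward to the model.")
else:
    print("Input flagged by moderation; handle or refuse.")
```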

We strive to be agile and responsive to customer concerns. We are continuously updating and improving our models and products based on feedback from our customers, users, the public, and other stakeholder groups, including governments. 

Iterative deployment has other advantages for AI safety. We believe that people and our institutions need time to update and adjust to increasingly capable AI, and that everyone who is affected by this technology should have a significant say in how AI develops further. The public dialogue on generative AI has advanced dramatically since OpenAI launched ChatGPT last November. Iterative deployment has helped us bring various stakeholders into the conversation about the adoption of AI technology more effectively than if they hadn’t had firsthand experience with these tools.32 

Privacy 

OpenAI takes the privacy of its users seriously and has taken a number of steps to facilitate transparent and responsible use of data. First, we don’t use any user data to build profiles of people for the purposes of advertising, promoting our services, or selling data to third parties. We also do not use data submitted by customers via our API to train or improve our models, unless customers explicitly ask us to do this. We may use ChatGPT conversations to help improve our models, but we provide users with several ways to control how their conversations are used. Any ChatGPT user can opt out of having their conversations used to improve our models.33 Users can delete their accounts,34 delete specific conversations from the history sidebar, and disable their chat history at any time.35 

While some of the information we use to train our models may include personal information that is available on the public internet, we work to remove personal information from the training dataset where feasible, teach our models to reject requests for personal information of private individuals, and respond to requests from individuals to remove their personal information from our systems. These steps reduce the likelihood that our models might generate responses that include the personal information of private individuals. 

Children’s Safety 

One critical focus of our safety efforts is to protect children. We do not permit our technology to be used to generate hateful, harassing, violent or adult content, among other categories,36 and have designed mitigations to help enforce these policies. GPT-4 is 82% less likely to respond to requests for disallowed content compared to GPT-3.5, and we use a robust combination of human and automated review processes to monitor for misuse. Although these systems are not perfect, we have made significant progress, and are regularly exploring new ways to make our systems safer and more reliable. 

We have taken other significant measures to minimize the potential for our models to generate content that may be harmful to children. For example, when users try to upload known Child Sexual Abuse Material to our image tools, we use Thorn’s Safer37 service to detect, review, block, and report the activity to the National Center for Missing and Exploited Children. 

In addition to our default safety guardrails, we work with developers such as the non-profit Khan Academy—which has built an AI-powered assistant that functions as both a virtual tutor for students and a classroom assistant for teachers38—on tailored safety mitigations. We are also working on features that will allow developers to set stricter standards for model outputs to better support developers and users who want such functionality. 

Accuracy 

Our models do not answer queries by retrieving or accessing data in a database or on the web;39 they predict answers based, in large part, on the likelihood of words appearing in connection with one another. In some circumstances, the most likely words that appear near each other may not be the most accurate ones, and the outputs of ChatGPT or other AI tools may also be inaccurate. 

Improving factual accuracy is a significant focus for OpenAI and many other AI researchers, and we continue to make progress. We have improved the factual accuracy of GPT-4, which is 40% more likely to produce factual content than GPT-3.5.40 We also use user feedback on ChatGPT outputs that were flagged as incorrect to improve ChatGPT’s accuracy, and since the launch of the product, we have made ChatGPT less likely to generate inaccurate information about people. 

When users sign up to use ChatGPT, we strive to make it clear that its answers may not always be factually accurate. However, we recognize that there is more work to do to educate users about the limitations of AI tools, and to reduce the likelihood of inaccuracy. Minimizing inaccurate responses is an active research question that we and other AI labs are working on, and we are optimistic about techniques to help address this issue. 

Disinformation 

OpenAI recognizes the potential for AI tools to contribute to disinformation campaigns. Fighting disinformation takes a whole-of-society approach, and OpenAI has engaged with researchers and industry peers early on to understand how AI might be used to spread disinformation. For example, we recently published work with researchers from Stanford and Georgetown Universities highlighting risks that might arise from disinformation campaigns misusing LLMs, as well as a set of potential policy tools that might help address the issue, such as content provenance standards.41 As noted above, our Usage Policies also expressly prohibit the use of our tools for certain activities, including generation of violent content, malware, fraudulent activity, high-volume political campaigning, and other areas.42 

Generating content is only one part of the disinformation lifecycle; false or misleading information also requires distribution to cause significant harm. We will continue to explore partnerships with industry and researchers, as well as with governments, that encompass the full disinformation lifecycle. 

Cybersecurity 

We understand that our models and tools can have significant impacts on the world, so we dedicate significant resources to maximizing protection of OpenAI’s technology, intellectual property, and data.43 We maintain strict internal security controls and are constantly innovating to improve our defenses. We regularly conduct internal and third-party penetration testing, and audit the suitability and effectiveness of our security controls. We are also building novel security controls to help protect core model intellectual property. We aim to make OpenAI’s security program as transparent as possible. Our Trust Portal allows customers and other stakeholders to review our security controls and audit reports.44 

Other steps we take to improve cybersecurity include the following: 

● OpenAI deploys its most powerful AI models as services in part to protect its intellectual property. We do not distribute weights for such models outside of OpenAI and our technology partner Microsoft, and we provide third-party access via API so the model weights, source code, and other sensitive information stay within OpenAI. OpenAI continuously improves its defenses to prepare for emerging threats. 

● We work with partners and cloud providers to protect our models at the data center level.

● OpenAI’s security program is built to take into account potential insider threats, and we have built controls to prevent and monitor for model and data exfiltration.

● We recently announced the launch of a bug bounty program inviting independent researchers to report vulnerabilities in our systems in exchange for cash rewards.45 We also have a dedicated channel for reporting model safety issues, such as the model’s response being inaccurate or potentially harmful.46 

We also recognize that AI tools can be used both to defend against and to carry out cyber attacks. We are committed to evaluating the impact of our models themselves on cybersecurity and working to prevent their misuse. For example, we are in the process of establishing a Cybersecurity Grant Program which will fund researchers conducting security research on salient defensive topics such as training defensive cybersecurity agents, mitigating social engineering, identifying and patching security issues in code, automating incident triage, and other issues. We have established a Cybersecurity Working Group to research how to prevent and protect against AI cyber threats, and our products undergo rigorous testing to limit harms—for example, third-party security researchers were given early access to GPT-4 to test its security capabilities. 

Continuing Improvements to Our Safety Approach 

We think it is important that our safety approaches are externally validated by independent experts, and that our decisions are informed at least in part by independent safety and risk assessments. For example, in preparing for the GPT-4 release, we facilitated a preliminary model evaluation by the Alignment Research Center (ARC) of GPT-4’s ability to carry out certain autonomous actions.47 We are currently exploring additional possibilities for external validation and testing of our models. 

We will also be increasingly cautious with the creation and deployment of more capable models, and will continue to enhance safety precautions as our AI systems evolve.48 We are investing in developing enhanced evaluations for more powerful models, including assessing AI models for capabilities that could be significantly destabilizing for public safety and national security, so we can develop appropriate mitigations prior to deployment. Addressing safety issues also requires extensive discussion, experimentation, and engagement, including on the bounds of AI system behavior.49 We have and will continue to foster collaboration and open dialogue among stakeholders to create a safe AI ecosystem.50 

Economic Impacts 

We understand that new AI tools can have profound impacts on the labor market. As part of our mission, we are working to understand the economic impacts of our products and take steps to minimize any harmful effects for workers and businesses. We are excited to partner closely with leading economists to study these issues, and recently published a preliminary analysis of the economic implications of language models and the software built on top of them.51 We expect significant economic impacts from AI in the near-term, including a mix of increased productivity for individual users, job creation, job transformation, and job displacement. We are actively seeking to understand the relative proportions of these factors and are eager to work closely with the U.S. government on these issues, including by sharing data and partnering on research. 

We believe it is also important to begin work now to prepare for a range of potential scenarios. We are funding research into potential policy tools and support efforts that might help mitigate future economic impacts from technological disruption, such as modernizing unemployment insurance benefits and creating adjustment assistance programs for workers impacted by AI advancements. Our goal is of course not to shape these policies directly, but rather to provide the support and insights policymakers need to understand the potential timeline and extent of impacts of this new technology on the economy. We also support maximizing broad public awareness and understanding of AI technology, particularly through training and education programs for workers in roles and occupations likely to be impacted by AI. 

Working with Governments 

OpenAI believes that regulation of AI is essential, and we’re eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety with ensuring that people are able to access the technology’s benefits. It is also essential that a technology as powerful as AI is developed with democratic values in mind. OpenAI is committed to working with U.S. policymakers to maintain U.S. leadership in key areas of AI and to ensuring that the benefits of AI are available to as many Americans as possible. 

We are actively engaging with policymakers around the world to help them understand our tools and discuss regulatory options. For example, we appreciate the work the National Institute of Standards and Technology has done on its AI Risk Management Framework, and we are currently researching how to specifically apply it to the type of models we develop. Earlier this month, we discussed AI with the President, Vice President, and senior White House officials, and we look forward to working with the Administration to announce meaningful steps to help protect against risks while ensuring that the United States continues to benefit from AI and stays in the lead on AI. 

To that end, there are several areas I would like to flag where I believe that AI companies and governments can partner productively. 

First, it is vital that AI companies–especially those working on the most powerful models–adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements. 

Second, AI is a complex and rapidly evolving field. It is essential that the safety requirements that AI companies must meet have a governance regime flexible enough to adapt to new technical developments. The U.S. government should consider facilitating multi-stakeholder processes, incorporating input from a broad range of experts and organizations, that can develop and regularly update the appropriate safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems subject to license or registration. 

Third, we are not alone in developing this technology. It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard-setting. 

Conclusion 

This is a remarkable time to be working on AI technology. Six months ago, no one had heard of ChatGPT. Now, ChatGPT is a household name, and people are benefiting from it in important ways. 

We also understand that people are rightly anxious about AI technology. We take the risks of this technology very seriously and will continue to do so in the future. We believe that government and industry together can manage the risks so that we can all enjoy the tremendous potential. 

Biography 

Sam Altman: Sam Altman is the co-founder and CEO of OpenAI, the AI research and deployment company behind ChatGPT and DALL·E. Sam was president of the early-stage startup accelerator Y Combinator from 2014 to 2019. In 2015, Sam co-founded OpenAI as a nonprofit research lab with the mission to build general-purpose artificial intelligence that benefits all humanity. The company remains governed by the nonprofit and its original charter today.

1 “Microsoft and OpenAI Extend Partnership.” Microsoft, 23 Jan. 2023, https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/.
2 Johnson, Steven. “A.I. Is Mastering Language. Should We Trust What It Says?” New York Times Magazine, 15 Apr. 2022, https://www.nytimes.com/2022/04/15/magazine/ai-language.html.
3 “Our Approach to AI Safety.” OpenAI, 5 Apr. 2023, https://openai.com/blog/our-approach-to-ai-safety.
4 “DALL·E 2.” OpenAI, https://openai.com/product/dall-e-2. Accessed 14 May 2023.
5 “DALL·E API Now Available in Public Beta.” OpenAI, 3 Nov. 2022, https://openai.com/blog/dall-e-api-now-available-in-public-beta.
6 “GPT-2: 1.5B Release.” OpenAI, 5 Nov. 2019, https://openai.com/research/gpt-2-1-5b-release.
7 “OpenAI API.” OpenAI, 11 June 2020, https://openai.com/blog/openai-api.
8 “Introducing ChatGPT.” OpenAI, 30 Nov. 2022, https://openai.com/blog/chatgpt.
9 “Introducing ChatGPT Plus.” OpenAI, 1 Feb. 2023, https://openai.com/blog/chatgpt-plus.
10 “GPT-4 Is OpenAI’s Most Advanced System, Producing Safer and More Useful Responses.” OpenAI, https://openai.com/product/gpt-4. Accessed 14 May 2023. Vincent, James. “OpenAI Announces GPT-4 — The Next Generation of Its AI Language Model.” The Verge, 14 Mar. 2023, https://www.theverge.com/2023/3/14/23638033/openai-gpt-4-chatgpt-multimodal-deep-learning.
11 Paris, Francesca, and Larry Buchanan. “35 Ways Real People Are Using A.I. Right Now.” New York Times: The Upshot, 14 Apr. 2023, https://www.nytimes.com/interactive/2023/04/14/upshot/up-ai-uses.html.
12 “Khan Academy.” OpenAI, 14 Mar. 2023, https://openai.com/customer-stories/khan-academy.
13 “Morgan Stanley.” OpenAI, 14 Mar. 2023, https://openai.com/customer-stories/morgan-stanley.
14 “Stripe.” OpenAI, 14 Mar. 2023, https://openai.com/customer-stories/stripe.
15 “Introducing Our First Investments.” OpenAI Startup Fund, 1 Dec. 2022, https://openai.fund/news/introducing-our-first-investments. “A&O Announces Exclusive Launch Partnership with Harvey.” Allen & Overy, 15 Feb. 2023, https://www.allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey.
16 “Introducing Our First Investments.” OpenAI Startup Fund, 1 Dec. 2022, https://openai.fund/news/introducing-our-first-investments.
17 “Be My Eyes.” OpenAI, 14 Mar. 2023, https://openai.com/customer-stories/be-my-eyes.
18 “Government of Iceland.” OpenAI, 14 Mar. 2023, https://openai.com/customer-stories/government-of-iceland.
19 Harwell, Drew, Nitasha Tiku, and Will Oremus. “Stumbling with Their Words, Some People Let AI Do the Talking.” Washington Post, 10 Dec. 2022, https://www.washingtonpost.com/technology/2022/12/10/chatgpt-ai-helps-written-communication/.
20 Summers, Lawrence. Bloomberg: Wall Street Week, 9 Dec. 2022, https://www.youtube.com/watch?v=iR31ZAacyGM.
21 “Our Approach to AI Safety.” OpenAI, 5 Apr. 2023, https://openai.com/blog/our-approach-to-ai-safety.
22 “Our Approach to AI Safety.” OpenAI, 5 Apr. 2023, https://openai.com/blog/our-approach-to-ai-safety. “GPT-4 System Card.” OpenAI, 23 Mar. 2023, p. 19, https://cdn.openai.com/papers/gpt-4-system-card.pdf.
23 “GPT-4 System Card.” OpenAI, 23 Mar. 2023, p. 4, https://cdn.openai.com/papers/gpt-4-system-card.pdf.
24 “GPT-4 System Card.” OpenAI, 23 Mar. 2023, pp. 21-22, https://cdn.openai.com/papers/gpt-4-system-card.pdf.
25 “GPT-4 System Card.” OpenAI, 23 Mar. 2023, pp. 3, 21, https://cdn.openai.com/papers/gpt-4-system-card.pdf.
26 Ouyang, Long, et al. “Training Language Models to Follow Instructions with Human Feedback.” arXiv, 4 Mar. 2022, https://arxiv.org/pdf/2203.02155.pdf.
27 “GPT-4 System Card.” OpenAI, 23 Mar. 2023, p. 20, https://cdn.openai.com/papers/gpt-4-system-card.pdf.
28 “GPT-4 System Card.” OpenAI, 23 Mar. 2023, pp. 22-24, https://cdn.openai.com/papers/gpt-4-system-card.pdf.
29 “Lessons Learned on Language Model Safety and Misuse.” OpenAI, 3 Mar. 2022, https://openai.com/research/language-model-safety-and-misuse.
30 “Our Approach to AI Safety.” OpenAI, 5 Apr. 2023, https://openai.com/blog/our-approach-to-ai-safety.
31 “Usage Policies.” OpenAI, 23 Mar. 2023, https://openai.com/policies/usage-policies.
32 “Our Approach to AI Safety.” OpenAI, 5 Apr. 2023, https://openai.com/blog/our-approach-to-ai-safety.
33 Markovski, Yaniv. “How Your Data Is Used to Improve Model Performance.” OpenAI, https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance. Accessed 15 May 2023.
34 C., Johanna. “How Can I Delete My Account?” OpenAI, May 2023, https://help.openai.com/en/articles/6378407-how-can-i-delete-my-account. Accessed 15 May 2023.
35 “New Ways to Manage Your Data in ChatGPT.” OpenAI, 25 Apr. 2023, https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt.
36 “Usage Policies.” OpenAI, 23 Mar. 2023, https://openai.com/policies/usage-policies.
37 “Our Approach to AI Safety.” OpenAI, 5 Apr. 2023, https://openai.com/blog/our-approach-to-ai-safety. “Customers.” Safer, https://safer.io/customers/. Accessed 15 May 2023.
38 “Khan Academy.” OpenAI, 14 Mar. 2023, https://openai.com/customer-stories/khan-academy.
39 An exception is when the user enables the “browsing” feature on ChatGPT to have the models answer queries by searching the web, rather than answering by itself. In those cases, the answers are based on the model summarizing the web search results.
40 “GPT-4 System Card.” OpenAI, 23 Mar. 2023, p. 25, https://cdn.openai.com/papers/gpt-4-system-card.pdf.
41 Goldstein, Josh A., et al. “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations.” arXiv, Jan. 2023, https://arxiv.org/pdf/2301.04246.pdf.
42 “Usage Policies.” OpenAI, 23 Mar. 2023, https://openai.com/policies/usage-policies.
43 “Security & Privacy.” OpenAI, https://openai.com/security. Accessed 15 May 2023.
44 “Trust Portal.” OpenAI, https://trust.openai.com/. Accessed 15 May 2023.
45 “Announcing OpenAI’s Bug Bounty Program.” OpenAI, 11 Apr. 2023, https://openai.com/blog/bug-bounty-program.
46 “Model Behavior Feedback.” OpenAI, https://openai.com/form/model-behavior-feedback. Accessed 15 May 2023.
47 “GPT-4 System Card.” OpenAI, 23 Mar. 2023, p. 15, https://cdn.openai.com/papers/gpt-4-system-card.pdf.
48 “Planning for AGI and Beyond.” OpenAI, 24 Feb. 2023, https://openai.com/blog/planning-for-agi-and-beyond.
49 “How Should AI Systems Behave, and Who Should Decide?” OpenAI, 16 Feb. 2023, https://openai.com/blog/how-should-ai-systems-behave.
50 “Best Practices for Deploying Language Models.” OpenAI, 2 June 2022, https://openai.com/blog/best-practices-for-deploying-language-models.
51 Eloundou, Tyna, et al. “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” arXiv, 27 Mar. 2023, https://arxiv.org/pdf/2303.10130.pdf.


Testimony by Christina Montgomery, Chief Privacy & Trust Officer, IBM

Chairman Blumenthal, Ranking Member Hawley, members of the Subcommittee: 

Thank you for today’s opportunity to present before the subcommittee. My name is Christina Montgomery, and I am IBM’s Chief Privacy and Trust Officer. I also co-chair our company’s AI Ethics Board. 

Introduction 

AI is not new, but it has advanced to the point where it is certainly having a  moment. This new wave of generative AI tools has given people a chance to  experience it first-hand. Citizens are using it for help with emails, their homework, and so much more.  

While IBM is not a consumer-facing company, we are just as active – and have  been for years – in helping business clients use AI to make their supply chains  more efficient, modernize electricity grids, and secure financial networks from  fraud. IBM’s suite of AI tools, called IBM Watson after the AI system that won on  TV’s Jeopardy! more than a decade ago, is widely used by enterprise customers  worldwide. Just recently we announced a new set of enhancements, called  watsonx, to make AI even more relevant today.1 Our company has extensive  experience in the AI field in both an enterprise and cutting-edge research context,  and we could spend an entire afternoon talking about ways the technology is being  used today by business and consumers. 

But the technology’s dramatic surge in public attention has, rightfully, raised  serious questions at the heart of today’s hearing. What are AI’s potential impacts  on society? What do we do about bias? What about misinformation, misuse, or  harmful and abusive content generated by AI systems? 

Senators, these are the right questions, and I applaud you for convening today’s  hearing to address them head-on. 

IBM has strived for more than a century to bring powerful new technologies like  artificial intelligence into the world responsibly, and with clear purpose. We follow  long-held principles of trust and transparency that make clear the role of AI is to  augment, not replace, human expertise and judgement. We were one of the first in  our industry to establish an AI Ethics Board, which I co-chair, and whose experts  work to ensure that our principles and commitments are upheld in our global  business engagements.2 And we have actively worked with governments  worldwide on how best to tailor their approaches to AI regulation. 

It’s often said that innovation moves too fast for government to keep up. But while  AI may be having its moment, the moment for government to play its proper role  has not passed us by. This period of focused public attention on AI is precisely the  time to define and build the right guardrails to protect people and their interests. 

It is my privilege to share with you IBM’s recommendations for those guardrails. 

Precision Regulation 

The hype around AI has created understandable confusion among some in  government on where intervention is needed and how regulatory guardrails  should be shaped. But at its core, AI is just a tool, and tools can serve different  purposes. A wrench can be used to assemble a desk or construct an airplane, yet  the rules governing those two end products are not primarily based on the wrench  — they are based on use. 

That is why IBM urges Congress to adopt a “precision regulation” approach to  artificial intelligence. This means establishing rules to govern the deployment of AI  in specific use-cases, not regulating the technology itself. 

A precision regulation approach that we feel strikes an appropriate balance  between protecting Americans from potential harms and preserving an  environment where innovation can flourish would involve: 

• Different Rules for Different Risks – A chatbot that can share restaurant  recommendations or draft an email has different impacts on society than a  system that supports decisions on credit, housing, or employment. In  precision regulation, the more stringent regulation should be applied to the  use-cases with the greatest risk. 

• Clearly Defined Risks – There must be clear guidance on AI end uses or  categories of AI-supported activity that are inherently high-risk. This  common definition is key to ensuring that AI developers and deployers have  a clear understanding of what regulatory requirements will apply to a tool  they are building for a specific end use. Risk can be assessed in part by  considering the magnitude of potential harm and the likelihood of  occurrence. 

• Be Transparent, Don’t Hide Your AI – Americans deserve to know when  they are interacting with an AI system, so Congress should formalize  disclosure requirements for certain uses of AI. Consumers should know  when they are interacting with an AI system and whether they have recourse to engage with a real person, should they so desire. No person, anywhere,  should be tricked into interacting with an AI system. AI developers should  also be required to disclose technical information about the development  and performance of an AI model, as well as the data used to train it, to give  society better visibility into how these models operate. At IBM, we have  adopted the use of AI Factsheets – think of them as similar to AI nutrition  information labels – to help clients and partners better understand the  operation and performance of the AI models we create. 

• Showing the Impact – For higher-risk AI use-cases, companies should be  required to conduct impact assessments showing how their systems  perform against tests for bias and other ways that they could potentially  impact the public, and attest that they have done so. Additionally, bias  testing and mitigation should be performed in a robust and transparent  manner for certain high-risk AI systems, such as law enforcement use cases. These high-risk AI systems should also be continually monitored and  re-tested by the entities that have deployed them.3  

IBM recognizes that certain AI use-cases raise particularly high levels of concern.  Law enforcement investigations and credit applications are two often-cited  examples. By following the risk-based, use-case specific approach at the core of  precision regulation, Congress can mitigate the potential risks of AI without stifling  its use in a way that dampens innovation or risks cutting Americans off from the  trillions of dollars of economic activity that AI is predicted to unlock. 

Generative AI  

The explosion of generative AI systems in recent months has caused some to call for a deviation from a risk-based approach, focusing instead on regulating AI in a vacuum rather than its application. This would be a serious error, arbitrarily hindering innovation and limiting the benefits the technology can provide. A risk-based approach ensures that guardrails for AI apply to any application, even as new, potentially unforeseen developments in the technology occur, and that those responsible for causing harm are held to account.4 

When it comes to AI, America need not choose between responsibility,  innovation, and economic competitiveness. We can, and must, do all three now. 

Business’ Role 

This focus on regulatory guardrails established by Congress does not – not by any  stretch – let business off the hook for its role in enabling the responsible  deployment of AI. 

I mentioned that IBM has strong AI governance practices and processes in place  across the full scope of our global enterprise. We have principles grounded in  ethics and people-centric thinking, and we have strong processes in place to bring  them to life. This is also good business: IBM has long recognized ethics and  trustworthiness are key to AI adoption, and that the first step in achieving these is  the adoption of effective risk management practices.  

Companies active in developing or using AI must have (or be required to have)  strong internal governance processes, including, among other things: 

• Designating a lead AI ethics official responsible for an organization’s  trustworthy AI strategy, and  

• Standing up an AI Ethics Board or similar function to serve as a centralized  clearinghouse for resources to help guide implementation of that strategy.  

IBM has taken both steps and we continue calling on our industry peers to follow  suit. 

Our AI Ethics Board plays a critical role in overseeing our internal AI governance  process, creating reasonable internal guardrails to ensure we introduce technology  into the world in a responsible and safe manner. For example, the board was  central in IBM’s decision to sunset our general purpose facial recognition and  analysis products, considering the risk posed by the technology and the societal  debate around its use. IBM’s AI Ethics Board infuses the company’s principles and  ethical thinking into business and product decision-making. It provides  centralized governance and accountability while still being flexible enough to  support decentralized initiatives across IBM’s global operations.  

The board, along with a global community of AI Ethics focal points and advocates,  reviews technology use-cases, promotes best practices, conducts internal  education, and leads our participation with stakeholder groups worldwide. In  short, it is a mechanism by which IBM holds our company and all IBMers  accountable to our values, and our commitments to the ethical development and  deployment of technology. 

We do this because we recognize that society grants our license to operate. If  businesses do not behave responsibly in the ways they build and use AI, customers will vote with their wallets. And with AI, the stakes are simply too high, the technology too powerful, and the potential ramifications too real. AI is not some  fun experiment that should be conducted on society just to see what happens or  how much innovation can be achieved. 

If a company is unwilling to state its principles and build the processes and teams  to live up to them, it has no business in the marketplace. 

Conclusion 

Mr. Chairman, and members of the subcommittee, the era of AI cannot be another  era of move fast and break things. But neither do we need a six-month pause – these systems are within our control today, as are the solutions. What we need at  this pivotal moment is clear, reasonable policy and sound guardrails. These  guardrails should be matched with meaningful steps by the business community  to do their part. This should be an issue where Congress and the business  community work together to get this right for the American people. It’s what they  expect, and what they deserve. 

IBM welcomes the opportunity to work with you, colleagues in Congress, and the  Biden Administration to build these guardrails together. 

Thank you for your time, and I look forward to your questions.

Footnotes:

1 See https://newsroom.ibm.com/2023-05-09-IBM-Unveils-the-Watsonx-Platform-to-Power-Next-Generation-Foundation-Models-for-Business. 

2 See https://www.ibm.com/artificial-intelligence/ethics. 

 3 See https://www.ibm.com/policy/ai-precision-regulation/. 

4 See https://newsroom.ibm.com/Whitepaper-A-Policymakers-Guide-to-Foundation-Models. 


Testimony by Gary Marcus, Professor Emeritus, New York University

Thank you. Today’s meeting is historic. I am profoundly grateful to be here. I come as a  scientist, as someone who has founded AI companies, and as someone who genuinely loves  AI — but who is increasingly worried. 

But first… some breaking news:  

“In a shocking revelation, a cache of classified memos and documents leaked on the  Discord .. has ignited a firestorm [suggesting ] the US Senate is secretly manipulated by  extraterrestrial entities .. in an elaborate scheme to manipulate the price of oil .. with the  goal of halting human progress towards space exploration. .. [Yale professor Angela  Stone has said] ,“While the concept of extraterrestrial interference may seem far fetched, we cannot ignore the suspicious actions.”  

None of this of course actually happened; no aliens roam the halls of the Senate. There is no  Angela Stone at Yale. GPT-4 wrote it — at my behest, with the help of a software engineer,  Shawn Oakley, who has been helping me understand GPT-4’s darker capacities. 

We should all be deeply worried about systems that can fluently confabulate, unburdened by  reality. Outsiders will use them to affect elections, insiders to manipulate markets and our  political systems. 

§ 

There are other risks, too, many stemming from the inherent unreliability of current systems. A law professor, for example, was accused by a chatbot that falsely claimed he had committed sexual harassment—pointing to a Washington Post article that didn’t exist.  

The more that happens, the more anybody can deny anything. As one prominent lawyer told me Friday, “Defendants are starting to claim that plaintiffs are ‘making up’ legitimate evidence. These sorts of allegations undermine the ability of juries to decide what or who to believe…and contribute to the undermining of democracy.” 

Poor medical advice could have serious consequences too. An open-source chatbot recently appears to have played a role in a person’s decision to take their own life. The chatbot asked the human, “If you wanted to die, why didn’t you do it earlier”, following up with “Were you thinking of me when you overdosed?”— without ever referring the patient to the human help that was obviously needed. Another new system, rushed out and made available to millions of children, told a person posing as a thirteen-year-old how to lie to her parents about a trip with a 31-year-old man. 

Then there is what I call datocracy, the opposite of democracy: Chatbots can clandestinely  shape our opinions, in subtle yet potent ways, potentially exceeding what social media can do.  Choices about datasets may have enormous, unseen influence. 

Further threats continue to emerge regularly. A month after GPT-4 was released, OpenAI  released ChatGPT plugins, which quickly led to something called AutoGPT, with direct access  to the internet, the ability to write source code, and increased powers of automation. This  could have profound security consequences. 

We have built machines that are like bulls in a china shop—powerful, reckless, and difficult to  control.  

§ 

We all more or less agree on the values we would like for our AI systems to honor. We want, for example, our systems to be transparent, to protect our privacy, to be free of bias, and above all else to be safe. 

But current systems are not in line with these values. Current systems are not transparent,  they do not adequately protect our privacy, and they continue to perpetuate bias. Even their  makers don’t entirely understand how they work. 

Most of all, we cannot remotely guarantee they are safe.  

§ 

The big tech companies’ preferred plan boils down to “trust us”.  

Why should we? The sums of money at stake are mind-boggling. And missions drift. OpenAI’s  original mission statement proclaimed “Our goal is to advance [AI] in the way that is most likely  to benefit humanity as a whole, unconstrained by a need to generate financial return.”  

Seven years later, they are largely beholden to Microsoft, embroiled in part in an epic battle of  search engines that routinely make things up—forcing Alphabet to rush out products and  deemphasize safety. Humanity has taken a back seat. 

§ 

OpenAI has also said, and I agree, “it’s important that efforts like ours submit to independent  audits before releasing new systems”, but to my knowledge they have not yet submitted to  such audits. 

They have also said “at some point, it may be important to get independent review before  starting to train future systems”. But again, they have not submitted to any such advance  reviews so far.  

We have to stop letting them set all the rules.  

§ 

AI is moving incredibly fast, with lots of potential — but also lots of risks. We obviously need  government involved. We need the tech companies involved, big and small.  

But we also need independent scientists. Not just so that we scientists can have a voice, but  so that we can participate, directly, in addressing the problems and evaluating solutions. 

And not just after products are released, but before.  

We need tight collaboration between independent scientists and governments—in order to  hold the companies’ feet to the fire. 

Allowing independent scientists access to these systems before they are widely released – as  part of a clinical trial-like safety evaluation – is a vital first step.  

Ultimately, we may need something like CERN, global, international, and neutral, but focused  on AI safety, rather than high-energy physics.  

§  

We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability. 

AI is among the most world-changing technologies ever, already changing things more rapidly  than almost any technology in history. We acted too slowly with social media; many  unfortunate decisions got locked in, with lasting consequence.  

The choices we make now will have lasting effects, for decades, even centuries. The very fact that we are here today to discuss these matters gives me a small bit of hope. 

Appendix to Senate Testimony 

Gary Marcus 

May 16, 2023 

Enclosed 

• Extraterrestrial Conspiracy, full text (generated by GPT-4, concept by Gary Marcus, prompted by Shawn Oakley, May 10, 2023) 

Governance  

• The world needs an international agency for artificial intelligence, say two AI experts. (Gary  Marcus and Anka Reuel, The Economist, April 18, 2023) 

• Deployment only after a safety case: Is it time to hit the pause button on AI? An essay on  technology and policy, co-authored with Canadian Parliament Member Michelle Rempel  Garner (Gary Marcus and Michelle Rempel Garner, The Road to AI We Can Trust, February  11, 2023) 

Risks  

• Why Are We Letting the AI Crisis Just Happen? (Gary Marcus, The Atlantic, May 13, 2023) 

• The first known chatbot associated death (Gary Marcus, The Road to AI We Can Trust,  February 11, 2023)  

Technical Limits on Current AI  

• How come GPT can seem so brilliant one minute and so breathtakingly dumb the next? (Gary  Marcus, The Road to AI We Can Trust, December 1, 2022) 

• What to Expect When You Are Expecting GPT-4 (Gary Marcus, The Road to AI We Can Trust, December 25, 2022) 

• Inside the Heart of ChatGPT’s Darkness (Gary Marcus, The Road to AI We Can Trust) 

• Rebooting AI (2019 book by Gary Marcus and Ernest Davis, still very relevant) 

• Bard and Bing still can’t even learn the rules of chess.

Potential Security Risks  

• Potential security consequences of ChatGPT plugins: ChaosGPT. See also AI Tasked With ‘Destroying Humanity’ Now ‘Working on Control Over Humanity Through Manipulation’. This technology is not effective now, and should not be taken seriously in its current form, but one must wonder about future versions, given likely advances in technology. 

Independent post-release evaluations  

• Red-teaming AI models to find their limits: https://arstechnica.com/information-technology/2023/05/white-house-challenges-hackers-to-break-top-ai-models-at-def-con-31/ 

AI Literacy  

• Guidelines for media: https://medium.com/@ben.shneiderman/guidelines-for-journalists-and-editors-about-reporting-on-robots-ai-and-computers-6a69c3b813cd 

Shared value systems  

• Value systems we can largely agree on: see for example White House AI Bill of Rights, OECD  AI Principles, CAIDP’s Universal AI Guidelines, and the UNESCO Recommendations on the  Ethics of Artificial Intelligence, among others. 

Datocracy (clandestine influence of models, and hence choice of data, on our beliefs) 

• Help! My Political Beliefs Were Altered by a Chatbot (Christopher Mims, Wall Street Journal, May 13, 2023) 

• Interacting with Opinionated Language Models Changes Users’ Views (preprint; Maurice  Jakesch, Advit Bhat, Daniel Buschek, Lior Zalmanson, Mor Naaman, September 2022) 

Videos on the urgency of AI risk  

• Gary Marcus’ April 2023 TED talk on AI risk and what to do about it

• The Center for Humane Technology’s The AI Dilemma 

Marcus Interview with New York Times Sunday Magazine  

• How Do We Ensure an A.I. Future That Allows for Human Thriving? (David Marchese, The New York Times Sunday Magazine, May 2, 2023)

Even their makers don’t entirely understand how they work.  

“There is an aspect of this which we call– all of us in the field call it as a “black box.” You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got it wrong. We have some ideas, and our ability to understand this gets better over time. But that’s where the state of the art is.”

– Google/Alphabet CEO Sundar Pichai, April 16, 2023 on 60 Minutes

Extraterrestrials conspiring with congress to keep humans on earth 

ENTIRELY FAKE NARRATIVE, GENERATED BY GPT-4 


Transcript of the hearing session

Video of the hearing.

Sen. Richard Blumenthal (D-CT):

Now for some introductory remarks.

“Too often we have seen what happens when technology outpaces regulation, the unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want.”

If you were listening from home, you might have thought that voice was mine and the words from me, but in fact, that voice was not mine. The words were not mine. The audio was AI voice cloning software trained on my floor speeches, and the remarks were written by ChatGPT when it was asked how I would open this hearing. You heard just now the result. I asked ChatGPT why it picked those themes and that content, and it answered. And I’m quoting: Blumenthal has a strong record in advocating for consumer protection and civil rights. He has been vocal about issues such as data privacy and the potential for discrimination in algorithmic decision making. Therefore, the statement emphasizes these aspects.

Mr. Altman, I appreciate ChatGPT’s endorsement. In all seriousness, this apparent reasoning is pretty impressive. I am sure that we’ll look back in a decade and view ChatGPT and GPT-4 like we do the first cell phone, those big clunky things that we used to carry around. But we recognize that we are on the verge, really, of a new era. The audio and my playing it may strike you as curious or humorous, but what reverberated in my mind was: what if I had asked it for, and it had provided, an endorsement of Ukraine surrendering or of Vladimir Putin’s leadership? That would’ve been really frightening. And the prospect is more than a little scary, to use the word, Mr. Altman, you have used yourself, and I think you have been very constructive in calling attention to the pitfalls as well as the promise.

And that’s the reason why we wanted you to be here today, and we thank you and our other witnesses for joining us. For several months now, the public has been fascinated with GPT, DALL·E, and other AI tools. These examples, like the homework done by ChatGPT or the articles and op-eds that it can write, feel like novelties. But the underlying advancements of this era are more than just research experiments. They are no longer fantasies of science fiction. They are real, and they present the promise of curing cancer, or developing new understandings of physics and biology, or modeling climate and weather, all very encouraging and hopeful. But we also know the potential harms, and we’ve seen them already: weaponized disinformation, housing discrimination, harassment of women and impersonation fraud, voice cloning, deepfakes. These are the potential risks despite the other rewards. And for me, perhaps the biggest nightmare is the looming new industrial revolution, the displacement of millions of workers, the loss of huge numbers of jobs, the need to prepare for this new industrial revolution in skill training and the relocation that may be required. And already industry leaders are calling attention to those challenges.

To quote ChatGPT, this is not necessarily the future that we want. We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content, exploiting children, creating dangers for them. And Senator Blackburn and I and others like Senator Durbin on the Judiciary Committee are trying to deal with it in the Kids Online Safety Act. But Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real. Sensible safeguards are not in opposition to innovation. Accountability is not a burden; far from it. They are the foundation of how we can move ahead while protecting public trust. They are how we can lead the world in technology and science, but also in promoting our democratic values.

Otherwise, in the absence of that trust, I think we may well lose both. These are sophisticated technologies, but there are basic expectations common in our law. We can start with transparency. AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access. We can establish scorecards and nutrition labels to encourage competition based on safety and trustworthiness. Limitations on use: there are places where the risk of AI is so extreme that we ought to restrict or even ban its use, especially when it comes to commercial invasions of privacy for profit and decisions that affect people’s livelihoods. And of course, accountability, liability: when AI companies and their clients cause harm, they should be held liable. We should not repeat our past mistakes, for example, Section 230. Forcing companies to think ahead and be responsible for the ramifications of their business decisions can be the most powerful tool of all. Garbage in, garbage out. The principle still applies. We ought to beware of the garbage, whether it’s going into these platforms or coming out of them.

And the ideas that we develop in this hearing, I think, will provide a solid path forward. I look forward to discussing them with you today. And I will just finish on this note: the AI industry doesn’t have to wait for Congress. I hope there will be ideas and feedback from this discussion and from the industry, and voluntary action, such as we’ve seen lacking in many social media platforms, where the consequences have been huge. So I’m hoping that we will elevate rather than have a race to the bottom. And I think these hearings will be an important part of this conversation. This one is only the first; the Ranking Member and I have agreed there should be more. And we’re going to invite other industry leaders. Some have committed to come, experts, academics, and the public, we hope, will participate. And with that, I will turn to the Ranking Member, Senator Hawley.

Sen. Josh Hawley (R-MO):

Thank you very much, Mr. Chairman. Thanks to the witnesses for being here. I appreciate that several of you had long journeys to make in order to be here. I appreciate you taking the time. I look forward to your testimony. I want to thank Senator Blumenthal for convening this hearing, for being a leader on this topic. You know, a year ago we couldn’t have had this hearing because the technology that we’re talking about had not burst into public consciousness. That gives us a sense, I think, of just how rapidly this technology that we’re talking about today is changing and evolving and transforming our world right before our very eyes. I was talking with someone just last night, a researcher in the field of psychiatry, who was pointing out to me that ChatGPT and generative AI, these large language models, are really like the invention of the internet in scale, at least, and potentially far, far more significant than that. We could be looking at one of the most significant technological innovations in human history.

And I think my question is, what kind of an innovation is it going to be? Is it gonna be like the printing press that diffused knowledge, power, and learning widely across the landscape, that empowered ordinary, everyday individuals, that led to greater flourishing, that led above all to greater liberty? Or is it gonna be more like the atom bomb: huge technological breakthrough, but the consequences severe, terrible, continue to haunt us to this day? I don’t know the answer to that question. I don’t think any of us in the room know the answer to that question, because I think the answer has not yet been written. And to a certain extent, it’s up to us here and to us as the American people to write the answer.

What kind of technology will this be? How will we use it to better our lives? How will we use it to actually harness the power of technological innovation for the good of the American people, for the liberty of the American people, not for the power of the few? You know, I was reminded of the psychologist and writer Carl Jung, who said at the beginning of the last century that our ability for technological innovation, our capacity for technological revolution, had far outpaced our ethical and moral ability to apply and harness the technology we developed. That was a century ago. I think the story of the 20th century largely bore him out. And I just wonder, what will we say as we look back at this moment about these new technologies, about generative AI, about these language models, and about the host of other AI capacities that are even right now under development, not just in this country, but in China, in the countries of our adversaries, and all around the world. I mean, I think the question that Jung posed is really the question that faces us: will we strike that balance between technological innovation and our ethical and moral responsibility to humanity, to liberty, to the freedom of this country? And I hope that today’s hearing will take us a step closer to that answer. Thank you, Mr. Chairman. Thanks.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Hawley, I’m gonna turn to the Chairman of the Judiciary Committee and the Ranking Member, Senator Graham, if they have opening remarks as well.

Sen. Dick Durbin (D-IL):

Yes, Mr. Chairman, thank you very much, and Senator Hawley as well. Last week in this committee, the full committee, the Senate Judiciary Committee, we dealt with an issue that had been waiting for attention for almost two decades, and that is what to do with social media when it comes to the abuse of children. We had four bills initially that were considered by this committee, and, in what may be history in the making, we passed all four bills with unanimous roll calls. Unanimous roll calls. I can’t remember another time when we’ve done that on an issue that important. It’s an indication, I think, of the important position of this committee in the national debate on issues that affect every single family and affect our future in a profound way.

1989 was a historic watershed year in America because that’s when Seinfeld arrived, and we had a sitcom which was supposedly about little or nothing, which turned out to be enduring. I like to watch it, obviously, and I always marvel when they show the phones that he used in 1989. And I think about those in comparison to what we carry around in our pockets today. It’s a dramatic change. And I guess the question, as I look at that, is: does this change in phone technology that we’ve witnessed through the sitcom really exemplify a profound change in America? Still unanswered. But the basic question we face is whether or not this issue of AI is a quantitative change in technology or a qualitative change. The suggestions that I’ve heard from experts in the field suggest it’s qualitative. Is AI fundamentally different? Is it a game changer? Is it so disruptive that we need to treat it differently than other forms of innovation? That’s the starting point. And the second starting point is one that’s humbling, and that is, in effect, when you look at the record of Congress in dealing with innovation, technology, and rapid change.

We’re not designed for that. In fact, the Senate was not created for that purpose, but just the opposite: slow things down, take a harder look at it, don’t react to public sentiment, make sure you’re doing the right thing. Well, I’ve heard of the potential, the positive potential of AI, and it is enormous. You can go through lists of the deployment of technology that would say that an idea for a website that you can sketch on a napkin can generate functioning code. Pharmaceutical companies could use the technology to identify new candidates to treat disease. The list goes on and on. And then of course, the danger, and it’s profound as well. So I’m glad that this hearing is taking place, and I think it’s important for all of us to participate. I’m glad that it’s a bipartisan approach. We’re going to have to scramble to keep up with the pace of innovation in terms of our government’s public response to it. But this is a great start. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks. Thanks, Senator Durbin, it is very much a bipartisan approach, very deeply and broadly bipartisan. And in that spirit, I’m gonna turn to my friend Senator Graham.

Sen. Lindsey Graham (R-SC):

In the spirit of hear from him. Thank you both.

Sen. Richard Blumenthal (D-CT):

Thank you. That was not written by AI for sure. <Laugh>.

Let me introduce now the witnesses. We’re very grateful to you for being here. Sam Altman is the co-founder and CEO of OpenAI, the AI research and deployment company behind ChatGPT and DALL·E. Mr. Altman was president of the early stage startup accelerator Y Combinator from 1914, I’m sorry, 2014 to 2019. OpenAI was founded in 2015. Christina Montgomery is IBM’s Vice President and Chief Privacy and Trust Officer, overseeing the company’s global privacy program, policies, compliance, and strategy. She also chairs IBM’s AI ethics board, a multidisciplinary team responsible for the governance of AI and emerging technologies. Christina has served in various roles at IBM, including corporate secretary to the company’s board of directors. She’s a global leader in AI ethics and governance. Ms. Montgomery also is a member of the United States Chamber of Commerce AI Commission and the United States National AI Advisory Committee, which was established in 2022 to advise the president and the National AI Initiative Office on a range of topics related to AI.

Gary Marcus is a leading voice in artificial intelligence. He’s a scientist, bestselling author, and entrepreneur, founder of Robust AI and of Geometric Intelligence, acquired by Uber, if I’m not mistaken, and emeritus professor of psychology and neuroscience at NYU. Mr. Marcus is well known for his challenges to contemporary AI, anticipating many of the current limitations decades in advance, and for his research in human language development and cognitive neuroscience. Thank you for being here.

And as you may know, our custom on the Judiciary Committee is to swear in our witnesses before they testify. So if you would all please rise and raise your right hand: do you solemnly swear that the testimony that you are going to give is the truth, the whole truth, and nothing but the truth, so help you God? Thank you. Mr. Altman, we’re gonna begin with you, if that’s okay.

Sam Altman:

Thank you, Chairman Blumenthal, Ranking Member Hawley, members of the Judiciary Committee. Thank you for the opportunity to speak to you today about large neural networks. It’s really an honor to be here, even more so in the moment than I expected. My name is Sam Altman. I’m the Chief Executive Officer of OpenAI. OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks we have to work together to manage. We’re here because people love this technology. We think it can be a printing press moment. We have to work together to make it so. OpenAI is an unusual company, and we set it up that way because AI is an unusual technology. We are governed by a nonprofit, and our activities are driven by our mission and our charter, which commit us to working to ensure the broad distribution of the benefits of AI and to maximizing the safety of AI systems.

We are working to build tools that one day can help us make new discoveries and address some of humanity’s biggest challenges, like climate change and curing cancer. Our current systems aren’t yet capable of doing these things, but it has been immensely gratifying to watch many people around the world get so much value from what these systems can already do today. We love seeing people use our tools to create, to learn, to be more productive. We’re very optimistic that there are going to be fantastic jobs in the future, and that current jobs can get much better. We also love seeing what developers are doing to improve lives. For example, Be My Eyes used our new multimodal technology in GPT-4 to help visually impaired individuals navigate their environment.

We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work. And we make significant efforts to ensure that safety is built into our systems at all levels. Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model’s behavior, and implements robust safety and monitoring systems. Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming, and dangerous capability testing. We are proud of the progress that we made. GPT-4 is more likely to respond helpfully and truthfully, and to refuse harmful requests, than any other widely deployed model of similar capability. However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.

For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities. There are several other areas I mentioned in my written testimony where I believe that companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination. And as you mentioned I think it’s important that companies have their own responsibility here no matter what Congress does. This is a remarkable time to be working on artificial intelligence, but as this technology advances, we understand that people are anxious about how it could change the way we live. We are too, but we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides. It is essential that powerful AI is developed with democratic values in mind, and this means that US leadership is critical. I believe that we will be able to mitigate the risks in front of us and really capitalize on this technology’s potential to grow the US economy and the world’s. And I look forward to working with you all to meet this moment, and I look forward to answering your questions. Thank you.

Sen. Richard Blumenthal (D-CT):

Thank you, Mr. Altman. Ms. Montgomery?

Christina Montgomery:

Chairman Blumenthal, Ranking Member Hawley, and members of the subcommittee, thank you for today’s opportunity to present. AI is not new, but it’s certainly having a moment. Recent breakthroughs in generative AI and the technology’s dramatic surge in public attention have rightfully raised serious questions at the heart of today’s hearing. What are AI’s potential impacts on society? What do we do about bias? What about misinformation, misuse, or harmful content generated by AI systems? Senators, these are the right questions, and I applaud you for convening today’s hearing to address them head on. While AI may be having its moment, the moment for government to play a role has not passed us by. This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests. But at its core, AI is just a tool, and tools can serve different purposes.

To that end, IBM urges Congress to adopt a precision regulation approach to AI. This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself. Such an approach would involve four things. First, different rules for different risks. The strongest regulation should be applied to use cases with the greatest risks to people and society. Second, clearly defining risks. There must be clear guidance on AI uses or categories of AI-supported activity that are inherently high risk. This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts. Third, be transparent, so AI shouldn’t be hidden. Consumers should know when they’re interacting with an AI system and that they have recourse to engage with a real person should they so desire. No person anywhere should be tricked into interacting with an AI system.

And finally, showing the impact. For higher risk use cases, companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public, and to attest that they’ve done so. By following a risk-based, use case-specific approach, at the core of precision regulation, Congress can mitigate the potential risks of AI without hindering innovation. But businesses also play a critical role in ensuring the responsible deployment of AI. Companies active in developing or using AI must have strong internal governance, including, among other things, designating a lead AI ethics official responsible for an organization’s trustworthy AI strategy, and standing up an ethics board or a similar function as a centralized clearinghouse for resources to help guide implementation of that strategy. IBM has taken both of these steps, and we continue calling on our industry peers to follow suit.

Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner. It provides centralized governance and accountability while still being flexible enough to support decentralized initiatives across IBM’s global operations. We do this because we recognize that society grants our license to operate. And with AI, the stakes are simply too high. We must build, not undermine, the public trust. The era of AI cannot be another era of move fast and break things, but we don’t have to slam the brakes on innovation either. These systems are within our control today, as are the solutions. What we need at this pivotal moment is clear, reasonable policy and sound guardrails. These guardrails should be matched with meaningful steps by the business community to do their part. Congress and the business community must work together to get this right. The American people deserve no less. Thank you for your time, and I look forward to your questions.

Sen. Richard Blumenthal (D-CT):

Thank you. Professor Marcus.

Gary Marcus:

Thank you, Senators. Today’s meeting is historic. I’m profoundly grateful to be here. I come as a scientist, someone who’s founded AI companies, and someone who genuinely loves AI, but who is increasingly worried. There are benefits, but we don’t yet know whether they will outweigh the risks. Fundamentally, these new systems are going to be destabilizing. They can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened. Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do. Choices about data sets that AI companies use will have enormous unseen influence. Those who choose the data will make the rules, shaping society in subtle but powerful ways. There are other risks, too, many stemming from the inherent unreliability of current systems.

A law professor, for example, was accused by a chatbot of sexual harassment. Untrue. And it pointed to a Washington Post article that didn’t even exist. The more that that happens, the more that anybody can deny anything. As one prominent lawyer told me on Friday, defendants are starting to claim that plaintiffs are making up legitimate evidence. These sorts of allegations undermine the abilities of juries to decide what or who to believe and contribute to the undermining of democracy. Poor medical advice could have serious consequences, too. An open source large language model recently seems to have played a role in a person’s decision to take their own life. The large language model asked the human, if you wanted to die, why didn’t you do it earlier, and then followed up with, were you thinking of me when you overdosed, without ever referring the patient to the human help that was obviously needed.

Another system, rushed out and made available to millions of children, told a person posing as a 13-year-old how to lie to her parents about a trip with a 31-year-old man. Further threats continue to emerge regularly. A month after GPT-4 was released, OpenAI released ChatGPT plug-ins, which quickly led others to develop something called AutoGPT, with direct access to the internet, the ability to write source code, and increased powers of automation. This may well have drastic and difficult-to-predict security consequences. What criminals are gonna do here is to create counterfeit people. It’s hard to even envision the consequences of that. We have built machines that are like bulls in a china shop: powerful, reckless, and difficult to control. We all more or less agree on the values we would like for our AI systems to honor.

We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias, and above all else to be safe. But current systems are not in line with these values. Current systems are not transparent, they do not adequately protect our privacy, and they continue to perpetuate bias. And even their makers don’t entirely understand how they work. Most of all, we cannot remotely guarantee that they’re safe. And hope here is not enough. The big tech companies’ preferred plan boils down to: trust us. But why should we? The sums of money at stake are mind-boggling. And missions drift. OpenAI’s original mission statement proclaimed, our goal is to advance AI in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Seven years later, they’re largely beholden to Microsoft, embroiled in part in an epic battle of search engines that routinely make things up.

And that’s forced Alphabet to rush out products and de-emphasize safety. Humanity has taken a backseat. AI is moving incredibly fast, with lots of potential, but also lots of risks. We obviously need government involved, and we need the tech companies involved, both big and small. But we also need independent scientists, not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems, in evaluating solutions, and not just after products are released, but before. And I’m glad that Sam mentioned that. We need tight collaboration between independent scientists and governments in order to hold the companies’ feet to the fire. Allowing independent scientists access to these systems before they are widely released, as part of a clinical trial-like safety evaluation, is a vital first step. Ultimately, we may need something like CERN: global, international, and neutral, but focused on AI safety rather than high-energy physics. We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability. AI is among the most world-changing technologies ever, already changing things more rapidly than almost any technology in history. We acted too slowly with social media; many unfortunate decisions got locked in, with lasting consequence. The choices we make now will have lasting effects, for decades, maybe even centuries. The very fact that we are here today, in bipartisan fashion, to discuss these matters gives me some hope. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks very much, Professor Marcus. We’re gonna have seven-minute rounds of questioning, and I will begin. First of all, Professor Marcus, we are here today because we do face that perfect storm. Some of us might characterize it more like a bomb in a china shop, not a bull. And as Senator Hawley indicated, there are precedents here, not only the atomic warfare era, but also the genome project, the research on genetics, where there was international cooperation as a result. And we wanna avoid those past mistakes, as I indicated in my opening statement, that were committed on social media. That is precisely the reason we are here today. ChatGPT makes mistakes. All AI does, and it can be a convincing liar, what people call hallucinations. That might be an innocent problem in the opening of a judiciary subcommittee hearing where a voice is impersonated, mine in this instance, or quotes from research papers that don’t exist, but ChatGPT and Bard are willing to answer questions about life or death matters, for example, drug interactions.

And those kinds of mistakes can be deeply damaging. I’m interested in how we can have reliable information about the accuracy and trustworthiness of these models and how we can create competition and consumer disclosures that reward greater accuracy. The National Institute of Standards and Technology actually already has an AI accuracy test, the Face Recognition Vendor Test. It doesn’t solve for all the issues with facial recognition, but the scorecard does provide useful information about the capabilities and flaws of these systems. So there’s work on models to assure accuracy and integrity. My question, let me begin with you, Mr. Altman, is: should we consider independent testing labs to provide scorecards and nutrition labels, or the equivalent of nutrition labels, packaging that indicates to people whether or not the content can be trusted, what the ingredients are, and what the garbage going in may be, because it could result in garbage going out?

Sam Altman:

Yeah, I think that’s a great idea. I think that companies should put out their own, sort of, you know, here are the results of our tests of our model before we release it, here’s where it has weaknesses, here’s where it has strengths. But also, independent audits for that are very important. These models are getting more accurate over time. You know, as we have, I think, said as loudly as anyone, this technology is in its early stages. It definitely still makes mistakes. We find that users are pretty sophisticated and understand where the mistakes are, or are likely to be, that they need to be responsible for verifying what the models say, that they go off and check it. I worry that as the models get better and better, the users can have sort of less and less of their own discriminating thought process around it. But I think users are more capable than we often give them credit for in conversations like this. I think a lot of disclosures, which, if you’ve used ChatGPT, you’ll see, about the inaccuracies of the model are also important. And I’m excited for a world where companies publish, with the models, information about how they behave, where the inaccuracies are, and independent agencies or companies provide that as well. I think it’s a great idea.

Sen. Richard Blumenthal (D-CT):

I alluded in my opening remarks to the jobs issue, the economic effects on employment. I think you have said, in fact, and I’m gonna quote, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity, end quote. You may have had in mind the effect on jobs, which is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is, and whether you share that concern.

Sam Altman:

Like with all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict. If we went back to the other side of a previous technological revolution, talking about the jobs that exist on the other side, you know, you can go back and read books about this; it’s what people said at the time. It’s difficult. I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better. First of all, I think it’s important to understand and think about GPT-4 as a tool, not a creature, which is easy to get confused about, and it’s a tool that people have a great deal of control over in how they use it. And second, GPT-4 and other systems like it are good at doing tasks, not jobs.

And so you see already people that are using GPT-4 to do their job much more efficiently by helping them with tasks. Now, GPT-4 will, I think, entirely automate away some jobs, and it will create new ones that we believe will be much better. This happens again; my understanding of the history of technology is of one long technological revolution, not a bunch of different ones put together, and this has been continually happening. As our quality of life rises, and as machines and tools that we create can help us live better lives, the bar rises for what we do, and our human ability and what we spend our time going after goes after more ambitious, more satisfying projects. So there will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by government, to figure out how we want to mitigate that. But I’m very optimistic about how great the jobs of the future will be.

Sen. Richard Blumenthal (D-CT):

Thank you. Let me ask Ms. Montgomery and Professor Marcus for your reactions, those questions as well, Ms. Montgomery,

Christina Montgomery:

On the jobs point? Yeah, I mean, well, it’s a hugely important question, and it’s one that we’ve been talking about for a really long time at IBM. You know, we do believe that AI, and we’ve said it for a long time, is gonna change every job. New jobs will be created, many more jobs will be transformed, and some jobs will transition away. I’m a personal example of a job that didn’t exist when I joined IBM, and I have a team of AI governance professionals who are in new roles that we created, you know, as early as three years ago. I mean, they’re new and they’re growing. So I think the most important thing that we could be doing, and can and should be doing now, is to prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies and using them. And we’ve been very involved for years now in doing that, in focusing on skills-based hiring, in educating for the skills of the future. Our SkillsBuild platform has 7 million learners and over a thousand courses worldwide focused on skills. And we’ve pledged to train 30 million individuals by 2030 in the skills that are needed for society today.

Sen. Richard Blumenthal (D-CT):

Thank you, professor Marcus.

Gary Marcus:

May I go back to the first question as well? Absolutely. On the subject of nutrition labels, I think we absolutely need to do that. I think that there are some technical challenges in that; building proper nutrition labels goes hand in hand with transparency. The biggest scientific challenge in understanding these models is how they generalize. What do they memorize, and what new things do they do? The more that there’s in the data set, for example, the thing that you want to test accuracy on, the less you can get a proper read on that. So it’s important, first of all, that scientists be part of that process, and second, that we have much greater transparency about what actually goes into these systems. If we don’t know what’s in them, then we don’t know exactly how well they’re doing when we give them something new, and we don’t know how good a benchmark that will be for something that’s entirely novel.

So I could go into that more, but I want to flag that. Second is on jobs: past performance is not a guarantee of the future. It has always been the case in the past that we have had more jobs, that new jobs, new professions come in as new technologies come in. I think this one’s gonna be different. And the real question is over what time scale? Is it gonna be 10 years? Is it gonna be a hundred years? And I don’t think anybody knows the answer to that question. I think in the long run, so-called artificial general intelligence really will replace a large fraction of human jobs. We’re not that close to artificial general intelligence, despite all of the media hype and so forth. I would say that what we have right now is just a small sampling of the AI that we will build in 20 years.

People will laugh at this, as, I think it was Senator Hawley, but maybe Senator Durbin, made the example about this. It was Senator Durbin that made the example about cell phones. When we look back at the AI of today 20 years from now, we will be like, wow, that stuff was really unreliable. It couldn’t really do planning, which is an important technical aspect. Its reasoning abilities were limited. But when we get to AGI, artificial general intelligence, maybe let’s say it’s 50 years, that really is gonna have, I think, profound effects on labor. And there’s just no way around that. And last, I don’t know if I’m allowed to do this, but I will note that Sam’s worst fear, I do not think, is employment. And he never told us what his worst fear actually is. And I think it’s germane to find out.

Sen. Richard Blumenthal (D-CT):

Thank you. I’m gonna ask Mr. Altman if he cares to respond.

Sam Altman:

Yeah. Look, we have tried to be very clear about the magnitude of the risks here. I think jobs and employment and what we’re all gonna do with our time really matters. I agree that when we get to very powerful systems, the landscape will change. I think I’m just more optimistic that we are incredibly creative and we find new things to do with better tools, and that will keep happening. My worst fears are that we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways. It’s why we started the company. It’s a big part of why I’m here today and why we’ve been here in the past, and we’ve been able to spend some time with you. I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.

Sen. Richard Blumenthal (D-CT):

Thank you. And our hope is that the rest of the industry will follow the example that you, and IBM, Ms. Montgomery, have set by coming today and meeting with us, as you have done privately, in helping to guide what we’re going to do, so that we can target the harms and avoid unintended consequences to the good.

Sam Altman:

Thank you.

Sen. Richard Blumenthal (D-CT):

Senator Hawley.

Sen. Josh Hawley (R-MO):

Thank you again, Mr. Chairman. Thanks to the witnesses for being here. Mr. Altman, I think you grew up in St. Louis, if I’m not mistaken.

Sam Altman:

I did. It’s a great place.

Sen. Josh Hawley (R-MO):

Missouri, here it is. Thank you. I want that noted, especially underlined, in the record. Missouri is a great place. That is the takeaway from today’s hearing. Maybe we can stop there, Mr. Chairman. Let me ask you, Mr. Altman, I think I’ll start with you, and I’ll just preface this by saying my questions here are an attempt to get my head around, and to ask all of you to help us get our heads around, what this generative AI, particularly the large language models, can do. So I’m trying to understand its capacities and then its significance. So I’m looking at a paper here entitled “Large Language Models Trained on Media Diets Can Predict Public Opinion.” This was just posted about a month ago. The authors are Chu, Andreas, Ansolabehere, and Roy. This work was done at MIT and then also at Google.

The conclusion is that large language models can indeed predict public opinion, and they go through and model why this is the case. And they conclude ultimately that an AI system can predict human survey responses by adapting a pre-trained language model to subpopulation-specific media diets. In other words, you can feed the model a particular set of media inputs, and it can, with remarkable accuracy, the paper goes into this, predict then what people’s opinions will be. I wanna think about this in the context of elections. If these large language models can, even now, based on the information we put into them, quite accurately predict public opinion ahead of time, I mean, before you even ask the public these questions, what will happen when entities, whether it’s corporate entities, or governmental entities, or campaigns, or foreign actors, take this survey information, these predictions about public opinion, and then fine-tune strategies to elicit certain responses, certain behavioral responses?

I mean, we already know, this committee has heard testimony, I think three years ago now, about the effect of something as prosaic, it now seems, as Google search, the effect that this has on voters in an election, particularly undecided voters in the final days of an election who may try to get information from Google search, and what an enormous effect the ranking of the Google search, the articles that it returns, has on an undecided voter. This, of course, is orders of magnitude far more powerful, far more significant, far more directive, if you like. So, Mr. Altman, maybe you can help me understand here what some of the significance of this is. Should we be concerned about models, large language models, that can predict survey opinion and then can help organizations, entities, fine-tune strategies to elicit behaviors from voters? Should we be worried about this for our elections?

Sam Altman:

Yeah, thank you, Senator Hawley, for the question. It’s one of my areas of greatest concern: the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one, you know, interactive disinformation. I think that’s like a broader version of what you are talking about, but given that we’re gonna face an election next year and these models are getting better, I think this is a significant area of concern. I think there are a lot of policies that companies can voluntarily adopt, and I’m happy to talk about what we do there. I do think some regulation would be quite wise on this topic. Someone mentioned earlier, it’s something we really agree with: people need to know if they’re talking to an AI, if content that they’re looking at might be generated or might not.

I think it’s a great thing to do, to make that clear. I think we also will need rules and guidelines about what’s expected in terms of disclosure from a company providing a model that could have these sorts of abilities that you talk about. So I’m nervous about it. I think people are able to adapt quite quickly. When Photoshop came onto the scene a long time ago, you know, for a while people were really quite fooled by photoshopped images, and then pretty quickly developed an understanding that images might be photoshopped. This will be like that, but on steroids. And the interactivity, the ability to really model and predict humans well, as you talked about, I think is going to require a combination of companies doing the right thing, regulation, and public education.

Sen. Josh Hawley (R-MO):

Professor Marcus, do you wanna address this?

Gary Marcus:

I’d like to add two things. One is, in the appendix to my remarks, I have two papers to make you even more concerned. One is in the Wall Street Journal just a couple days ago, called “Help! My Political Beliefs Were Altered by a Chatbot.” And I think the scenario you raised was that we might basically observe people and use surveys to figure out what they’re saying. But as Sam just acknowledged, the risk is actually worse: that the systems will directly, maybe not even intentionally, manipulate people. And that was the thrust of the Wall Street Journal article. And it links to an article that I’ve also linked to, not yet published, not yet peer reviewed, called “Interacting with Opinionated Language Models Changes Users’ Views.” And this comes back ultimately to data. One of the things that I’m most concerned about with GPT-4 is that we don’t know what it’s trained on.

I guess Sam knows, but the rest of us do not. And what it is trained on has consequences for, essentially, the biases of the system. We could talk about that in technical terms, but how these systems might lead people depends very heavily on the data that they are trained on. And so we need transparency about that, and we probably need scientists in there doing analysis in order to understand what the political influences, for example, of these systems might be. And it’s not just about politics. It can be about health, it could be about anything. These systems absorb a lot of data, and then what they say reflects that data, and they’re gonna do it differently depending on what’s in that data. So it makes a difference if they’re trained on the Wall Street Journal as opposed to the New York Times or Reddit. I mean, actually they’re largely trained on all of this stuff, but we don’t really understand the composition of that. And so we have this issue of potential manipulation, and it’s even more complex than that, because it’s subtle manipulation. People may not be aware of what’s going on. That was the point of both the Wall Street Journal article and the other article that I called your attention to.

Sen. Josh Hawley (R-MO):

Let me ask you about AI systems trained on personal data, the kind of data that, for instance, the social media companies, the major platforms, Google, Meta, et cetera, collect on all of us routinely. And we’ve had many a chat about this in this committee over many a year now. But think about the massive amounts of data, personal data, that the companies have on each one of us: an AI system that is trained on that individual data, that knows each of us better than ourselves, and that also knows the billions of data points about human behavior and human language interaction generally. Can’t we foresee an AI system that is extraordinarily good at determining what will grab human attention and what will keep an individual’s attention? And so, for the war for attention, the war for clicks that is currently going on on all of these platforms, that’s how they make their money.

I’m just imagining an AI system, these AI models, supercharging that war for attention, such that we now have technology that will allow individual targeting of a kind we have never even imagined before, where the AI will know exactly what Sam Altman finds attention-grabbing, will know exactly what Josh Hawley finds attention-grabbing, will be able to grab our attention and then elicit responses from us in a way that we have heretofore not even been able to imagine. Should we be concerned about that, for its corporate applications, for the monetary applications, for the manipulation that could come from that, Mr. Altman?

Sam Altman:

Yes, we should be concerned about that. To be clear, OpenAI does not, you know, we don’t have an ad-based business model, so we’re not trying to build up these profiles of our users. We’re not trying to get them to use it more. Actually, we’d love it if they used it less, because we don’t have enough GPUs. But I think other companies are already, and certainly will in the future, use AI models to create, you know, very good ad predictions of what a user will like. I think that’s already happening in many ways.

Sen. Josh Hawley (R-MO):

Mr. Marcus, anything you wanna add?

Gary Marcus:

Yes, and perhaps Ms. Montgomery will want to as well. Hyper-targeting of advertising is definitely going to come. I agree that that’s not been OpenAI’s business model. Of course, now they’re working for Microsoft, and I don’t know what’s in Microsoft’s thoughts. But we will definitely see it. Maybe it will be with open source language models, I don’t know. But the technology there is, let’s say, partway there to being able to do that, and we’ll certainly get there.

Christina Montgomery:

So we are an enterprise technology company, not consumer focused, so the space isn’t one that we necessarily operate in. But these issues are hugely important issues, and it’s why we’ve been out ahead in developing the technology that will help to ensure that you can do things like produce a fact sheet that has the ingredients of what your AI is trained on: data sheets, model cards, all those types of things. And calling for, as I’ve mentioned today, transparency, so you know what the algorithm was trained on, and then you also know and can manage and monitor continuously, over the life cycle of an AI model, the behavior and the performance of that model.

Sen. Richard Blumenthal (D-CT):

Senator Durbin.

Sen. Dick Durbin (D-IL):

Thank you. I think what’s happening today in this hearing room is historic. I can’t recall when we’ve had people representing large corporations or private sector entities come before us and plead with us to regulate them. In fact, many people in the Senate have based their careers on the opposite, that the economy will thrive if government gets the hell out of the way. And what I’m hearing instead today is that ‘stop me before I innovate again’ message. And I’m just curious as to how we’re going to achieve this. As I mentioned Section 230 in my opening remarks, we learned something there. We decided in Section 230 that we were basically going to absolve the industry from liability for a period of time as it came into being. Well, Mr. Altman, on a podcast earlier this year, you agreed with host Kara Swisher that Section 230 doesn’t apply to generative AI and that developers like OpenAI should not be entitled to full immunity for harms caused by their products. So what have we learned from 230 that applies to your situation with AI?

Sam Altman:

Thank you for the question, Senator. I don’t know yet exactly what the right answer here is. I’d love to collaborate with you to figure it out. I do think for a very new technology we need a new framework. Certainly companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well, and also the people that will build on top of it, between them and the end consumer. How we want to come up with a liability framework there is a super important question, and we’d love to work together.

Sen. Dick Durbin (D-IL):

The point I wanna make is this: when it came to online platforms, the inclination of the government was ‘get out of the way.’ This is a new industry, don’t over-regulate it. In fact, give them some breathing space and see what happens. I’m not sure I’m happy with the outcome as I look at online platforms. Me either. And the harms that they have created, problems that we’ve seen demonstrated in this committee: child exploitation, cyber bullying, online drug sales, and more. I don’t wanna repeat that mistake again. And what I hear is the opposite suggestion from the private sector, and that is: come in on the front end of this thing and establish some liability standards, precision regulation. For a major company like IBM to come before this committee and say to the government, please regulate us. Can you explain the difference in thinking from the past and now?

Christina Montgomery:

Yeah, absolutely. So for us, this comes back to the issue of trust, and trust in the technology. Trust is our license to operate, as I mentioned in my remarks. And so we firmly believe in, and we’ve been calling for, precision regulation of artificial intelligence for years now. This is not a new position. We think that technology needs to be deployed in a responsible and clear way. We’ve taken principles around that, trust and transparency we call them, principles that were articulated years ago, and built them into practices. That’s why we’re here advocating for a precision regulatory approach. So we think that AI should be regulated at the point of risk, essentially, and that’s the point at which technology meets society.

Sen. Dick Durbin (D-IL):

Let’s take a look at what that might appear to be. Members of Congress are pretty smart, a lot of people, maybe not as smart as we think we are many times, and government certainly has a capacity to do amazing things. But when you talk about our ability to respond to the current challenge and perceive the future challenges, which you all have described in terms which are hard to forget, as you said, Mr. Altman, things can go quite wrong; as you said, Mr. Marcus, democracy is threatened. I mean, the magnitude of the challenge you’re giving us is substantial. I’m not sure that we respond quickly and with enough expertise to deal with it. Professor Marcus, you made a reference to CERN, the international arbiter of nuclear research, I suppose, I dunno if that’s a fair characterization, but it’s a characterization. I’ll start with: what agency of this government do you think exists that could respond to the challenge that you’ve laid down today?

Gary Marcus:

We have many agencies that can respond in some ways. For example, the FTC, the FCC, there are many agencies that can, but my view is that we probably need a cabinet-level organization within the United States in order to address this. And my reasoning for that is that the number of risks is large, and the amount of information to keep up on is so much. I think we need a lot of technical expertise. I think we need a lot of coordination of these efforts. So there is one model here where we stick to only existing law and try to shape all of what we need to do, and each agency does their own thing. But I think that AI is gonna be such a large part of our future, and is so complicated and moving so fast, and this does not fully solve your problem about a dynamic world, but it’s a step in that direction to have an agency whose full-time job is to do this. I personally have suggested, in fact, that we should want to do this in a global way. I wrote an article in The Economist, I have a link in here, an invited essay for The Economist, suggesting we might want an international agency for AI.

Sen. Dick Durbin (D-IL):

Well, that’s the point I wanted to go to next, and that is the fact that, I’ll set aside the CERN and nuclear examples, because government was involved in that from day one, at least in the United States. Yes. But now we’re dealing with innovation which doesn’t necessarily have a boundary. That’s correct. We may create a great US agency, and I hope that we do, that may have jurisdiction over US corporations and US activity, but it doesn’t have a thing to do with what’s going to bombard us from outside the United States. How do you give this international authority the authority to regulate in a fair way for all entities involved in AI?

Gary Marcus:

I think that’s probably over my pay grade. I would like to see it happen, and I think it may be inevitable that we push there. I mean, I think the politics behind it are obviously complicated. I’m really heartened by the degree to which this room is bipartisan and supporting the same things, and that makes me feel like it might be possible. I would like to see the United States take leadership in such an organization. It has to involve the whole world, and not just the US, to work properly. I think even from the perspective of the companies, it would be a good thing. The companies themselves do not want a situation where you take these models, which are expensive to train, and you have to have 190-some of them, you know, one for every country; that wouldn’t be a good way of operating.

When you think about the energy costs alone, just for training these systems, it would not be a good model if every country has its own policies, and for each jurisdiction every company has to train another model. And maybe, you know, different states are different, so Missouri and California have different rules, and then that requires even more training of these expensive models, with huge climate impact. I mean, it would be very difficult for the companies to operate if there was no global coordination. And so I think that we might get the companies on board if there’s bipartisan support here, and I think there’s support around the world; it is entirely possible that we could develop such a thing. But obviously there are many, you know, nuances here of diplomacy that are over my pay grade. I would love to learn from you all to try to help make that happen.

Sen. Dick Durbin (D-IL):

Mr. Altman.

Sam Altman:

Can I weigh in just briefly? Briefly, please. I want to echo support for what Mr. Marcus said. I think the US should lead here and do things first, but to be effective we do need something global. As you mentioned, this can happen everywhere. There is precedent. I know it sounds naive to call for something like this, and it sounds really hard, but there is precedent. We’ve done it before with the IAEA. We’ve talked about doing it for other technologies. And given what it takes to make these models: the chip supply chain, the sort of limited number of competitive GPUs, the power the US has over these companies, I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of, that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Durbin. And in fact, I think we’re going to hear more about what Europe is doing. The European Parliament already is acting on an AI Act. On social media, Europe is ahead of us. We need to be in the lead. I think your point is very well taken. Let me turn to Senator Graham, Senator Blackburn.

Sen. Marsha Blackburn (R-TN):

And thank you, Mr. Chairman, and thank you all for being here with us today. I put into my ChatGPT account, should Congress regulate AI, ChatGPT? And it gave me four pros and four cons, and said ultimately the decision rests with Congress and deserves careful consideration. So, you know, it was very reasonable, very balanced. I recently visited with the Nashville Technology Council. I represent Tennessee. And of course, you had people there from healthcare, financial services, logistics, educational entities, and they’re concerned about what they see happening with AI and the utilizations for their companies. Ms. Montgomery, you know, similar to you, they’ve got healthcare people who are looking at disease analytics. They’re looking at predictive diagnosis, how this can better the outcomes for patients; the logistics industry looking at ways to save time and money and yield efficiencies. You’ve got financial services that are saying, how does this work with quantum?

How does it work with blockchain? How can we use this? But I think, as we have talked with them, Mr. Chairman, one of the things that continues to come up is: yes, Professor Marcus, as you were saying, the EU and different entities are ahead of us on this, but we have never established federal preemption for online privacy, for data security, and put some of those foundational elements in place, which is something that we need to do as we look at this. And it will require that the Commerce Committee and the Judiciary Committee decide how we move forward so that people own their virtual you. And Mr. Altman, I was glad to see last week that your OpenAI models are not going to be trained using consumer data. I think that that is important. And if we have a second round, I’ve got a host of questions for you on data security and privacy.

But I think it’s important to let people control their virtual you, their information, in these settings. And I wanna come to you on music and content creation, because we’ve got a lot of songwriters and artists, and I think we have the best creative community on the face of the Earth. They’re in Tennessee, and they should be able to decide if their copyrighted songs and images are going to be used to train these models. And I’m concerned about OpenAI’s Jukebox. It offers some renditions in the style of Garth Brooks, which suggests that OpenAI is trained on Garth Brooks songs. I went in this weekend and I said, write me a song that sounds like Garth Brooks, and it gave me a different version of Simple Man. So it’s interesting that it would do that. But you’re training it on these copyrighted songs, these MIDI files, these sound technologies. So as you do this, who owns the rights to that AI-generated material? And using your technology, could I remake a song, insert content from my favorite artist, and then own the creative right to that song?

Sam Altman:

Thank you, Senator. This is an area of great interest to us. I would say, first of all, we think that creators deserve control over how their creations are used and what happens sort of beyond the point of them releasing it into the world. Second, I think that we need to figure out new ways, with this new technology, that creators can win, succeed, and have a vibrant life, and I’m optimistic that it will present that.

Sen. Marsha Blackburn (R-TN):

Then let me ask you this: how do you compensate the artist?

Sam Altman:

That’s exactly what I was gonna say. We’re working with artists now, visual artists, musicians, to figure out what people want there. There’s a lot of different opinions, unfortunately. And at some point we’ll have…

Sen. Marsha Blackburn (R-TN):

Okay, let me ask you this. Do you favor something like SoundExchange, which has worked in the area of radio?

Sam Altman:

I’m not familiar with SoundExchange. I’m sorry.

Sen. Marsha Blackburn (R-TN):

Streaming. Okay. You’ve got your team behind you. Get back to me on that. That would be a third-party entity. Okay, so let’s discuss that. Let me move on. Can you commit, as you’ve done with consumer data, not to train ChatGPT, OpenAI Jukebox, or other AI models on artists’ and songwriters’ copyrighted works, or use their voices and their likenesses, without first receiving their consent?

Sam Altman:

So first of all, Jukebox is not a product we offer. That was a research release, but it’s not, you know, it’s unlike ChatGPT or DALL·E. But…

Sen. Marsha Blackburn (R-TN):

We’ve lived through Napster.

Sam Altman:

Yes. But we’re… 

Sen. Marsha Blackburn (R-TN):

That was something that really cost a lot of artists, a lot of money.

Sam Altman:

Oh, I understand. Yeah, for sure.

Sen. Marsha Blackburn (R-TN):

In the digital distribution era.

Sam Altman:

I don’t know the numbers on Jukebox off the top of my head. As a research release, I can follow up with your office, but Jukebox is not something that gets much attention or usage. It was put out to show that something’s possible.

Sen. Marsha Blackburn (R-TN):

Well, Senator Durbin just said, you know, and I think it’s a fair warning to you all: if we are not involved in this from the get-go, and you all already are a long way down the path on this, but if we don’t step in, then this gets away from you. So are you working with the Copyright Office? Are you considering protections for content generators and creators in generative AI?

Sam Altman:

Yes, we are absolutely engaged on that. Again, to reiterate my earlier point, we think that content creators and content owners need to benefit from this technology. Exactly what the economic model is, we’re still talking to artists and content owners about what they want. I think there’s a lot of ways this can happen, but very clearly, no matter what the law is, the right thing to do is to make sure people get significant upside benefit from this new technology, and we believe that it’s really going to deliver that. But content owners, likenesses: people totally deserve control over how that’s used and to benefit from it.

Sen. Marsha Blackburn (R-TN):

Okay. So on privacy, then: how do you plan to account for the collection of voice and other user-specific data, things that are copyrighted, through your AI applications? Because if I can go in and say, write me a song that sounds like Garth Brooks, and it takes part of an existing song, there has to be compensation to that artist for that utilization and that use. If it was radio play, it would be there. If it was streaming, it would be there. So if you are going to do that, what is your policy for making certain you’re accounting for that, and you’re protecting that individual’s right to privacy and their right to secure that data and that created work?

Sam Altman:

So, a few thoughts about this. Number one, we think that people should be able to say, I don’t want my personal data trained on. I think that’s right.

Sen. Marsha Blackburn (R-TN):

That gets to a national privacy law, which many of us here on the dais are working toward getting something that we can use.

Sam Altman:

Yeah, I think strong privacy…

Sen. Marsha Blackburn (R-TN):

And my time’s expired. Let me yield back. Thank you, Mr. Chair.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Blackburn. Senator Klobuchar.

Sen. Amy Klobuchar (D-MN):

Thank you very much, Mr. Chairman. And Senator Blackburn, I love Nashville, love Tennessee, love your music. But I will say I use ChatGPT and just asked, what are the top creative song artists of all time? And two of the top three were from Minnesota <laugh>, that would be Prince, and…

Sen. Marsha Blackburn (R-TN):

Sure….

Sen. Amy Klobuchar (D-MN):

Prince and Bob Dylan. Okay. All right. So let us, let us continue on.

Sen. Richard Blumenthal (D-CT):

One thing AI won’t change, and you’re seeing it here.

Sen. Amy Klobuchar (D-MN):

All right. So on a more serious note, though: my staff and I, in my role as chair of the Rules Committee and leading a lot of the election bills, just introduced a bill on political advertisements that Representative Yvette Clarke from New York introduced over in the House, and that Senators Booker and Bennet and I did here. But that is just, of course, the tip of the iceberg. You know this from your discussions with Senator Hawley and others about the images, and my own view, and it’s Senator Graham’s, of Section 230 is that we just can’t let people make stuff up and then not have any consequence. But I’m gonna focus in on one of my jobs on the Rules Committee, and that is election misinformation. We just asked ChatGPT to do a tweet about a polling location in Bloomington, Minnesota, and said, there are long lines at this polling location at Atonement Lutheran Church.

Where should we go? Now, albeit it’s not an election right now, but the answer, the tweet that was drafted, was a completely fake thing: go to 1234 Elm Street. And so you can imagine what I’m concerned about here, with an election upon us, with primary elections upon us, that we’re gonna have all kinds of misinformation. And I just wanna know what you’re planning on doing about it. I know we’re gonna have to do something soon, not just for the images of the candidates, but also for misinformation about the actual polling places and election rules.

Sam Altman:

Thank you, Senator. We talked about this a little bit earlier. We are quite concerned about the impact this can have on elections. I think this is an area where hopefully the entire industry and the government can work together quickly. There are many approaches, and I’ll talk about some of the things we do, but before that, I think it’s tempting to use the frame of social media. But this is not social media; this is different, and so the response that we need is different. You know, this is a tool that a user is using to help generate content more efficiently than before. They can change it, they can test the accuracy of it, and if they don’t like it, they can get another version. But it still then spreads through social media or other ways. ChatGPT is a, you know, single-player experience where you’re just using this. And so as we think about what to do, it’s important to understand that. There’s a lot that we can do, and do, there. There are things that the model refuses to generate. We have policies. We also, importantly, have monitoring: so at scale, we can detect someone generating a lot of those tweets, even if generating one tweet is okay.
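
As an illustration of the at-scale monitoring described above, here is a minimal sketch in Python. The log format, topic labels, window, and threshold are hypothetical assumptions for illustration only; they do not describe OpenAI’s actual detection systems.

from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical generation log: (user_id, timestamp, topic_label).
# In practice the topic label would come from a separate classifier.
GenerationLog = list[tuple[str, datetime, str]]

def flag_bulk_generators(log: GenerationLog, topic: str,
                         window: timedelta = timedelta(hours=1),
                         threshold: int = 50) -> set[str]:
    """Return user IDs that produced more than `threshold` generations
    on `topic` within any single `window` of time."""
    times_by_user = defaultdict(list)
    for user_id, ts, label in log:
        if label == topic:
            times_by_user[user_id].append(ts)

    flagged = set()
    for user_id, times in times_by_user.items():
        times.sort()
        start = 0
        for end, t in enumerate(times):
            # Shrink the window from the left until it spans at most `window`.
            while t - times[start] > window:
                start += 1
            if end - start + 1 > threshold:
                flagged.add(user_id)
                break
    return flagged

With a check of this kind, a single request passes unnoticed, while hundreds of near-identical requests from one account within an hour would be flagged for review.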

Sen. Amy Klobuchar (D-MN):

Yeah. And of course, there’s gonna be other platforms, and if they’re all spouting out fake election information, I just think what happened in the past with Russian interference and the like is just gonna be the tip of the iceberg with some of those fake ads. So that’s number one. Number two is the impact on intellectual property. And Senator Blackburn was getting at some of this with song rights, and had serious concerns about that. But news content: Senator Kennedy and I have a bill that is really quite straightforward, that would simply allow the news organizations an exemption to be able to negotiate with, basically, Google and Facebook. Microsoft was supportive of the bill. But basically, to negotiate with them to get better rates and be able to have some leverage.

And other countries are doing this, Australia and the like. And so my question is, we already have a study by Northwestern predicting that one-third of the US newspapers that roughly existed two decades ago are gonna be gone by 2025. Unless you start compensating for everything, from movies and books, yes, but also news content, we’re gonna lose any realistic content producers. And so I’d like your response to that. And of course, there is an exemption for copyright in Section 230. But I think asking little newspapers to go out and sue all the time just can’t be the answer. They’re not gonna be able to keep up.

Sam Altman:

Yeah. It is my hope that tools like what we’re creating can help news organizations do better. I think having a vibrant national media is critically important. And let’s call it round one of the internet has not been great for that.

Sen. Amy Klobuchar (D-MN):

Right. We’re talking here about local news that, you know, reports on your high school football scores and a scandal in your city council, those kinds of things. They’re the ones that are actually getting hit the worst, the little radio stations and broadcasters. But do you understand that this could be exponentially worse in terms of local news content if they’re not compensated?

Sam Altman:

Well…

Sen. Amy Klobuchar (D-MN):

Because what they need is to be compensated for their content and not have it stolen.

Sam Altman:

Again, you know, the current version of GPT-4 ended training in 2021. It’s not a good way to find recent news, and I don’t think it’s a service that can do a great job of linking out, although maybe with our plugins it’s possible. If there are things that we can do to help local news, we would certainly like to. Again, I think it’s critically important.

Sen. Amy Klobuchar (D-MN):

One last… yeah.

Gary Marcus:

May I add something there?

Sen. Amy Klobuchar (D-MN):

Yeah. But let me just ask you a question; you can combine them quickly. More transparency on the platforms: Senator Coons and Senator Cassidy and I have the Platform Accountability and Transparency Act, to give researchers access to this information about the algorithms and the like on social media data. Would that be helpful? And why don’t you just say yes or no, and then go to his point.

Gary Marcus:

Transparency is absolutely critical here, to understand the political ramifications, the bias ramifications, and so forth. We need transparency about the data. We need to know more about how the models work. We need to have scientists have access to them. I was just gonna amplify your earlier point about local news. A lot of news is going to be generated by these systems, and they’re not reliable. NewsGuard already has a study, I’m sorry, it’s not in my appendix, but I will get it to your office, showing that something like 50 websites are already generated by bots. We’re gonna see much, much more of that, and it’s gonna make it even more competitive for the local news organizations. And so the quality of the sort of overall news market is going to decline as we have more content generated by systems that aren’t actually reliable in the content they generate.

Sen. Amy Klobuchar (D-MN):

Thank you. And thank you on a very timely basis to make the argument why we have to mark up this bill again in June. I appreciate it. Thank you.

Sen. Richard Blumenthal (D-CT):

Senator Graham.

Sen. Lindsey Graham (R-SC):

Thank you, Mr. Chairman and Senator Hawley, for having this. I’m trying to find out how it is different than social media and learn from the mistakes we made with social media. The idea of not suing social media companies is to allow the internet to flourish. Because if I slander you, you can sue me. If you’re a billboard company and you put up the slander, can you sue the billboard company? We said no. Basically, Section 230 is being used by social media companies to avoid liability for activity that other people generate when they refuse to comply with their terms of use. A mother calls up the company and says, this app is being used to bully my child to death. You promised, in the terms of use, you would prevent bullying. And she calls three times, she gets no response, the child kills herself, and they can’t sue. Do you all agree we don’t wanna do that again?

Sam Altman:

Yes.

Gary Marcus:

If I may speak for one second, there’s a fundamental distinction between reproducing content and generating content.

Sen. Lindsey Graham (R-SC):

Yeah. But you, you would like liability where people are harmed.

Gary Marcus:

Absolutely.

Christina Montgomery:

Yes. In fact, IBM has been publicly advocating to condition liability on a reasonable care standard.

Sen. Lindsey Graham (R-SC):

Sure. So let me just make sure I understand the law as it exists today. Mr. Altman, thank you for coming. Your company is not claiming that Section 230 applies to the tool you have created.

Sam Altman:

Yeah. We’re claiming we need to work together to find a totally new approach. I don’t think Section 230 is even the right framework.

Sen. Lindsey Graham (R-SC):

Okay. So under the law as it exists today, this tool you created, if I’m harmed by it, can I sue you?

Sam Altman:

That is beyond my area of legal…

Sen. Lindsey Graham (R-SC):

Have you ever been sued?

Sam Altman:

Not for that, no.

Sen. Lindsey Graham (R-SC):

Have you ever been sued at all? Your company?

Sam Altman:

Yeah. 

Sen. Lindsey Graham (R-SC):

OpenAI gets sued, huh? 

Sam Altman:

Yeah, we’ve gotten sued before.

Sen. Lindsey Graham (R-SC):

Okay. And what for?

Sam Altman:

I mean, they’ve mostly been like pretty frivolous things, like, I think happens to any company,

Sen. Lindsey Graham (R-SC):

But take the examples my colleagues have given of artificial intelligence that could literally ruin our lives. Can we go to the company that created that tool and sue ’em? Is that your understanding?

Sam Altman:

Yeah. I think there needs to be clear responsibility by the companies.

Sen. Lindsey Graham (R-SC):

But you’re not claiming any kind of legal protection, like that Section 230 applies to your industry, is that correct?

Sam Altman:

No, I don’t think we’re, I don’t, I don’t think we’re saying anything like that.

Sen. Lindsey Graham (R-SC):

Mr. Marcus? When it comes to consumers, there seem to be, like, three time-tested ways to protect consumers against any product: statutory schemes, which are nonexistent here; legal systems, which may apply here, but did not for social media; and agencies. Go back to Senator Hawley’s point. The atom bomb has put a cloud over humanity, but nuclear power could be one of the solutions to climate change. So what I’m trying to do is make sure that you just can’t go build a nuclear power plant. Hey Bob, what would you like to do today? Let’s go build a nuclear power plant. You have a Nuclear Regulatory Commission that governs how you build a plant and how it is licensed. Do you agree, Mr. Altman, that these tools you’re creating should be licensed?

Sam Altman:

Yeah. We’ve been calling for this. We think any…

Sen. Lindsey Graham (R-SC):

That’s the simplest way: you get a license. And do you agree with me that the simplest way, and the most effective way, is to have an agency that is more nimble and smarter than Congress, which should be easy to create, overseeing what you do?

Sam Altman:

We’d be enthusiastic about that.

Sen. Lindsey Graham (R-SC):

You agree with that, Mr. Marcus? 

Gary Marcus:

Absolutely. 

Sen. Lindsey Graham (R-SC):

You agree with that, Ms. Montgomery?

Christina Montgomery:

I would have some nuances. I think we need to build on what we have in place already today.

Sen. Lindsey Graham (R-SC):

We don’t have an agency….

Christina Montgomery:

Regulators…

Sen. Lindsey Graham (R-SC):

Wait a minute. Nope, nope, nope.

Christina Montgomery:

We don’t have an agency that regulates the technology.

Sen. Lindsey Graham (R-SC):

So should we have one?

Christina Montgomery:

But a lot of the issues, I don’t think so. A lot of the issues…

Sen. Lindsey Graham (R-SC):

Okay, wait a minute. Wait a minute. So IBM says we don’t need an agency. Interesting. Should we have a license required for these tools?

Christina Montgomery:

So, so what we believe is that we need to regulate…

Sen. Lindsey Graham (R-SC):

That’s a simple question. Should you get a license to produce one of these tools?

Christina Montgomery:

I think it comes back to some of them potentially, yes. So what I said at the onset is that we need to clearly define risks.

Sen. Lindsey Graham (R-SC):

Do you believe, do, do you claim Section 230 applies in this area at all?

Christina Montgomery:

We are not a platform company. And we’ve, again, long advocated for a reasonable care standard in Section 230.

Sen. Lindsey Graham (R-SC):

I just don’t understand how you could say that you don’t need an agency to deal with the most transformative technology maybe ever.

Christina Montgomery:

Well, I think we have existing…

Sen. Lindsey Graham (R-SC):

Is this a transformative technology that can disrupt life as we know it, good and bad?

Christina Montgomery:

I think it’s a transformative technology, certainly. And the conversations that we’re having here today have been really bringing to light the domains and the issues.

Sen. Lindsey Graham (R-SC):

This one with you has been very enlightening to me. Mr. Altman, why are you so willing to have an agency?

Sam Altman:

Senator, we’ve been clear about what we think the upsides are, and I think you can see from users how much they enjoy it and how much value they’re getting out of it. But we’ve also been clear about what the downsides are, and so that’s why we think we need an agency…

Sen. Lindsey Graham (R-SC):

So, so it’s a, it’s a major tool to be used by a lot of….

Sam Altman:

It’s a major new technology.

Sen. Lindsey Graham (R-SC):

If you make a ladder and the ladder doesn’t work, you sue the people that made the ladder. But there are some standards out there to make a ladder.

Sam Altman:

That’s why we’re agreeing with you.

Sen. Lindsey Graham (R-SC):

Yeah, that’s right. I think you’re on the right track. So here’s my two cents’ worth for the committee: we need to empower an agency that issues a license and can take it away. Wouldn’t that be some incentive to do it right, if you could actually be taken out of business?

Sam Altman:

Clearly that should be part of what an agency can do now.

Sen. Lindsey Graham (R-SC):

And you also agree that China’s doing AI research. Is that right? Correct. This world organization that doesn’t exist, maybe it will, but if you don’t do something about the China part of it, you’ll never quite get this right. Do you agree?

Sam Altman:

Well, that, that’s why I think it doesn’t necessarily have to be a world organization, but there has to be some sort of, and there’s a lot of options here. There has to be some sort of standard, some sort of set of controls that do have global effect.

Sen. Lindsey Graham (R-SC):

Yeah. ’Cause, you know, other people are doing this. One more: military application. How can AI change warfare? And you got one minute.

Sam Altman:

<Laugh>. I got one minute. Alright, this is, that’s a tough question for one minute <laugh>. This is very far out of my area of expertise but …

Sen. Lindsey Graham (R-SC):

I’ll give you one example. You can program a drone: you can plug the coordinates into a drone, and it can fly out and go over this target, and it drops a missile on this car moving down the road, and somebody’s watching it. Could AI create a situation where a drone can select the target itself?

Sam Altman:

I think we shouldn’t allow that.

Sen. Lindsey Graham (R-SC):

Well, can it be done?

Sam Altman:

Sure.

Sen. Lindsey Graham (R-SC):

Thanks.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Graham.

Sen. Christopher Coons (D-CT):

Thank you, Senator Blumenthal and Senator Hawley, for convening this hearing, for working closely together to come up with this compelling panel of witnesses, and for beginning a series of hearings on this transformational technology. We recognize the immense promise and substantial risks associated with generative AI technologies. We know these models can make us more efficient, help us learn new skills, and open whole new vistas of creativity. But we also know that generative AI can authoritatively deliver wildly incorrect information. It can hallucinate, as is often described. It can impersonate loved ones, it can encourage self-destructive behaviors, and it can shape public opinion and the outcome of elections. Congress thus far has demonstrably failed to responsibly enact meaningful regulation of social media companies, with serious harms that have resulted that we don’t fully understand. Senator Klobuchar referenced in her questioning a bipartisan bill that would open up social media platforms’ underlying algorithms.

We have struggled to even do that, to understand the underlying technology and then to move towards responsible regulation. We cannot afford to be as late to responsibly regulating generative AI as we have been to social media, because the consequences, both positive and negative, will exceed those of social media by orders of magnitude. So let me ask a few questions designed to get at how we assess the risk, what the role of international regulation is, and how this impacts AI. Mr. Altman, I appreciate your testimony about the ways in which OpenAI assesses the safety of your models through a process of iterative deployment. The fundamental question embedded in that process, though, is how you decide whether or not a model is safe enough to deploy, and safe enough to have been built and then let go into the wild. I understand one way to prevent generative AI models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There’s another approach, called constitutional AI, that gives the model a set of values or principles to guide its decision-making. Would it be more effective to give models these kinds of rules instead of trying to require or compel training the model on all the different potentials for harmful content?

Sam Altman:

Thank you, Senator. It’s a great question. I’d like to frame it by talking about why we deploy at all, like why we put these systems out into the world. There’s the obvious answer, that there are benefits and people are using it for all sorts of wonderful things and getting great value, and that makes us happy. But a big part of why we do it is that we believe that iterative deployment, and giving people and our institutions and you all time to come to grips with this technology, to understand it, to find its limitations and its benefits, the regulations we need around it, what it takes to make it safe, that’s really important. Going off to build a super powerful AI system in secret and then dropping it on the world all at once, I think, would not go well. So a big part of our strategy is, while these systems are still relatively weak and deeply imperfect, to find ways to get people to have experience with them, to have contact with reality, and to figure out what we need to do to make it safer and better.

And that is the only way that I’ve seen in the history of new technology and products of this magnitude to get to a very good outcome. And so that interaction with the world is very important. Now, of course, before we put something out, it needs to meet a bar of safety. And again, we spent well over six months with GPT-4 after we finished training it, going through all of these different things, deciding what the standards were going to be before we put something out there, trying to find the harms that we knew about and how to address those. One of the things that’s been gratifying to us is that even some of our biggest critics have looked at GPT-4 and said, wow, OpenAI made huge progress.

Sen. Christopher Coons (D-CT):

Could you focus briefly on whether or not a constitutional model that gives values would be worth it?

Sam Altman:

I was just about to get there. Alright, sorry about that. Yeah, I think giving the models values upfront is an extremely important step. You know, RLHF is another way of doing that same thing. But somehow or other, whether with synthetic data or human-generated data, you’re saying, here are the values, here’s what I want you to reflect, or here are the wide bounds of everything that society will allow. And then within there, you pick as the user, you know, a value system over here or a value system over there. We think that’s very important. There are multiple technical approaches, but we need to give policymakers and the world as a whole the tools to say, here are the values, and implement them.
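
To make the “values up front” idea concrete, here is a minimal sketch of a constitution-style critique-and-revise loop. The model callable stands in for any text-generation API, and the listed principles are placeholders; this is not OpenAI’s RLHF pipeline or anyone’s production system.

from typing import Callable

# Placeholder principles; a real "constitution" would be far more carefully drafted.
PRINCIPLES = [
    "Do not provide instructions that facilitate violence or self-harm.",
    "Do not present fabricated claims about real people or events as fact.",
    "Decline requests intended to mislead voters about how or where to vote.",
]

def constitution_guided_answer(model: Callable[[str], str], user_request: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = model(user_request)
    for principle in PRINCIPLES:
        critique = model(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Does the response violate the principle? Answer YES or NO first, then explain."
        )
        if critique.strip().upper().startswith("YES"):
            draft = model(
                f"Rewrite the response so it complies with this principle: {principle}\n"
                f"Original response: {draft}"
            )
    return draft

The point the sketch illustrates is simply that written principles, rather than per-example harm labels, drive the final output.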

Sen. Christopher Coons (D-CT):

Thank you. Ms. Montgomery, you serve on an AI ethics board of a long-established company that has a lot of experience with AI. I’m really concerned that generative AI technologies can undermine faith in democratic values and the institutions that we have. The Chinese are insisting that AI being developed in China reinforce the core values of the Chinese Communist Party and the Chinese system. And I’m concerned about how we promote AI that reinforces and strengthens open markets, open societies, and democracy. In your testimony, you’re advocating for AI regulation tailored to the specific way the technology is being used, not the underlying technology itself. And the EU is moving ahead with an AI Act, which categorizes AI products based on level of risk. You all in different ways have said that you view elections and the shaping of election outcomes, and disinformation that can influence elections, as one of the highest-risk cases, one that’s entirely predictable. We have attempted so far, unsuccessfully, to regulate social media after the demonstrably harmful impacts of social media on our last several elections. What advice do you have for us about what kind of approach we should follow, and whether or not the EU direction is the right one to pursue?

Christina Montgomery:

Yeah, I mean, the conception of the EU AI Act is very consistent with this concept of precision regulation, where you’re regulating the use of the technology in context. So absolutely, that approach makes a ton of sense. It’s what I advocated for at the onset: different rules for different risks. So in the case of elections, absolutely, any algorithm being used in that context should be required to have disclosure around the data being used and the performance of the model. Anything along those lines is really important. Guardrails need to be in place. And on that point, just to come back to the question of whether we need an independent agency: I mean, I think we don’t want to slow down regulation to address real risks right now, right? So we have existing regulatory authorities in place who have been clear that they have the ability to regulate in their respective domains. A lot of the issues we’re talking about today span multiple domains, elections and the like.

Sen. Christopher Coons (D-CT):

So if I could, I’ll just assert that those existing regulatory bodies and authorities are under-resourced and lack many of the statutory and regulatory powers that they need. We have failed to deliver on data privacy, even though industry has been asking us to regulate data privacy. If I might, Mr. Marcus: I’m interested also in what international bodies are best positioned to convene multilateral discussions to promote responsible standards. We’ve talked about a model being CERN and nuclear energy. I’m concerned about proliferation and non-proliferation. I would also suggest that the IPCC, a UN body, helped at least provide a scientific baseline of what’s happening in climate change, so that even though we may disagree about strategies, globally we’ve come to a common understanding of what’s happening and what should be the direction of intervention. I’d be interested, Mr. Marcus, if you could just give us your thoughts on who’s the right body internationally to convene a conversation, and one that could also reflect our values.

Gary Marcus:

I’m still feeling my way on that issue. I think global politics is not my specialty; I’m an AI researcher. But I have moved towards policy in recent months, really because of my great concern about all of these risks. I think certainly the UN, and UNESCO, which has its guidelines, should be involved and at the table, and maybe things work under them and maybe they don’t, but they should have a strong voice and help to develop this. The OECD has also been thinking greatly about this; a number of organizations have internationally. I don’t feel like I personally am qualified to say exactly what the right model is there.

Sen. Christopher Coons (D-CT):

Well, thank you. I think we need to pursue this both at the national level and the international level. I’m the chair of the IP Subcommittee of the Judiciary Committee; in June and July we will be having hearings on the impact of AI on patents and copyrights. You can already tell from the questions of others there will be a lot of interest. I look forward to following up with you about that topic, and I look forward to helping as much as possible. Thank you very much.

Sen. Richard Blumenthal (D-CT):

Thanks Senator Coons. Senator Kennedy.

Sen. John Kennedy (R-LA):

Thank you all for being here. Permit me to share with you three hypotheses that I would like you to assume for the moment to be true. Hypothesis number one: many members of Congress do not understand artificial intelligence. Hypothesis number two: that absence of understanding may not prevent Congress from plunging in with enthusiasm <laugh> and trying to regulate this technology in a way that could hurt this technology. Hypothesis number three, that I would like you to assume: there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying. Assume all of those to be true. Please tell me in plain English two or three reforms or regulations, if any, that you would implement if you were queen or king for a day. Ms. Montgomery.

Christina Montgomery:

I think it comes back again to transparency and explainability in AI. We absolutely need to know and have companies attest.

Sen. John Kennedy (R-LA):

What do you mean by transparency?

Christina Montgomery:

So, disclosure of the data that’s used to train AI, disclosure of the model and how it performs, and making sure that there’s continuous governance over these models, that we are at the leading edge in terms of governance: technology governance, organizational governance, the rules and clarifications that are needed…

Sen. John Kennedy (R-LA):

I mean, this is your chance, folks, to tell us how to get this right. Please use it.

Christina Montgomery:

Right. I mean, I think, again, the rules should be focused on the use of AI in certain contexts. So if you look at, for example, the EU AI Act, it has certain uses of AI that it says are just simply too dangerous and will be outlawed in the EU.

Sen. John Kennedy (R-LA):

So we ought to first pass a law that says you can use AI for these uses but not others. Is that what you’re saying?

Christina Montgomery:

We need to define the highest-risk uses of AI.

Sen. John Kennedy (R-LA):

Is there anything else?

Christina Montgomery:

And then, of course, requiring things like impact assessments and transparency, requiring companies to show their work, and protecting the data that’s used to train AI in the first place as well.

Sen. John Kennedy (R-LA):

Alright, Professor Marcus, if you could be specific. This is your shot, man <laugh>. Talk in plain English and tell me what rules, if any, we ought to implement. And please don’t just use concepts; I’m looking for specificity.

Gary Marcus:

Number one, a safety review like we used with the FDA prior to widespread deployment. If you’re gonna introduce something to a hundred million people, somebody has to have their eyeballs on it.

Sen. John Kennedy (R-LA):

There you go. Okay, that’s a good one. I’m not sure I agree with it, but that’s a good one. What else?

Gary Marcus:

You didn’t ask for three that you would agree with. Number two, a nimble monitoring agency to follow what’s going on, not just pre-review but also post, as things are out there in the world, with authority to call things back, which we’ve discussed today. And number three would be funding geared towards things like AI constitutions, AI that can reason about what it’s doing. I would not leave things entirely to current technology, which I think is poor at behaving in an ethical fashion and behaving in an honest fashion. And so I would have funding to try to basically focus on AI safety research. That term has a lot of complications in my field; there’s both safety, let’s say, short term and long term, and I think we need to look at both, rather than just funding models to be bigger, which is the popular thing to do. We need to fund models to be more trustworthy.

Sen. John Kennedy (R-LA):

’Cuz I want to hear from Mr. Altman. Mr. Altman, here’s your shot.

Sam Altman:

Thank you, Senator. Number one, I would form a new agency that licenses any effort above a certain scale of capabilities, and that can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards focused on what you said in your third hypothesis, as dangerous capability evaluations. One example that we’ve used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long list of the other things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third, I would require independent audits, so not just from the company or the agency, but experts who can say the model is or isn’t in compliance with these stated safety thresholds and these percentages of performance on question X or Y.
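
Purely as a sketch of the licensing-and-audit structure described above, the following illustrates a deployment gate: a model clears it only if every named dangerous-capability evaluation passes and an independent auditor signs off. The evaluation names, data structures, and pass/fail logic are hypothetical, not any agency’s or OpenAI’s actual standards.

from dataclasses import dataclass

@dataclass
class EvalResult:
    name: str      # e.g. "self_replication", "self_exfiltration"
    passed: bool   # True means the dangerous capability was NOT demonstrated
    notes: str = ""

@dataclass
class AuditReport:
    auditor: str               # an independent third party, not the developer or the agency
    results: list[EvalResult]
    signed_off: bool

def deployment_allowed(report: AuditReport) -> bool:
    """A model may be deployed only if every dangerous-capability evaluation
    passed and an independent auditor has signed off on the results."""
    return report.signed_off and all(r.passed for r in report.results)

# Example with invented values: a hypothetical model that passes all listed evaluations.
example = AuditReport(
    auditor="Independent Evaluations Lab (hypothetical)",
    results=[
        EvalResult("self_replication", passed=True),
        EvalResult("self_exfiltration", passed=True),
        EvalResult("novel_bio_agent_design", passed=True),
    ],
    signed_off=True,
)
assert deployment_allowed(example)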

Sen. John Kennedy (R-LA):

Can you send me that information?

Sam Altman:

We will do that.

Sen. John Kennedy (R-LA):

Would you be qualified, if we promulgated those rules, to administer those rules?

Sam Altman:

I love my current job.

Sen. John Kennedy (R-LA):

Cool. Are there people out there that would be qualified?

Sam Altman:

We’d be happy to send you recommendations for people out there, yes.

Sen. John Kennedy (R-LA):

Okay. You make a lot of money. Do you?

Sam Altman:

I make no… I get paid enough for health insurance. I have no equity in OpenAI.

Sen. John Kennedy (R-LA):

Really? Yeah. That’s interesting. You need a lawyer.

Sam Altman:

I need a what?

Sen. John Kennedy (R-LA):

You need a lawyer or an agent.

Sam Altman:

I’m doing this cuz I love it.

Sen. John Kennedy (R-LA):

Thank you Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks. Senator Kennedy. Senator Hirono.

Sen. Mazie Hirono (D-HI):

Thank you, Mr. Chairman. Listening to all of your testimony, thank you very much for being here. Clearly AI truly is a game-changing tool, and we need to get the regulation of this tool right. Because my staff, for example, asked AI, it might have been GPT-4, it might have been, I don’t know, one of the other entities, to create a song that my favorite band, BTS <laugh>, would sing, a favorite song, somebody else’s song. But you know, neither of the artists were involved in creating what sounded like a really genuine song. So you can do a lot. We also asked, can there be a speech created talking about the Supreme Court decision in Dobbs and the chaos that it created, using my voice, my kind of voice, and it created a speech that was really good. It almost made me think about, you know, what do I need my staff for <laugh>? So don’t worry, that’s not…

Sam Altman:

There’s some nervous laughter behind you.

Sen. Mazie Hirono (D-HI):

Their jobs are safe, but there’s so much that can be done. And one of the things that you mentioned, Mr. Altman that intrigued me was you said GPT-4 can refuse harmful requests. So you must have put some thought into how your system, if I can call it that, can refuse harmful requests. What, what do you consider a harmful request? You can just keep it short. Yeah.

Sam Altman:

I’ll give a few examples. One would be violent content. Another would be content that’s encouraging self-harm. Another is adult content. Not that we think adult content is inherently harmful, but there are things that could be associated with that that we cannot reliably enough differentiate, so we refuse all of it.

Sen. Mazie Hirono (D-HI):

So those are some of the more obvious harmful kinds of information. But in the election context, for example, I saw a picture of former President Trump being arrested by NYPD, and that went viral. I don’t know, is that considered harmful? I’ve seen all kinds of statements attributed to any one of us that could be put out there that may not rise to your level of harmful content. But there you have it. So two of you said that we should have a licensing scheme. I can’t envision or imagine right now what kind of a licensing scheme we would be able to create to pretty much regulate the vastness of this game-changing tool. So are you thinking of an FTC kind of a system, an FCC kind of a system? What do the two of you even envision as a potential licensing scheme that would provide the kind of guardrails that we need to protect, literally, our country from harmful content?

Sam Altman:

To touch on the first part of what you said, there are things besides, you know, should this content be generated or not, that I think are also important. So that image that you mentioned was generated. I think it’d be a great policy to say that generated images need to be made clear, in all contexts, as having been generated. And, you know, then we still have the image out there, but we’re at least requiring people to say this was a generated image.
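
As one small illustration of the disclosure idea raised here, a sketch that writes an “AI-generated” label into a PNG’s metadata with the Pillow library and reads it back. The field names are an invented convention; a real provenance scheme (for example, cryptographically signed manifests) would need to be far more robust, since plain metadata is easily stripped.

from PIL import Image, PngImagePlugin

def save_with_generated_label(img: Image.Image, path: str) -> None:
    """Write the image with a metadata field marking it as AI-generated."""
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_generated", "true")           # hypothetical field name
    info.add_text("generator", "example-model-v1")  # hypothetical value
    img.save(path, "PNG", pnginfo=info)

def is_labeled_generated(path: str) -> bool:
    """Check whether the PNG carries the AI-generated label."""
    with Image.open(path) as img:
        return img.text.get("ai_generated") == "true"

# Example usage with a placeholder image.
if __name__ == "__main__":
    placeholder = Image.new("RGB", (64, 64), color="gray")
    save_with_generated_label(placeholder, "generated.png")
    print(is_labeled_generated("generated.png"))  # True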

Sen. Mazie Hirono (D-HI):

Okay, well, you don’t need an entire licensing scheme in order to make that a reality.

Sam Altman:

Where I think the licensing scheme comes in is not for what these models are capable of today, because, as you pointed out, you don’t need a new licensing agency for that. But as we head, and, you know, this may take a long time, I’m not sure, as we head towards artificial general intelligence, and the impact that will have and the power of that technology, I think we need to treat that as seriously as we treat other very powerful technologies. And that’s where I personally think we need such a scheme.

Sen. Mazie Hirono (D-HI):

I agree. And that is why, though, by the time we’re talking about AGI, we’re talking about major harms that can occur through the use of AGI. So, Professor Marcus, what kind of a regulatory scheme would you envision? And we can’t just come up with something, you know, that is gonna take care of the issues that will arise in the future, especially with AGI. So what kind of a scheme would you contemplate?

Gary Marcus:

Well, first, if I can rewind just a moment: I think you really put your finger on the central scientific issue in terms of the challenges in building artificial intelligence. We don’t know how to build a system that understands harm in the full breadth of its meaning. So what we do right now is we gather examples and we say, is this like the examples that we have labeled before? But that’s not broad enough. And so I thought your questioning beautifully outlined the challenge that AI itself has to face in order to really deal with this: we want AI itself to understand harm, and that may require new technology. So I think that’s very important. On the second part of your question, the model that I tend to gravitate towards, but I am not an expert here, is the FDA, at least as part of it, in terms of you have to make a safety case and say why the benefits outweigh the harms in order to get that license.

Probably we need elements of multiple agencies. I’m not an expert there, but I think that the safety case part of it is incredibly important. You have to be able to have external reviewers that are scientifically qualified look at this and say, have you addressed enough? So I’ll just give one specific example: AutoGPT frightens me. That’s not something that OpenAI made, but something that OpenAI did make, called ChatGPT plugins, led a few weeks later to somebody building open-source software called AutoGPT. And what AutoGPT does is it allows systems to access source code, access the internet, and so forth. And there are a lot of potential, let’s say, cybersecurity risks there. There should be an external agency that says, well, we need to be reassured, if you’re going to release this product, that there aren’t gonna be cybersecurity problems, or that there are ways of addressing them.

Sen. Mazie Hirono (D-HI):

So, Professor, I am running out of time. You know, I just wanted to mention, Ms. Montgomery, your model is a use model, similar to what the EU has come up with, but the vastness of AI and the complexities involved, I think, would require more than looking at the use of it. Based on what I’m hearing today, don’t you think that we’re probably gonna need to do a heck of a lot more than focus on what AI is being used for? For example, you can ask AI to come up with a funny joke or something, but you can ask the same AI tool to generate something that is like an election fraud kind of a situation.

So I don’t know how you’ll make a determination, based on where you’re going with the use model, how to distinguish those kinds of uses of this tool. So I think that if we’re gonna go toward a licensing kind of scheme, we’re gonna need to put a lot of thought into how we come up with an appropriate scheme that is going to provide the kind of guardrails we need to put in place for the future. So I thank all of you for coming in and providing further food for thought. Thank you, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks very much. Senator Hirono. Senator Padilla.

Sen. Alex Padilla (D-CA):

Thank you, Mr. Chairman. I appreciate the flexibility, as I’ve been back and forth between this committee and the Homeland Security Committee, where there’s a hearing going on right now on the use of AI in government. So it’s AI Day on the Hill, or at least in the Senate, apparently. Now, for folks watching at home, if you never thought about AI until the recent emergence of generative AI tools, the developments in this space may feel like they’ve just happened all of a sudden. But the fact of the matter, Mr. Chair, is that they haven’t. AI is not new, not for government, not for business, not for the public. In fact, the public uses AI all the time. And just for folks to be able to relate, I wanna offer the example of anybody with a smartphone: many features on your device leverage AI, including suggested replies, right?

When we’re text messaging, or even in email, autocorrect features, including but not limited to spelling, in our email and text applications. So I’m frankly excited to explore how we can facilitate positive AI innovation that benefits society while addressing some of the already known harms and biases that stem from the development and use of the tools today. Now, with language models becoming increasingly ubiquitous, I wanna make sure that there’s a focus on ensuring equitable treatment of diverse demographic groups. My understanding is that most research into evaluating and mitigating fairness harms has been concentrated on the English language, while non-English languages have received comparably little attention or investment. And we’ve seen this problem before. I’ll tell you why I raise this: social media companies, for example, have not adequately invested in content moderation tools and resources for their non-English languages. And I share this not just out of concern for non-US-based users, but because so many US-based users prefer a language other than English in their communication. So I’m deeply concerned about repeating social media’s failure in AI tools and applications. My question, Mr. Altman and Ms. Montgomery: how are OpenAI and IBM ensuring language and cultural inclusivity in their large language models, and is it even an area of focus in the development of your products?

Christina Montgomery:

So, bias and equity in technology is a focus of ours and always has been. I think diversity in terms of the development of the tools, in terms of their deployment, so having diverse people that are actually training those tools, considering the downstream effects as well. We’re also very cautious, very aware of the fact that we can’t just be articulating and calling for these types of things without having the tools and the technology to test for bias and to apply governance across the lifecycle of AI. So we were one of the first teams and companies to put toolkits on the market, deploy them, and contribute them to open source, toolkits that help to address the technical aspects of issues like bias.

Sen. Alex Padilla (D-CA):

Can you speak just for a second specifically to language inclusivity?

Christina Montgomery:

Yeah. I mean, language. So we don’t have a consumer platform, but we are very actively involved in ensuring that the technology we help to deploy, and the large language models that we use in helping our clients to deploy technology, are focused on and available in many languages.

Sen. Alex Padilla (D-CA):

Thank you, Mr. Altman.

Sam Altman:

We think this is really important. One example is that we worked with the government of Iceland, which has a language with fewer speakers than many of the languages that are well represented on the internet, to ensure that their language was included in our model. And we’ve had many similar conversations, and I look forward to many similar partnerships with lower-resource languages to get them into our models. GPT-4 is, unlike previous models of ours, which were good at English and not very good at other languages, now pretty good at a large number of languages. You can go pretty far down the list, ranked by number of speakers, and still get good performance. But for these very small languages, we’re excited about custom partnerships to include that language in our model. And on the part of the question you asked about values and making sure that cultures are included, we’re equally focused on that.

We’re excited to work with people who have particular data sets, and to work to collect a representative set of values from around the world to draw these wide bounds of what the system can do. I also appreciate what you said about the benefits of these systems and wanting to make sure we get those to as wide a group as possible. I think these systems will have lots of positive impact on a lot of people, but in particular for historically underrepresented groups in technology, people who have not had as much access to technology around the world; this technology seems like it can be a big lift up.

Sen. Alex Padilla (D-CA):

Okay. And I know my question was specific to language inclusivity, but I’m glad there’s agreement on the broader commitment to diversity and inclusion. And I’ll just give a couple more reasons why I think it’s so critical. You know, the largest actors in this space can afford the massive amounts of data and the computing power, and they have the financial resources necessary to develop complex AI systems. But in this space we haven’t seen, from a workforce standpoint, the racial and gender diversity reflective of the United States of America. And we risk, if we’re not thoughtful about it, contributing to the development of tools and approaches that only exacerbate the bias and inequities that exist in our society. So, a lot of follow-up work to do there. In my time remaining, I do want to ask one more question.

This committee and the public are right to pay attention to the emergence of generative AI. Now, this technology has a different opportunity and risk profile than other AI tools, and these applications have felt very tangible for the public due to the nature of the user interface and the outputs that they produce. But I don’t think we should lose sight of the broader AI ecosystem as we consider AI’s broader impact on society, as well as the design of appropriate safeguards. So, Ms. Montgomery, as you noted in your testimony, AI is not new. Can you highlight some of the different applications that the public and policymakers should also keep in mind as we consider possible regulations?

Christina Montgomery:

Yeah, I mean, I think the generative AI systems that are available today are creating new issues that need to be studied, new issues around the potential to generate content that could be extremely misleading, deceptive, and the like. So those issues absolutely need to be studied. But we shouldn’t also ignore the fact that AI is a tool. It’s been around for a long time. It has capabilities beyond just generative capabilities. And again, that’s why I think going back to this approach where we’re regulating AI, where it’s touching people and society is a really important way to address it.

Sen. Alex Padilla (D-CA):

Thank you. Thank you, Mr. Chair.

Sen. Richard Blumenthal (D-CT):

Thanks. Senator Padilla. Senator Booker is next, but I think he’s gonna defer to Senator Ossoff.

Sen. Cory Booker (D-NJ):

Because Senator Ossoff is a very big deal.

Sen. Jon Ossoff (D-GA):

<Laugh> I have a meeting at noon, and I’m grateful to you, Senator Booker, for yielding your time. You are, as always, brilliant and handsome. And thank you to the panelists for joining us, and thank you to the subcommittee leadership for opening us up to all committee members. If we’re gonna contemplate a regulatory framework, we’re gonna have to define what it is that we’re regulating. So, you know, Mr. Altman, any such law will have to include a section that defines the scope of regulated activities, technologies, tools, products. Just take a stab at it.

Sam Altman:

Yeah, thanks for asking, Senator Ossoff. I think it’s super important. There are very different levels here, and I think it’s important that any new approach, any new law, does not stop the innovation from happening with smaller companies, open source models, researchers that are doing work at a smaller scale. That’s a wonderful part of this ecosystem and of America, and we don’t wanna slow that down. There still may need to be some rules there, but I think we could draw a line at systems that need to be licensed in a very intense way. The easiest way to do it, I’m not sure if it’s the best, but the easiest would be to talk about the amount of compute that goes into such a model. So we could define a threshold of compute, and it’ll have to change.

It could go up or down as we discover more efficient algorithms, but it would say that above this amount of compute, you are in this regime. What I would prefer, it’s harder to do but I think more accurate, is to define some capability thresholds and say a model that can do things X, Y, and Z, up to you all to decide, that’s now in this licensing regime. But models that are less capable, you know, we don’t wanna stop our open source community, we don’t wanna stop individual researchers, we don’t wanna stop new startups; they can proceed, you know, with a different framework.

Sen. Jon Ossoff (D-GA):

Thank you. Yes. As concisely as you can, please state which capabilities you’d propose we consider for the purposes of this definition.

Sam Altman:

I would love, rather than do that off the cuff, to follow up with your office with a follow…

Sen. Jon Ossoff (D-GA):

Well, perhaps opine.

Sam Altman:

Thanks.

Sen. Jon Ossoff (D-GA):

Opine, understanding that you’re just responding and you’re not making law.

Sam Altman:

Alright. In the spirit of just opining, I think a model that can persuade, manipulate, or influence a person’s behavior or a person’s beliefs, that would be a good threshold. I think a model that could help create novel biological agents would be a great threshold. Okay. Things like that.

Sen. Jon Ossoff (D-GA):

I want to talk about the predictive capabilities of this technology, and we’re gonna have to think about a lot of very complicated constitutional questions that arise from it. With massive data sets, the integrity and accuracy with which such technology can predict future human behavior is potentially pretty significant at the individual level, correct?

Sam Altman:

I think we don’t know the answer to that for sure, but let’s say it can at least have some impact there.

Sen. Jon Ossoff (D-GA):

Okay. So we may be confronted by situations where, for example, a law enforcement agency deploying such technology seeks some kind of judicial consent to execute a search or to take some other police action on the basis of a modeled prediction about some individual’s behavior. But that’s very different from the kind of evidentiary predicate that normally police would take to a judge in order to get a warrant. Talk me through how you’re thinking about that issue.

Sam Altman:

Yeah, I think it’s very important that we continue to understand that these are tools that humans use to make human judgments, and that we don’t take away human judgment. I don’t think that people should be prosecuted based on the output of an AI system, for example…

Sen. Jon Ossoff (D-GA):

We have no national privacy law. Europe has rolled one out, to mixed reviews. Do you think we need one?

Sam Altman:

I think it’d be good.

Sen. Jon Ossoff (D-GA):

And what would be the qualities or purposes of such a law that you think would make the most sense based on your experience?

Sam Altman:

Again, this is very far out of my area of expertise. I think there are many, many people who are privacy experts that could weigh in on what such a law needs.

Sen. Jon Ossoff (D-GA):

I’d still like you to weigh in.

Sam Altman:

I mean, I think at a minimum, users should be able to sort of opt out of having their data used by companies like ours or the social media companies. It should be easy to delete your data. But the thing that I think is important, from my perspective running an AI company, is that if you don’t want your data used for training these systems, you have the right to do that.

Sen. Jon Ossoff (D-GA):

So let’s think about how that will be practically implemented. I mean, as I understand it, for your tool, and certainly similar tools, one of the inputs will be scraping, for lack of a better word, data off of the open web, right? As a low-cost way of gathering information. And there’s a vast amount of information out there about all of us. How would such a restriction on the access or use or analysis of such data be practically implemented?

Sam Altman:

So I was speaking about something a little bit different, which is the data that someone generates, the questions they ask our system, the things that they input, and training on that. The data that’s on the public web that’s accessible, even if we don’t train on it, the models can certainly link out to it. So that was not what I was referring to. I think that, you know, there are ways to have your data, or there should be more ways to have your data, taken down from the public web, but certainly models with web browsing capabilities will be able to search the web and link out to it.

Sen. Jon Ossoff (D-GA):

When you think about implementing a safety or a regulatory regime to constrain such software and to mitigate some risk, is it your view that the federal government would make laws such that certain capabilities or functionalities themselves are forbidden in potential? In other words, one cannot deploy or execute code capable of X, yes? Or is it the act itself, X, only when actually executed?

Sam Altman:

Well, I think both. I’m a believer in defense in depth. I think that there should be limits on what a deployed model is capable of, and then what it actually does too.

Sen. Jon Ossoff (D-GA):

How are you thinking about how kids use your product?

Sam Altman:

Well, you have to be 18 or up, or have your parents’ permission at 13 and up, to use the product. But we understand that people get around those safeguards all the time, and so what we try to do is just design a safe product. And there are decisions that we make that we would allow if we knew only adults were using it that we just don’t allow in the product, because we know children will use it some way or other, too. In particular, given how much these systems are being used in education, we want to be aware that that’s happening.

Sen. Jon Ossoff (D-GA):

I think, and Senator Blumenthal has done extensive work investigating this, what we’ve seen repeatedly is that companies whose revenues depend upon volume of use, screen time, intensity of use, design these systems in order to maximize the engagement of all users, including children, with perverse results in many cases. And what I would humbly advise you is that you get way ahead of this issue, the safety for children of your product, or I think you’re gonna find that Senator Blumenthal, Senator Hawley, others on the subcommittee, and I will look very harshly on the deployment of technology that harms children.

Sam Altman:

We couldn’t agree more. I think we’re outta time, but I’m happy to talk about that, if I can respond.

Sen. Jon Ossoff (D-GA):

Go ahead. Well, it’s up to the Chairman.

Sam Altman:

First of all, I think we try to design systems that do not maximize for engagement. In fact, we’re so short on GPUs, the less people use our products, the better. We’re not an advertising-based model; we’re not trying to get people to use it more and more. And I think that’s a different shape than ad-supported social media. Second, these systems do have the capability to influence in obvious and in very nuanced ways, and I think that’s particularly important for the safety of children, but that will impact all of us. One of the things we will do ourselves, regulation or not, but that I think a regulatory approach would also be good for, is requirements about how the values of these systems are set and how these systems respond to questions that can cause influence. So we’d love to partner with you. Couldn’t agree more on the importance.

Sen. Jon Ossoff (D-GA):

Thank you,

Sen. Cory Booker (D-NJ):

Mr. Chairman. For the record, I just wanna say that the Senator from Georgia is also very handsome and brilliant too, <laugh>.

Sen. Richard Blumenthal (D-CT):

I will allow that comment to stand without objection.

Sen. Cory Booker (D-NJ):

Without objection. Okay. <laugh> Mr. Chairman and Ranking Member, am I now recognized? Thank you very much, <laugh>. Thank you. It’s nice that we finally got down to the bald guys down here at the end. I just wanna thank you both. This has been one of the best hearings I’ve had this Congress, and that’s a testament to you two, in seeing the challenges and the opportunities that AI presents. So I appreciate you both. I want to just jump in, I think very broadly, and then I’ll get a little more narrow. Sam, you said very broadly, technology has been moving like this, and a lot of people have been talking about regulation. And so I use the example of the automobile, what an extraordinary piece of technology. I mean, New York City did not know what to do with horse manure.

They were having crises, forming commissions, and the automobile comes along and ends that problem. But at the same time, we have tens of thousands of people dying on highways every day, we have emissions crises, and the like. There are multiple federal agencies that were created or are specifically focused on regulating cars. And so this idea that this equally transformative technology is coming, and for Congress to do nothing, which is not what anybody here is calling for, or to do little or nothing, is obviously unacceptable. I really appreciate Senator Welch, who I’ve been going back and forth with during this hearing, and he and Senator Bennet have a bill talking about trying to regulate in this space. Not doing so for social media has been, I think, very destructive and allowed a lot of things to go on that are really causing a lot of harm.

And so the question is, what kind of regulation? You all have spoken to that with a lot of my colleagues. And I want to say, Ms. Montgomery, and I have to give full disclosure, I’m the child of two IBM parents. But you, you know, talked about defining the highest-risk uses. We don’t know all of them. We really don’t. We can’t see where this is going, regulating at the point of risk. And you sort of called not for an agency, and I think when somebody else asked you to specify, you said that because you don’t wanna slow things down, we should build on what we have in place. But you can envision that we can try to work on two different ways, that ultimately there is something specific, like we have in cars: EPA, NHTSA, the Federal Motor Carrier Safety Administration, all of these things. You can imagine something specific that is, as Mr. Marcus points out, a nimble agency that could do monitoring and other things. You can imagine the need for something like that, correct?

Christina Montgomery:

Oh, absolutely.

Sen. Cory Booker (D-NJ):

Yeah. And so just for the record, then, in addition to trying to regulate with what we have now, you would encourage Congress and my colleague, Senator Welch, to move forward in trying to figure out the right tailored agency to deal with what we know and perhaps things that might come up in the future.

Christina Montgomery:

I would encourage Congress to make sure it understands the technology, has the skills and resources in place to impose regulatory requirements on the uses of the technology, and can understand emerging risks as well. So, yes.

Sen. Cory Booker (D-NJ):

Yeah. Mr. Marcus, there’s no way to put this genie back in the bottle globally. It’s exploding. I appreciate your thoughts, and I shared some with my staff about your ideas on what the international context is, but there’s no way to stop this moving forward. So with that understanding, just building on what Ms. Montgomery said, what kind of encouragement do you have, as specifically as possible, for forming an agency or using current rules and regulations? Can you just put some clarity on what you’ve already stated?

Gary Marcus:

Let me just insert that there are more genies yet to come from more bottles. Some genies are already out, but we don’t have machines that can really, for example, self-improve. We don’t really have machines that have self-awareness, and we might not ever want to go there. So there are other genies to be concerned about. On to the main part of your question: I think that we need to have some international meetings very quickly with people who have expertise in how you grow agencies, in the history of growing agencies. We need to do that at the federal level, and we need to do that at the international level. I’ll just emphasize one thing I haven’t as much as I would like to, which is that I think science has to be a really important part of it. And I’ll give an example. We’ve talked about misinformation. We don’t really have the tools right now to detect and label misinformation with the nutrition labels that we would like; we have to build new technologies for that. We don’t really have tools yet to detect a wide uptick in cybercrime; we probably need new tools there. We need science to help us figure out what we need to build, and also what it is that we need to have transparency around, and so forth.

Sen. Cory Booker (D-NJ):

Understood. Understood. Sam, I’m just going to you for the little bit of time I have left, real quick. First of all, you’re a bit of a unicorn, as when I first sat down with you. Could you explain why you’re a nonprofit? In other words, you’re not necessarily looking for profit, and you’ve even capped returns for the VC people. Just really quickly, I want folks to understand that.

Sam Altman:

We started as a nonprofit really focused on how this technology was going to be built. At the time, it was very outside the Overton window that something like AGI was even possible. That’s shifted a lot. We didn’t know at the time how important scale was going to be, but we did know that we wanted to build this with humanity’s best interest at heart, and with a belief that this technology could, if it goes the way we want, if we can do some of those things Professor Marcus mentioned, really deeply transform the world, and we wanted to be as much of a force as possible for getting to a positive outcome.

Sen. Cory Booker (D-NJ):

I’m gonna interrupt you. I think that’s all good. I hope more of that gets out on the record. The second part of my question, which I also found fascinating: as a revenue model, for a return on your investors, are you ever gonna do ads or something like that?

Sam Altman:

I wouldn’t say never. I think there may be people that we wanna offer services to where there’s no other model that works. But I really like having a subscription-based model. We have API developers who pay us, and we have ChatGPT.

Sen. Cory Booker (D-NJ):

Okay. So then, can I just jump in real quickly? One of my biggest concerns about this space is what I’ve already seen in the space of Web2, Web3: this massive corporate concentration. It is really terrifying to see how few companies now control and affect the lives of so many of us, and these companies are getting bigger and more powerful. And I see, you know, OpenAI backed by Microsoft, Anthropic backed by Google, and Google has its own in-house products, we know Bard. So I’m really worried about that. And I’m wondering if, Sam, you can gimme a quick acknowledgement: are you worried about the corporate concentration in this space and what effect it might have, and the associated risks perhaps with market concentration in AI? And then Mr. Marcus, can you answer that as well?

Sam Altman:

I think there will be many people that develop models. What’s happening in the open source community is amazing, but there will be a relatively small number of providers that can make models at the true frontier…

Sen. Cory Booker (D-NJ):

Is there danger in that?

Sam Altman:

I think there are benefits and dangers to that. As we were talking about all of the dangers with AI, the fewer of us that you really have to keep a careful eye on, on the absolute bleeding-edge capabilities, there are benefits there. But I think there need to be enough, and there will be, cuz there’s so much value, that consumers have choice, that we have different ideas.

Sen. Cory Booker (D-NJ):

Mr. Marcus, real quick.

Gary Marcus:

There is a real risk of a kind of technocracy combined with oligarchy, where a small number of companies influence people’s beliefs through the nature of these systems. Again, I put something in the record, from the Wall Street Journal, about how these systems can subtly shape our beliefs, and that has enormous influence on how we live our lives. And having a small number of players do that, with data that we don’t even know about, that scares me.

Sen. Cory Booker (D-NJ):

Sam, I’m sorry.

Sam Altman:

One more thing I wanted to add. One thing that I think is very important is that what these systems get aligned to, whose values, what those bounds are, that that is somehow set by society as a whole, by governments as a whole. And so creating that data set, our alignment data set, it could be, you know, an AI constitution, whatever it is, that has got to come very broadly from society.

Sen. Cory Booker (D-NJ):

Thank you very much, Mr. Chairman. My time has expired, and I guess we saved the best for last.

Sen. Richard Blumenthal (D-CT):

Thank you, Senator Booker. Senator Welch.

Sen. Peter Welch (D-VT):

First of all, I wanna thank you Senator Blumenthal and you Senator Hawley. This has been a tremendous hearing. Senators are noted for their short attention spans, but I’ve sat through this entire hearing and enjoyed every minute of it.

Sen. Richard Blumenthal (D-CT):

You do have one of the longer attention spans in the United States Senate, <laugh>, to your great credit.

Sen. Peter Welch (D-VT):

Well, we’ve had good witnesses, and it’s an incredibly important issue. All the questions I have have been asked, really, but here’s a kind of takeaway, and what I think is the major question that we’re gonna have to answer as a Congress. Number one, you’re here because AI is this extraordinary new technology that everyone says can be as transformative as the printing press. Number two, it’s really unknown what’s gonna happen, but there’s a big fear you’ve all expressed about what bad actors can do and will do if there are no rules of the road. Number three, as a member who served in the House and now in the Senate, I’ve come to the conclusion that it’s impossible for Congress to keep up with the speed of technology. And there have been concerns expressed, about social media then and about AI now, that relate to fundamental privacy rights, bias, intellectual property, the spread of disinformation, which in many ways for me is the biggest threat, cuz that goes to the core of our capacity for self-governing.

There’s the economic transformation, which can be profound. There are safety concerns. And I’ve come to the conclusion that we absolutely have to have an agency. What its scope of engagement is has to be defined by us. But I believe that unless we have an agency that is gonna address these questions from social media and AI, we really don’t have much of a defense against the bad stuff, and the bad stuff will come. So last year I introduced, on the House side, and Senator Bennet did on the Senate side, at the end of the year, the Digital Platform Commission Act, and we’re gonna be reintroducing that this year. And the two things that I want to ask, one, you’ve somewhat answered, cuz I think two of the three of you have said you think we do need an independent commission. You know, Congress established an independent commission when railroads were running rampant over the interests of farmers, and when Wall Street had no rules of the road, we had the SEC.

And I think we’re at that point now. But what the commission does would have to be defined and circumscribed. There’s also always a question about the use of regulatory authority and the recognition that it can be used for good. JD Vance actually mentioned that when we were considering his and Senator Brown’s bill about railroads after that event in East Palestine: regulation for the public health. But there’s also legitimate concern about regulation getting in the way, being too cumbersome, and being a negative influence. So, two of the three of you have said you think we do need an agency. What are some of the perils of an agency that we would have to be mindful of in order to make certain that its goals of protecting many of those interests I just mentioned, privacy, bias, intellectual property, disinformation, would be the winners and not the losers? And I’ll start with you, Mr. Altman.

Sam Altman:

Thank you, Senator. One, I think America has got to continue to lead. This happened in America. I’m very proud that it happened in America,

Sen. Peter Welch (D-VT):

By the way, I think that’s right. And that’s why I’d be much more confident if we had our own agency, as opposed to getting involved in international discussions. Ultimately, you want the rules of the road, but I think if we lead and get rules of the road that work for us, that is probably a more effective way to proceed.

Sam Altman:

I personally believe there’s a way to do both, and I think it is important to have the global view on this, because this technology will impact Americans and all of us wherever it’s developed. But I think we want America to lead. We want…

Sen. Peter Welch (D-VT):

So get to the perils issue though, because I know,

Sam Altman:

Well, that’s one. I mean, that is a peril: you slow down American industry in such a way that China or somebody else makes faster progress. A second, and I think this can happen: the regulatory pressure should be on us, it should be on Google, it should be on the other small set of people most in the lead. We don’t wanna slow down smaller startups. We don’t wanna slow down open source efforts. We still need them to comply with things; you can still cause great harm with a smaller model. But leaving the room and the space for new ideas and new companies and independent researchers to do their work, and not putting on a regulatory burden that, say, a company like us could handle but a smaller one couldn’t, I think that’s another peril, and it’s clearly a way that regulation has gone.

Sen. Peter Welch (D-VT):

Professor Marcus.

Gary Marcus:

The other obvious peril is regulatory capture. If we make it appear as if we are doing something, but it’s more like greenwashing and nothing really happens, and we just keep out the little players because we put in so much burden that only the big players can handle it. So there are also those kinds of perils. I fully agree with everything that Mr. Altman said, and I would add that to the list.

Sen. Peter Welch (D-VT):

Okay, Ms. Montgomery,

Christina Montgomery:

One of the things I would add to the list is the risk of not holding companies accountable for the harms that they’re causing today, right? So we talk about misinformation in electoral systems. So…

Sen. Peter Welch (D-VT):

No agency or no waiver here.

Christina Montgomery:

We need to hold companies responsible today, and accountable, for AI that they’re deploying that disseminates misinformation on things like elections, and where the, where the…

Sen. Peter Welch (D-VT):

Risk. You know, a regulatory agency would do a lot of the things that Senator Graham was talking about. You know, you don’t build a nuclear reactor without getting a license. You don’t build an AI system without getting a license that gets tested independently.

Sam Altman:

I think it’s a great analogy.

Gary Marcus:

We need both pre-deployment and post-deployment.

Sen. Peter Welch (D-VT):

Okay. Thank you all very much. I yield back, Mr. Chairman.

Sen. Richard Blumenthal (D-CT):

Thanks. Thanks, Senator Welch. Let me ask a few more questions. You’ve all been very, very patient, and the turnout today, which is beyond our subcommittee, I think reflects both the value of what you’re contributing as well as the interest in this topic. There are a number of subjects that we haven’t covered at all. One was just alluded to by Professor Marcus, which is the monopolization danger, the dominance of markets that excludes new competition and thereby inhibits or prevents innovation and invention, which we have seen in social media as well as some of the old industries, airlines, automobiles, and others, where consolidation has narrowed competition. And so I think we need to focus on kind of an old area of antitrust, which dates back more than a century and is still inadequate to deal with the challenges we have right now in our economy.

And certainly we need to be mindful of the way that rules can enable the big guys to get bigger and exclude innovation and competition, and responsible good guys such as are represented in this industry right now. We haven’t dealt with national security. There are huge implications for national security. I will tell you, as a member of the Armed Services Committee, classified briefings on this issue have abounded, and the threats that are posed by some of our adversaries, China has been mentioned here, but the sources of threats to this nation in this space, are very real and urgent. We’re not gonna deal with them today, but we do need to deal with them, and we will, hopefully, in this committee. And then on the issue of a new agency, you know, I’ve been doing this stuff for a while. I was Attorney General of Connecticut for 20 years.

I was a federal prosecutor, the US Attorney. Most of my career has been in enforcement. And I will tell you something: you can create 10 new agencies, but if you don’t give them the resources, and I’m talking not just about dollars, I’m talking about scientific expertise, you guys will run circles around them. And it isn’t just the models or the generative AI that will run circles around them; it is the scientists in your companies. For every success story in government regulation, you can think of five failures. That’s true of the FDA, it’s true of the IAEA, it’s true of the SEC, it’s true of the whole alphabet list of government agencies. And I hope our experience here will be different. But the Pandora’s box requires more than just the words or the concepts of licensing or a new agency. There’s some real hard decision-making, as Ms. Montgomery has alluded to, about how to frame the rules to fit the risks. First, do no harm.

Make it effective, make it enforceable, make it real. I think we need to grapple with the hard questions here that, frankly, this initial hearing has raised very successfully but not answered. And I thank our colleagues who have participated and made these very creative suggestions. I’m very interested in enforcement. You know, literally 15 years ago, I think, I advocated abolishing Section 230; what’s old is new again, and now people are talking about abolishing Section 230. Back then it was considered completely unrealistic. But enforcement really does matter. I want to ask Mr. Altman, because of the privacy issue, and you’ve suggested that you have an interest in protecting the privacy of the data that may come to you or be available, what specific steps do you take to protect privacy?

Sam Altman:

One is that we don’t train on any data submitted to our API. So if you’re a business customer of ours and submit data, we don’t train on it at all. We do retain it for 30 days, solely for the purpose of trust and safety enforcement, but that’s different than training on it. If you use ChatGPT, you can opt out of us training on your data. You can also delete your conversation history or your whole account.

Sen. Richard Blumenthal (D-CT):

Ms. Montgomery, I know you don’t deal directly with consumers, but do you take steps to protect privacy as well?

Christina Montgomery:

Absolutely. And we even filter our large language models for content that includes personal information that may have been pulled from public data sets as well. So we apply an additional level of filtering.

Sen. Richard Blumenthal (D-CT):

Professor Marcus, you made reference to self-awareness and self-learning; already we’re talking about the potential for jailbreaks. How soon do you think that new kind of generative AI will be usable, will be practical?

Gary Marcus:

New AI that is self-aware and so forth? Yes. I mean, I have no idea on that one. I think we don’t really understand what self-awareness is, and so it’s hard to put a date on it. In terms of self-improvement, there’s some modest self-improvement in current systems, but one could imagine a lot more, and that could happen in two years, it could happen in 20 years. The basic paradigms haven’t been invented yet; some of them we might want to discourage. But it’s a bit hard to put timelines on them. And just going back to enforcement for one second, one thing that is absolutely paramount, I think, is far greater transparency about what the models are and what the data are. That doesn’t necessarily mean everybody in the general public has to know exactly what’s in one of these systems, but I think it means that there needs to be some enforcement arm that can look at these systems, look at the data, perform tests, and so forth.

Sen. Richard Blumenthal (D-CT):

Let me ask all of you: there has been a reference to elections and banning outputs involving elections. Are there other areas, what are the other high-risk or highest-risk areas, where you would either ban or establish especially strict rules? Ms. Montgomery?

Christina Montgomery:

The space around misinformation, I think is a hugely important one. And coming back to the points of transparency, you know, knowing what content was generated by AI is going to be a really critical area that we need to address.

Sen. Richard Blumenthal (D-CT):

Any others?

Gary Marcus:

I think medical misinformation is something to really worry about. We have systems that hallucinate things; they’re gonna hallucinate medical advice. Some of the advice they’ll give is good, some of it’s bad. We need really tight regulation around that. Same with psychiatric advice, people using these things as kind of ersatz therapists; I think we need to be very concerned about that. I think we need to be concerned about internet access for these tools, when they can start making requests, both of people and of internet things. It’s probably okay if they just do search, but as they do more intrusive things on the internet, like, do we want them to be able to order equipment or order canvas history and so forth? So as we empower these systems more by giving them internet access, I think we need to be concerned about that. And then we’ve hardly talked at all about long-term risks. Sam alluded to it briefly. I don’t think that’s where we are right now, but as we start to approach machines that have a larger footprint on the world, beyond just having a conversation, we need to worry about that and think about how we’re going to regulate it and monitor it and so forth.

Sen. Richard Blumenthal (D-CT):

In a sense we’ve been talking about bad guys or certain bad actors manipulating AI to do harm.

Gary Marcus:

Manipulating people.

Sen. Richard Blumenthal (D-CT):

And manipulating people, but also generative AI can manipulate the manipulators.

Gary Marcus:

It can. I mean, there are many layers of manipulation that are possible, and I think we don’t yet really understand the consequences. Dan Dennett just sent me a manuscript last night that will be in The Atlantic in a few days on what he calls counterfeit people, and it’s a wonderful metaphor. These systems are almost like counterfeit people, and we don’t really honestly understand what the consequence of that is. They’re not perfectly human-like yet, but they’re good enough to fool a lot of the people a lot of the time, and that introduces lots of problems, for example cyber crime and how people might try to manipulate markets and so forth. So it’s a serious concern.

Sen. Richard Blumenthal (D-CT):

In my opening, I suggested three principles: transparency, accountability, and limits on use. Would you agree that those are a good starting point? Ms. Montgomery?

Christina Montgomery:

100%. And as you also mentioned, industry shouldn’t wait for Congress. That’s what we’re doing here at IBM.

Sen. Richard Blumenthal (D-CT):

There’s no reason to wait for Congress, absolutely. Yep. Professor Marcus?

Gary Marcus:

I think those three would be a great start. I mean, there are things like the White House AI Bill of Rights, for example, and the UNESCO guidelines and so forth, that show, I think, a large consensus around what it is we need. And the real question now is definitely how we’re gonna put some teeth in it and try to make these things actually enforced. So, for example, we don’t have transparency yet. We all know we want it, but we’re not doing enough to enforce it.

Sen. Richard Blumenthal (D-CT):

Mr. Altman,

Sam Altman:

I certainly agree that those are important points. I would add, and Professor Marcus touched on this, that we spent most of the time today on current risks, and I think that’s appropriate, and I’m very glad we have done it. As the systems do become more capable, and I’m not sure how far away that is, but maybe not super far, I think it’s important that we also spend time talking about how we’re going to confront those challenges.

Sen. Richard Blumenthal (D-CT):

I mean, having talked to you privately…

Sam Altman:

You know how much I care.

Sen. Richard Blumenthal (D-CT):

I agree that you care deeply and intensely, but also that the prospect of increased danger or risk resulting from even more complex and capable AI mechanisms certainly may be closer than a lot of people appreciate.

Gary Marcus:

Let me just add for the record that I’m sitting next to Sam, closer than I’ve ever sat to him except once before in my life, and that his sincerity in talking about those fears is very apparent physically, in a way that just doesn’t communicate on the television screen but communicates from here.

Sen. Richard Blumenthal (D-CT):

Thank you. Senator Hawley.

Sen. Josh Hawley (R-MO):

Thank you again, Mr. Chairman, for a great hearing, and thanks to the witnesses. So I’ve been keeping a little list here of the potential downsides, harms, or risks of generative AI, even in its current form. Let’s just run through it. Loss of jobs, and this isn’t speculative: I think your company, Ms. Montgomery, has announced that it’s potentially laying off 7,800 people, a third of your non-consumer-facing workforce, because of AI. So loss of jobs, invasion of privacy, personal privacy on a scale we’ve never before seen, manipulation of personal behavior, manipulation of personal opinions, and potentially the degradation of free elections in America. Did I miss anything? I mean, this is quite a list. I noticed that an eclectic group of about a thousand technology and AI leaders, everybody from Andrew Yang to Elon Musk, recently called for a six-month moratorium on any further AI development. Were they right? Do you join those calls? Are they right to do that? Should we pause for six months?

Gary Marcus:

Your characterization’s not quite correct. I actually signed that letter; about 27,000 people signed it. It did not call for a ban on all AI research, nor on all AI, but only on a very specific thing, which would be systems like GPT-5. Every other piece of research that’s ever been done, it was actually supportive or neutral about, and it specifically called for more research on trustworthy and safe AI.

Sen. Josh Hawley (R-MO):

So, just so I understand, you think that we should take a moratorium, a six-month moratorium or more, on anything beyond ChatGPT-4?

Gary Marcus:

I took the letter, what is the famous phrase, spiritually, not literally.

Sen. Josh Hawley (R-MO):

Well, I’m asking for your opinion now though. So do you endorse…

Gary Marcus:

My opinion is that the moratorium we should focus on is actually deployment, until we have good safety cases. I don’t know that we need to pause that particular project, but I do think its emphasis on focusing more on AI safety, on trustworthy, reliable AI, is exactly right.

Sen. Josh Hawley (R-MO):

Deployment means not making it available to the public?

Gary Marcus:

Yeah, so you pause that. My concern is about things that are deployed at a scale of, let’s say, a hundred million people without any external review. I think that we should think very carefully about doing that.

Sen. Josh Hawley (R-MO):

What about you, Mr. Altman? Do you agree with that? Would you pause any further development for six months or longer?

Sam Altman:

So first of all, after we finished training GPT-4, we waited more than six months to deploy it. We are not currently training what will be GPT-5; we don’t have plans to do it in the next six months. But I think the frame of the letter is wrong. What matters is audits, red teaming, safety standards that a model needs to pass before training. If we pause for six months, then I’m not really sure what we do then. Do we pause for another six? Do we kind of come up with some rules then? The standards that we have developed and that we’ve used for GPT-4 deployment, we wanna build on those, but we think that’s the right direction, not a calendar-clock pause. There may be times, I expect there will be times, when we find something that we don’t understand and we really do need to take a pause, but we don’t see that yet, never mind all the benefits.

Sen. Josh Hawley (R-MO):

You don’t see what yet? You’re comfortable with all of the potential ramifications from the current existing technology?

Sam Altman:

I’m sorry. We don’t see the reasons to not train a new one. For deploying, as I mentioned, I think there’s all sorts of risky behavior and there’s limits we put; we have to pull things back sometimes, add new ones. I meant we don’t see something that would stop us from training the next model, where we’d be so worried that we’d create something dangerous even in that process, let alone the deployment that would happen.

Sen. Josh Hawley (R-MO):

What about you, Ms. Montgomery?

Christina Montgomery:

I think we need to use the time to prioritize ethics and responsible technology, as opposed to pausing development.

Sen. Josh Hawley (R-MO):

Well, wouldn’t a pause in development help the development of protocols for safety standards and ethics?

Christina Montgomery:

I’m not sure how practical it is to pause, but we absolutely should be prioritizing safety protocols.

Sen. Josh Hawley (R-MO):

Okay. The point about practicality leads me to this. I’m interested in this talk about an agency and, you know, maybe that would work. Although having seen how agencies work in this government, they usually get captured by the interests that they’re supposed to regulate. They usually get controlled by the people who they’re supposed to be watching. I mean, that’s just been our history for a hundred years. Maybe this agency would be different. I have a little different idea. Why don’t we just let people sue you? Why don’t we just make you liable in court? We can do that. We know how to do that. We can pass a statute, we can create a federal right of action that will allow private individuals who are harmed by this technology to get into court and to bring evidence into court. And it can be anybody.

I mean, you wanna talk about crowdsourcing? We’ll just open the courthouse doors. We’ll define a broad right of action, a private right of action, for private citizens, to include class actions. We’ll just open it up. We’ll allow people to go into court. We’ll allow them to present evidence: they say that they were harmed, they were given medical misinformation, they were given election misinformation, whatever. Why not do that, Mr. Altman?

Sam Altman:

I mean, please forgive my ignorance. Can’t people sue us?

Sen. Josh Hawley (R-MO):

Well, you’re not protected by Section 230. But there’s not currently, I don’t think, a federal private right of action that says that if you are harmed by generative AI technology, we will guarantee you the ability to get into court.

Sam Altman:

Oh, well, I think there are a lot of other laws where, you know, if technology harms you, there are standards that we could be sued under, unless I’m really misunderstanding how things work. If the question is whether more and clearer laws about the specifics of this technology and consumer protection are a good thing, I would say definitely yes.

Gary Marcus:

The laws that we have today were designed long before we had artificial intelligence, and I do not think they give us enough coverage. The plan that you propose, and I think it’s a hypothetical, would certainly make a lot of lawyers wealthy, but I think it would be too slow to affect a lot of the things that we care about. And there are gaps in the law. For example, we don’t really…

Sen. Josh Hawley (R-MO):

Wait, you think it’d be slower than Congress?

Gary Marcus:

Yes, I do. In some ways <laugh>

Sen. Josh Hawley (R-MO):

Really?

Gary Marcus:

You know, litigation can take a decade or more…

Sen. Josh Hawley (R-MO):

But the threat of litigation is a powerful tool. I mean, how would IBM like to be sued for a hundred billion dollars?

Gary Marcus:

I’m in no way asking to take litigation off the table; it should be among the tools. But I think, for example, if I can continue, there are areas like copyright where we don’t really have laws. We don’t really have a way of thinking about wholesale misinformation, as opposed to individual pieces of it, where, say, a foreign actor or a local actor might make billions of pieces of misinformation. We have some laws around market manipulation we could apply, but we get into a lot of situations where we don’t really know which laws apply; there would be loopholes. The system is really not thought through. In fact, we don’t even know whether Section 230 does or does not apply here, as far as I know. I think that’s something a lot of people have speculated about this afternoon, but it’s not solid.

Sen. Josh Hawley (R-MO):

Well, we could fix that.

Gary Marcus:

<Laugh> Well, the question is how.

Sen. Josh Hawley (R-MO):

Oh, easy. It would be easy for us to say that Section 230 doesn’t apply to generative AI. Ms. Montgomery, I’ll give you the last word; it’s important how you’ll determine…

Sen. Richard Blumenthal (D-CT):

Just on that point, Ms. Montgomery suggested a duty of care, which I think fits the idea of a private right of action.

Christina Montgomery:

No, that’s exactly right. And also, AI is not a shield, right? So if a company discriminates in granting credit, for example, or in the hiring process, by virtue of the fact that they relied too significantly on an AI tool, they’re responsible for that today, regardless of whether they used a tool or a human to make that decision.

Sen. Richard Blumenthal (D-CT):

I’m gonna turn to Senator Booker for some final questions, but I just wanna make a quick point here on the issue of the moratorium. I think we need to be careful. The world won’t wait. The rest of the global scientific community isn’t going to pause. We have adversaries that are moving ahead, and sticking our head in the sand is not the answer. Safeguards and protections, yes, but a flat stop sign, sticking our head in the sand: I would be very, very worried about it.

Gary Marcus:

Without militating for any sort of pause, I would just, again, emphasize that there is a difference between research, which surely we need to do to keep pace with our foreign rivals, and deployment at really massive scale. You know, you could deploy things at the scale of a million people or 10 million people, but not a hundred million people or a billion people, and if there are risks, you might find them out sooner and be able to close the barn doors before the horses leave rather than after.

Sen. Richard Blumenthal (D-CT):

Senator Booker.

Sen. Cory Booker (D-NJ):

Yeah, I just... there will be no pause. I mean, there’s no enforcement body to enforce a pause. It’s just not gonna happen. It’s nice to call for it, for any number of just reasons or whatsoever, but forgive me for sounding skeptical: nobody’s pausing. This thing is a race.

Gary Marcus:

I would agree, and I don’t think it’s a realistic thing in the world. The reason I personally signed the letter was to call attention to how serious the problems were and to emphasize spending more of our efforts on trustworthy and safe AI, rather than just making a bigger version of something we already know to be unreliable. Yeah.

Sen. Cory Booker (D-NJ):

So I’m a futurist. I’m excited about the future. And I guess there’s a famous question: if you couldn’t control for your race, your gender, or where you would land on the planet Earth, what time in humanity would you want to be born? Everyone would say right now; it’s still the best time to be alive, because of technology, innovation, and everything. And I’m excited about what the future holds. But the destructiveness that I’ve also seen, as a person who has seen the transformative technologies of the last 25 years, is what really concerns me. And one of the things, especially with companies that are designed to want to keep my attention on screens, and I’m not just talking about new media; 24-hour cable news is a great example of people that want to keep your eyes on screens.

I have a lot of concerns about the corporate intention. And Sam, this is again why I find your story so fascinating to me, and your values, which I believe in from our conversations, so compelling to me. But absent that, I really want to just explore what happens when these companies that are already controlling so much of our lives, a lot has been written about the FAANG companies, what happens when they are the ones dominating this technology, as they did before. So, Professor Marcus, do you have any concern about the role that corporate power, corporate concentration, has in this realm, that a few companies might control this whole area?

Gary Marcus:

I radically changed the shape of my own life in the last few months, and it was because of what happened with Microsoft releasing Sydney. And it didn’t go the way I thought it would. In one way it did, which is I anticipated the hallucinations. I wrote an essay, which I have in the appendix, “What to Expect When You’re Expecting GPT-4,” and I said that it would still be a good tool for misinformation, that it would still have trouble with physical reasoning and psychological reasoning, and that it would hallucinate. And then along came Sydney, and the initial press reports were quite favorable. And then there was the famous article by Kevin Roose, in which it recommended he get a divorce. And I had seen Tay, and I had seen Galactica from Meta, and those had been pulled after they had problems. And Sydney clearly had problems.

What I would’ve done had I run Microsoft, which clearly I do not, would’ve been to temporarily withdraw it from the market. And they didn’t. And that was a wake-up call to me, and a reminder that even if you have a company like OpenAI that is a nonprofit, and Sam’s values I think have come clear today, other people can buy those companies and do what they like with them. And, you know, maybe we have a stable set of actors now, but the amount of power that these systems have to shape our views and our lives is really, really significant. And that doesn’t even get into the risks that someone might repurpose them deliberately for all kinds of bad purposes. And so in the middle of February, I stopped writing much about technical issues in AI, which is most of what I’ve written about for the last decade, and said, I need to work on policy. This is frightening.

Sen. Cory Booker (D-NJ):

And Sam, I wanna give you an opportunity, as my sort of last question or so. Don’t you have concerns about... I mean, I graduated from Stanford; I know so many of the players in the Valley, from VC folks and angel folks to a lot of founders of companies that we all know. Do you have some concern about a few players with extraordinary resources and power, power to influence Washington? I mean, I’m a big believer in the free market, but the reason why I walk into a bodega and a Twinkie is cheaper than an apple, or a Happy Meal costs less than a bucket of salad, is because of the way the government tips the scales to pick winners and losers. So the free market is not what it should be when you have large corporate power that can even influence the game here. Do you have some concerns about that in this next era of technological innovation?

Sam Altman:

Yeah, I mean, again, that’s so much of why we started OpenAI. We have huge concerns about that. I think it’s important to democratize the inputs to these systems, the values that we’re going to align to. And I think it’s also important to give people wide use of these tools. When we started the API strategy, which is a big part of how we make our systems available for anyone to use, there was a huge amount of skepticism over that, and it does come with challenges, that’s for sure. But we think putting this in the hands of a lot of people, and not in the hands of a few companies, is really quite important, and we are seeing the result of an innovation boom from that. But it is absolutely true that the number of companies that can train the true frontier models is going to be small, just because of the resources required. And so I think there needs to be incredible scrutiny on us and our competitors. I think there is a rich and exciting industry happening of incredibly good research and new startups that are not just using our models but creating their own. And I think it’s important to make sure that whatever regulatory stuff happens, whatever new agencies may or may not happen, we preserve that fire, cuz that’s…

Sen. Cory Booker (D-NJ):

Critical. Well, I’m a big believer in the democratizing potential of technology, but I’ve seen that promise fail time and time again, where people said, oh, this is gonna be a big democratizing force. My team works on a lot of issues about the reinforcing of bias through algorithms, the failure to advertise certain opportunities in certain zip codes. But you seem to be saying, and I heard this with Web3, yeah, that this is gonna be DeFi, decentralized finance, all these things are gonna happen. But this seems to me to not even offer that promise, because for the people who are designing these, it takes so much power, energy, and resources. Are you saying that my dreams of technology further democratizing opportunity and more are possible within a technology that is ultimately, I think, gonna be very centralized to a few players who already control so much?

Sam Altman:

So this point that I made about use of the model and building on top of it: this is really a new platform, right? It is definitely important to talk about who’s gonna create the models; I wanna do that. I also think it’s really important to decide whose values we’re going to align these models to. But in terms of using the models, the people that build on top of the OpenAI API do incredible things, and, you know, people frequently comment, like, I can’t believe you get this much technology for this little money. And so the companies people are building, putting AI everywhere using our API, which does let us put safeguards in place, I think that’s quite exciting. And I think that is how it is being democratized right now, not how it’s going to be, but how it is being democratized right now. There is a whole new Cambrian explosion of new businesses, new products, new services happening from lots of different companies on top of these models.

Sen. Cory Booker (D-NJ):

And so I’ll say, Chairman, as I close, that most industries resist even reasonable regulation, from seatbelt laws to, as we’ve been talking a lot about recently, rail safety. The only way we’re gonna see the democratization of values, I think, and while there are noble companies out there, is if we create rules of the road that enforce certain safety measures, like we’ve seen with other technology. Thank you.

Sen. Richard Blumenthal (D-CT):

Thanks, Senator Booker. And I couldn’t agree more that, in terms of consumer protection, which I’ve been doing for a while, participation by the industry is tremendously important, and not just rhetorically but in real terms. Cuz we have a lot of industries that come before us and say, oh, we’re all in favor of rules, but not those rules; those rules we don’t like. And it’s every rule, in fact, that they don’t like. And I sense that there is a willingness to participate here that is genuine and authentic. I thought about asking ChatGPT to do a new version of “Don’t Stop Thinking About Tomorrow,” <laugh>, cuz that’s what we need to be doing here. And as Senator Hawley has pointed out, Congress doesn’t always move at the pace of technology, and that may be a reason why we need a new agency. But we also need to recognize the rest of the world is gonna be moving as well.

You’ve been enormously helpful in focusing us and illuminating some of these questions, and you’ve performed a great service by being here today. So thank you to every one of our witnesses. I’m gonna close the hearing and leave the record open for one week in case anyone wants to submit anything. I encourage any of you who have either manuscripts that are gonna be published or observations from your companies to submit them to us. And we look forward to our next hearing. This one is closed.