OpenAI and Anthropic sign AI safety deals with US government

AI startups OpenAI and Anthropic have agreed with the US Artificial Intelligence Safety Institute to collaborate on the research, testing, and evaluation of their advanced AI models. The deals come as regulatory scrutiny over AI’s safe and ethical development increases across the tech industry.

The agreements give the US AI Safety Institute access to major new AI models from both companies before and after their public release. The partnership will evaluate the models’ capabilities and potential risks and provide feedback on safety improvements. OpenAI’s chief strategy officer, Jason Kwon, supported the initiative, citing its importance in setting a global framework for AI safety.

Anthropic, backed by Amazon and Alphabet, did not immediately comment on the deal. The US AI Safety Institute is also working closely with its counterpart in the UK to ensure international collaboration on AI safety. The institute was established as part of an executive order by President Biden to address risks associated with emerging AI technologies.

Apple and Nvidia eye investment in OpenAI

Apple and Nvidia are reportedly in discussions to invest in OpenAI, potentially pushing the ChatGPT creator’s valuation above $100 billion. The two tech giants’ reported interest follows news that venture capital firm Thrive Capital plans to invest around $1 billion in OpenAI and lead the latest funding round. While Apple and OpenAI have not commented, sources indicate that Apple is increasingly integrating OpenAI’s technology into its products, including a deal announced earlier this year to bring ChatGPT to Apple devices.

Microsoft, OpenAI’s largest investor with over $10 billion already committed, is also expected to join this fundraising effort. The exact amounts that Apple, Nvidia, and Microsoft will invest remain undisclosed. OpenAI’s high valuation underscores the intense competition in the AI industry, which has seen companies across various sectors invest heavily to leverage the technology and stay ahead.

The rapid rise in OpenAI’s worth reflects its pivotal role in the ongoing AI race, particularly following the launch of ChatGPT in late 2022. The company’s valuation reached $80 billion earlier this year through a tender offer led by Thrive Capital, highlighting its growing influence and strategic importance in the tech industry.

Thrive Capital to lead OpenAI’s funding at $100 billion valuation

OpenAI is nearing a major funding round that would value the company at more than $100 billion, with Thrive Capital expected to invest around $1 billion. Sources familiar with the matter have indicated that the fundraising is progressing but has yet to be made public.

Sarah Friar, OpenAI’s CFO, informed employees that the company is seeking new capital to cover rising operational costs and fund the computing power needed for its AI models. The announcement did not specify exact figures but highlighted the growing need for resources as the company scales.

If the funding round is successful, OpenAI could become one of the world’s most valuable venture-backed startups, underscoring the global demand for generative AI tools like ChatGPT. The rise of OpenAI has also sparked increased competition among tech giants eager to integrate AI into their products.

Friar additionally hinted at plans for a tender event later this year that would allow employees to sell some of their shares. Details of the event are still being worked out and have yet to be confirmed.

OpenAI backs California bill on AI content labeling

OpenAI, the developer behind ChatGPT, is backing a new California bill, AB 3211, to ensure transparency in AI-generated content. The proposed bill would require tech companies to label content created by AI, which ranges from innocuous memes to deepfakes that could mislead voters in political campaigns. The legislation has gained attention as concerns grow over the impact of AI-generated material, especially in an election year.

The bill has been somewhat overshadowed by another California AI bill, SB 1047, which mandates safety testing for AI models and has faced resistance from the tech industry, including OpenAI. That resistance highlights the complexity of regulating AI while balancing innovation and public safety.

California lawmakers have introduced 65 AI-related bills in this legislative session, covering issues from algorithmic fairness to the protection of intellectual property from AI exploitation. However, many of these proposals have yet to advance, leaving AB 3211 as one of the more prominent measures still in play.

OpenAI has stressed the importance of transparency around AI-generated content, especially during elections, advocating for measures such as watermarking to help users identify the origins of what they see online. Because AI-generated content is a global issue, there are strong concerns that it could influence upcoming elections in the US and in other countries.

AB 3211 has already passed the state Assembly with unanimous support and recently cleared the Senate Appropriations Committee. The bill requires a full Senate vote before the legislative session ends on 31 August. If it passes, it will go to Governor Gavin Newsom for approval or veto by 30 September.

Former Meta executive joins OpenAI to lead key initiatives

OpenAI has appointed former Meta executive Irina Kofman as head of strategic initiatives. The appointment follows a series of high-profile hires from major tech firms as OpenAI expands. Kofman, who worked on generative AI for five years at Meta, will report directly to Mira Murati, OpenAI’s chief technology officer.

Kofman’s role at OpenAI will involve addressing critical areas such as AI safety and preparedness. Her appointment is part of a broader strategy by OpenAI to bring in seasoned professionals to navigate the competitive landscape, which includes rivals like Google and Meta.

In recent months, OpenAI has also brought in other prominent figures from the tech industry. These include Kevin Weil, a former Instagram executive now serving as chief product officer, and Sarah Friar, the former CEO of Nextdoor, who has taken on the role of chief financial officer.

Meta has yet to comment on Kofman’s departure. The company increasingly relies on AI to enhance its advertising business, using the technology to optimise ad placements and provide marketers with tools for better campaign design.

OpenAI opposes California’s AI regulation bill

OpenAI is pushing back against a proposed California bill, SB 1047, which aims to impose new safety requirements on AI companies. The San Francisco-based startup argues that the legislation would stifle innovation and that AI regulation should be handled at the federal level rather than by individual states. OpenAI also expressed concern that the bill could harm US competitiveness in AI and national security.

The bill, introduced by state Senator Scott Wiener, seeks to establish safety standards for companies developing large AI models, aiming to prevent their misuse for harmful purposes such as creating bioweapons or causing significant financial damage. The legislation has faced strong opposition from tech companies, who claim it could drive AI businesses out of California and hinder technological progress.

Despite amendments made by Wiener to address some of the industry’s concerns, including removing criminal liability for non-compliance and protecting smaller developers, major tech players like OpenAI remain opposed. OpenAI argues that the bill’s provisions could lead to a talent drain from California and disrupt the state’s leadership in AI innovation.

Wiener defended the bill, stating that it simply requires companies to do what they have already committed to on safety. He dismissed concerns about a talent exodus, noting that the law would apply to any company operating in California, regardless of where it is based.

The bill will be voted on in the California State Assembly this month. If it passes, it will go to Governor Gavin Newsom, who has yet to express a clear stance on whether he will sign it into law, though he has spoken about balancing AI innovation with safety concerns.

OpenAI cracks down on Iranian influence campaign

OpenAI has intensified its efforts to prevent the misuse of AI, especially in light of the numerous elections scheduled for 2024. The company recently identified and deactivated a cluster of ChatGPT accounts linked to an Iranian covert influence operation named Storm-2035. The operation aimed to manipulate public opinion during the US presidential election using AI-generated content on social media and websites but failed to gain significant engagement or reach a broad audience.

The operation generated articles and social media comments on various topics, including US politics, global events, and the conflict in Gaza. The content was published on websites posing as news outlets and shared on platforms like X and Instagram. Despite their efforts, the operation saw minimal interaction, with most posts receiving little to no attention.

OpenAI’s investigation into the operation, aided by information from Microsoft, revealed that the influence campaign was largely ineffective, scoring low on a scale used to assess the impact of covert operations. The company remains vigilant against such threats and has shared its findings with government and industry stakeholders.

OpenAI is committed to collaborating with industry, civil society, and government to counter these influence operations. The company emphasises the importance of transparency and continues to monitor and disrupt any attempts to exploit its AI technologies for manipulative purposes.

Emotional attachment to AI could impact real-life interactions, says OpenAI

The potential impact of OpenAI’s realistic voice feature on human interactions has raised concerns, with the company warning that people might form emotional bonds with AI at the expense of real-life relationships. The company noted that users of its GPT-4o model have shown signs of anthropomorphising the AI, attributing human-like qualities to it, which could lead to misplaced trust and dependency. OpenAI’s report highlighted that high-quality voice interaction might exacerbate these issues, raising questions about the long-term effects on social norms.

The company observed that some testers of the AI voice feature interacted with it in ways that suggested an emotional connection, such as expressing sadness over the end of their session. While these behaviours might seem harmless, OpenAI emphasised the need to study their potential evolution over time. The report also suggested that reliance on AI for social interaction could diminish users’ abilities or willingness to engage in human relationships, altering how people interact with one another.

Concerns were also raised about the AI’s ability to recall details and handle tasks, which might lead to over-reliance on the technology. OpenAI further noted that its AI models, designed to be deferential in conversations, might inadvertently promote anti-social norms when users become accustomed to behaviours, such as interrupting, that are inappropriate in human interactions. The company pledged to continue testing how these voice capabilities could affect emotional attachment and social behaviour.

The issue gained attention following a controversy in June when OpenAI was criticised for allegedly using a voice similar to actress Scarlett Johansson’s in its chatbot. Although the company denied the voice belonged to Johansson, the incident underscored the risks associated with voice-cloning technology. As AI models continue to advance toward human-like reasoning, experts are increasingly urging a pause to consider the broader implications for human relationships and societal norms.

Humans to maintain control over AI, says OpenAI executive

OpenAI’s chief strategy officer, Jason Kwon, has expressed confidence that humans will continue to control AI, downplaying concerns about the technology developing unchecked. Speaking at a forum in Seoul, Kwon emphasised that the core of safety lies in ensuring human oversight. As these systems grow more advanced, Kwon believes they will become easier to manage, countering fears of them becoming uncontrollable.

The company is actively working on a framework that allows AI systems to reflect the cultural values of different countries. Kwon highlighted the importance of making models adaptable to local contexts, ensuring that users in various regions feel the technology is designed with them in mind. Such an approach aims to foster a sense of ownership and relevance across diverse cultures.

Despite some scepticism surrounding the future of AI, Kwon remains optimistic about its trajectory. He compared its potential growth to that of the internet, which has become an indispensable tool globally. While acknowledging that AI is still in its early stages, he pointed out that adoption rates are gradually increasing, with significant room for growth.

Kwon noted that in South Korea, a country with over 50 million people, only 1 million are daily active users of ChatGPT. Even in the US, fewer than 20 per cent of the population has tried the tool. Kwon’s remarks suggest that AI’s journey is just beginning, with significant expansion expected in the coming years.

OpenAI appoints AI safety expert as director

OpenAI, one of the largest AI research organizations, has appointed Zico Kolter, a distinguished professor and director of the machine learning department at Carnegie Mellon University, to its board of directors. Renowned for his focus on AI safety, Kolter will also join the company’s safety and security committee, which is tasked with overseeing the safe deployment of OpenAI’s projects. The appointment comes as OpenAI’s board undergoes changes in response to growing concerns about the safety of generative AI, which has seen rapid adoption across various sectors.

Following the departure of co-founder John Schulman, Kolter’s addition to the OpenAI board underscores a commitment to addressing these safety concerns. He brings a wealth of experience from his roles as the chief expert at Bosch and chief technical adviser at Gray Swan, a startup dedicated to AI safety. Notably, Kolter has contributed to developing methods that automatically assess the safety of large language models, a crucial area as AI systems become increasingly sophisticated. His expertise will be invaluable in guiding OpenAI as it navigates the challenges posed by the widespread use of generative AI technologies such as ChatGPT.

The safety and security committee, formed in May after Ilya Sutskever’s departure, includes Kolter alongside CEO Sam Altman and other directors and underlines OpenAI’s proactive approach to ensuring AI is developed and deployed responsibly. The committee is responsible for making recommendations on safety decisions across all of OpenAI’s projects, reflecting the company’s recognition of the potential risks associated with AI advancements.

In a related move, Microsoft relinquished its board observer seat at OpenAI in July, aiming to address antitrust concerns from regulators in the United States and the United Kingdom. This decision was seen as a step towards maintaining a balance of power within OpenAI, as the company continues to play a leading role in the rapidly evolving AI landscape.