OpenAI, the developer behind ChatGPT, is backing a new California bill, AB 3211, to ensure transparency in AI-generated content. The proposed bill would require tech companies to label AI-generated content, which can range from innocuous memes to deepfakes capable of misleading voters in political campaigns. The legislation has gained attention as concerns grow over the impact of AI-generated material, especially in an election year.
The bill has been somewhat overshadowed by another California AI bill, SB 1047, which mandates safety testing for AI models and has faced resistance from the tech industry, including OpenAI. That resistance highlights the difficulty of regulating AI while balancing innovation and public safety.
California lawmakers have introduced 65 AI-related bills this legislative session, covering topics from algorithmic fairness to protecting intellectual property from AI exploitation. However, many of these proposals have yet to advance, leaving AB 3211 as one of the more prominent measures still in play.
OpenAI has stressed the importance of transparency for AI-generated content, especially during elections, advocating for measures such as watermarking to help users identify the origins of what they see online. Because AI-generated content is a global issue, there are strong concerns that it could influence the upcoming elections in the USA and in other countries.
AB 3211 has already passed the state Assembly with unanimous support and recently cleared the Senate Appropriations Committee. The bill requires a full Senate vote before the legislative session ends on 31 August. If it passes, it will go to Governor Gavin Newsom for approval or veto by 30 September.
OpenAI has appointed a former Meta executive, Irina Kofman, as head of strategic initiatives. The hire follows a series of high-profile recruitments from major tech firms as OpenAI expands. Kofman, who worked on generative AI for five years at Meta, will report directly to Mira Murati, OpenAI’s chief technology officer.
Kofman’s role at OpenAI will involve addressing critical areas such as AI safety and preparedness. Her appointment is part of a broader strategy by OpenAI to bring in seasoned professionals to navigate the competitive landscape, which includes rivals like Google and Meta.
Meta has yet to comment on Kofman’s departure. The company increasingly relies on AI to enhance its advertising business, using the technology to optimise ad placements and provide marketers with tools for better campaign design.
OpenAI is pushing back against a proposed California bill, SB 1047, which aims to impose new safety requirements on AI companies. The San Francisco-based startup argues that the legislation would stifle innovation and that AI regulation should be managed at the federal level rather than by individual states. OpenAI also expressed concern that the bill could harm US competitiveness in AI and national security.
The bill, introduced by state Senator Scott Wiener, seeks to establish safety standards for companies developing large AI models, aiming to prevent their misuse for harmful purposes such as creating bioweapons or causing significant financial damage. The legislation has faced strong opposition from tech companies, which claim it could drive AI businesses out of California and hinder technological progress.
Despite amendments made by Wiener to address some of the industry’s concerns, including removing criminal liability for non-compliance and protecting smaller developers, major tech players like OpenAI remain opposed. OpenAI argues that the bill’s provisions could lead to a talent drain from California and disrupt the state’s leadership in AI innovation.
Wiener defended the bill, stating that it only requires companies to do what they have already committed to on safety. He dismissed concerns about a talent exodus, noting that the law would apply to any company operating in California, regardless of where it is based.
The bill will be voted on in the California State Assembly this month. If it passes, it will go to Governor Gavin Newsom, who has yet to express a clear stance on whether he will sign it into law, though he has spoken about balancing AI innovation with safety concerns.
OpenAI has intensified its efforts to prevent the misuse of AI, especially in light of the numerous elections scheduled for 2024. The company recently identified and deactivated a cluster of ChatGPT accounts linked to an Iranian covert influence operation named Storm-2035. The operation aimed to manipulate public opinion during the US presidential election using AI-generated content on social media and websites but failed to gain significant engagement or reach a broad audience.
According to recent Reuters reporting:
The US has accused Iran of launching cyber and influence operations aimed at the campaigns of US presidential candidates and sowing political discord among the American public. A joint statement from the FBI, the Office of the Director of National Intelligence, and the Cybersecurity and Infrastructure Security Agency highlighted increasingly aggressive Iranian activity during the election cycle. The statement follows earlier allegations from Donald Trump’s campaign regarding an Iranian hack on one of its websites. Iran has denied the accusations, describing them as ‘unsubstantiated and devoid of any standing.’ The US intelligence community remains confident in its assessment, citing attempts to access individuals within the presidential campaigns and activities intended to influence the election process.
The operation generated articles and social media comments on various topics, including US politics, global events, and the conflict in Gaza. The content was published on websites posing as news outlets and shared on platforms like X and Instagram. Despite these efforts, the operation saw minimal interaction, with most posts receiving little to no attention.
OpenAI’s investigation into this operation was bolstered by information from Microsoft, and it revealed that the influence campaign was largely ineffective, scoring low on a scale assessing the impact of covert operations. The company remains vigilant against such threats and has shared its findings with government and industry stakeholders.
OpenAI is committed to collaborating with industry, civil society, and government to counter these influence operations. The company emphasises the importance of transparency and continues to monitor and disrupt any attempts to exploit its AI technologies for manipulative purposes.
The potential impact of OpenAI’s realistic voice feature on human interactions has raised concerns, with the company warning that people might form emotional bonds with AI at the expense of real-life relationships. The company noted that users of its GPT-4o model have shown signs of anthropomorphising the AI, attributing human-like qualities to it, which could lead to misplaced trust and dependency. OpenAI’s report highlighted that high-quality voice interaction might exacerbate these issues, raising questions about the long-term effects on social norms.
The company observed that some testers of the AI voice feature interacted with it in ways that suggested an emotional connection, such as expressing sadness over the end of their session. While these behaviours might seem harmless, OpenAI emphasised the need to study their potential evolution over time. The report also suggested that reliance on AI for social interaction could diminish users’ abilities or willingness to engage in human relationships, altering how people interact with one another.
Concerns were also raised about the AI’s ability to recall details and handle tasks, which might lead to over-reliance on the technology. OpenAI further noted that its AI models, designed to be deferential in conversations, might inadvertently promote anti-social norms when users become accustomed to behaviours, such as interrupting, that are inappropriate in human interactions. The company pledged to continue testing how these voice capabilities could affect emotional attachment and social behaviour.
The issue gained attention following a controversy in June when OpenAI was criticised for allegedly using a voice similar to actress Scarlett Johansson’s in its chatbot. Although the company denied the voice belonged to Johansson, the incident underscored the risks associated with voice-cloning technology. As AI models continue to advance toward human-like reasoning, experts are increasingly urging a pause to consider the broader implications for human relationships and societal norms.
OpenAI’s chief strategy officer, Jason Kwon, has expressed confidence that humans will continue to control AI, downplaying concerns about the technology developing unchecked. Speaking at a forum in Seoul, Kwon emphasised that the core of safety lies in ensuring human oversight. As AI systems grow more advanced, Kwon believes they will become easier to manage, countering fears that they could become uncontrollable.
The company is actively working on creating a framework that allows AI systems to reflect the cultural values of different countries. Kwon highlighted the importance of making models adaptable to local contexts, ensuring that users in various regions feel the technology is designed with them in mind. An approach like this aims to foster a sense of ownership and relevance across diverse cultures.
Despite some scepticism surrounding the future of AI, Kwon remains optimistic about its trajectory. He compared its potential growth to that of the internet, which has become an indispensable tool globally. While acknowledging that AI is still in its early stages, he pointed out that adoption rates are gradually increasing, with significant room for growth.
Kwon noted that in South Korea, a country with over 50 million people, only 1 million are daily active users of ChatGPT. Even in the US, fewer than 20 per cent of the population has tried the tool. Kwon’s remarks suggest that AI’s journey is just beginning, with significant expansion expected in the coming years.
One of the largest AI research organizations has appointed Zico Kolter, a distinguished professor and director of the machine learning department at Carnegie Mellon University, to its board of directors. Renowned for his focus on AI safety, Kolter will also join the company’s safety and security committee, which is tasked with overseeing the safe deployment of OpenAI’s projects. The appointment comes as OpenAI’s board undergoes changes in response to growing concerns about the safety of generative AI, which has seen rapid adoption across various sectors.
Following the departure of co-founder John Schulman, Kolter’s addition to the OpenAI board underscores a commitment to addressing these safety concerns. He brings a wealth of experience from his roles as the chief expert at Bosch and chief technical adviser at Gray Swan, a startup dedicated to AI safety. Notably, Kolter has contributed to developing methods that automatically assess the safety of large language models, a crucial area as AI systems become increasingly sophisticated. His expertise will be invaluable in guiding OpenAI as it navigates the challenges posed by the widespread use of generative AI technologies such as ChatGPT.
The safety and security committee, formed in May shortly after Ilya Sutskever’s departure and including Kolter alongside CEO Sam Altman and other directors, underlines OpenAI’s proactive approach to ensuring AI is developed and deployed responsibly. The committee is responsible for making recommendations on safety decisions across all of OpenAI’s projects, reflecting the company’s recognition of the potential risks associated with AI advancements.
In a related move, Microsoft relinquished its board observer seat at OpenAI in July, aiming to address antitrust concerns from regulators in the United States and the United Kingdom. This decision was seen as a step towards maintaining a balance of power within OpenAI, as the company continues to play a leading role in the rapidly evolving AI landscape.
Around seven years ago, Intel had the opportunity to invest in OpenAI, a nascent research organisation focused on generative artificial intelligence. Discussions between Intel and OpenAI spanned several months in 2017 and 2018, considering options like Intel acquiring a 15% stake for $1 billion. However, Intel decided against the deal, partly due to then-CEO Bob Swan’s scepticism about the commercial viability of generative AI models.
OpenAI, seeking to reduce its reliance on Nvidia’s chips, saw value in an investment from Intel. Yet, the deal fell through due to Intel’s reluctance to produce hardware at cost for the startup. The missed opportunity remained undisclosed until now, with OpenAI later becoming a major player in AI, launching the groundbreaking ChatGPT in 2022 and achieving a reported valuation of $80 billion.
Intel’s decision not to invest is part of a broader struggle to maintain relevance in the AI age. Once a leader in computer chips, Intel has been outpaced by competitors like Nvidia and AMD. Nvidia’s shift from gaming to AI chips has left Intel struggling to produce a competitive AI product, contributing to a sharp decline in its market value.
Despite its challenges, Intel continues to push forward with new AI chip developments, including the upcoming third-generation Gaudi AI chip and the next-generation Falcon Shores chip. CEO Pat Gelsinger remains optimistic about capturing a greater share of the AI market, but Intel’s journey serves as a cautionary tale of missed opportunities in a rapidly evolving industry.
OpenAI is developing Project Strawberry to improve its AI models’ ability to handle long-horizon tasks, which involve planning and executing complex actions over extended periods. Sam Altman, OpenAI’s chief, hinted at this project in a cryptic social media post, sharing an image of strawberries with the caption, ‘I love summer in the garden.’ That led to speculation about the project’s potential impact on AI capabilities.
Project Strawberry, also known as Q*, aims to significantly enhance the reasoning abilities of OpenAI’s AI models. According to a recent Reuters report, some at OpenAI believe Q* could be a breakthrough in the pursuit of artificial general intelligence (AGI). The project involves innovative approaches that allow AI models to plan ahead and navigate the internet autonomously, addressing common sense issues and logical fallacies that often result in inaccurate outputs.
OpenAI has announced DevDay 2024, a global developer event series with stops in San Francisco, London, and Singapore. The focus will be on advancements in the API and developer tools, though there is speculation that OpenAI might preview its next frontier model. Recent developments in the LMSYS Chatbot Arena, where a new model showed strong performance in math, suggest significant progress in AI capabilities.
Internal documents reveal that Project Strawberry includes a “deep-research” dataset for training and evaluating the models, although the contents remain undisclosed. The innovation is expected to enable AI to conduct research autonomously, using a computer-using agent to act based on its findings. OpenAI plans to test Strawberry’s capabilities in performing tasks typically done by software and machine learning engineers, highlighting its potential to revolutionise AI applications.
John Schulman, co-founder of OpenAI, has departed the company for rival Anthropic. Schulman announced his decision on social media, citing a desire to focus more on AI alignment and return to hands-on technical work.
OpenAI is undergoing significant personnel shifts. Greg Brockman, another co-founder and President, is taking a sabbatical until the end of the year. Meanwhile, product manager Peter Deng has also left the firm.
Earlier this year, other key figures exited OpenAI. Chief scientist Ilya Sutskever departed in May, and founding member Andrej Karpathy left in February to start an AI-integrated education platform. AI safety leader Aleksander Madry was reassigned to a different role in July.
These changes come amid renewed legal challenges from Elon Musk, another OpenAI co-founder. Musk, who left OpenAI three years after its inception, has revived a lawsuit against the company, accusing it of prioritising profits over the public good.