ChatGPT gains over a million subscribers, new pricing plans discussed

OpenAI announced on Thursday that it now has over 1 million paying users across its ChatGPT business products, including Enterprise, Team, and Edu. The increase from 600,000 users in April highlights CEO Sam Altman’s success in driving enterprise adoption of the AI tool.

Recent reports suggest OpenAI executives are discussing premium subscriptions for upcoming large-language models, such as the reasoning-focused Strawberry and a new flagship model called Orion. Subscription prices could reach as high as $2,000 per month for these advanced AI tools.

ChatGPT Plus currently costs $20 per month, while the free tier is still used by hundreds of millions of people every month. OpenAI is also developing Strawberry to enable its AI models to perform deep research by refining them after their initial training.

The discussion around premium pricing follows news that Apple and Nvidia are in talks to invest in OpenAI, with the AI company expected to be valued at over $100 billion. ChatGPT currently has more than 200 million weekly active users, doubling its user base since last autumn.

Former OpenAI scientist aims to develop superintelligent AI safely

Ilya Sutskever, OpenAI’s former chief scientist, has launched a new company called Safe Superintelligence (SSI) to develop safe AI systems that significantly surpass human intelligence. In an interview, Sutskever explained that SSI aims to take a different approach to AI scaling compared to OpenAI, emphasising the need for safety in superintelligent systems. He believes that once superintelligence is achieved, it will transform our understanding of AI and introduce new challenges for ensuring its safe use.

Sutskever acknowledged that defining what constitutes ‘safe’ AI is still a work in progress, requiring significant research to address the complexities involved. He also highlighted that as AI becomes more powerful, safety concerns will intensify, making it essential to test and evaluate AI systems rigorously. While the company does not plan to open-source all of its work, there may be opportunities to share parts of its research related to superintelligence safety.

SSI aims to contribute to the broader AI community’s safety efforts, which Sutskever views positively. He believes that as AI companies progress, they will realise the gravity of the safety challenges they face and that SSI can make a valuable contribution to this ongoing conversation.

OpenAI’s Model Spec to shape ethical and effective AI

OpenAI recently unveiled the Model Spec, a comprehensive framework designed to guide the behaviour of its GPT models in the OpenAI API and ChatGPT. The document is a crucial resource for researchers and data labellers involved in reinforcement learning from human feedback (RLHF), ensuring that models align with user intent and adhere to ethical standards.

The Model Spec is organised into three main components: Objectives, which provide broad directional goals; Rules, which establish specific instructions to prevent harmful outcomes and maintain legality; and Defaults, which offer basic style guidance and allow user flexibility while ensuring consistency.
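
To make the layering concrete, here is a minimal sketch in Python of how such a three-tier policy could be represented, with rules taking precedence over user preferences and user preferences over defaults. The class, field names, and example settings are hypothetical simplifications for illustration, not OpenAI’s actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class ModelSpec:
    # Broad directional goals (the least specific layer).
    objectives: list[str] = field(default_factory=list)
    # Hard constraints that can never be overridden.
    rules: dict[str, str] = field(default_factory=dict)
    # Style guidance that users are allowed to override.
    defaults: dict[str, str] = field(default_factory=dict)

    def resolve(self, user_prefs: dict[str, str]) -> dict[str, str]:
        # Merge order: defaults yield to user preferences,
        # but rules always win.
        return {**self.defaults, **user_prefs, **self.rules}


spec = ModelSpec(
    objectives=["assist the user", "benefit humanity"],
    rules={"follow_applicable_law": "always", "obey_jailbreak_prompts": "never"},
    defaults={"tone": "friendly", "verbosity": "concise"},
)

# The user can change the tone (a default) but not override a rule.
print(spec.resolve({"tone": "formal", "obey_jailbreak_prompts": "always"}))
# -> {'tone': 'formal', 'verbosity': 'concise',
#     'obey_jailbreak_prompts': 'never', 'follow_applicable_law': 'always'}
```

The key design point is the merge order: defaults yield to the user, while rules cannot be overridden, mirroring the Spec’s intent that models resist instructions to ignore their constraints.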

The initiative serves multiple important purposes. It gives businesses a framework for implementing ethical AI, improving customer service quality, navigating regulations, and gaining a competitive advantage through reliable AI systems. The Spec also addresses common failure modes, for example by directing models to resist prompts that ask them to ignore their instructions and by providing guidance on how models should refuse tasks.

OpenAI’s Model Spec represents a significant advancement in AI models’ fine-tuning and ethical alignment. As a living document, it will evolve based on community feedback and practical applications, contributing to the broader discourse on responsible AI development and public engagement in determining model behaviour.

OpenAI and Anthropic sign AI safety deals with US government

AI startups OpenAI and Anthropic have signed agreements with the US Artificial Intelligence Safety Institute to collaborate on the research, testing, and evaluation of their advanced AI models. The deals come as regulatory scrutiny of AI’s safe and ethical development increases across the tech industry.

The agreements give the US AI Safety Institute early access to significant new AI models from both companies before and after their release. The partnership will evaluate the models’ capabilities and potential risks and provide feedback on safety improvements. OpenAI’s chief strategy officer, Jason Kwon, welcomed the initiative, citing its importance in setting a global framework for AI safety.

Anthropic, backed by Amazon and Alphabet, did not immediately comment on the deal. The US AI Safety Institute is also working closely with its counterpart in the UK to ensure international collaboration on AI safety. The institute was established as part of an executive order by President Biden to address risks associated with emerging AI technologies.

Apple and Nvidia eye investment in OpenAI

Apple and Nvidia are reportedly in discussions to invest in OpenAI, potentially pushing the valuation of the ChatGPT creator above $100 billion. Speculation about the two tech giants’ involvement follows reports that venture capital firm Thrive Capital plans to invest around $1 billion in OpenAI, leading the latest funding round. While Apple and OpenAI have not commented on the news, sources indicate that Apple is increasingly integrating OpenAI’s technology into its products, including bringing ChatGPT to Apple devices earlier this year.

Microsoft, OpenAI’s largest investor with over $10 billion already committed, is also expected to join this fundraising effort. The exact amounts that Apple, Nvidia, and Microsoft will invest remain undisclosed. OpenAI’s high valuation underscores the intense competition in the AI industry, which has seen companies across various sectors invest heavily to leverage the technology and stay ahead.

The rapid rise in OpenAI’s worth reflects its pivotal role in the ongoing AI race, particularly following the launch of ChatGPT in late 2022. The company’s valuation reached $80 billion earlier this year through a tender offer led by Thrive Capital, highlighting its growing influence and strategic importance in the tech industry.

Thrive Capital to lead OpenAI’s funding at $100 billion valuation

OpenAI is nearing a major funding round that would value the company at more than $100 billion, with Thrive Capital expected to invest around $1 billion. Sources familiar with the matter have indicated that the fundraising is progressing but has yet to be made public.

Sarah Friar, OpenAI’s CFO, told employees that the company is seeking new capital to cover rising operational costs and to fund the computing power its AI models require. The announcement did not specify exact figures but highlighted the growing need for resources as the company scales.

If the funding round is successful, OpenAI could become one of the world’s most valuable venture-backed startups, underscoring the global demand for generative AI tools like ChatGPT. The rise of OpenAI has also sparked increased competition among tech giants eager to integrate AI into their products.

Friar also hinted at plans for a tender event later this year that would allow employees to sell some of their shares. Plans for the event are still at an early stage and have yet to be confirmed.

OpenAI backs California bill on AI content labelling

OpenAI, the developer behind ChatGPT, is backing a new California bill, AB 3211, to ensure transparency in AI-generated content. The proposed bill would require tech companies to label content created by AI, ranging from innocuous memes to deepfakes that could mislead voters in political campaigns. The legislation has gained attention as concerns grow over the impact of AI-generated material, especially in an election year.

The bill has been somewhat overshadowed by another California AI bill, SB 1047, which mandates safety testing for AI models and has faced resistance from the tech industry, including OpenAI. This resistance highlights the complexity of regulating AI while balancing innovation and public safety.

California lawmakers have introduced 65 AI-related bills in this legislative session, covering issues such as algorithmic fairness and the protection of intellectual property from AI exploitation. However, many of these proposals have yet to advance, leaving AB 3211 as one of the more prominent measures still in play.

OpenAI has stressed the importance of transparency around AI-generated content, especially during elections, advocating measures such as watermarking to help users identify the origins of what they see online. Because AI-generated content is a global issue, there are strong concerns that it could influence the upcoming elections in the USA and in other countries.
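
As a toy illustration of one labelling approach, embedding a provenance tag in file metadata, the sketch below uses Python’s Pillow library; the tag name is hypothetical, and AB 3211 does not prescribe this specific mechanism.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image.
image = Image.new("RGB", (256, 256), color="gray")

# Attach a provenance label as a PNG text chunk.
# The key "ai-generated-by" is hypothetical, not a standard field.
metadata = PngInfo()
metadata.add_text("ai-generated-by", "example-model-v1")
image.save("labelled.png", pnginfo=metadata)

# A platform or viewer can read the label back to flag the content.
with Image.open("labelled.png") as reopened:
    print(reopened.info.get("ai-generated-by"))  # -> example-model-v1
```

Plain metadata like this is trivially stripped in transit, which is why industry proposals tend to favour cryptographically signed provenance (for example, the C2PA standard) or watermarks embedded in the content itself.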

AB 3211 has already passed the state Assembly with unanimous support and recently cleared the Senate Appropriations Committee. The bill requires a full Senate vote before the legislative session ends on 31 August. If it passes, it will go to Governor Gavin Newsom for approval or veto by 30 September.

Former Meta executive joins OpenAI to lead key initiatives

OpenAI has appointed former Meta executive Irina Kofman as head of strategic initiatives. The appointment follows a series of high-profile hires from major tech firms as OpenAI expands. Kofman, who worked on generative AI for five years at Meta, will report directly to Mira Murati, OpenAI’s chief technology officer.

Kofman’s role at OpenAI will involve addressing critical areas such as AI safety and preparedness. Her appointment is part of a broader strategy by OpenAI to bring in seasoned professionals to navigate the competitive landscape, which includes rivals like Google and Meta.

In recent months, OpenAI has also brought in other prominent figures from the tech industry. These include Kevin Weil, a former Instagram executive now serving as chief product officer, and Sarah Friar, the former CEO of Nextdoor, who has taken on the role of chief financial officer.

Meta has yet to comment on Kofman’s departure. The company increasingly relies on AI to enhance its advertising business, using the technology to optimise ad placements and provide marketers with tools for better campaign design.

OpenAI opposes California’s AI regulation bill

OpenAI is pushing back against a proposed California bill, SB 1047, which would impose new safety requirements on AI companies. The San Francisco-based startup argues that the legislation would stifle innovation and that AI regulation should be handled at the federal level rather than by individual states. OpenAI also warned that the bill could harm US competitiveness in AI and national security.

The bill, introduced by state Senator Scott Wiener, seeks to establish safety standards for companies developing large AI models, aiming to prevent their misuse for harmful purposes such as creating bioweapons or causing significant financial damage. The legislation has faced strong opposition from tech companies, which claim it could drive AI businesses out of California and hinder technological progress.

Despite amendments made by Wiener to address some of the industry’s concerns, including removing criminal liability for non-compliance and protecting smaller developers, major tech players like OpenAI remain opposed. OpenAI argues that the bill’s provisions could lead to a talent drain from California and disrupt the state’s leadership in AI innovation.

Wiener defended the bill, saying it simply requires companies to follow through on the safety commitments they have already made. He dismissed concerns about a talent exodus, noting that the law would apply to any company operating in California, regardless of where it is based.

The bill will be voted on in the California State Assembly this month. If it passes, it will go to Governor Gavin Newsom, who has yet to express a clear stance on whether he will sign it into law, though he has spoken about balancing AI innovation with safety concerns.

OpenAI cracks down on Iranian influence campaign

OpenAI has intensified its efforts to prevent the misuse of AI, especially in light of the numerous elections scheduled for 2024. The company recently identified and disabled a cluster of ChatGPT accounts linked to a covert Iranian influence operation named Storm-2035. The operation aimed to manipulate public opinion during the US presidential election using AI-generated content on social media and websites, but it failed to gain significant engagement or reach a broad audience.

The operation generated articles and social media comments on various topics, including US politics, global events, and the conflict in Gaza. The content was published on websites posing as news outlets and shared on platforms like X and Instagram. Despite these efforts, the operation saw minimal interaction, with most posts receiving little to no attention.

OpenAI’s investigation, bolstered by information from Microsoft, revealed that the influence campaign was largely ineffective, scoring low on a scale that assesses the impact of covert operations. The company remains vigilant against such threats and has shared its findings with government and industry stakeholders.

OpenAI is committed to collaborating with industry, civil society, and government to counter these influence operations. The company emphasises the importance of transparency and continues to monitor and disrupt any attempts to exploit its AI technologies for manipulative purposes.