OpenAI, the company behind the popular AI chatbot ChatGPT, has announced that its newly established Safety and Security Committee will now operate independently to oversee the development and deployment of its AI models. This decision follows the committee’s recent recommendations, which were released publicly for the first time. Formed in May, the committee aims to enhance and refine OpenAI’s safety practices amid growing concerns about AI’s ethical use and potential biases.
The committee will be led by Zico Kolter, a professor at Carnegie Mellon University and a member of OpenAI’s board. Under its guidance, OpenAI plans to establish an ‘Information Sharing and Analysis Center’ (ISAC) to facilitate cybersecurity information exchange within the AI industry. Additionally, the company is focusing on improving internal security measures and increasing transparency regarding the capabilities and risks associated with its AI technologies.
In a related development, OpenAI has also partnered with the US government to research and evaluate its AI models further. This move underscores the company’s commitment to addressing both the opportunities and challenges posed by AI as it continues to evolve.
Sam Altman, known for his leadership at OpenAI, has another ambitious project called Worldcoin, which seeks to address the potential fallout from AGI. He envisions AGI reshaping the global economy, and Worldcoin aims to build a framework to identify humans online and eventually offer universal basic income through its cryptocurrency.
Worldcoin’s plan involves the use of biometric data, specifically scans of people’s irises, to create digital IDs. These unique identifiers are meant to ensure that only humans can participate in online spaces, keeping bots out. While the technology may seem dystopian, the project insists that personal data is kept safe and encrypted, with iris images deleted immediately after processing.
Despite concerns, Worldcoin has garnered substantial interest, including backing from major investors. CEO Alex Blania acknowledges the need to communicate the project’s vision clearly, especially as it faces regulatory challenges in various countries. Collaboration with governments is essential to ensure smooth deployment of the technology.
With AGI on the horizon, projects like Worldcoin are positioning themselves to shape the future. Altman believes that once AGI becomes widespread, the digital identity and financial framework offered by Worldcoin could play a vital role in adapting to this new reality.
A new task force has been launched by the White House to address the growing demands of AI infrastructure. Led by the National Economic Council and the National Security Council, the group aims to balance AI development with national security, economic, and environmental goals. Senior US officials and executives from major technology companies, including OpenAI and Google, took part in the meeting on Thursday.
The focus of the discussion was on the power requirements for advanced AI systems. Leaders explored how to meet clean energy targets and infrastructure needs, particularly in the face of increasing demand from data centres. AI has raised both hopes for efficiency gains and concerns over potential misuse, with its energy consumption being a significant challenge.
The Biden administration is pushing tech firms to invest in eco-friendly power solutions. The AI industry’s energy needs could complicate the government’s ambition to decarbonise the power grid by 2035. Senior officials from major agencies, including Energy Secretary Jennifer Granholm, joined the conversation on tackling these issues.
AI infrastructure plays a crucial role in the future of the US economy, according to OpenAI. The company emphasised the importance of expanding data centres domestically, not only to support industrial growth but also to ensure that AI’s benefits reach all corners of society.
Oprah Winfrey aired a special titled ‘AI and the Future of Us,’ featuring guests like OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and FBI director Christopher Wray. The discussion was largely focused on the potential risks and ethical concerns surrounding AI. Winfrey highlighted the need for humanity to adapt to AI’s rapid development, while Altman emphasised the importance of safety regulations.
Altman defended AI’s learning capabilities but acknowledged the need for government involvement in safety testing. However, his company has opposed California’s AI safety bill, which experts believe would provide essential safeguards. He also discussed the dangers of deepfakes and urged caution as AI technology advances.
Wray pointed out AI’s role in rising cybercrimes like sextortion and disinformation. He warned of its potential to be exploited for election interference, urging the public to remain vigilant in the face of increasing AI-generated content.
For balance, Bill Gates expressed optimism about AI’s positive impact on education and healthcare. He envisioned AI improving medical transcription and classroom learning, though concerns about bias and misuse remain.
OpenAI’s latest model, o1, a name signalling a reset of the version counter to 1, together with its less costly mini version, represents a watershed moment in the company’s LLM line-up. Designed to replicate superhuman-level intelligence, the models can already answer questions far faster than humans. The series is unlike its predecessors: in responding to queries, the models use human-like ‘chain of thought’ processing, combined with reinforcement learning on specialised datasets and optimisation algorithms.
The model outperforms older models by a significant margin. For example, on a qualifying exam for the International Mathematics Olympiad, it scored 83 percent to GPT-4o’s 13 percent. What is unique about the model is its ability not only to provide step-by-step reasoning for its outputs but also to show human-like patterns of hesitation along the way: ‘I’m curious about…’, ‘Ok, let me see’, or ‘Oh, I’m running out of time, let me get to an answer quickly’. The new design has also reduced the occurrence of hallucinations. Yet, despite their many strengths, the models have limitations: they cannot browse the internet, lack broad world knowledge, and cannot process files or images.
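For developers, the o1 models are exposed through the same chat-completions API as earlier GPT models. Below is a minimal sketch using OpenAI’s official Python SDK; the prompt and environment setup are illustrative rather than taken from the announcement.

```python
# Minimal sketch: querying o1-preview via OpenAI's official Python SDK
# (pip install openai). Assumes the OPENAI_API_KEY environment variable
# is set; the prompt is illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "How many primes are below 100?"}],
)

# The model reasons internally before answering, so replies arrive more
# slowly than with earlier models but include the worked-out answer.
print(response.choices[0].message.content)
print(response.usage)  # token counts, which drive the pricing discussed below
```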
According to the lead researcher on the project, Jerry Tworek, the next step is for the models to perform on a par with PhD students on challenging benchmark tasks in areas such as physics, chemistry and biology. He stresses that the intention is not to equate AI with human thinking, but rather to illustrate the model’s ability to dive cognitively deep. For the company, reasoning is a step up from pattern recognition, the design paradigm of previous versions. Ultimately, OpenAI aims to develop a product that can make decisions and take action on behalf of humans, a venture estimated to cost a further US$150 billion. Removing the current kinks in the system will mean the models can work on the complex global problems we face today in areas such as engineering and medicine.
More breakthroughs should also mean reduced access costs for developers and users. According to Chief Research Officer Bob McGrew, developer access to o1-preview currently costs $15 per 1 million input tokens (chunks of text parsed by the model) and $60 per 1 million output tokens; GPT-4o costs $5 per 1 million input tokens and $15 per 1 million output tokens.
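As a back-of-the-envelope illustration of those rates, the sketch below prices a single request at the quoted per-token costs; the token counts are invented for the example, not taken from OpenAI.

```python
# Cost comparison at the per-million-token rates quoted above.
PRICES = {                    # USD per 1 million tokens: (input, output)
    "o1-preview": (15.00, 60.00),
    "gpt-4o":     (5.00, 15.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a 2,000-token prompt that yields a 1,000-token answer.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 1_000):.4f}")
# o1-preview: $0.0900 vs gpt-4o: $0.0250 -- reasoning carries a premium
```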
OpenAI is reportedly in talks to secure $6.5 billion in funding, aiming for a $150 billion valuation. Such a move would significantly boost its position among the world’s top startups, following an earlier $86 billion valuation this year.
Led by CEO Sam Altman and backed by Microsoft, OpenAI has risen rapidly on the success of its ChatGPT chatbot. The firm has revived Silicon Valley’s interest in AI, further solidifying its position. Alongside the equity round, the company may take on $5 billion in debt from banks through a revolving credit facility.
The capital injection will help OpenAI remain a private company, avoiding the regulatory challenges and stock market volatility that often come with public listings. Many high-profile startups are choosing to stay private for longer, bolstered by private equity funding.
Some investors, however, may push for liquidity through a public offering or company sale. Meanwhile, OpenAI has been added to Forge Global’s prestigious list of “Private Magnificent Seven” startups, further highlighting its dominance in the AI sector.
OpenAI is set to launch its new AI model, ‘Strawberry’, within the next two weeks as part of its ChatGPT service. Designed to focus on reasoning rather than instant responses, the model could offer a more thoughtful conversational experience.
Led by Sam Altman, OpenAI has generated strong interest and investment in AI technology. Businesses are increasingly turning to artificial intelligence to enhance their products, with OpenAI reporting over one million paying users across its services.
OpenAI announced on Thursday that it now has over 1 million paying users across its ChatGPT business products, including Enterprise, Team, and Edu. The increase from 600,000 users in April highlights CEO Sam Altman’s success in driving enterprise adoption of the AI tool.
Recent reports suggest OpenAI executives are discussing premium subscriptions for upcoming large language models, such as the reasoning-focused Strawberry and a new flagship model called Orion. Subscription prices could reach as high as $2,000 per month for these advanced AI tools.
ChatGPT Plus currently costs $20 per month, while the free tier continues to be used by hundreds of millions every month. OpenAI is also working on Strawberry to enable its AI models to perform deep research, refining them after their initial training.
The discussion around premium pricing follows news that Apple and Nvidia are in talks to invest in OpenAI, with the AI company expected to be valued at over $100 billion. ChatGPT currently has more than 200 million weekly active users, doubling its user base since last autumn.
Ilya Sutskever, OpenAI’s former chief scientist, has launched a new company called Safe Superintelligence (SSI) to develop safe AI systems that significantly surpass human intelligence. In an interview, Sutskever explained that SSI aims to take a different approach to AI scaling compared to OpenAI, emphasising the need for safety in superintelligent systems. He believes that once superintelligence is achieved, it will transform our understanding of AI and introduce new challenges for ensuring its safe use.
Sutskever acknowledged that defining what constitutes ‘safe’ AI is still a work in progress, requiring significant research to address the complexities involved. He also highlighted that as AI becomes more powerful, safety concerns will intensify, making it essential to test and evaluate AI systems rigorously. While the company does not plan to open-source all of its work, there may be opportunities to share parts of its research related to superintelligence safety.
SSI aims to contribute to the broader AI community’s safety efforts, which Sutskever views positively. He believes that as AI companies progress, they will realise the gravity of the safety challenges they face and that SSI can make a valuable contribution to this ongoing conversation.
OpenAI recently unveiled the Model Spec, a comprehensive framework designed to guide the behaviour of its GPT models in the OpenAI API and ChatGPT. The document is a crucial resource for researchers and data labellers involved in reinforcement learning from human feedback (RLHF), ensuring that models align with user intent and adhere to ethical standards.
The Model Spec is organised into three main components: Objectives, which provide broad directional goals; Rules, which establish specific instructions to prevent harmful outcomes and maintain legality; and Defaults, which offer basic style guidance and allow user flexibility while ensuring consistency.
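As a purely hypothetical illustration of that layering, the sketch below shows Rules acting as hard constraints above user-overridable Defaults. The component names come from the Spec, but the contents and merge logic are invented and do not reflect OpenAI’s implementation.

```python
# Hypothetical sketch of the Model Spec's layering: Rules always apply,
# while Defaults yield to explicit user preferences. All concrete
# values here are invented for illustration.
RULES = ["comply with applicable laws", "refuse information hazards"]
DEFAULTS = {"tone": "helpful and objective", "format": "concise prose"}

def resolve_behaviour(user_prefs: dict) -> dict:
    """Merge user preferences over Defaults; Rules are never overridden."""
    behaviour = {**DEFAULTS, **user_prefs}       # users may adjust defaults...
    behaviour["hard_constraints"] = list(RULES)  # ...but never the rules
    return behaviour

print(resolve_behaviour({"format": "bullet points"}))
```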
The initiative serves multiple important purposes. It provides a framework for businesses to implement ethical AI, improve customer service quality, navigate regulations, and gain a competitive advantage through reliable AI systems. The Spec also addresses common issues by preventing users from prompting the model to ignore instructions and providing guidance on how models should refuse tasks.
OpenAI’s Model Spec represents a significant advancement in AI models’ fine-tuning and ethical alignment. As a living document, it will evolve based on community feedback and practical applications, contributing to the broader discourse on responsible AI development and public engagement in determining model behaviour.