OpenAI is expanding its Advanced Voice Mode (AVM) to more ChatGPT users, beginning with those subscribed to the Plus and Teams plans, while Enterprise and education customers will gain access next week. The updated AVM includes a redesigned interface, now featuring a blue animated sphere, and introduces five new voices: Arbor, Maple, Sol, Spruce, and Vale. These additions bring the total voice options to nine, replacing ‘Sky,’ which was removed after legal issues arose over its similarity to actress Scarlett Johansson’s voice.
The AVM update also includes improvements like better accent recognition and smoother conversations. OpenAI has incorporated customisation options, including Custom Instructions, which allow users to personalise ChatGPT’s responses, and Memory, which enables ChatGPT to recall past conversations. However, previously showcased features such as video and screen-sharing remain unavailable, with no confirmed timeline for their release.
Despite the updates, AVM is not yet available in certain regions, including the EU, the UK, and several others. OpenAI is actively refining the feature based on early user feedback, working to resolve glitches and improve overall performance for a smoother experience.
Generative AI is significantly more energy-intensive than traditional search engines, according to researcher Sasha Luccioni, who has raised concerns about the environmental impact of the technology. Generating new information requires vast computing power and energy, particularly for models like ChatGPT, which rely on extensive data training.
The AI and cryptocurrency sectors consumed nearly 460 terawatt hours of electricity in 2022, around two percent of global production, according to the International Energy Agency. Luccioni, a leading expert on AI’s climate impact, has developed tools to quantify the carbon footprint of AI technologies, helping developers make informed decisions.
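As a rough sanity check on that figure, the share can be computed directly; the global production total below is an assumed round number for 2022, not taken from the IEA quote:

```python
# Rough sanity check: share of global electricity used by AI and crypto in 2022.
ai_crypto_twh = 460            # figure reported by the IEA
global_production_twh = 29_000  # assumed approximate global electricity production, 2022

share = ai_crypto_twh / global_production_twh
print(f"{share:.1%}")  # on the order of the "around two percent" reported
```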
Efforts to mitigate the environmental consequences of AI are underway. Luccioni is working on a certification system to rate the energy efficiency of AI models, aiming to encourage more sustainable practices. Transparency from tech giants like Google and OpenAI is essential, as their greenhouse gas emissions have surged due to AI development.
The solution, Luccioni argues, lies in a combination of government legislation, increased transparency, and better public understanding of AI’s limitations and environmental costs. She advocates for ‘energy sobriety’ by using AI tools more judiciously and making environmentally conscious decisions.
A group of technology experts has launched a global call for ‘Humanity’s Last Exam’, aiming to push AI systems to their limits by posing the most difficult questions possible. The Center for AI Safety (CAIS) and Scale AI are leading an initiative to establish when AI achieves expert-level capabilities. Current benchmark tests have become too easy for many AI models, so this effort aims to create a new exam that emphasises abstract reasoning, an area in which AI still faces challenges. The organisers hope this new exam will remain relevant as AI technology evolves.
The demand for more rigorous tests comes after OpenAI released its newest model, OpenAI o1, which has shown strong performance in traditional reasoning benchmarks. Dan Hendrycks, executive director of CAIS, stated that AI systems such as Anthropic’s Claude had performed so well on standard tests that those benchmarks had become less valuable. However, AI has struggled with more intricate tasks like planning and visual pattern recognition, highlighting the necessity for more advanced assessments.
The exam will include over 1,000 crowd-sourced questions that are challenging even for non-experts. Its goal is to prevent AI from simply memorising answers by keeping some questions private. Participants have until 1 November to submit questions, and there will be rewards for the best contributions. While the exam is designed to test AI thoroughly, questions about weapons will be excluded to avoid potential risks.
The company behind the popular AI chatbot ChatGPT, OpenAI, has announced that its newly established Safety and Security Committee will now operate independently to oversee the development and deployment of its AI models. This decision follows the committee’s recent recommendations, which were released publicly for the first time. Formed in May, the committee’s goal is to enhance and refine OpenAI’s safety practices amid growing concerns about AI’s ethical use and potential biases.
The committee will be led by Zico Kolter, a professor at Carnegie Mellon University and a member of OpenAI’s board. Under its guidance, OpenAI plans to implement an ‘Information Sharing and Analysis Center’ to facilitate cybersecurity information exchange within the AI industry. Additionally, the company is focusing on improving internal security measures and increasing transparency regarding the capabilities and risks associated with its AI technologies.
In a related development, OpenAI has also partnered with the US government to research and evaluate its AI models further. This move underscores the company’s commitment to addressing both the opportunities and challenges posed by AI as it continues to evolve.
Sam Altman, known for his leadership at OpenAI, has another ambitious project called Worldcoin, which seeks to address the potential fallout from AGI. He envisions AGI reshaping the global economy, and Worldcoin aims to build a framework to identify humans online and eventually offer universal basic income through its cryptocurrency.
Worldcoin’s plan involves the use of biometric data, particularly scanning people’s irises, to create digital IDs. These unique identifiers ensure that only humans can participate in online activities, preventing bots from infiltrating online spaces. While this technology may seem dystopian, the project insists that personal data is kept safe and encrypted, with iris images deleted immediately after processing.
Despite concerns, Worldcoin has garnered substantial interest, including backing from major investors. CEO Alex Blania acknowledges the need to communicate the project’s vision clearly, especially as it faces regulatory challenges in various countries. Collaboration with governments is essential to ensure smooth deployment of the technology.
With AGI on the horizon, projects like Worldcoin are positioning themselves to shape the future. Altman believes that once AGI becomes widespread, the digital identity and financial framework offered by Worldcoin could play a vital role in adapting to this new reality.
A new task force has been launched by the White House to address the growing demands of AI infrastructure. Led by the National Economic Council and the National Security Council, the group aims to balance AI development with national security, economic, and environmental goals. Senior US officials and executives from major technology companies, including OpenAI and Google, took part in the meeting on Thursday.
The focus of the discussion was on the power requirements for advanced AI systems. Leaders explored how to meet clean energy targets and infrastructure needs, particularly in the face of increasing demand from data centres. AI has raised both hopes for efficiency gains and concerns over potential misuse, with its energy consumption being a significant challenge.
The Biden administration is pushing tech firms to invest in eco-friendly power solutions. The AI industry’s energy needs could complicate the government’s ambition to decarbonise the power grid by 2035. Representatives from major agencies, including Energy Secretary Jennifer Granholm, were part of the conversation on tackling these issues.
AI infrastructure plays a crucial role in the future of the US economy, according to OpenAI. The company emphasised the importance of expanding data centres domestically, not only to support industrial growth but also to ensure that AI’s benefits reach all corners of society.
Oprah Winfrey aired a special titled ‘AI and the Future of Us,’ featuring guests like OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and FBI director Christopher Wray. The discussion was largely focused on the potential risks and ethical concerns surrounding AI. Winfrey highlighted the need for humanity to adapt to AI’s rapid development, while Altman emphasised the importance of safety regulations.
Altman defended AI’s learning capabilities but acknowledged the need for government involvement in safety testing. However, his company has opposed California’s AI safety bill, which experts believe would provide essential safeguards. He also discussed the dangers of deepfakes and urged caution as AI technology advances.
Wray pointed out AI’s role in rising cybercrimes like sextortion and disinformation. He warned of its potential to be exploited for election interference, urging the public to remain vigilant in the face of increasing AI-generated content.
For balance, Bill Gates expressed optimism about AI’s positive impact on education and healthcare. He envisioned AI improving medical transcription and classroom learning, though concerns about bias and misuse remain.
OpenAI’s latest version of ChatGPT, OpenAI o1, a name signalling a reset of the version counter to 1, together with its less costly mini variant, represents a watershed moment in the company’s LLM line-up. Designed to approach superhuman-level reasoning, the models can already answer questions far faster than humans. This series of models is unlike previous ones: in responding to queries, they use a human-like ‘chain of thought’ process combined with reinforcement learning on specialised datasets and optimisation algorithms.
The model outperforms older models by a significant margin. For example, when tested against GPT-4o on a qualifying exam for the International Mathematical Olympiad, it scored 83 percent to GPT-4o’s 13 percent. What is unique about the model is its ability not only to provide step-by-step reasoning for its outputs but also to show human-like patterns of hesitation along the way, such as ‘I’m curious about…’, ‘Ok, let me see’ or ‘Oh, I’m running out of time, let me get to an answer quickly’. The new design has also reduced the occurrence of hallucinations. Yet, despite their many strengths, the models have limitations: they cannot browse the internet, lack up-to-date world knowledge, and cannot process files and images.
According to Jerry Tworek, the lead researcher on the project, the next step is for the models to perform on a par with PhD students on challenging benchmark tasks in areas such as physics, chemistry and biology. He stresses that the intention is not to equate AI with human thinking but to illustrate the models’ capacity for deep, structured reasoning. For the company, reasoning is a step up from the pattern recognition that underpinned previous versions. Ultimately, OpenAI aims to develop a product that can make decisions and take action on behalf of humans, a venture estimated to cost a further USD 150 billion. Ironing out the current kinks will mean the models can be applied to complex global problems in areas such as engineering and medicine.
More breakthroughs should also mean reduced access costs for developers and users. According to Chief Research Officer Bob McGrew, developer access to o1-preview currently costs $15 per 1 million input tokens (chunks of text parsed by the model) and $60 per 1 million output tokens. GPT-4o costs $5 per 1 million input tokens and $15 per 1 million output tokens.
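At those per-million-token rates, the cost of a call can be estimated from its token counts. A minimal sketch follows; the helper function and the example token counts are illustrative, not part of any official SDK:

```python
# Estimate API cost from per-million-token rates.
# Rates are in USD per 1 million tokens; the helper and example counts are illustrative.
def estimate_cost(input_tokens, output_tokens, input_rate, output_rate):
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# o1-preview: $15 per 1M input tokens, $60 per 1M output tokens
cost_o1 = estimate_cost(10_000, 2_000, 15, 60)
# GPT-4o: $5 per 1M input tokens, $15 per 1M output tokens
cost_4o = estimate_cost(10_000, 2_000, 5, 15)
print(f"o1-preview: ${cost_o1:.2f}, GPT-4o: ${cost_4o:.2f}")  # $0.27 vs $0.08
```

For the same hypothetical workload, the quoted rates make o1-preview a little over three times the price of GPT-4o.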
OpenAI is reportedly in talks to secure $6.5 billion in funding, aiming for a $150 billion valuation. Such a move would significantly boost its position among the world’s top startups, following an earlier $86 billion valuation this year.
Led by CEO Sam Altman and backed by Microsoft, OpenAI’s success with the ChatGPT chatbot has driven its rapid rise. The firm has revived Silicon Valley’s interest in AI, further solidifying its position. A significant portion of the new funding may come in the form of a revolving credit facility, adding $5 billion in debt from banks.
The capital injection will help OpenAI remain a private company, avoiding the regulatory challenges and stock market volatility that often come with public listings. Many high-profile startups are choosing to stay private for longer, bolstered by private equity funding.
Some investors, however, may push for liquidity through a public offering or company sale. Meanwhile, OpenAI has been added to Forge Global’s prestigious list of ‘Private Magnificent Seven’ startups, further highlighting its dominance in the AI sector.
OpenAI is set to launch its new AI model ‘Strawberry’ within the next two weeks as part of its ChatGPT service. The model is designed to focus on reasoning rather than instant responses and could offer a more thoughtful conversational experience.
Led by Sam Altman, OpenAI has generated strong interest and investment in AI technology. Businesses are increasingly turning to artificial intelligence to enhance their products, with OpenAI reporting over one million paying users across its services.