OpenAI responds to US lawmakers’ concerns

OpenAI has assured US lawmakers it is committed to safely deploying its AI tools. The maker of ChatGPT decided to address US officials after concerns were raised by five senators, including Senator Brian Schatz of Hawaii, regarding the company's safety practices. In response, OpenAI's Chief Strategy Officer, Jason Kwon, emphasised the company's mission to ensure AI benefits all of humanity and highlighted the rigorous safety protocols it implements at every stage of the process.

OpenAI pledged to allocate 20% of its computing resources to safety-related research over multiple years. The company also stated that it would no longer enforce non-disparagement agreements against current and former employees, addressing concerns about previously restrictive policies. On social media, OpenAI's CEO, Sam Altman, shared that the company is collaborating with the US AI Safety Institute, providing early access to its next foundation model to advance the science of AI evaluation.

Kwon mentioned the recent establishment of a safety and security committee, which is currently reviewing OpenAI’s processes and policies. The review is part of a broader effort to address the controversies OpenAI has faced regarding its commitment to safety and the ability of employees to voice their concerns.

Recent resignations by key members of OpenAI's safety teams, including co-founder Ilya Sutskever and Jan Leike, have highlighted internal concerns. Leike, in particular, has publicly criticised the company for prioritising product development over safety, underscoring the ongoing debate within the organisation about how to balance innovation with safety.

OpenAI launches advanced voice mode for ChatGPT

OpenAI has begun rolling out an advanced voice mode to a select group of ChatGPT Plus users, according to a post on X by the Microsoft-backed AI startup. Initially slated for late June, the rollout was delayed to July to ensure the new feature met the company's standards. The voice mode enables users to hold real-time conversations with ChatGPT, including interrupting the AI while it is speaking, making interactions feel more natural.

The new audio capabilities address a common challenge for AI assistants, making conversations more fluid and responsive. In preparation for this release, OpenAI has been refining the model’s ability to detect and reject certain types of content while also enhancing the overall user experience and ensuring its infrastructure can support the new feature at scale.
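The interruption capability described above is commonly implemented as 'barge-in': playback of the assistant's synthesised speech is cancelled the moment voice activity is detected on the microphone. The following is a minimal, hypothetical Python sketch of that pattern, with audio playback and voice detection simulated by timers; it does not reflect OpenAI's actual implementation.

```python
import asyncio

async def speak(chunks: list[str], interrupted: asyncio.Event) -> None:
    """Play response chunks, stopping as soon as the user barges in."""
    for chunk in chunks:
        if interrupted.is_set():
            print("[assistant] playback cancelled mid-response")
            return
        print(f"[assistant] {chunk}")
        await asyncio.sleep(0.3)  # stand-in for the time a chunk takes to play

async def listen(interrupted: asyncio.Event) -> None:
    """Stand-in for voice-activity detection on the microphone stream."""
    await asyncio.sleep(0.8)  # the user starts talking partway through
    print("[user] (starts speaking)")
    interrupted.set()

async def main() -> None:
    interrupted = asyncio.Event()
    # Speaking and listening run concurrently, so the user can cut in.
    await asyncio.gather(
        speak(["Sure,", "here is", "a long", "explanation..."], interrupted),
        listen(interrupted),
    )

asyncio.run(main())
```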

This development is part of OpenAI's broader strategy of introducing innovative generative AI products as it aims to stay ahead in the competitive AI market. Businesses are rapidly adopting AI technology, and OpenAI's efforts to improve and expand its offerings are crucial to maintaining its leadership in this fast-growing field.

Tesla CEO considers $5 billion investment in xAI, raising concerns

Elon Musk announced plans to discuss a $5 billion investment in his AI startup, xAI, with Tesla's board. The potential move has sparked concerns about a conflict of interest, as Musk launched xAI to compete with Microsoft-backed OpenAI. A poll Musk ran on his social media platform X showed strong public support for the investment, with over two-thirds of respondents in favour.

Tesla recently reported lower-than-expected second-quarter results, with declining automotive gross margins and profits. Musk highlighted the potential benefits of integrating xAI's technologies with Tesla, including advances in Full Self-Driving and the development of new data centres. However, critics argue that the investment might not be in the best interest of Tesla shareholders.

xAI, launched by Musk last year, has already raised $6 billion in funding, attracting major investors such as Andreessen Horowitz and Sequoia Capital. Despite Musk’s ambitious plans for xAI, his past ventures have faced scrutiny over conflicts of interest, including the controversial acquisition of SolarCity by Tesla in 2016.

Crypto giant Coinbase expands board, aims for political impact

Coinbase has added three new members to its board of directors, including an executive from OpenAI, as part of its strategy to influence US crypto policy. The new board members are Chris Lehane from OpenAI, former US Solicitor General Paul Clement, and Christa Davies, CFO of Aon and a board member for Stripe and Workday. This expansion brings the board from seven to ten members.

The additions come as Coinbase and the cryptocurrency industry aim to strengthen their political influence in the upcoming presidential election. Clement will guide Coinbase’s efforts to counter the SEC and advocate for clear digital asset regulations. Lehane, a former Airbnb policy chief, will provide strategic counsel, while Davies will focus on enhancing Coinbase’s financial and operational excellence globally.

Stand With Crypto, a non-profit organisation funded by Coinbase, now boasts 1.3 million members, and three major pro-crypto super political action committees have raised over $230 million to support favourable candidates.

OpenAI challenges Google with SearchGPT

OpenAI's introduction of SearchGPT, an AI-powered search engine with real-time internet access, challenges Google's dominance of the search market. Announced on Thursday, the launch places OpenAI in competition not only with Google but also with its major backer, Microsoft, and emerging AI search tools like Perplexity. Alphabet's shares dropped by 3% following the announcement.

SearchGPT is currently in its prototype stage, with a limited number of users and publishers testing it. The tool aims to provide summarised search results with source links, allowing users to ask follow-up questions for more contextual responses. OpenAI plans to integrate SearchGPT’s best features into ChatGPT in the future. Publishers will have access to tools for managing their content’s appearance in search results.
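For illustration, the 'summarised results with source links plus follow-up questions' pattern described above can be sketched as a small retrieve-and-summarise loop that keeps conversation history. Everything below is a hypothetical stand-in of our own devising, with the search and summarisation steps stubbed out; it is not OpenAI's design.

```python
from dataclasses import dataclass, field

@dataclass
class SearchResult:
    title: str
    url: str
    snippet: str

@dataclass
class Conversation:
    history: list[str] = field(default_factory=list)

    def ask(self, question: str) -> str:
        self.history.append(question)       # retained so follow-ups have context
        results = self._retrieve(question)  # stub: a real system queries a search index
        summary = " ".join(r.snippet for r in results)  # stub: a real system would summarise with an LLM
        sources = "\n".join(f"- {r.title}: {r.url}" for r in results)
        return f"{summary}\n\nSources:\n{sources}"

    def _retrieve(self, question: str) -> list[SearchResult]:
        # Hypothetical stand-in for live web retrieval.
        return [SearchResult("Example page", "https://example.com",
                             f"A snippet relevant to '{question}'.")]

convo = Conversation()
print(convo.ask("What is an AI-powered search engine?"))
print(convo.ask("How do source links help?"))  # follow-up shares the same history
```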

Google, which holds a 91.1% share of the search engine market, may feel pressure to innovate as competitors like OpenAI and Perplexity enter the arena. Perplexity is already facing legal challenges from publishers, highlighting the difficulties newer AI-powered search providers might encounter.

SearchGPT marks a closer collaboration between OpenAI and publishers, with News Corp and The Atlantic as initial partners. This follows OpenAI’s content licensing agreements with major media organisations. Google did not comment on the potential impact of SearchGPT on its business.

OpenAI CEO emphasises democratic control in the future of AI

Sam Altman, co-founder and CEO of OpenAI, raises a critical question: 'Who will control the future of AI?' He frames it as a choice between a democratic vision, led by the US and its allies to disseminate AI's benefits widely, and an authoritarian one, led by nations like Russia and China, aiming to consolidate power through AI. Altman underscores the urgency of this decision, given the rapid advances in AI technology and the high stakes involved.

Altman warns that while the United States currently leads in AI development, this advantage is precarious given authoritarian governments' substantial investments. He highlights the risks if these regimes take the lead, such as restricted AI benefits, enhanced surveillance, and advanced cyber weapons. To prevent this, Altman proposes a four-pronged strategy: robust security measures to protect intellectual property, significant investments in physical and human infrastructure, a coherent commercial diplomacy policy, and the establishment of international norms and safety protocols.

He calls for close collaboration between the US government and the private sector to implement these measures swiftly, believing that proactive efforts today in security, infrastructure, talent development, and global governance can secure both a competitive advantage and broad societal benefits. Ultimately, Altman advocates a democratic vision for AI, underpinned by strategic, timely, and globally inclusive actions that maximise the technology's benefits while minimising its risks.

OpenAI announces major reorganisation to bolster AI safety measures

OpenAI’s AI safety leader, Aleksander Madry, is now working on a new significant research project, according to CEO Sam Altman. OpenAI executives Joaquin Quinonero Candela and Lilian Weng will take over the preparedness team, which evaluates the readiness of the company’s models for general AI. The move is part of a broader strategy to unify OpenAI’s safety efforts.

OpenAI’s preparedness team ensures the safety and readiness of its AI models. Following Madry’s shift to a new research role, he will have an expanded position within the research organization. OpenAI is also addressing safety concerns surrounding its advanced chatbots, which can engage in human-like conversations and generate multimedia content from text prompts.

Under the new structure, researcher Tejal Patwardhan will manage much of the preparedness team's day-to-day work, ensuring a continued focus on AI safety. The reorganisation follows the recent formation of a Safety and Security Committee, led by board members including Sam Altman.

The reshuffle comes amid rising safety concerns as OpenAI’s technologies become more powerful and widely used. The Safety and Security Committee was established earlier this year in preparation for training the next generation of AI models. These developments reflect OpenAI’s ongoing commitment to AI safety and responsible innovation.

OpenAI considers developing own AI chip with Broadcom

OpenAI, the maker of ChatGPT, is in discussions with Broadcom and other chip designers about developing a new AI chip. The move aims to address the shortage of the expensive graphics processing units (GPUs) required to develop its AI models, such as ChatGPT, GPT-4, and DALL-E 3.

The Microsoft-backed company is hiring former Google employees who developed the tech giant's own AI chip, the Tensor Processing Unit, and plans to create an AI server chip. OpenAI is exploring making its own AI chips to secure a more stable supply of essential components.

OpenAI CEO Sam Altman has ambitious plans to raise billions of dollars to establish semiconductor manufacturing facilities. Potential partners for this venture include Intel, Taiwan Semiconductor Manufacturing Co, and Samsung Electronics.

A spokesperson for OpenAI mentioned that the company is having ongoing conversations with industry and government stakeholders to enhance access to the infrastructure needed for making AI benefits widely accessible.

OpenAI whistleblowers call for SEC investigation

Whistleblowers have filed a complaint with the US Securities and Exchange Commission (SEC) against OpenAI, calling for an investigation into the company's allegedly restrictive non-disclosure agreements (NDAs). The complaint alleges that OpenAI's NDAs required employees to waive their federal rights to whistleblower compensation, creating a chilling effect on their right to speak up.

Senator Chuck Grassley’s office provided the letter to Reuters, stating that OpenAI’s policies appear to prevent whistleblowers from receiving due compensation for their protected disclosures. The whistle-blowers have requested that the SEC fine OpenAI for each improper agreement and review all contracts containing NDAs, including employment, severance, and investor agreements. OpenAI did not immediately respond to requests for comment.

This complaint follows other legal and regulatory challenges faced by OpenAI. The company has been sued for allegedly stealing people’s data, and US authorities have called for companies to ensure their AI products do not violate civil rights. OpenAI recently formed a Safety and Security Committee to address safety concerns as it begins training its next AI model.

OpenAI’s project Strawberry: Transformative AI sparks ethical debate

According to a Reuters report, OpenAI's fairly new project, Strawberry, is set to make waves in the research community. The project, which some claim could be a renamed version of the company's Q* project from last year, is reported to be capable of navigating the internet autonomously to conduct deep research.
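Reports of agents that navigate the net to conduct deep research generally describe an iterative plan-search-read-synthesise loop. The sketch below shows that generic pattern in Python with every step stubbed; nothing is publicly known about how Strawberry itself works, so these function names and behaviours are purely illustrative.

```python
def plan_queries(topic: str) -> list[str]:
    """Break a research topic into concrete search queries (stubbed)."""
    return [f"{topic} overview", f"{topic} recent findings"]

def fetch(query: str) -> str:
    """Stand-in for issuing a web search and reading the results."""
    return f"(stub notes gathered for '{query}')"

def synthesise(topic: str, notes: list[str]) -> str:
    """Stand-in for a model combining the gathered notes into a report."""
    return f"Report on {topic}:\n" + "\n".join(f"- {n}" for n in notes)

def research(topic: str) -> str:
    # Plan, browse, then synthesise: the core loop of a research agent.
    notes = [fetch(q) for q in plan_queries(topic)]
    return synthesise(topic, notes)

print(research("multi-step reasoning in language models"))
```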

The company’s representative confirmed to the news agency that the reasoning ability of their models will invariably improve with time. Just last Tuesday, employees of OpenAI were treated to a demo of a model with human-like reasoning capabilities. The meeting came on the heels of the negative commentary the company has faced for placing a gag order on employees for publicly exposing the dangers its innovations can potentially pose to humanity.  

Earlier in July, employees sent a seven-page letter to the chair of the US Securities and Exchange Commission (SEC), Gary Gensler, detailing the risks they believe OpenAI's projects pose to humans. The letter was tinged with urgency, advising the agency to take swift and aggressive action against the company for violating existing regulations.