Elon Musk has revived his lawsuit against OpenAI and its CEO, Sam Altman, claiming the company prioritised profit over the public good. Filed in a Northern California district court, the suit accuses OpenAI of shifting its focus from advancing AI for humanity to commercial gain.
Musk originally filed the suit in February, alleging that OpenAI had abandoned its mission of developing AI for the benefit of humanity, then withdrew it in June before deciding to revive it. The refiled complaint argues that Altman shifted the company’s direction to capitalise on its technology rather than uphold its founding principles.
OpenAI has developed a method to detect when ChatGPT is used to write essays or research papers, but the company has yet to release it. The delay stems from a two-year internal debate balancing the company’s commitment to transparency against the risk of driving away users: one survey found nearly a third of loyal ChatGPT users would be put off by the anti-cheating technology.
Concerns have been raised that the tool could disproportionately affect non-native English speakers. An OpenAI spokeswoman emphasised the need for a deliberate approach given the complexities involved. Employees who support releasing the tool argue that its benefits outweigh the risks, since AI-generated essays can be produced in seconds, posing a significant problem for educators.
The watermarking method would subtly bias token selection in AI-generated text, creating a statistical pattern that is invisible to human readers but detectable by software. The method is reported to be 99.9% effective, though there are concerns it could be bypassed through translation or other text modifications. OpenAI is still working out how to provide access to the detector while preventing misuse.
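OpenAI has not published the details of its scheme, but publicly documented text watermarks work along these lines: a pseudorandom ‘green’ subset of the vocabulary is derived from the preceding token, sampling is nudged towards green tokens, and a detector counts how far the green-token rate exceeds chance. The sketch below follows the green-list approach described by Kirchenbauer et al. (2023), not OpenAI’s actual method; the vocabulary, bias strength, and green fraction are illustrative assumptions.

```python
import hashlib
import math
import random

# Toy vocabulary; a production model has on the order of 100k tokens.
VOCAB = [f"tok{i}" for i in range(1000)]
GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step
BIAS = 4.0            # logit boost applied to green tokens during sampling

def green_list(prev_token: str) -> set:
    """Derive a pseudorandom 'green' subset of the vocabulary from the
    previous token, so the detector can recompute it later."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def sample_token(prev_token: str, logits: dict) -> str:
    """Sample the next token with green tokens softly favoured."""
    greens = green_list(prev_token)
    boosted = {t: l + (BIAS if t in greens else 0.0) for t, l in logits.items()}
    top = max(boosted.values())
    weights = [math.exp(v - top) for v in boosted.values()]  # softmax weights
    return random.choices(list(boosted), weights=weights, k=1)[0]

def detect(tokens: list) -> float:
    """Return a z-score for how far the green-token count exceeds chance."""
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    sd = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / sd

# Demo: watermarked text scores far above chance; plain text does not.
uniform_logits = {t: 0.0 for t in VOCAB}  # stand-in for real model outputs
marked = ["tok0"]
for _ in range(200):
    marked.append(sample_token(marked[-1], uniform_logits))
plain = ["tok0"] + random.choices(VOCAB, k=200)
print(f"watermarked z = {detect(marked):.1f}, unmarked z = {detect(plain):.1f}")
```

Because the bias only nudges probabilities, a watermark of this kind is invisible in any single sentence but statistically unmistakable over a few hundred tokens, which is also why paraphrasing or translating the text, as the reporting notes, can wash it out.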
Internal discussions about the tool have been ongoing since before ChatGPT’s launch in 2022. A 2023 survey showed broad global support for AI detection tools, yet many ChatGPT users feared being falsely accused of using AI. OpenAI is exploring alternative approaches that address these concerns while maintaining AI transparency and credibility.
OpenAI, previously a close partner of Microsoft, is now officially acknowledged by the company as a competitor. Microsoft’s recent SEC filing marks the first time it has publicly recognised this shift, listing OpenAI alongside tech giants such as Google and Amazon as a competitor in both AI and search technologies.
The relationship between the two companies has been under scrutiny, with the FTC raising antitrust concerns. Microsoft’s decision to relinquish its board observer seat at OpenAI follows a series of significant events, including the brief ousting of OpenAI CEO Sam Altman. The filing may reflect a strategic move to alter public perception amid these investigations.
Silicon Valley has a history of companies navigating complex relationships, balancing roles as both partners and competitors. The dynamic between Yahoo and Google in the early 2000s serves as a notable example. Microsoft and OpenAI might be experiencing a similar evolution, with both entities maintaining competitive and cooperative elements.
Meanwhile, Microsoft continues to expand its own AI initiatives. The hiring of Inflection AI’s co-founders to lead a new AI division and the development of Microsoft Copilot highlight the company’s broader ambitions. This diversification suggests an AI strategy that extends beyond its ties with OpenAI.
OpenAI has assured US lawmakers that it is committed to deploying its AI tools safely. The ChatGPT maker addressed US officials after five senators, including Senator Brian Schatz of Hawaii, raised concerns about the company’s safety practices. In response, OpenAI’s Chief Strategy Officer, Jason Kwon, emphasised the company’s mission to ensure AI benefits all of humanity and highlighted the rigorous safety protocols implemented at every stage of its process.
a few quick updates about safety at openai:
as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.
our team has been working with the US AI Safety Institute on an agreement where we would provide…
OpenAI has pledged to allocate at least 20% of its computing resources to safety efforts over multiple years. The company also stated that it will no longer enforce non-disparagement agreements against current and former employees, addressing concerns about its previously restrictive policies. On social media, OpenAI’s CEO, Sam Altman, shared that the company is collaborating with the US AI Safety Institute, offering early access to its next foundation model to advance the science of AI evaluation.
Kwon mentioned the recently established Safety and Security Committee, which is currently reviewing OpenAI’s processes and policies. The review is part of a broader effort to address the controversies OpenAI has faced regarding its commitment to safety and the ability of employees to voice their concerns.
Recent resignations of key members of OpenAI’s safety teams, including co-founder Ilya Sutskever and safety lead Jan Leike, have highlighted internal concerns. Leike, in particular, has publicly criticised the company for prioritising product development over safety, underscoring the ongoing debate within the organisation about how to balance innovation with safety.
OpenAI has begun rolling out an advanced voice mode to a select group of ChatGPT Plus users, according to a post on X by the Microsoft-backed AI startup. Initially slated for a late June release, the launch was delayed to July to ensure the new feature met the company’s standards. This voice mode enables users to engage in real-time conversations with ChatGPT, including the ability to interrupt the AI while it is speaking, enhancing the realism of interactions.
The new audio capabilities address a common challenge for AI assistants, making conversations more fluid and responsive. In preparation for this release, OpenAI has been refining the model’s ability to detect and reject certain types of content while also enhancing the overall user experience and ensuring its infrastructure can support the new feature at scale.
We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions. pic.twitter.com/64O94EhhXK
This development is part of OpenAI’s broader strategy of introducing innovative generative AI products as the company aims to stay ahead in the competitive AI market. Businesses are rapidly adopting AI technology, and OpenAI’s efforts to improve and expand its offerings are crucial to maintaining its leading position in this fast-growing field.
Elon Musk announced plans to discuss a $5 billion investment in his AI startup, xAI, with Tesla’s board. The potential move, first floated in a poll on his social media platform X, has sparked concerns about a conflict of interest, as Musk launched xAI to compete with Microsoft-backed OpenAI. The poll showed strong public support for the investment, with over two-thirds of respondents in favour.
Tesla recently reported lower-than-expected second-quarter results, with declining automotive gross margins and profits. Musk highlighted the potential benefits of integrating xAI’s technologies with Tesla, including advancements in full self-driving and new data centre development. However, critics argue that the investment might not be in the best interest of Tesla shareholders.
xAI, launched by Musk last year, has already raised $6 billion in funding, attracting major investors such as Andreessen Horowitz and Sequoia Capital. Despite Musk’s ambitious plans for xAI, his past ventures have faced scrutiny over conflicts of interest, including the controversial acquisition of SolarCity by Tesla in 2016.
Coinbase has added three new members to its board of directors, including an executive from OpenAI, as part of its strategy to influence US crypto policy. The new board members are Chris Lehane from OpenAI, former US Solicitor General Paul Clement, and Christa Davies, CFO of Aon and a board member for Stripe and Workday. This expansion brings the board from seven to ten members.
The additions come as Coinbase and the cryptocurrency industry aim to strengthen their political influence in the upcoming presidential election. Clement will guide Coinbase’s efforts to counter the SEC and advocate for clear digital asset regulations. Lehane, a former Airbnb policy chief, will provide strategic counsel, while Davies will focus on enhancing Coinbase’s financial and operational excellence globally.
Stand With Crypto, a non-profit organisation funded by Coinbase, now boasts 1.3 million members, and three major pro-crypto super political action committees have raised over $230 million to support crypto-friendly candidates.
The introduction of SearchGPT by OpenAI, an AI-powered search engine with real-time internet access, challenges Google’s dominance in the search market. Announced on Thursday, the launch places OpenAI in competition not only with Google but also with its major backer, Microsoft, and emerging AI search tools like Perplexity. The announcement caused Alphabet’s shares to drop by 3%.
SearchGPT is currently in its prototype stage, with a limited number of users and publishers testing it. The tool aims to provide summarised search results with source links, allowing users to ask follow-up questions for more contextual responses. OpenAI plans to integrate SearchGPT’s best features into ChatGPT in the future. Publishers will have access to tools for managing their content’s appearance in search results.
Google, which holds a 91.1% share of the search engine market, may feel pressure to innovate as competitors like OpenAI and Perplexity enter the arena. Perplexity is already facing legal challenges from publishers, highlighting the difficulties newer AI-powered search providers might encounter.
SearchGPT marks a closer collaboration between OpenAI and publishers, with News Corp and The Atlantic as initial partners. This follows OpenAI’s content licensing agreements with major media organisations. Google did not comment on the potential impact of SearchGPT on its business.
Sam Altman, co-founder and CEO of OpenAI, raises a critical question: ‘Who will control the future of AI?’ He frames it as a choice between a democratic vision, led by the US and its allies to spread AI’s benefits widely, and an authoritarian one, led by nations such as Russia and China, aiming to consolidate power through the technology. Altman underscores the urgency of the decision, given the rapid pace of AI development and the high stakes involved.
Altman warns that while the United States currently leads in AI development, this advantage is precarious due to substantial investments by authoritarian governments. He highlights the risks if these regimes take the lead, such as restricted AI benefits, enhanced surveillance, and advanced cyber weapons. To prevent this, Altman proposes a four-pronged strategy – robust security measures to protect intellectual property, significant investments in physical and human infrastructure, a coherent commercial diplomacy policy, and establishing international norms and safety protocols.
He emphasises proactive collaboration between the US government and the private sector to implement these measures swiftly. Altman believes that acting today on security, infrastructure, talent development, and global governance can secure both a competitive advantage and broad societal benefits. Ultimately, he advocates a democratic vision for AI, underpinned by strategic, timely, and globally inclusive actions that maximise the technology’s benefits while minimising its risks.
OpenAI’s AI safety leader, Aleksander Madry, is moving to a significant new research project, according to CEO Sam Altman. OpenAI executives Joaquin Quinonero Candela and Lilian Weng will take over the preparedness team, which assesses the company’s frontier models for catastrophic risks. The move is part of a broader effort to unify OpenAI’s safety work.
In his new role, Madry will take on an expanded remit within the research organisation. OpenAI is also addressing safety concerns surrounding its advanced chatbots, which can hold human-like conversations and generate multimedia content from text prompts.
Under the new structure, researcher Tejal Patwardhan will manage much of the preparedness team’s day-to-day work, ensuring a continued focus on AI safety. The reorganisation follows the recent formation of a Safety and Security Committee, led by board members including Sam Altman.
The reshuffle comes amid rising safety concerns as OpenAI’s technologies become more powerful and widely used. The Safety and Security Committee was established earlier this year in preparation for training the next generation of AI models. These developments reflect OpenAI’s ongoing commitment to AI safety and responsible innovation.