Around seven years ago, Intel had the opportunity to invest in OpenAI, a nascent research organisation focused on generative artificial intelligence. Discussions between Intel and OpenAI spanned several months in 2017 and 2018, considering options like Intel acquiring a 15% stake for $1 billion. However, Intel decided against the deal, partly due to then-CEO Bob Swan’s scepticism about the commercial viability of generative AI models.
OpenAI, seeking to reduce its reliance on Nvidia’s chips, saw value in an investment from Intel. Yet, the deal fell through due to Intel’s reluctance to produce hardware at cost for the startup. The missed opportunity remained undisclosed until now, with OpenAI later becoming a major player in AI, launching the groundbreaking ChatGPT in 2022 and achieving a reported valuation of $80 billion.
Intel’s decision not to invest is part of a broader struggle to maintain relevance in the AI age. Once a leader in computer chips, Intel has been outpaced by competitors like Nvidia and AMD. Nvidia’s shift from gaming to AI chips has left Intel struggling to produce a competitive AI product, contributing to a sharp decline in its market value.
Despite its challenges, Intel continues to push forward with new AI chip developments, including the upcoming third-generation Gaudi AI chip and the next-generation Falcon Shores chip. CEO Pat Gelsinger remains optimistic about capturing a greater share of the AI market, but Intel’s journey serves as a cautionary tale of missed opportunities in a rapidly evolving industry.
OpenAI is developing Project Strawberry to improve its AI models’ ability to handle long-horizon tasks, which involve planning and executing complex actions over extended periods. Sam Altman, OpenAI’s chief, hinted at this project in a cryptic social media post, sharing an image of strawberries with the caption, ‘I love summer in the garden.’ That led to speculation about the project’s potential impact on AI capabilities.
Project Strawberry, also known as Q*, aims to significantly enhance the reasoning abilities of OpenAI’s AI models. According to a recent Reuters report, some at OpenAI believe Q* could be a breakthrough in the pursuit of artificial general intelligence (AGI). The project involves innovative approaches that allow AI models to plan ahead and navigate the internet autonomously, addressing common sense issues and logical fallacies that often result in inaccurate outputs.
OpenAI has announced DevDay 2024, a global developer event series with stops in San Francisco, London, and Singapore. The focus will be on advancements in the API and developer tools, though there is speculation that OpenAI might preview its next frontier model. Recent developments in the LMSYS Chatbot Arena, where a new model showed strong performance in maths, suggest significant progress in AI capabilities.
Internal documents reveal that Project Strawberry includes a “deep-research” dataset for training and evaluating the models, although the contents remain undisclosed. The innovation is expected to enable AI to conduct research autonomously, using a computer-using agent to act based on its findings. OpenAI plans to test Strawberry’s capabilities in performing tasks typically done by software and machine learning engineers, highlighting its potential to revolutionise AI applications.
John Schulman, co-founder of OpenAI, has departed the company for rival Anthropic. Schulman announced his decision on social media, citing a desire to focus more on AI alignment and return to hands-on technical work.
OpenAI is undergoing significant personnel shifts. Greg Brockman, another co-founder and President, is taking a sabbatical until the end of the year. Meanwhile, product manager Peter Deng has also left the firm.
Earlier this year, other key figures exited OpenAI. Chief scientist Ilya Sutskever departed in May, and founding member Andrej Karpathy left in February to start an AI-integrated education platform. AI safety leader Aleksander Madry was reassigned to a different role in July.
These changes come amid renewed legal challenges from Elon Musk, another OpenAI co-founder. Musk, who left OpenAI three years after its inception, has revived a lawsuit against the company, accusing it of prioritising profits over the public good.
Elon Musk has reactivated his lawsuit against OpenAI and its CEO, Sam Altman, claiming the company prioritised profit over public good. Filed in a Northern California district court, the lawsuit accuses OpenAI of shifting its focus from advancing AI for humanity to commercial gain.
Musk had withdrawn the suit in June; originally filed in February, it alleged that OpenAI abandoned its mission of developing AI for the benefit of humanity. The revived complaint argues that Altman reshaped the company's narrative to capitalise on its technology rather than uphold its founding principles.
OpenAI has developed a method to detect when ChatGPT is used to write essays or research papers, but it has yet to release the tool. The decision follows a two-year internal debate balancing the company’s commitment to transparency against the risk of deterring users: one survey found nearly a third of loyal ChatGPT users would be put off by the anti-cheating technology.
Concerns have been raised that the tool could disproportionately affect non-native English speakers. OpenAI’s spokeswoman emphasised the need for a deliberate approach due to the complexities involved. Employees supporting the tool argue that its benefits outweigh the risks, as AI-generated essays can be completed in seconds, posing a significant issue for educators.
The watermarking method would subtly alter token selection in AI-generated text, creating a detectable pattern invisible to human readers. That method is reported to be 99.9% effective, but there are concerns it could be bypassed through translation or text modifications. OpenAI is still determining how to provide access to the detector while preventing misuse.
Despite the method’s reported effectiveness, internal discussions at OpenAI have been ongoing since before ChatGPT’s launch in 2022. A 2023 survey showed global support for AI detection tools, but many ChatGPT users feared false accusations of AI use. OpenAI is exploring alternative approaches to address these concerns while maintaining AI transparency and credibility.
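The token-selection scheme described above resembles published “green-list” watermarking approaches for language models, in which the previous token seeds a pseudorandom split of the vocabulary and generation is biased toward the “green” half, so a detector can later count how often tokens land on their predecessor’s green list. OpenAI has not disclosed its actual method; the following is a minimal illustrative sketch of that general idea, with a toy vocabulary and a simulated (not real) generator:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary for illustration


def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])


def generate(length: int, watermark: bool, seed: int = 0) -> list:
    """Simulate generation: a watermarked 'model' strongly prefers green tokens."""
    rng = random.Random(seed)
    text = ["tok0"]
    for _ in range(length):
        greens = green_list(text[-1])
        if watermark and rng.random() < 0.9:
            # Biased step: pick from the green list of the previous token.
            text.append(rng.choice(sorted(greens)))
        else:
            # Unbiased step: pick uniformly from the whole vocabulary.
            text.append(rng.choice(VOCAB))
    return text


def green_fraction(text: list) -> float:
    """Detector: fraction of tokens on the green list of their predecessor.

    Unwatermarked text hovers near the baseline (0.5 here); watermarked
    text sits far above it, which is the statistically detectable pattern.
    """
    hits = sum(1 for prev, tok in zip(text, text[1:]) if tok in green_list(prev))
    return hits / (len(text) - 1)
```

The sketch also makes the reported weaknesses concrete: the pattern is invisible to readers because any individual token choice looks plausible, yet translating or heavily editing the text replaces tokens and erodes the green-token excess that the detector relies on.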
OpenAI, previously a close partner of Microsoft, is now officially recognised as a competitor. Microsoft’s recent SEC filing marks the first time the company has publicly acknowledged this shift. OpenAI is now listed alongside tech giants like Google and Amazon as a competitor in both AI and search technologies.
The relationship between the two companies has been under scrutiny, with antitrust concerns arising from the FTC. Microsoft’s decision to relinquish its board observer seat at OpenAI follows a series of significant events, including the brief dismissal of OpenAI’s CEO Sam Altman. The filing may reflect a strategic move to alter public perception amid these investigations.
Silicon Valley has a history of companies navigating complex relationships, balancing roles as both partners and competitors. The dynamic between Yahoo and Google in the early 2000s serves as a notable example. Microsoft and OpenAI might be experiencing a similar evolution, with both entities maintaining competitive and cooperative elements.
Meanwhile, Microsoft continues to expand its own AI initiatives. The hiring of Inflection AI co-founders to lead a new AI division and the development of Microsoft Copilot highlight the company’s broader strategy. The diversification suggests a strategic approach to AI that goes beyond its ties with OpenAI.
OpenAI has assured US lawmakers it is committed to safely deploying its AI tools. The maker of ChatGPT moved to address US officials after five senators, including Senator Brian Schatz of Hawaii, raised concerns about the company’s safety practices. In response, OpenAI’s Chief Strategy Officer, Jason Kwon, emphasised the company’s mission to ensure AI benefits all of humanity and highlighted the rigorous safety protocols it implements at every stage of the process.
a few quick updates about safety at openai:
as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.
our team has been working with the US AI Safety Institute on an agreement where we would provide…
Over multiple years, OpenAI pledged to allocate 20% of its computing resources to safety-related research. The company also stated that it would no longer enforce non-disparagement agreements for current and former employees, addressing concerns about previously restrictive policies. On social media, OpenAI’s CEO, Sam Altman, shared that the company is collaborating with the US AI Safety Institute to provide early access to their next foundation model to advance AI evaluation science.
Kwon mentioned the recent establishment of a safety and security committee, which is currently reviewing OpenAI’s processes and policies. The review is part of a broader effort to address the controversies OpenAI has faced regarding its commitment to safety and the ability of employees to voice their concerns.
Recent resignations from key members of OpenAI’s safety teams, including co-founders Ilya Sutskever and Jan Leike, have highlighted internal concerns. Leike, in particular, has publicly criticised the company for prioritising product development over safety, underscoring the ongoing debate within the organisation about how to balance innovation with safety.
OpenAI has begun rolling out an advanced voice mode to a select group of ChatGPT Plus users, according to a post on X by the Microsoft-backed AI startup. Initially slated for a late June release, the launch was delayed to July to ensure the new feature met the company’s standards. This voice mode enables users to engage in real-time conversations with ChatGPT, including the ability to interrupt the AI while it is speaking, enhancing the realism of interactions.
The new audio capabilities address a common challenge for AI assistants, making conversations more fluid and responsive. In preparation for this release, OpenAI has been refining the model’s ability to detect and reject certain types of content while also enhancing the overall user experience and ensuring its infrastructure can support the new feature at scale.
We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions. pic.twitter.com/64O94EhhXK
The development is part of OpenAI’s broader strategy to introduce innovative generative AI products as the company aims to stay ahead in the competitive AI market. Businesses are rapidly adopting AI technology, and OpenAI’s efforts to improve and expand its offerings are crucial to maintaining its leadership position in this fast-growing field.
Elon Musk announced plans to discuss a $5 billion investment in his AI startup, xAI, with Tesla’s board. The potential move has sparked concerns about a conflict of interest, as Musk launched xAI to compete with Microsoft-backed OpenAI. A poll Musk ran on his social media platform X showed strong public support for the investment, with over two-thirds of respondents in favour.
Tesla recently reported lower-than-expected second-quarter results, with declining automotive gross margins and profits. Musk highlighted the potential benefits of integrating xAI’s technologies with Tesla, including advancements in full self-driving and new data centre development. However, critics argue that the investment might not be in the best interest of Tesla shareholders.
xAI, launched by Musk last year, has already raised $6 billion in funding, attracting major investors such as Andreessen Horowitz and Sequoia Capital. Despite Musk’s ambitious plans for xAI, his past ventures have faced scrutiny over conflicts of interest, including the controversial acquisition of SolarCity by Tesla in 2016.
Coinbase has added three new members to its board of directors, including an executive from OpenAI, as part of its strategy to influence US crypto policy. The new board members are Chris Lehane from OpenAI, former US Solicitor General Paul Clement, and Christa Davies, CFO of Aon and a board member for Stripe and Workday. This expansion brings the board from seven to ten members.
The additions come as Coinbase and the cryptocurrency industry aim to strengthen their political influence in the upcoming presidential election. Clement will guide Coinbase’s efforts to counter the SEC and advocate for clear digital asset regulations. Lehane, a former Airbnb policy chief, will provide strategic counsel, while Davies will focus on enhancing Coinbase’s financial and operational excellence globally.
Stand With Crypto, a non-profit organisation funded by Coinbase, now boasts 1.3 million members, and three major pro-crypto super political action committees have raised over $230 million to support favourable candidates.