The potential impact of OpenAI’s realistic voice feature on human interactions has raised concerns, with the company warning that people might form emotional bonds with AI at the expense of real-life relationships. The company noted that users of its GPT-4o model have shown signs of anthropomorphising the AI, attributing human-like qualities to it, which could lead to misplaced trust and dependency. OpenAI’s report highlighted that the high-quality voice interaction might exacerbate these issues, raising questions about the long-term effects on social norms.
The company observed that some testers of the AI voice feature interacted with it in ways that suggested an emotional connection, such as expressing sadness over the end of their session. While these behaviours might seem harmless, OpenAI emphasised the need to study their potential evolution over time. The report also suggested that reliance on AI for social interaction could diminish users’ abilities or willingness to engage in human relationships, altering how people interact with one another.
Concerns were also raised about the AI’s ability to recall details and handle tasks, which might lead to over-reliance on the technology. OpenAI further noted that its AI models, designed to be deferential in conversations, might inadvertently promote anti-social norms when users become accustomed to behaviours, such as interrupting, that are inappropriate in human interactions. The company pledged to continue testing how these voice capabilities could affect emotional attachment and social behaviour.
The issue gained attention following a controversy in June when OpenAI was criticised for allegedly using a voice similar to actress Scarlett Johansson’s in its chatbot. Although the company denied the voice belonged to Johansson, the incident underscored the risks associated with voice-cloning technology. As AI models continue to advance toward human-like reasoning, experts are increasingly urging a pause to consider the broader implications for human relationships and societal norms.
OpenAI’s chief strategy officer, Jason Kwon, has expressed confidence that humans will continue to control AI, downplaying concerns about the technology developing unchecked. Speaking at a forum in Seoul, Kwon emphasised that the core of safety lies in ensuring human oversight. As these systems grow more advanced, he believes they will become easier to manage, countering fears that they could become uncontrollable.
The company is actively working on creating a framework that allows AI systems to reflect the cultural values of different countries. Kwon highlighted the importance of making certain models adaptable to local contexts, ensuring that users in various regions feel the technology is designed with them in mind. Such an approach aims to foster a sense of ownership and relevance across diverse cultures.
Despite some scepticism surrounding the future of AI, Kwon remains optimistic about its trajectory. He compared its potential growth to that of the internet, which has become an indispensable tool globally. While acknowledging that AI is still in its early stages, he pointed out that adoption rates are gradually increasing, with significant room for growth.
Kwon noted that in South Korea, a country with over 50 million people, only 1 million are daily active users of ChatGPT. Even in the US, fewer than 20 per cent of the population has tried the tool. Kwon’s remarks suggest that AI’s journey is just beginning, with significant expansion expected in the coming years.
One of the largest AI research organisations has appointed Zico Kolter, a distinguished professor and director of the machine learning department at Carnegie Mellon University, to its board of directors. Renowned for his focus on AI safety, Kolter will also join the company’s safety and security committee, which is tasked with overseeing the safe deployment of OpenAI’s projects. The appointment comes as OpenAI’s board undergoes changes in response to growing concerns about the safety of generative AI, which has seen rapid adoption across various sectors.
Following the departure of co-founder John Schulman, Kolter’s addition to the OpenAI board underscores a commitment to addressing these safety concerns. He brings a wealth of experience from his roles as the chief expert at Bosch and chief technical adviser at Gray Swan, a startup dedicated to AI safety. Notably, Kolter has contributed to developing methods that automatically assess the safety of large language models, a crucial area as AI systems become increasingly sophisticated. His expertise will be invaluable in guiding OpenAI as it navigates the challenges posed by the widespread use of generative AI technologies such as ChatGPT.
The safety and security committee, formed in May following Ilya Sutskever’s departure and comprising Kolter alongside CEO Sam Altman and other directors, underlines OpenAI’s proactive approach to ensuring AI is developed and deployed responsibly. The committee is responsible for making recommendations on safety decisions across all of OpenAI’s projects, reflecting the company’s recognition of the potential risks associated with AI advancements.
In a related move, Microsoft relinquished its board observer seat at OpenAI in July, aiming to address antitrust concerns from regulators in the United States and the United Kingdom. This decision was seen as a step towards maintaining a balance of power within OpenAI, as the company continues to play a leading role in the rapidly evolving AI landscape.
Around seven years ago, Intel had the opportunity to invest in OpenAI, a nascent research organisation focused on generative artificial intelligence. Discussions between the two companies spanned several months in 2017 and 2018, with options including Intel acquiring a 15% stake for $1 billion. However, Intel decided against the deal, partly due to then-CEO Bob Swan’s scepticism about the commercial viability of generative AI models.
OpenAI, seeking to reduce its reliance on Nvidia’s chips, saw value in an investment from Intel. Yet, the deal fell through due to Intel’s reluctance to produce hardware at cost for the startup. The missed opportunity remained undisclosed until now, with OpenAI later becoming a major player in AI, launching the groundbreaking ChatGPT in 2022 and achieving a reported valuation of $80 billion.
Intel’s decision not to invest is part of a broader struggle to maintain relevance in the AI age. Once a leader in computer chips, Intel has been outpaced by competitors like Nvidia and AMD. Nvidia’s shift from gaming to AI chips has left Intel struggling to produce a competitive AI product, contributing to a sharp decline in its market value.
Despite its challenges, Intel continues to push forward with new AI chip developments, including the upcoming third-generation Gaudi AI chip and the next-generation Falcon Shores chip. CEO Pat Gelsinger remains optimistic about capturing a greater share of the AI market, but Intel’s journey serves as a cautionary tale of missed opportunities in a rapidly evolving industry.
OpenAI is developing Project Strawberry to improve its AI models’ ability to handle long-horizon tasks, which involve planning and executing complex actions over extended periods. Sam Altman, OpenAI’s chief, hinted at this project in a cryptic social media post, sharing an image of strawberries with the caption, ‘I love summer in the garden.’ That led to speculation about the project’s potential impact on AI capabilities.
Project Strawberry, also known as Q*, aims to significantly enhance the reasoning abilities of OpenAI’s AI models. According to a recent Reuters report, some at OpenAI believe Q* could be a breakthrough in the pursuit of artificial general intelligence (AGI). The project involves innovative approaches that allow AI models to plan ahead and navigate the internet autonomously, addressing common sense issues and logical fallacies that often result in inaccurate outputs.
OpenAI has announced DevDay 2024, a global developer event series with stops in San Francisco, London, and Singapore. The focus will be on advancements in the API and developer tools, though there is speculation that OpenAI might preview its next frontier model. Recent developments in the LMSYS Chatbot Arena, where a new model showed strong performance in math, suggest significant progress in AI capabilities.
Internal documents reveal that Project Strawberry includes a “deep-research” dataset for training and evaluating the models, although the contents remain undisclosed. The innovation is expected to enable AI to conduct research autonomously, using a computer-using agent to act based on its findings. OpenAI plans to test Strawberry’s capabilities in performing tasks typically done by software and machine learning engineers, highlighting its potential to revolutionise AI applications.
John Schulman, co-founder of OpenAI, has departed the company for rival Anthropic. Schulman announced his decision on social media, citing a desire to focus more on AI alignment and return to hands-on technical work.
OpenAI is undergoing significant personnel shifts. Greg Brockman, another co-founder and President, is taking a sabbatical until the end of the year. Meanwhile, product manager Peter Deng has also left the firm.
Earlier this year, other key figures exited OpenAI. Chief scientist Ilya Sutskever departed in May, and founding member Andrej Karpathy left in February to start an AI-integrated education platform. AI safety leader Aleksander Madry was reassigned to a different role in July.
These changes come amid renewed legal challenges from Elon Musk, another OpenAI co-founder. Musk, who left OpenAI three years after its inception, has revived a lawsuit against the company, accusing it of prioritising profits over the public good.
Elon Musk has reactivated his lawsuit against OpenAI and its CEO, Sam Altman, claiming the company prioritised profit over public good. Filed in a Northern California district court, the lawsuit accuses OpenAI of shifting its focus from advancing AI for humanity to commercial gain.
Musk had previously withdrawn the lawsuit in June; it alleged that OpenAI abandoned its mission of developing AI for the benefit of humanity. First filed in February, the legal action was paused only briefly before Musk’s recent decision to revive it. The lawsuit argues that Altman shifted the company’s narrative to capitalise on its technology rather than uphold its founding principles.
OpenAI has developed a method to detect when ChatGPT is used to write essays or research papers, but the company has yet to release it. The decision follows an internal debate lasting two years, balancing the company’s commitment to transparency with the risk of deterring users. One survey found nearly a third of loyal ChatGPT users would be turned off by the anti-cheating technology.
Concerns have been raised that the tool could disproportionately affect non-native English speakers. OpenAI’s spokeswoman emphasised the need for a deliberate approach due to the complexities involved. Employees supporting the tool argue that its benefits outweigh the risks, as AI-generated essays can be completed in seconds, posing a significant issue for educators.
The watermarking method would subtly alter token selection in AI-generated text, creating a detectable pattern invisible to human readers. That method is reported to be 99.9% effective, but there are concerns it could be bypassed through translation or text modifications. OpenAI is still determining how to provide access to the detector while preventing misuse.
Despite the effectiveness of watermarking, internal discussions at OpenAI have been ongoing since before ChatGPT’s launch in 2022. A 2023 survey showed global support for AI detection tools, but many ChatGPT users feared false accusations of AI use. OpenAI is exploring alternative approaches to address these concerns while maintaining AI transparency and credibility.
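OpenAI has not disclosed how its watermarking scheme actually works. Purely as an illustration of the idea described above, the sketch below implements a “green-list” watermark of the kind proposed in the academic literature (for example, Kirchenbauer et al.): token selection is nudged toward a pseudorandom subset of the vocabulary seeded by the previous token, and a detector flags text whose tokens fall in that subset far more often than chance. The toy vocabulary and the GREEN_FRACTION and BIAS parameters are illustrative assumptions, not OpenAI values.

```python
# Minimal sketch of a "green-list" text watermark (illustrative only,
# not OpenAI's undisclosed method): token choice is nudged toward a
# pseudorandom subset seeded by the preceding token, and a detector
# counts how often generated tokens land in that subset.
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary (assumption)
GREEN_FRACTION = 0.5                      # share of vocabulary marked "green" per step
BIAS = 4.0                                # logit boost added to green tokens

def green_list(prev_token: str) -> set[str]:
    """Pseudorandom subset of the vocabulary, derived from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))

def sample_token(prev_token: str, logits: dict[str, float]) -> str:
    """Pick the highest-scoring token after boosting green-list logits."""
    greens = green_list(prev_token)
    adjusted = {t: s + (BIAS if t in greens else 0.0) for t, s in logits.items()}
    return max(adjusted, key=adjusted.get)

def detect(tokens: list[str]) -> float:
    """Fraction of tokens drawn from their step's green list.
    Values well above GREEN_FRACTION suggest watermarked text."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

if __name__ == "__main__":
    rng = random.Random(0)
    text = ["tok0"]
    for _ in range(50):
        # stand-in for real model logits (random scores over 20 candidate tokens)
        logits = {t: rng.random() for t in rng.sample(VOCAB, 20)}
        text.append(sample_token(text[-1], logits))
    print(f"green-list hit rate: {detect(text):.2f}")  # ~1.0 for watermarked text
```

Because the green list is regenerated from the text itself, no key needs to be stored with the output, which is also why paraphrasing or translating the text, as noted above, can wash the pattern away.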
OpenAI, previously a close partner of Microsoft, is now officially recognised as a competitor. Microsoft’s recent SEC filing marks the first time the company has publicly acknowledged this shift. OpenAI is now listed alongside tech giants like Google and Amazon as a competitor in both AI and search technologies.
The relationship between the two companies has been under scrutiny, with antitrust concerns raised by the FTC. Microsoft’s decision to relinquish its board observer seat at OpenAI follows a series of significant events, including the brief dismissal of OpenAI’s CEO Sam Altman. The filing may reflect a strategic move to alter public perception amid these investigations.
Silicon Valley has a history of companies navigating complex relationships, balancing roles as both partners and competitors. The dynamic between Yahoo and Google in the early 2000s serves as a notable example. Microsoft and OpenAI might be experiencing a similar evolution, with both entities maintaining competitive and cooperative elements.
Meanwhile, Microsoft continues to expand its own AI initiatives. The hiring of Inflection AI co-founders to lead a new AI division and the development of Microsoft Copilot highlight the company’s broader strategy. The diversification suggests a strategic approach to AI that goes beyond its ties with OpenAI.
OpenAI has assured US lawmakers it is committed to safely deploying its AI tools. The ChatGPT maker decided to address US officials after concerns were raised by five senators, including Senator Brian Schatz of Hawaii, regarding the company’s safety practices. In response, OpenAI’s Chief Strategy Officer, Jason Kwon, emphasised the company’s mission to ensure AI benefits all of humanity and highlighted the rigorous safety protocols it implements at every stage of the process.
a few quick updates about safety at openai:
as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.
our team has been working with the US AI Safety Institute on an agreement where we would provide…
OpenAI pledged to allocate 20% of its computing resources to safety-related research over multiple years. The company also stated that it would no longer enforce non-disparagement agreements for current and former employees, addressing concerns about previously restrictive policies. On social media, OpenAI’s CEO, Sam Altman, shared that the company is collaborating with the US AI Safety Institute to provide early access to its next foundation model to advance AI evaluation science.
Kwon mentioned the recent establishment of a safety and security committee, which is currently reviewing OpenAI’s processes and policies. The review is part of a broader effort to address the controversies OpenAI has faced regarding its commitment to safety and the ability of employees to voice their concerns.
Recent resignations from key members of OpenAI’s safety teams, including co-founders Ilya Sutskever and Jan Leike, have highlighted internal concerns. Leike, in particular, has publicly criticised the company for prioritising product development over safety, underscoring the ongoing debate within the organisation about its approach to balancing innovation with security.