OpenAI has announced that the ChatGPT app is now available to all macOS users. This update, shared via OpenAI’s official X account, extends access beyond the initial rollout to Plus subscribers.
After downloading the app, you can summon it by pressing Option + Space, much like Apple’s Command + Space shortcut for Spotlight Search. According to OpenAI, the app is ‘designed to integrate seamlessly’ with the Mac experience.
First introduced in May, the app’s announcement was somewhat overshadowed by the release of the chatbot’s newest model, GPT-4o. At the time, access was reserved exclusively for users paying for the ChatGPT Plus subscription, but now any user running macOS 14.0 Sonoma or later can use the chatbot for various tasks. Making the app more accessible and integrated is in line with Apple’s vision for its partnership with OpenAI.
The release of the app is an early test of Apple’s strategy to incorporate external AI tools into its devices. A ChatGPT app already exists for the iPhone, but at WWDC it was revealed that OpenAI’s technology would also be integrated into iPhones and iPads. Soon, users will be able to use ChatGPT through Siri and other AI-powered tools in Apple’s upcoming operating systems.
OpenAI has announced a delay in launching new voice and emotion-reading features for its ChatGPT chatbot, citing the need for more safety testing. Originally set to be available to some paying subscribers in late June, these features will be rolled out in the fall.
We're sharing an update on the advanced Voice Mode we demoed during our Spring Update, which we remain very excited about:
We had planned to start rolling this out in alpha to a small group of ChatGPT Plus users in late June, but need one more month to reach our bar to launch.…
The postponement follows a demonstration last month that garnered user excitement and sparked controversy, including a potential lawsuit from actress Scarlett Johansson, who claimed her voice was mimicked for an AI persona.
OpenAI’s demo showcased the chatbot’s ability to speak in synthetic voices and respond to users’ tones and expressions, with one voice resembling Johansson’s role in the movie ‘Her.’ However, CEO Sam Altman denied using Johansson’s voice, clarifying that a different actor was used for training. The company aims to ensure the new features meet high safety and reliability standards before release.
The delay highlights ongoing challenges in the AI industry. Companies like Google and Microsoft have faced similar setbacks, dealing with errors and controversial outputs from their AI tools.
OpenAI emphasised the complexity of designing chatbots that interpret and mimic emotions, which can introduce new risks and potential for misuse. Additionally, competition in the AI industry is intensifying as companies race to meet growing customer demand. Even so, the company appears committed to releasing these advanced features thoughtfully and safely.
Chinese AI companies are swiftly responding to reports that OpenAI intends to restrict access to its technology in certain regions, including China. OpenAI, the creator of ChatGPT, is reportedly planning to block access to its API for entities in China and other countries. While ChatGPT is not directly available in mainland China, many Chinese startups have used OpenAI’s API platform to develop their applications. Users in China have received emails warning about restrictions, with measures set to take effect from 9 July.
In light of these developments, Chinese tech giants like Baidu and Alibaba Cloud are stepping in to attract users affected by OpenAI’s restrictions. Baidu announced an ‘inclusive Program,’ offering free migration to its Ernie platform for new users and additional Ernie 3.5 flagship model tokens to match their OpenAI usage. Similarly, Alibaba Cloud provides free tokens and migration services for OpenAI API users through its AI platform, offering competitive pricing compared to GPT-4.
Zhipu AI, another prominent player in China’s AI sector, has also announced a ‘Special Migration Program’ for OpenAI API users. The company emphasises its GLM model as a benchmark against OpenAI’s ecosystem, highlighting its self-developed technology for security and controllability. Over the past year, numerous Chinese companies have launched chatbots powered by their proprietary AI models, indicating a growing trend towards domestic AI development and innovation.
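The migration programmes above trade on the fact that these providers expose OpenAI-compatible endpoints: switching typically amounts to changing the base URL, the API key, and the model name, while the request payload keeps the same shape. The following sketch illustrates that idea using only the Python standard library; the non-OpenAI base URL and model names are placeholders, not real endpoints.

```python
import json
import urllib.request


def build_chat_request(base_url: str, api_key: str, model: str,
                       messages: list) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completions request.

    Migrating between OpenAI-compatible providers typically means changing
    only `base_url`, `api_key`, and `model` -- the payload shape is shared.
    """
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


messages = [{"role": "user", "content": "Hello"}]

# Same request shape, different endpoint: a hypothetical migration.
openai_req = build_chat_request("https://api.openai.com/v1", "sk-...",
                                "gpt-4o", messages)
other_req = build_chat_request("https://example-provider.example/v1", "key-...",
                               "example-model", messages)
```

Because the providers mentioned above offer token grants and migration tooling rather than drop-in identical APIs, real migrations may also require adjusting model-specific parameters.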
Ilya Sutskever, co-founder and former chief scientist at OpenAI, announced on Wednesday the launch of a new AI company named Safe Superintelligence. The company aims to create a secure AI environment amidst the competitive generative AI industry. Based in Palo Alto and Tel Aviv, Safe Superintelligence aims to prioritise safety and security over short-term commercial pressures.
Sutskever made the announcement on social media, emphasising the company’s focused approach without the distractions of traditional management overhead or product cycles. Joining him as co-founders are Daniel Levy, a former OpenAI researcher, and Daniel Gross, co-founder of Cue and former AI lead at Apple.
Sutskever’s departure from Microsoft-backed OpenAI in May followed his involvement in the dramatic firing and rehiring of CEO Sam Altman in November of the previous year. His new venture underscores a commitment to advancing AI technology in a manner that ensures safety and long-term progress.
OpenAI has announced the appointment of retired US Army General Paul M. Nakasone, former head of the National Security Agency (NSA), to its board of directors. Nakasone, who led the NSA from 2018 until earlier this year, will join OpenAI’s Safety and Security Committee. This committee, prioritised by CEO Sam Altman, focuses on enhancing the company’s understanding of how AI can be leveraged to improve cybersecurity by swiftly identifying and countering threats.
The addition of Nakasone follows notable departures from OpenAI related to safety concerns, including co-founder Ilya Sutskever and Jan Leike. Sutskever was involved in the controversial firing and reinstatement of CEO Sam Altman, while Leike has publicly criticised the company’s current focus on product development over safety measures.
OpenAI board chair Bret Taylor emphasised the importance of securely developing and deploying AI to realise its potential benefits for humanity. He highlighted Nakasone’s extensive experience in cybersecurity as a valuable asset in guiding the organisation toward this goal.
The current OpenAI board comprises Nakasone, Altman, Adam D’Angelo, Larry Summers, Bret Taylor, Dr Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo, with Microsoft’s Dee Templeton holding a non-voting observer position.
Brazil’s government has enlisted OpenAI’s services to streamline the assessment of thousands of lawsuits using AI, aiming to mitigate costly court losses that have burdened the federal budget. Through Microsoft’s Azure cloud-computing platform, OpenAI’s AI technology, including ChatGPT, will identify lawsuits requiring prompt government action and analyse trends and potential focus areas for the solicitor general’s office (AGU).
The AGU revealed that Microsoft would facilitate the AI services from OpenAI, though the exact cost of Brazil’s procurement remains undisclosed. The initiative responds to the escalating financial strain caused by court-ordered debt payments, which are anticipated to reach 70.7 billion reais ($13.2 billion) next year, excluding smaller claims. That figure, up from 37.3 billion reais in 2015 and equivalent to about 1% of GDP, exceeds government spending on unemployment insurance and wage bonuses for low-income earners by 15%.
While the AGU has not clarified the reasons behind Brazil’s mounting court expenses, it assures that the AI project will not supplant human efforts but enhance efficiency and precision, all under human supervision. This move aligns with broader governmental efforts, including releasing 25 million reais in supplementary credits for AGU in March to implement strategic IT projects and bolster operational capacities.
Two partnerships were unveiled at Apple’s yearly Worldwide Developer Conference on Monday: one on stage and one in the fine print. The partnership with OpenAI to bring GPT-4o into Siri as part of Apple Intelligence was openly publicised; Apple’s use of Google chips to build its AI tools was not.
At first glance, the two companies would seem to be at odds: Apple is set to compete with Google’s Gemini through its own AI systems, while the OpenAI partnership could mean reduced access to customer data through Siri.
However, a technical document published by Apple after the event reveals that Apple used its own framework software to build its AI models. That software relied on various pieces of Apple’s own hardware, but also on tensor processing units (TPUs), which are available only through Google’s cloud. Google is among several companies challenging Nvidia’s recent dominance in AI-capable chips.
Apple did not immediately reply to a Reuters request for comment, and it has not detailed how much it depends on third-party chips for the development of its new AI system.
Billionaire entrepreneur Elon Musk has moved to dismiss his lawsuit against OpenAI, the AI company he co-founded, and its CEO Sam Altman. The lawsuit, filed in February, accused OpenAI of straying from its original mission of developing AI for the benefit of humanity and operating as a non-profit. Musk’s attorneys filed the dismissal request in the California state court without specifying a reason. The case’s dismissal comes just before a scheduled hearing, during which a judge was set to consider OpenAI’s motion to dismiss the lawsuit.
Musk’s lawsuit alleged that OpenAI had abandoned its founding principles when it released its advanced language model, GPT-4, focusing instead on generating profit. He sought a court order to make OpenAI’s research and technology publicly accessible and to stop the company from using its assets for financial gains benefitting entities like Microsoft. In response, OpenAI argued that the lawsuit was a baseless attempt by Musk to further his own interests in the AI sector.
Notably, Musk dismissed the case without prejudice, leaving the door open for potential future legal action. The legal battle highlighted Musk’s ongoing conflict with OpenAI, which he helped establish in 2015 but has since criticised. Meanwhile, Musk has launched his own AI venture, xAI, securing significant funding and marking a new chapter in his involvement in the AI industry.
Apple is integrating OpenAI’s ChatGPT into Siri, as announced at its WWDC 2024 keynote. The partnership will allow iOS 18 and macOS Sequoia users to access ChatGPT for free, with privacy measures ensuring that queries aren’t logged. Additionally, paid ChatGPT subscribers can link their accounts to access premium features on Apple devices.
Apple had been negotiating with Google and OpenAI to enhance its AI capabilities, ultimately partnering with OpenAI. The enhanced feature will utilise OpenAI’s GPT-4o model, which will power ChatGPT in Apple’s upcoming operating systems.
OpenAI CEO Sam Altman expressed enthusiasm for the partnership, highlighting shared commitments to safety and innovation. However, Elon Musk, the billionaire CEO of Tesla, SpaceX, and the social media company X, announced a ban on Apple devices from his companies if Apple integrates OpenAI technology at the operating system level. Musk labelled this move an ‘unacceptable security violation’ and stated that visitors would be required to leave their Apple devices in a Faraday cage at the entrance to his facilities.
Why does it matter?
The partnership aims to significantly enhance Siri’s capabilities with advanced AI features. The chatbot will be seamlessly integrated into Apple’s systemwide writing tools, enriching the user experience across Apple devices.
Central to this integration is a robust consent mechanism that requires users’ permission before sending any questions, documents, or photos to ChatGPT. Siri will present the responses directly, emphasising Apple’s commitment to user privacy and transparent data handling practices.
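The flow described above can be sketched in a few lines; this is a rough illustration of the consent gate, not Apple’s actual implementation (all function names here are hypothetical, as Apple has not published an API for this behaviour).

```python
def handle_siri_query(query, ask_permission, send_to_chatgpt):
    """Forward a query to ChatGPT only after explicit user consent.

    `ask_permission` and `send_to_chatgpt` are hypothetical callables
    standing in for the system consent prompt and the ChatGPT call.
    """
    if not ask_permission(f"Send this to ChatGPT? {query!r}"):
        return None  # nothing is sent without the user's approval
    return send_to_chatgpt(query)
```

The key design point is that the consent prompt sits in front of every outbound request, so a declined prompt means no data leaves the device.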
Elon Musk, the billionaire CEO of Tesla, SpaceX, and the social media company X, announced on Monday that he would ban Apple devices from his companies if Apple integrates OpenAI technology at the operating system level. Musk called this move an ‘unacceptable security violation’ and declared that visitors would have to leave their Apple devices in a Faraday cage at the entrance to his facilities.
The statement followed Apple’s announcement of new AI features across its apps and operating platforms, including a partnership with OpenAI to incorporate ChatGPT technology into its devices. Apple emphasised that these AI features are designed with privacy at their core, using both on-device processing and cloud computing to ensure data security. Musk, however, expressed scepticism, arguing that Apple’s reliance on OpenAI undermines its ability to protect user privacy and security effectively.
If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.
Industry experts, such as Ben Bajarin, CEO of Creative Strategies, believe that Musk’s stance is unlikely to gain widespread support. Bajarin noted that Apple aims to reassure users that its private cloud services are as secure as on-device data storage. He explained that Apple anonymises and firewalls user data, ensuring that Apple itself does not access it.
Musk’s criticism of OpenAI is not new; he co-founded the organisation in 2015 but sued it earlier this year, alleging it strayed from its mission to develop AI for the benefit of humanity. Musk has since launched his own AI startup, xAI, valued at $24 billion after a recent funding round, to compete directly with OpenAI and develop alternatives to its popular ChatGPT.