Two partnerships were unveiled at Apple’s annual Worldwide Developers Conference on Monday: one on stage and one in the fine print. A partnership with OpenAI to bring GPT-4o to Siri as part of Apple Intelligence was openly publicised, but the use of Google chips to build Apple’s AI tools was not.
At first glance, the two companies would seem to be at odds: Apple’s AI systems are set to compete with Google’s Gemini, while the OpenAI partnership could mean reduced access to customer data through Siri.
However, a technical document published by Apple after the event reveals that Apple built its AI models using its own framework software. That software relied not only on Apple’s own hardware but also on tensor processing units (TPUs), chips that are available exclusively through Google’s cloud. Google is one of several companies challenging Nvidia, whose AI-capable chips have dominated the market in recent years.
Apple did not immediately respond to a Reuters request for comment. It has not detailed how much it relies on third-party chips to develop its new AI system.
Billionaire entrepreneur Elon Musk has moved to dismiss his lawsuit against OpenAI, the AI company he co-founded, and its CEO Sam Altman. The lawsuit, filed in February, accused OpenAI of straying from its original mission of developing AI for the benefit of humanity and operating as a non-profit. Musk’s attorneys filed the dismissal request in California state court without specifying a reason. The dismissal comes just before a scheduled hearing at which a judge was set to consider OpenAI’s motion to dismiss the lawsuit.
Musk’s lawsuit alleged that OpenAI had abandoned its founding principles when it released its advanced language model, GPT-4, focusing instead on generating profit. He sought a court order to make OpenAI’s research and technology publicly accessible and to stop the company from using its assets for financial gains benefitting entities like Microsoft. In response, OpenAI argued that the lawsuit was a baseless attempt by Musk to further his own interests in the AI sector.
Musk dismissed the case without prejudice, however, leaving the door open for potential future legal action. The legal battle highlighted Musk’s ongoing conflict with OpenAI, which he helped establish in 2015 but has since criticised. Meanwhile, Musk has launched his own AI venture, xAI, securing significant funding and marking a new chapter in his involvement in the AI industry.
Apple is integrating OpenAI’s ChatGPT into Siri, as announced at its WWDC 2024 keynote. The partnership will allow iOS 18 and macOS Sequoia users to access ChatGPT for free, with privacy measures ensuring that queries aren’t logged. Additionally, paid ChatGPT subscribers can link their accounts to access premium features on Apple devices.
Apple had been negotiating with Google and OpenAI to enhance its AI capabilities, ultimately partnering with OpenAI. The enhanced feature will utilise OpenAI’s GPT-4o model, which will power ChatGPT in Apple’s upcoming operating systems.
OpenAI CEO Sam Altman expressed enthusiasm for the partnership, highlighting shared commitments to safety and innovation. However, Elon Musk, the billionaire CEO of Tesla, SpaceX, and the social media company X, said he would ban Apple devices from his companies if Apple integrates OpenAI technology at the operating system level. Musk labelled the move an ‘unacceptable security violation’ and stated that visitors would be required to leave their Apple devices in a Faraday cage at the entrance to his facilities.
Why does it matter?
The new partnership aims to significantly enhance Siri’s capabilities with advanced AI features. The chatbot will be seamlessly integrated into Apple’s system-wide writing tools, enriching the user experience across Apple devices.
Central to this integration is a robust consent mechanism that requires users’ permission before sending any questions, documents, or photos to ChatGPT. Siri will present the responses directly, emphasising Apple’s commitment to user privacy and transparent data handling practices.
Elon Musk, the billionaire CEO of Tesla, SpaceX, and the social media company X, announced on Monday that he would ban Apple devices from his companies if Apple integrates OpenAI technology at the operating system level. Musk called the move an ‘unacceptable security violation’ and declared that visitors would have to leave their Apple devices in a Faraday cage at the entrance to his facilities.
The statement followed Apple’s announcement of new AI features across its apps and operating platforms, including a partnership with OpenAI to incorporate ChatGPT technology into its devices. Apple emphasised that these AI features are designed with privacy at their core, using both on-device processing and cloud computing to ensure data security. Musk, however, expressed scepticism, arguing that Apple’s reliance on OpenAI undermines its ability to protect user privacy and security effectively.
‘If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation,’ Musk wrote on X.
Industry experts, such as Ben Bajarin, CEO of Creative Strategies, believe that Musk’s stance is unlikely to gain widespread support. Bajarin noted that Apple aims to reassure users that its private cloud services are as secure as on-device data storage. He explained that Apple anonymises and firewalls user data, ensuring that Apple itself does not access it.
Musk’s criticism of OpenAI is not new; he co-founded the organisation in 2015 but sued it earlier this year, alleging it strayed from its mission to develop AI for the benefit of humanity. Musk has since launched his own AI startup, xAI, valued at $24 billion after a recent funding round, to compete directly with OpenAI and develop alternatives to its popular ChatGPT.
The US Justice Department and the Federal Trade Commission (FTC) have agreed to proceed with antitrust investigations into Microsoft, OpenAI, and Nvidia’s dominance in the AI industry. Under the agreement, the Justice Department will focus on Nvidia’s potential antitrust violations, while the FTC will examine Microsoft and OpenAI’s conduct. Microsoft has a significant stake in OpenAI, having invested $13 billion in its for-profit subsidiary.
The regulators’ deal, expected to be finalised soon, reflects increased scrutiny of the AI sector. The FTC is also investigating Microsoft’s $650 million deal with AI startup Inflection AI. This action follows a January order requiring several tech giants, including Microsoft and OpenAI, to provide information on AI investments and partnerships.
Why does it matter?
Last year, the FTC began investigating OpenAI for potential consumer protection law violations. US antitrust chief Jonathan Kanter recently expressed concerns about the AI industry’s reliance on vast data and computing power, which could reinforce the dominance of major firms. Microsoft, OpenAI, Nvidia, the Justice Department, and the FTC have not commented on the ongoing investigations.
On Tuesday, a group of current and former OpenAI employees issued an open letter warning that leading AI companies lack the transparency and accountability necessary to address potential risks. The letter highlights AI safety concerns, such as deepening inequalities, misinformation, and loss of control over autonomous systems, which could potentially lead to catastrophic outcomes.
The 16 signatories, including Google DeepMind staff, emphasised that AI firms have financial incentives to avoid effective oversight and criticised their weak obligations to share critical information. They called for stronger whistleblower protections, noting that confidentiality agreements often prevent employees from raising concerns. Some current OpenAI employees signed anonymously, fearing retaliation. AI pioneers like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell also endorsed the letter, criticising inadequate preparations for AI’s dangers.
The letter also calls on AI companies to commit to key principles in order to maintain a certain level of accountability and transparency. Those principles are: not to enter into or enforce any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concerns, nor to retaliate for such criticism; to facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organisation with relevant expertise; and to support a culture of open criticism, allowing current and former employees to raise risk-related concerns about the company’s technologies to the public, the board, regulators, or an appropriate independent organisation, so long as trade secrets and other intellectual property interests are appropriately protected.
Why does it matter?
In response, OpenAI defended its record, citing its commitment to safety, rigorous debate, and engagement with various stakeholders. The company highlighted its anonymous integrity hotline and newly formed Safety and Security Committee as channels for employee concerns. The critique of OpenAI comes amid growing scrutiny of CEO Sam Altman’s leadership. The concerns raised by OpenAI insiders highlight the critical need for transparency and accountability in AI development. Ensuring that AI companies are effectively overseen and held accountable, and that insiders can speak out about unethical or dangerous practices without fear of retaliation, represents a pivotal safeguard for informing the public and decision makers about AI’s potential capabilities and risks.
OpenAI, led by Sam Altman, announced it had disrupted five covert influence operations that misused its AI models for deceptive activities online. Over the past three months, actors from Russia, China, Iran, and Israel used AI to generate fake comments, articles, and social media profiles. These operations targeted issues such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, and politics in Europe and the US, aiming to manipulate public opinion and influence political outcomes.
Despite these efforts, OpenAI stated that the deceptive campaigns did not see increased audience engagement. The company emphasised that these operations included both AI-generated and manually created content. OpenAI’s announcement highlights ongoing concerns about the use of AI technology to spread misinformation.
OpenAI has secured licensing agreements with The Atlantic and Vox Media, expanding its partnerships with publishers to enhance its AI products. These deals allow OpenAI to display news from these outlets in products like ChatGPT and use their content to train its AI models. Although financial terms were not disclosed, this move follows similar agreements with major publishers like News Corp., Dotdash Meredith, and The Financial Times.
Executives from The Atlantic and Vox Media emphasised that these partnerships will help readers discover their content more easily. Nicholas Thompson, CEO of The Atlantic, highlighted the importance of AI in future web navigation and expressed enthusiasm for making The Atlantic’s stories more accessible through OpenAI’s platforms.
Additionally, these agreements will provide the publishers access to OpenAI’s technology, aiding them in developing new AI-powered products. For instance, The Atlantic is working on Atlantic Labs, an initiative focused on creating AI-driven solutions using technology from OpenAI and other companies.
The European Centre for Digital Rights, or Noyb, has filed a complaint against OpenAI, claiming that ChatGPT fails to provide accurate information about individuals. According to Noyb, the General Data Protection Regulation (GDPR) mandates that information about individuals be accurate and that they have full access to this information, including its sources. However, OpenAI admits it cannot correct inaccurate information on ChatGPT, citing that factual accuracy in large language models remains an active research area.
Noyb highlights the potential dangers of ChatGPT’s inaccuracies, noting that while such errors may be tolerable for general uses like student homework, they are unacceptable when they involve personal information. The organisation cites a case where ChatGPT provided an incorrect date of birth for a public figure, and OpenAI refused to correct or delete the inaccurate data. Noyb argues this refusal breaches the GDPR, which grants individuals the right to rectify incorrect data.
Furthermore, Noyb points out that EU law requires all personal data to be accurate, and that ChatGPT’s tendency to produce false information, known as ‘hallucinations’, constitutes another violation of the GDPR. Data protection lawyer Maartje de Graaf emphasises that the inability to ensure factual accuracy can have serious consequences for individuals, making it clear that current chatbot technologies like ChatGPT are not compliant with EU law on data processing.
Noyb has requested that the Austrian data protection authority (DSB) investigate OpenAI’s data processing practices and enforce measures to ensure compliance with the GDPR. The organisation also seeks a fine against OpenAI to promote future adherence to data protection regulations.
OpenAI has established a Safety and Security Committee to oversee the training of its next AI model, the company announced on Tuesday. CEO Sam Altman will lead the committee alongside directors Bret Taylor, Adam D’Angelo, and Nicole Seligman. The committee will make safety and security recommendations to OpenAI’s board.
The committee’s initial task is to review and enhance OpenAI’s existing safety practices over the next 90 days, after which it will present its findings to the board. Following the board’s review, OpenAI plans to share the adopted recommendations publicly. This move follows the disbanding of OpenAI’s Superalignment team earlier this month, which led to the departure of key figures like former Chief Scientist Ilya Sutskever and Jan Leike.
Other members of the new committee include technical and policy experts Aleksander Madry, Lilian Weng, and head of alignment sciences John Schulman. Newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight will also be part of the committee, contributing to the safety and security oversight of OpenAI’s projects and operations.