OpenAI has announced the appointment of retired US Army General Paul M. Nakasone, former head of the National Security Agency (NSA), to its board of directors. Nakasone, who led the NSA from 2018 until earlier this year, will join OpenAI’s Safety and Security Committee. This committee, prioritised by CEO Sam Altman, focuses on enhancing the company’s understanding of how AI can be leveraged to improve cybersecurity by swiftly identifying and countering threats.
The addition of Nakasone follows notable safety-related departures from OpenAI, including those of co-founder Ilya Sutskever and Jan Leike. Sutskever was involved in the controversial firing and reinstatement of CEO Sam Altman, while Leike has publicly criticised the company’s current focus on product development over safety measures.
OpenAI board chair Bret Taylor emphasised the importance of securely developing and deploying AI to realise its potential benefits for humanity. He highlighted Nakasone’s extensive experience in cybersecurity as a valuable asset in guiding the organisation toward this goal.
The current OpenAI board comprises Nakasone, Altman, Adam D’Angelo, Larry Summers, Bret Taylor, Dr Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo, with Microsoft’s Dee Templeton holding a non-voting observer position.
Brazil’s government has enlisted OpenAI’s services to streamline the assessment of thousands of lawsuits using AI, aiming to mitigate costly court losses that have burdened the federal budget. Through Microsoft’s Azure cloud-computing platform, OpenAI’s AI technology, including ChatGPT, will identify lawsuits requiring prompt government action and analyse trends and potential focus areas for the solicitor general’s office (AGU).
The AGU revealed that Microsoft would facilitate the AI services from OpenAI, though the exact cost of Brazil’s procurement remains undisclosed. The initiative responds to the escalating financial strain caused by court-ordered debt payments, which are anticipated to reach 70.7 billion reais ($13.2 billion) next year, excluding smaller claims. That is up from 37.3 billion reais in 2015; the projected figure, equivalent to about 1% of GDP, is 15% more than what the government spends on unemployment insurance and wage bonuses for low-income earners.
While the AGU has not clarified the reasons behind Brazil’s mounting court expenses, it assures that the AI project will not supplant human efforts but enhance efficiency and precision, all under human supervision. This move aligns with broader governmental efforts, including releasing 25 million reais in supplementary credits for AGU in March to implement strategic IT projects and bolster operational capacities.
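As a rough sketch of what such an AI-assisted triage step could look like, the hypothetical example below uses OpenAI’s standard Python SDK to label a case summary by urgency. The model choice, prompt, and labels are assumptions for illustration, not details of AGU’s actual system:

```python
# Hypothetical sketch of an LLM-based lawsuit triage step; the labels,
# prompt, and model choice are assumptions, not AGU's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_lawsuit(case_summary: str) -> str:
    """Classify a case summary by urgency so pressing filings surface first."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Classify the lawsuit as URGENT, ROUTINE, or LOW "
                        "priority for the solicitor general's office. "
                        "Reply with the label only."},
            {"role": "user", "content": case_summary},
        ],
    )
    return response.choices[0].message.content.strip()

print(triage_lawsuit("Court-ordered debt payment due within 30 days."))
```

Consistent with the AGU’s assurances, a step like this would only rank cases for human review rather than deciding them.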
Two partnerships were unveiled at Apple’s annual Worldwide Developers Conference on Monday: one on stage and one in the fine print. A partnership with OpenAI to use GPT-4o within Siri and Apple Intelligence was openly publicised, but Apple’s use of Google chips to build its AI tools was not.
At first glance, the two companies would seem to be at odds: Apple is set to compete with Google’s Gemini through its own AI system, while the OpenAI partnership could mean reduced access to customer data through Siri.
However, a technical document published by Apple after the event reveals that, to build its AI models, Apple used its own framework software. That software relied on Apple’s own hardware as well as tensor processing units (TPUs), which are available only through Google’s cloud service. Google is one of several companies whose AI chips compete with Nvidia’s, which currently dominate the market.
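Apple’s framework is reportedly AXLearn, which is built on the JAX library; JAX compiles the same model code for whichever XLA backend is available, including Google Cloud TPUs. The sketch below is a minimal, hypothetical illustration of that portability (the toy model and all names are invented; this is not Apple’s code):

```python
# Minimal, hypothetical sketch (not Apple's code) of how a JAX program
# compiles for whichever XLA backend is present, such as a Cloud TPU.
import jax
import jax.numpy as jnp

# On a Google Cloud TPU VM this prints "tpu"; elsewhere it falls back
# to "gpu" or "cpu" with no change to the model code.
print("Backend:", jax.default_backend())

@jax.jit  # traced once, then compiled by XLA for the detected backend
def train_step(w, x, y):
    """One gradient-descent step on a toy linear regression model."""
    loss = lambda w: jnp.mean((x @ w - y) ** 2)
    return w - 0.01 * jax.grad(loss)(w)

w = jnp.zeros(4)
x = jnp.ones((8, 4))
y = jnp.ones(8)
w = train_step(w, x, y)
print("Updated weights:", w)
```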
Apple did not immediately reply to a Reuters request for comment, and it has not detailed how much it depends on third-party chips for the development of its new AI system.
Billionaire entrepreneur Elon Musk has moved to dismiss his lawsuit against OpenAI, the AI company he co-founded, and its CEO Sam Altman. The lawsuit, filed in February, accused OpenAI of straying from its original mission of developing AI for the benefit of humanity and operating as a non-profit. Musk’s attorneys filed the dismissal request in the California state court without specifying a reason. The case’s dismissal comes just before a scheduled hearing, during which a judge was set to consider OpenAI’s motion to dismiss the lawsuit.
Musk’s lawsuit alleged that OpenAI had abandoned its founding principles when it released its advanced language model, GPT-4, focusing instead on generating profit. He sought a court order to make OpenAI’s research and technology publicly accessible and to stop the company from using its assets for financial gains benefitting entities like Microsoft. In response, OpenAI argued that the lawsuit was a baseless attempt by Musk to further his own interests in the AI sector.
Notably, Musk dismissed the case without prejudice, leaving the door open for potential future legal action. The legal battle highlighted Musk’s ongoing conflict with OpenAI, which he helped establish in 2015 but has since criticised. Meanwhile, Musk has launched his own AI venture, xAI, securing significant funding and marking a new chapter in his involvement in the AI industry.
Apple is integrating OpenAI’s ChatGPT into Siri, as announced at its WWDC 2024 keynote. The partnership will allow iOS 18 and macOS Sequoia users to access ChatGPT for free, with privacy measures ensuring that queries aren’t logged. Additionally, paid ChatGPT subscribers can link their accounts to access premium features on Apple devices.
Apple had been negotiating with Google and OpenAI to enhance its AI capabilities, ultimately partnering with OpenAI. The enhanced feature will utilise OpenAI’s GPT-4o model, which will power ChatGPT in Apple’s upcoming operating systems.
OpenAI CEO Sam Altman expressed enthusiasm for the partnership, highlighting shared commitments to safety and innovation. However, Elon Musk, the billionaire CEO of Tesla, SpaceX, and the social media company X, announced that he would ban Apple devices from his companies if Apple integrates OpenAI technology at the operating system level. Musk labelled the move an ‘unacceptable security violation’ and said that visitors would be required to leave their Apple devices in a Faraday cage at the entrance to his facilities.
Why does it matter?
The partnership aims to significantly enhance Siri’s capabilities with advanced AI features. ChatGPT will also be integrated into Apple’s systemwide writing tools, enriching the user experience across Apple devices.
Central to this integration is a robust consent mechanism that requires users’ permission before sending any questions, documents, or photos to ChatGPT. Siri will present the responses directly, emphasising Apple’s commitment to user privacy and transparent data handling practices.
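As a rough illustration of that flow, the hypothetical Python sketch below gates every outbound request on explicit user confirmation, with Siri presenting the reply itself. All names and the call_chatgpt placeholder are invented; Apple has not published implementation details:

```python
# Hypothetical sketch of the consent gate described above; none of this
# reflects Apple's actual implementation.
from dataclasses import dataclass, field

@dataclass
class SiriRequest:
    prompt: str
    attachments: list[str] = field(default_factory=list)  # documents, photos

def user_consents(request: SiriRequest) -> bool:
    """Stand-in for the on-device permission dialog."""
    answer = input(f'Send "{request.prompt}" to ChatGPT? [y/N] ')
    return answer.strip().lower() == "y"

def call_chatgpt(request: SiriRequest) -> str:
    # Placeholder: a real client would call OpenAI's service here.
    return f"(ChatGPT's answer to: {request.prompt})"

def handle_request(request: SiriRequest) -> str:
    # Nothing leaves the device unless the user explicitly agrees.
    if not user_consents(request):
        return "Okay, I won't send that to ChatGPT."
    return call_chatgpt(request)  # Siri surfaces the reply directly
```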
Elon Musk, the billionaire CEO of Tesla, SpaceX, and the social media company X, announced on Monday that he would ban Apple devices from his companies if Apple integrates OpenAI technology at the operating system level. Musk called this move an ‘unacceptable security violation’ and declared that visitors would have to leave their Apple devices in a Faraday cage at the entrance to his facilities.
The statement followed Apple’s announcement of new AI features across its apps and operating platforms, including a partnership with OpenAI to incorporate ChatGPT technology into its devices. Apple emphasised that these AI features are designed with privacy at their core, using both on-device processing and cloud computing to ensure data security. Musk, however, expressed scepticism, arguing that Apple’s reliance on OpenAI undermines its ability to protect user privacy and security effectively.
‘If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation,’ Musk wrote on X.
Industry experts, such as Ben Bajarin, CEO of Creative Strategies, believe that Musk’s stance is unlikely to gain widespread support. Bajarin noted that Apple aims to reassure users that its private cloud services are as secure as on-device data storage. He explained that Apple anonymises and firewalls user data, ensuring that Apple itself does not access it.
Musk’s criticism of OpenAI is not new; he co-founded the organisation in 2015 but sued it earlier this year, alleging it strayed from its mission to develop AI for the benefit of humanity. Musk has since launched his own AI startup, xAI, valued at $24 billion after a recent funding round, to compete directly with OpenAI and develop alternatives to its popular ChatGPT.
The US Justice Department and the Federal Trade Commission (FTC) have agreed to proceed with antitrust investigations into Microsoft, OpenAI, and Nvidia over their dominant positions in the AI industry. Under the agreement, the Justice Department will focus on Nvidia’s potential antitrust violations, while the FTC will examine Microsoft and OpenAI’s conduct. Microsoft has a significant stake in OpenAI, having invested $13 billion in its for-profit subsidiary.
The regulators’ deal, expected to be finalised soon, reflects increased scrutiny of the AI sector. The FTC is also investigating Microsoft’s $650 million deal with AI startup Inflection AI. This action follows a January order requiring several tech giants, including Microsoft and OpenAI, to provide information on AI investments and partnerships.
Why does it matter?
Last year, the FTC began investigating OpenAI for potential consumer protection law violations. US antitrust chief Jonathan Kanter recently expressed concerns about the AI industry’s reliance on vast data and computing power, which could reinforce the dominance of major firms. Microsoft, OpenAI, Nvidia, the Justice Department, and the FTC have not commented on the ongoing investigations.
On Tuesday, a group of current and former OpenAI employees issued an open letter warning that leading AI companies lack the transparency and accountability needed to address potential risks. The letter highlights AI safety concerns, such as deepening inequalities, misinformation, and loss of control over autonomous systems, potentially leading to catastrophic outcomes.
The 16 signatories, including Google DeepMind staff, emphasised that AI firms have financial incentives to avoid effective oversight and criticised their weak obligations to share critical information. They called for stronger whistleblower protections, noting that confidentiality agreements often prevent employees from raising concerns. Some current OpenAI employees signed anonymously, fearing retaliation. AI pioneers like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell also endorsed the letter, criticising inadequate preparations for AI’s dangers.
The letter also calls on AI companies to commit to a set of principles in order to maintain a certain level of accountability and transparency: not to enter into or enforce any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concerns, nor to retaliate for risk-related criticism; to facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns with the company’s board, regulators, and an appropriate independent organisation with relevant expertise; and to support a culture of open criticism, allowing current and former employees to raise risk-related concerns about the company’s technologies to the public, the company’s board, regulators, or an appropriate independent organisation, so long as trade secrets and other intellectual property interests are appropriately protected.
Why does it matter?
In response, OpenAI defended its record, citing its commitment to safety, rigorous debate, and engagement with various stakeholders. The company highlighted its anonymous integrity hotline and newly formed Safety and Security Committee as channels for employee concerns. The critique comes amid growing scrutiny of CEO Sam Altman’s leadership. The concerns raised by OpenAI insiders highlight the critical need for transparency and accountability in AI development: ensuring that AI companies are effectively overseen and held accountable, and that insiders can speak out about unethical or dangerous practices without fear of retaliation, are pivotal safeguards for informing the public and decision-makers about AI’s potential capabilities and risks.
OpenAI, led by Sam Altman, announced it had disrupted five covert influence operations that misused its AI models for deceptive activities online. Over the past three months, actors from Russia, China, Iran, and Israel used AI to generate fake comments, articles, and social media profiles. These operations targeted issues such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, and politics in Europe and the US, aiming to manipulate public opinion and influence political outcomes.
Despite these efforts, OpenAI stated that the deceptive campaigns did not achieve increased audience engagement. The company emphasised that the operations combined AI-generated and manually created content. OpenAI’s announcement highlights ongoing concerns about the use of AI technology to spread misinformation.
OpenAI has secured licensing agreements with The Atlantic and Vox Media, expanding its partnerships with publishers to enhance its AI products. These deals allow OpenAI to display news from these outlets in products like ChatGPT and use their content to train its AI models. Although financial terms were not disclosed, this move follows similar agreements with major publishers like News Corp., Dotdash Meredith, and The Financial Times.
Executives from The Atlantic and Vox Media emphasised that these partnerships will help readers discover their content more easily. Nicholas Thompson, CEO of The Atlantic, highlighted the importance of AI in future web navigation and expressed enthusiasm for making The Atlantic’s stories more accessible through OpenAI’s platforms.
Additionally, these agreements will provide the publishers access to OpenAI’s technology, aiding them in developing new AI-powered products. For instance, The Atlantic is working on Atlantic Labs, an initiative focused on creating AI-driven solutions using technology from OpenAI and other companies.