The US Justice Department and the Federal Trade Commission (FTC) have agreed to proceed with antitrust investigations into the dominance of Microsoft, OpenAI, and Nvidia in the AI industry. Under the agreement, the Justice Department will focus on Nvidia’s potential antitrust violations, while the FTC will examine Microsoft and OpenAI’s conduct. Microsoft has a significant stake in OpenAI, having invested $13 billion in its for-profit subsidiary.
The regulators’ deal, expected to be finalised soon, reflects increased scrutiny of the AI sector. The FTC is also investigating Microsoft’s $650 million deal with AI startup Inflection AI. This action follows a January order requiring several tech giants, including Microsoft and OpenAI, to provide information on AI investments and partnerships.
Why does it matter?
Last year, the FTC began investigating OpenAI for potential consumer protection law violations. US antitrust chief Jonathan Kanter recently expressed concerns about the AI industry’s reliance on vast data and computing power, which could reinforce the dominance of major firms. Microsoft, OpenAI, Nvidia, the Justice Department, and the FTC have not commented on the ongoing investigations.
On Tuesday, a group of current and former OpenAI employees issued an open letter warning that leading AI companies lack the transparency and accountability necessary to address potential risks. The letter highlights AI safety concerns, such as deepening inequalities, misinformation, and loss of control over autonomous systems, potentially leading to catastrophic outcomes.
The 16 signatories, including Google DeepMind staff, emphasised that AI firms have financial incentives to avoid effective oversight and criticised their weak obligations to share critical information. They called for stronger whistleblower protections, noting that confidentiality agreements often prevent employees from raising concerns. Some current OpenAI employees signed anonymously, fearing retaliation. AI pioneers like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell also endorsed the letter, criticising inadequate preparations for AI’s dangers.
The letter also calls for AI companies to commit to a set of principles in order to maintain a certain level of accountability and transparency. Specifically, companies should:
- not enter into or enforce any agreement that prohibits ‘disparagement’ or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism;
- facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organisation with relevant expertise;
- support a culture of open criticism and allow current and former employees to raise risk-related concerns about the company’s technologies to the public, to the company’s board, to regulators, or to an appropriate independent organisation with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected.
Why does it matter?
In response, OpenAI defended its record, citing its commitment to safety, rigorous debate, and engagement with various stakeholders. The company highlighted its anonymous integrity hotline and newly formed Safety and Security Committee as channels for employee concerns. The critique of OpenAI comes amid growing scrutiny of CEO Sam Altman’s leadership. The concerns raised by OpenAI insiders highlight the critical need for transparency and accountability in AI development. Ensuring that AI companies are effectively overseen and held accountable, and that insiders can speak out about unethical or dangerous practices without fear of retaliation, is a pivotal safeguard for informing the public and decision makers about AI’s potential capabilities and risks.
OpenAI, led by Sam Altman, announced it had disrupted five covert influence operations that misused its AI models for deceptive activities online. Over the past three months, actors from Russia, China, Iran, and Israel used AI to generate fake comments, articles, and social media profiles. These operations targeted issues such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, and politics in Europe and the US, aiming to manipulate public opinion and influence political outcomes.
Despite these efforts, OpenAI stated that the deceptive campaigns did not achieve significant audience engagement. The company emphasised that these operations combined AI-generated and manually created content. OpenAI’s announcement highlights ongoing concerns about the use of AI technology to spread misinformation.
OpenAI has secured licensing agreements with The Atlantic and Vox Media, expanding its partnerships with publishers to enhance its AI products. These deals allow OpenAI to display news from these outlets in products like ChatGPT and use their content to train its AI models. Although financial terms were not disclosed, this move follows similar agreements with major publishers like News Corp., Dotdash Meredith, and The Financial Times.
Executives from The Atlantic and Vox Media emphasised that these partnerships will help readers discover their content more easily. Nicholas Thompson, CEO of The Atlantic, highlighted the importance of AI in future web navigation and expressed enthusiasm for making The Atlantic’s stories more accessible through OpenAI’s platforms.
Additionally, these agreements will provide the publishers access to OpenAI’s technology, aiding them in developing new AI-powered products. For instance, The Atlantic is working on Atlantic Labs, an initiative focused on creating AI-driven solutions using technology from OpenAI and other companies.
The European Centre for Digital Rights, or Noyb, has filed a complaint against OpenAI, claiming that ChatGPT fails to provide accurate information about individuals. According to Noyb, the General Data Protection Regulation (GDPR) mandates that information about individuals be accurate and that they have full access to this information, including its sources. However, OpenAI admits it cannot correct inaccurate information in ChatGPT, noting that factual accuracy in large language models remains an active research area.
Noyb highlights the potential dangers of ChatGPT’s inaccuracies, noting that while such errors may be tolerable for general uses like student homework, they are unacceptable when they involve personal information. The organisation cites a case where ChatGPT provided an incorrect date of birth for a public figure, and OpenAI refused to correct or delete the inaccurate data. Noyb argues this refusal breaches the GDPR, which grants individuals the right to rectify incorrect data.
Furthermore, Noyb points out that EU law requires all personal data to be accurate, and that ChatGPT’s tendency to produce false information, known as ‘hallucinations’, constitutes a further violation of the GDPR. Data protection lawyer Maartje de Graaf emphasised that the inability to ensure factual accuracy can have serious consequences for individuals, making it clear that current chatbot technologies like ChatGPT do not comply with EU law on data processing.
Noyb has requested that the Austrian data protection authority (DSB) investigate OpenAI’s data processing practices and enforce measures to ensure compliance with the GDPR. The organisation also seeks a fine against OpenAI to promote future adherence to data protection regulations.
OpenAI has established a Safety and Security Committee to oversee the training of its next AI model, the company announced on Tuesday. CEO Sam Altman will lead the committee alongside directors Bret Taylor, Adam D’Angelo, and Nicole Seligman. The committee makes safety and security recommendations to OpenAI’s board.
The committee’s initial task is to review and enhance OpenAI’s existing safety practices over the next 90 days, after which it will present its findings to the board. Following the board’s review, OpenAI plans to share the adopted recommendations publicly. The move follows the disbanding of OpenAI’s Superalignment team earlier this month, which came amid the departures of key figures such as former Chief Scientist Ilya Sutskever and Jan Leike.
Other members of the new committee include technical and policy experts Aleksander Madry, Lilian Weng, and head of alignment sciences John Schulman. Newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight will also be part of the committee, contributing to the safety and security oversight of OpenAI’s projects and operations.
OpenAI’s use of Scarlett Johansson’s voice likeness in its ChatGPT chatbot has ignited controversy in Hollywood, with Johansson accusing the company of copying her performance from the movie ‘Her’ without consent. The dispute has intensified concerns among entertainment executives about the implications of AI technology for the creative industry, particularly regarding copyright infringement and the right to publicity.
Despite OpenAI’s claims that the voice in question was not intended to resemble Johansson’s, the incident has strained relations between content creators and tech companies. Some industry insiders view OpenAI’s actions as disrespectful and indicative of hubris, potentially hindering future collaborations between Hollywood and the tech giant.
The conflict with Johansson highlights broader concerns about using copyrighted material in OpenAI’s models and the need to protect performers’ rights. While some technologists see AI as a valuable tool for enhancing filmmaking processes, others worry about its potential misuse and infringement on intellectual property.
Johansson’s case could set a precedent for performers seeking to protect their voice and likeness rights in the age of AI. Legal experts and industry figures advocate for federal legislation to safeguard performers’ rights and address the growing impact of AI-generated content, signalling a broader dialogue about the need for regulatory measures in this evolving landscape.
OpenAI, led by Sam Altman, has entered a significant deal with media giant News Corp, securing access to content from its major publications. The agreement follows a recent content licensing deal with the Financial Times aimed at enhancing the capabilities of OpenAI’s ChatGPT. Such partnerships are essential for training AI models, providing a financial boost to news publishers traditionally excluded from the profits generated by internet companies distributing their content.
The financial specifics of the latest deal remain undisclosed, though the Wall Street Journal, a News Corp entity, reported that it could be worth over $250 million across five years. Under the deal, content from News Corp’s publications, including the Wall Street Journal, MarketWatch, and the Times, will not be immediately available on ChatGPT upon publication. The move is part of OpenAI’s ongoing efforts to secure diverse data sources, following a similar agreement with Reddit.
The announcement has positively impacted News Corp’s market performance, with shares rising by approximately 4%. OpenAI’s continued collaboration with prominent media platforms underscores its commitment to developing sophisticated AI models capable of generating human-like responses and comprehensive text summaries.
Scarlett Johansson has accused OpenAI of creating a voice for its ChatGPT system that sounds ‘eerily similar’ to hers, despite her declining an offer to voice the chatbot herself. Johansson’s statement, released Monday, followed OpenAI’s announcement that it would withdraw the voice known as ‘Sky’.
OpenAI CEO Sam Altman clarified that Sky’s voice was performed by a different professional actress and was never intended to imitate Johansson’s. He expressed regret for not communicating better and paused the use of Sky’s voice out of respect for Johansson.
Johansson revealed that Altman had approached her last September with an offer to voice a ChatGPT feature, which she turned down. She stated that the resemblance of Sky’s voice to her own shocked and angered her, noting that even her friends and the public found the similarity striking. The actress suggested that Altman might have intentionally chosen a voice resembling hers, referencing his tweet about ‘Her’, a film where Johansson voices an AI assistant.
Why does it matter?
The controversy highlights a growing issue in Hollywood concerning the use of AI to replicate actors’ voices and likenesses. Johansson’s concerns reflect broader industry anxieties as AI technology advances, making computer-generated voices and images increasingly indistinguishable from human ones. She has hired legal counsel to investigate the creation process of Sky’s voice.
OpenAI recently introduced its latest AI model, GPT-4o, featuring audio capabilities that enable users to converse with the chatbot in real-time, showcasing a leap forward in creating more lifelike AI interactions. Scarlett Johansson’s accusations underline the ongoing challenges and ethical considerations of using AI in entertainment.
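For context, here is a minimal sketch of how a developer might query the GPT-4o model through OpenAI’s official Python SDK. This shows only the standard text interface, not the real-time audio capabilities described above, and the prompt is purely illustrative:

```python
# Minimal sketch: querying the GPT-4o model via OpenAI's Python SDK (v1+).
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # the model discussed above
    messages=[
        {"role": "user", "content": "Summarise this week's AI news in one sentence."},
    ],
)

print(response.choices[0].message.content)
```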
Reddit has announced a new partnership with OpenAI, allowing the popular chatbot ChatGPT to access Reddit’s content. The partnership caused Reddit’s shares to surge by 12% in extended trading, and it is part of Reddit’s strategy to diversify its revenue streams beyond advertising, complementing its recent agreement with Alphabet, which enables Google’s AI models to use Reddit’s data.
OpenAI will utilise Reddit’s application programming interface (API) to access and distribute content, marking a significant step in integrating AI with social media data. Additionally, OpenAI will serve as a Reddit advertising partner, potentially boosting Reddit’s advertising revenue. A sketch of what API-based access to Reddit content looks like follows below.
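To illustrate the general shape of such access, here is a minimal sketch that pulls posts from Reddit’s public JSON API. The licensed data feed in the OpenAI deal is a separate commercial arrangement; the endpoint, subreddit, and parameters below are simply the public ones, chosen for illustration:

```python
# Minimal sketch: fetching top posts from a subreddit via Reddit's public JSON API.
# The commercial feed in the OpenAI deal is separate; this only illustrates
# API-based content access. Requires the `requests` package.
import requests

headers = {"User-Agent": "example-news-digest/0.1"}  # Reddit requires a descriptive User-Agent

resp = requests.get(
    "https://www.reddit.com/r/technology/top.json",
    headers=headers,
    params={"limit": 5, "t": "day"},  # five top posts from the past day
    timeout=10,
)
resp.raise_for_status()

# The response is a Listing whose children each wrap one post.
for post in resp.json()["data"]["children"]:
    print(post["data"]["title"])
```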
Why does it matter?
The development follows Reddit’s IPO in March and its lucrative deal with Alphabet worth around $60 million annually. Investors see these partnerships as crucial for generating revenue from data sales, supplementing the company’s advertising income. Reddit’s recent financial reports indicate strong revenue growth and improving profitability, reflecting the success of its AI data deals and advertising initiatives. Consequently, Reddit’s stock has risen by 10.5% to $62.31, showing a significant increase since its market debut.