A US appeals court has expedited the legal challenges to a new law requiring China-based ByteDance to divest TikTok’s US assets by 19 January or face a ban. The US Court of Appeals for the District of Columbia Circuit has scheduled oral arguments for September following a joint request from TikTok, ByteDance, a group of TikTok content creators, and the Justice Department for a swift resolution.
In May, TikTok creators and ByteDance filed lawsuits to block the law, arguing that TikTok, used by 170 million Americans, has become a significant part of American life. The appeals court has set deadlines for legal briefs from the creators, TikTok, and ByteDance by 20 June and from the Justice Department by 26 July, with reply briefs due by 15 August. TikTok is seeking a quick resolution of the legal challenge so that it will not need to seek emergency preliminary injunctive relief.
The law, signed by President Joe Biden on 24 April, requires ByteDance to sell TikTok or face a ban, citing national security concerns over potential Chinese access to Americans’ data. It also prohibits app stores such as Apple’s and Google’s from offering TikTok and bars internet hosting services from supporting the app unless ByteDance divests. The measure, driven by fears of espionage, passed overwhelmingly in Congress shortly after being introduced.
The European Centre for Digital Rights, or Noyb, has filed a complaint against OpenAI, claiming that ChatGPT fails to provide accurate information about individuals. According to Noyb, the General Data Protection Regulation (GDPR) requires that information about individuals be accurate and that they have full access to it, including its sources. OpenAI, however, admits it cannot correct inaccurate information in ChatGPT, noting that factual accuracy in large language models remains an active area of research.
Noyb highlights the potential dangers of ChatGPT’s inaccuracies, noting that while such errors may be tolerable for general uses like student homework, they are unacceptable when they involve personal information. The organisation cites a case where ChatGPT provided an incorrect date of birth for a public figure, and OpenAI refused to correct or delete the inaccurate data. Noyb argues this refusal breaches the GDPR, which grants individuals the right to rectify incorrect data.
Furthermore, Noyb points out that EU law requires all personal data to be accurate, and that ChatGPT’s tendency to produce false information, known as ‘hallucinations’, constitutes another violation of the GDPR. Data protection lawyer Maartje de Graaf emphasises that the inability to ensure factual accuracy can have serious consequences for individuals, making it clear that current chatbot technologies like ChatGPT do not comply with EU law on the processing of personal data.
Noyb has requested that the Austrian data protection authority (DSB) investigate OpenAI’s data processing practices and enforce measures to ensure compliance with the GDPR. The organisation also seeks a fine against OpenAI to promote future adherence to data protection regulations.
OpenAI has established a Safety and Security Committee as it begins training its next AI model, the company announced on Tuesday. CEO Sam Altman will lead the committee alongside directors Bret Taylor, Adam D’Angelo, and Nicole Seligman, and it will make safety and security recommendations to OpenAI’s board.
The committee’s initial task is to review and enhance OpenAI’s existing safety practices over the next 90 days, after which it will present its findings to the board. Following the board’s review, OpenAI plans to share the adopted recommendations publicly. The move follows the disbanding of OpenAI’s Superalignment team earlier this month, after the departures of key figures including former Chief Scientist Ilya Sutskever and team co-lead Jan Leike.
Other members of the new committee include technical and policy experts Aleksander Madry, Lilian Weng, and head of alignment sciences John Schulman. Newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight will also be part of the committee, contributing to the safety and security oversight of OpenAI’s projects and operations.
Meta Platforms has agreed to limit the use of certain data from advertisers on its Facebook Marketplace as part of an updated proposal accepted by the UK’s Competition and Markets Authority (CMA). The commitments aim to prevent Meta from exploiting its advertising customers’ data. The initial commitments, accepted by the CMA in November, included allowing competitors to opt out of having their data used to enhance Facebook Marketplace.
The British competition regulator has provisionally accepted Meta’s updated changes and is now seeking feedback from interested parties, with the consultation period closing on 14 June. Details of any further amendments to Meta’s initial proposals in the UK have yet to be disclosed. The decision reflects a broader effort by regulators to ensure fair competition and prevent dominant platforms from misusing data.
In November, Amazon committed to avoiding the use of marketplace data from rival sellers, thereby promoting a level playing field for third-party sellers. Both cases highlight the increasing scrutiny of major tech companies’ data practices and market power, aiming to foster a more competitive and transparent digital marketplace.
The EU’s privacy watchdog task force has raised concerns over OpenAI’s ChatGPT chatbot, stating that the measures taken to ensure transparency are insufficient to comply with data accuracy principles. In a report released on Friday, the task force emphasised that while efforts to prevent misinterpretation of ChatGPT’s output are beneficial, they are not sufficient to fully address concerns regarding data accuracy.
The task force was established by Europe’s national privacy watchdogs following concerns raised by authorities in Italy regarding ChatGPT’s usage. Despite ongoing investigations by national regulators, a comprehensive overview of the results has yet to be provided. The findings presented in the report represent a common understanding among national authorities.
Data accuracy is a fundamental principle of the EU’s data protection regulations. The report highlights the probabilistic nature of ChatGPT’s system, which can lead to biased or false outputs. Furthermore, the report warns that users may perceive ChatGPT’s outputs as factually accurate regardless of their actual accuracy, posing potential risks, especially concerning information about individuals.
OpenAI’s use of Scarlett Johansson’s voice likeness in its AI model, ChatGPT, has ignited controversy in Hollywood, with Johansson accusing the company of copying her performance from the movie ‘Her’ without consent. The dispute has intensified concerns among entertainment executives about the implications of AI technology for the creative industry, particularly regarding copyright infringement and the right of publicity.
Despite OpenAI’s claims that the voice in question was not intended to resemble Johansson’s, the incident has strained relations between content creators and tech companies. Some industry insiders view OpenAI’s actions as disrespectful and indicative of hubris, potentially hindering future collaborations between Hollywood and the tech giant.
The conflict with Johansson highlights broader concerns about using copyrighted material in OpenAI’s models and the need to protect performers’ rights. While some technologists see AI as a valuable tool for enhancing filmmaking processes, others worry about its potential misuse and infringement on intellectual property.
Johansson’s case could set a precedent for performers seeking to protect their voice and likeness rights in the age of AI. Legal experts and industry figures advocate for federal legislation to safeguard performers’ rights and address the growing impact of AI-generated content, signalling a broader dialogue about the need for regulatory measures in this evolving landscape.
Microsoft’s recent deal with UAE-backed AI firm G42 could involve the transfer of advanced AI technology, raising concerns about national security implications. Microsoft President Brad Smith highlighted that the agreement might eventually include exporting sophisticated chips and AI model weights, although this phase has no set timeline. The deal, which necessitates US Department of Commerce approval, includes safeguards to prevent the misuse of technology by Chinese entities. However, details of these measures remain undisclosed, prompting scepticism among US lawmakers about their adequacy.
Concerns about the agreement have been voiced by senior US officials, who warn of the potential national security risks posed by advanced AI systems, such as making it easier to engineer dangerous weapons. Representative Michael McCaul expressed frustration over the lack of a comprehensive briefing for Congress, citing fears of Chinese espionage through UAE channels. Current regulations require notifications and export licences for AI chips, but gaps exist regarding the export of AI models, leading to legislative efforts to grant US officials more explicit control over such exports.
Why does it matter?
The deal, valued at $1.5 billion, was framed as a strategic move to extend US technology influence amid global competition, particularly with China. Although the exact technologies and security measures involved are not fully disclosed, the agreement aims to enhance AI capabilities in regions like Kenya and potentially Turkey and Egypt. Microsoft asserts that G42 will adhere to US regulatory requirements and has implemented a ‘know your customer’ rule to prevent Chinese firms from using the technology for training AI models.
Microsoft emphasises its commitment to ensuring secure global technology transfers, with provisions for imposing financial penalties on G42 through arbitration courts in London if compliance issues arise. While the US Commerce Department will oversee the deal under existing and potential future export controls, how Commerce Secretary Gina Raimondo will handle the approval process remains uncertain. Smith anticipates that the regulatory framework developed for this deal will likely be applied broadly across the industry.
A Wisconsin man, Steven Anderegg, has been charged federally with using AI to create sexually explicit and abusive images of children. The 42-year-old allegedly used the popular AI tool Stable Diffusion to generate around 13,000 hyper-realistic images depicting prepubescent children in disturbing and explicit scenarios. Authorities discovered the images on his laptop following a tip-off from the National Center for Missing & Exploited Children (NCMEC), which had flagged his Instagram activity.
Anderegg’s charges include creating, distributing, and possessing child sexual abuse material (CSAM), as well as sending explicit content to a minor. If convicted, he faces up to 70 years in prison. The case marks one of the first instances in which the FBI has brought charges over AI-generated child abuse material. The rise in such cases has prompted significant concern among child safety advocates and AI researchers, who warn of the increasing potential for AI to facilitate the creation of harmful content.
Reports of online child abuse have surged, partly due to the proliferation of AI-generated material. In 2023, the NCMEC noted a 12% increase in flagged incidents, straining its resources. The Department of Justice has reaffirmed its commitment to prosecuting those who exploit AI to create CSAM, emphasising that AI-generated explicit content is equally punishable under the law.
Stable Diffusion, an open-source AI model, has been identified as a tool used to generate such material. Stability AI, the company behind its development, has stated that the model used by Anderegg was an earlier version created by another startup, RunwayML. Stability AI asserts that it has since implemented stronger safeguards to prevent misuse and prohibits creating illegal content with its tools.
Scarlett Johansson has accused OpenAI of creating a voice for its ChatGPT system that sounds ‘eerily similar’ to hers, despite her having declined an offer to voice the chatbot herself. Johansson’s statement, released on Monday, followed OpenAI’s announcement that it would withdraw the voice, known as ‘Sky’.
OpenAI CEO Sam Altman clarified that Sky’s voice was performed by a different professional actress and was not meant to imitate Johansson’s. He expressed regret for not communicating better and said OpenAI had paused the use of Sky’s voice out of respect for Johansson.
Johansson revealed that Altman had approached her last September with an offer to voice a ChatGPT feature, which she turned down. She stated that the resemblance of Sky’s voice to her own shocked and angered her, noting that even her friends and the public found the similarity striking. The actress suggested that Altman might have intentionally chosen a voice resembling hers, referencing his tweet about ‘Her’, a film where Johansson voices an AI assistant.
Why does it matter?
The controversy highlights a growing issue in Hollywood concerning the use of AI to replicate actors’ voices and likenesses. Johansson’s concerns reflect broader industry anxieties as AI technology advances, making computer-generated voices and images increasingly indistinguishable from human ones. She has hired legal counsel to investigate the creation process of Sky’s voice.
OpenAI recently introduced its latest AI model, GPT-4o, featuring audio capabilities that enable users to converse with the chatbot in real time, showcasing a leap forward in creating more lifelike AI interactions. Scarlett Johansson’s accusations underline the ongoing challenges and ethical considerations of using AI in entertainment.
The UK’s AI Safety Institute is set to open an office in the US this summer, aiming to enhance international collaboration on AI regulation. The new office in San Francisco will recruit technical staff to support the institute’s work in London and strengthen connections with its US counterparts. The move underscores the need for coordinated global efforts to manage AI’s rapid advancement and potential risks. Experts have warned that AI could pose existential threats comparable to those of nuclear weapons or climate change, making international regulation crucial.
Why does it matter?
This announcement comes just before the second global AI safety summit in Seoul, co-hosted by the British and South Korean governments. The summit will bring together leaders to discuss AI safety, innovation, and inclusion.
The initiative follows significant concerns raised after OpenAI released ChatGPT in November 2022, prompting calls for a development pause due to unpredictable threats. The first AI safety summit at Britain’s Bletchley Park saw world leaders and tech executives, including US Vice President Kamala Harris and OpenAI’s Sam Altman, discuss regulatory approaches.
The summit fostered cooperation despite global tensions, with China signing the ‘Bletchley Declaration’ alongside the US and others. Britain’s technology minister, Michelle Donelan, emphasised the importance of international standards on AI safety, which will be a key topic at the upcoming Seoul summit.