Noyb files a complaint against OpenAI for ChatGPT inaccuracies

The European Centre for Digital Rights, or Noyb, has filed a complaint against OpenAI, claiming that ChatGPT fails to provide accurate information about individuals. According to Noyb, the General Data Protection Regulation (GDPR) mandates that information about individuals be accurate and that they have full access to this information, including its sources. However, OpenAI admits it cannot correct inaccurate information on ChatGPT, citing that factual accuracy in large language models remains an active research area.

Noyb highlights the potential dangers of ChatGPT’s inaccuracies, noting that while such errors may be tolerable for general uses like student homework, they are unacceptable when they involve personal information. The organisation cites a case where ChatGPT provided an incorrect date of birth for a public figure, and OpenAI refused to correct or delete the inaccurate data. Noyb argues this refusal breaches the GDPR, which grants individuals the right to rectify incorrect data.

Furthermore, Noyb points out that EU law requires all personal data to be accurate, and that ChatGPT’s tendency to produce false information, known as ‘hallucinations’, constitutes a further violation of the GDPR. Data protection lawyer Maartje de Graaf emphasises that the inability to ensure factual accuracy can have serious consequences for individuals, making clear that current chatbot technologies like ChatGPT do not comply with EU rules on data processing.

Noyb has requested that the Austrian data protection authority (DSB) investigate OpenAI’s data processing practices and enforce measures to ensure compliance with the GDPR. The organisation also seeks a fine against OpenAI to promote future adherence to data protection regulations.

OpenAI CEO leads safety committee for AI model training

OpenAI has established a Safety and Security Committee to oversee the training of its next AI model, the company announced on Tuesday. CEO Sam Altman will lead the committee alongside directors Bret Taylor, Adam D’Angelo, and Nicole Seligman. The committee will make safety and security recommendations to OpenAI’s board.

The committee’s initial task is to review and enhance OpenAI’s existing safety practices over the next 90 days, after which it will present its findings to the board. Following the board’s review, OpenAI plans to share the adopted recommendations publicly. This move follows the disbanding of OpenAI’s Superalignment team earlier this month, which led to the departure of key figures like former Chief Scientist Ilya Sutskever and Jan Leike.

Other members of the new committee include technical and policy experts Aleksander Madry, Lilian Weng, and head of alignment sciences John Schulman. Newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight will also be part of the committee, contributing to the safety and security oversight of OpenAI’s projects and operations.

OpenAI’s use of Scarlett Johansson’s voice faces Hollywood backlash

OpenAI’s use of Scarlett Johansson’s voice likeness in its AI model, ChatGPT, has ignited controversy in Hollywood, with Johansson accusing the company of copying her performance from the movie ‘Her’ without consent. The dispute has intensified concerns among entertainment executives about the implications of AI technology for the creative industry, particularly regarding copyright infringement and the right to publicity.

Despite OpenAI’s claims that the voice in question was not intended to resemble Johansson’s, the incident has strained relations between content creators and tech companies. Some industry insiders view OpenAI’s actions as disrespectful and indicative of hubris, potentially hindering future collaborations between Hollywood and the tech giant.

The conflict with Johansson highlights broader concerns about using copyrighted material in OpenAI’s models and the need to protect performers’ rights. While some technologists see AI as a valuable tool for enhancing filmmaking processes, others worry about its potential misuse and infringement on intellectual property.

Johansson’s case could set a precedent for performers seeking to protect their voice and likeness rights in the age of AI. Legal experts and industry figures advocate for federal legislation to safeguard performers’ rights and address the growing impact of AI-generated content, signalling a broader dialogue about the need for regulatory measures in this evolving landscape.

OpenAI strikes major content deal with News Corp

OpenAI, led by Sam Altman, has entered a significant deal with media giant News Corp, securing access to content from its major publications. The agreement follows a recent content licensing deal with the Financial Times aimed at enhancing the capabilities of OpenAI’s ChatGPT. Such partnerships are essential for training AI models, providing a financial boost to news publishers traditionally excluded from the profits generated by internet companies distributing their content.

The financial specifics of the deal remain undisclosed, though the Wall Street Journal, a News Corp entity, reported that it could be worth over $250 million across five years. Content from News Corp’s publications, including the Wall Street Journal, MarketWatch, and The Times, will not, however, be immediately available on ChatGPT upon publication. The move is part of OpenAI’s ongoing efforts to secure diverse data sources, following a similar agreement with Reddit.

The announcement has positively impacted News Corp’s market performance, with shares rising by approximately 4%. OpenAI’s continued collaboration with prominent media platforms underscores its commitment to developing sophisticated AI models capable of generating human-like responses and comprehensive text summaries.

Scarlett Johansson slams OpenAI for voice likeness

Scarlett Johansson has accused OpenAI of creating a voice for its ChatGPT system that sounds ‘eerily similar’ to her own, after she declined an offer to voice the chatbot herself. Johansson’s statement, released Monday, followed OpenAI’s announcement that it would withdraw the voice known as ‘Sky’.

OpenAI CEO Sam Altman clarified that Sky’s voice was performed by a different professional actress and was never intended to imitate Johansson’s. He expressed regret for not communicating better and paused the use of Sky’s voice out of respect for Johansson.

Johansson revealed that Altman had approached her last September with an offer to voice a ChatGPT feature, which she turned down. She stated that the resemblance of Sky’s voice to her own shocked and angered her, noting that even her friends and the public found the similarity striking. The actress suggested that Altman might have intentionally chosen a voice resembling hers, referencing his tweet about ‘Her’, a film where Johansson voices an AI assistant.

Why does it matter?

The controversy highlights a growing issue in Hollywood concerning the use of AI to replicate actors’ voices and likenesses. Johansson’s concerns reflect broader industry anxieties as AI technology advances, making computer-generated voices and images increasingly indistinguishable from human ones. She has hired legal counsel to investigate the creation process of Sky’s voice.

OpenAI recently introduced its latest AI model, GPT-4o, featuring audio capabilities that enable users to converse with the chatbot in real-time, showcasing a leap forward in creating more lifelike AI interactions. Scarlett Johansson’s accusations underline the ongoing challenges and ethical considerations of using AI in entertainment.

Reddit partners with OpenAI for ChatGPT integration

Reddit has announced a new partnership with OpenAI, allowing the popular chatbot ChatGPT to access Reddit’s content. The partnership caused Reddit’s shares to surge by 12% in extended trading, and it is part of Reddit’s strategy to diversify its revenue streams beyond advertising, complementing its recent agreement with Alphabet, which enables Google’s AI models to use Reddit’s data.

OpenAI will utilise Reddit’s application programming interface (API) to access and distribute content, marking a big step in integrating AI with social media data. Additionally, OpenAI will serve as a Reddit advertising partner, potentially boosting Reddit’s advertising revenue.

Why does it matter?

The development follows Reddit’s IPO in March and its lucrative deal with Alphabet worth around $60 million annually. Investors see these partnerships as crucial for generating revenue from data sales, supplementing the company’s advertising income. Reddit’s recent financial reports indicate strong revenue growth and improving profitability, reflecting the success of its AI data deals and advertising initiatives. Consequently, Reddit’s stock has risen by 10.5% to $62.31, showing a significant increase since its market debut.

OpenAI considers allowing AI-generated pornography

OpenAI is sparking debate by considering the possibility of allowing users to generate explicit content, including pornography, using its AI-powered tools like ChatGPT and DALL-E. While maintaining a ban on deepfakes, OpenAI’s proposal has raised concerns among campaigners who question its commitment to producing ‘safe and beneficial’ AI. The company sees potential for ‘not-safe-for-work’ (NSFW) content creation but stresses the importance of responsible usage and adherence to legal and ethical standards.

The proposal, outlined in a document discussing OpenAI’s AI development practices, aims to initiate discussions about the boundaries of content generation within its products. Joanne Jang, an OpenAI employee, stressed the need for maximum user control while ruling out deepfake creation. Despite acknowledging the importance of discussions around sexuality and nudity, OpenAI maintains strong safeguards against deepfakes and prioritises protecting users, particularly children.

Critics, however, have accused OpenAI of straying from its mission statement of developing safe and beneficial AI by delving into potentially harmful commercial endeavours like AI erotica. Concerns about the spread of AI-generated pornography have been underscored by recent incidents, prompting calls for tighter regulation and ethical considerations in the tech sector. While OpenAI’s policies prohibit sexually explicit content, questions remain about the effectiveness of safeguards and the company’s approach to handling sensitive content creation.

Why does it matter?

As discussions unfold, stakeholders, including lawmakers, experts, and campaigners, closely scrutinise OpenAI’s proposal and its potential implications for online safety and ethical AI development. With growing concerns about the misuse of AI technology, the debate surrounding OpenAI’s stance on explicit content generation highlights broader challenges in balancing innovation, responsibility, and societal well-being in the digital age.

OpenAI set to challenge Google with new AI-powered search product

OpenAI is gearing up to unveil its AI-powered search product, intensifying its rivalry with Google in the realm of search technology. The announcement, slated for Monday, comes amidst reports of OpenAI’s efforts to challenge Google’s dominance and compete with emerging players like Perplexity in the AI search space. While OpenAI has remained tight-lipped about the development, industry insiders anticipate a significant shift in the AI search landscape.

The timing of the announcement, just ahead of Google’s annual I/O conference, suggests OpenAI’s strategic positioning to capture attention in the tech world. Building on its flagship ChatGPT product, the new search offering promises to revolutionise information retrieval by leveraging AI to extract direct information from the web, complete with citations.

Why does it matter?

Despite ChatGPT’s initial success, OpenAI has faced challenges sustaining user growth and relevance as the chatbot has evolved. The retirement of ChatGPT plugins in April reflects the company’s commitment to refining its offerings and adapting to user needs.

As OpenAI aims to expand its reach and enhance its product capabilities, the launch of its AI search product would mark a significant step in its quest to redefine information access and reshape the future of AI-driven technologies.

Dotdash Meredith partners with OpenAI for AI integration

Dotdash Meredith, a prominent publisher overseeing titles like People and Better Homes & Gardens, has struck a deal with OpenAI, marking a big step in integrating AI technology into the media landscape. The agreement involves utilising AI models for Dotdash Meredith’s ad-targeting product, D/Cipher, which will enhance its precision and effectiveness. Additionally, licensing content to OpenAI’s chatbot ChatGPT will expand the reach of Dotdash Meredith’s content to a wider audience, thereby increasing its visibility and influence.

Through this partnership, OpenAI will integrate content from Dotdash Meredith’s publications into ChatGPT, offering users access to a wealth of informative articles. Moreover, both entities will collaborate on developing new AI features tailored for magazine readers, indicating a forward-looking approach to enhancing reader engagement.

One key collaboration aspect involves leveraging OpenAI’s models to enhance D/Cipher, Dotdash Meredith’s ad-targeting platform. With the impending shift towards a cookie-less online environment, the publisher aims to bolster its targeting technology by employing AI, ensuring advertisers can reach their desired audience effectively.

Dotdash Meredith’s CEO, Neil Vogel, emphasised the importance of fair compensation for publishers in the AI landscape, highlighting the need for proper attribution and compensation for content usage. The stance reflects a broader industry conversation surrounding the relationship between AI platforms and content creators.

Why does it matter?

While Dotdash Meredith joins a growing list of news organisations partnering with OpenAI, not all have embraced such agreements. Some, like newspapers owned by Alden Global Capital, have pursued legal action against OpenAI and Microsoft, citing copyright infringement concerns. These concerns revolve around using their content in AI models without proper attribution or compensation. These contrasting responses underscore the complex dynamics as AI increasingly intersects with traditional media practices.

OpenAI and Stack Overflow team up for better AI models

OpenAI and developer platform Stack Overflow have joined forces in a new partnership to enhance AI capabilities and provide richer technical information. Under this collaboration, OpenAI gains access to Stack Overflow’s API and will incorporate feedback from the developer community to refine AI models. In return, Stack Overflow will receive attribution in ChatGPT, offering users access to Stack Overflow’s extensive knowledge base when seeking coding or technical advice. Both companies anticipate that this collaboration will deepen user engagement with content.

Stack Overflow plans to leverage OpenAI’s large language models to enhance Overflow AI, introduced last year as its generative AI application. Overflow AI aims to incorporate AI-powered natural language search functionality into Stack Overflow, providing users with more intuitive access to coding solutions. Stack Overflow emphasises that it will integrate feedback from its community and internal testing of OpenAI models to develop additional AI products for its user base.

The initial phase of integrations resulting from this partnership is expected to roll out in the first half of the year, although Stack Overflow has yet to specify the exact features to be released first. The collaboration follows Stack Overflow’s similar arrangement with Google in February, where Gemini for Google Cloud users could access coding suggestions directly from Stack Overflow.

Why does it matter?

For years, developers have relied on Stack Overflow for coding solutions. Still, the company has faced challenges: after a significant hiring push in 2022, it laid off 28% of its workforce in October 2023. While Stack Overflow did not provide a specific reason for the layoffs, they coincided with the growing prominence of AI-assisted coding. Additionally, Stack Overflow briefly prohibited users from sharing ChatGPT responses on its platform in late 2022.