OpenAI partners with The Atlantic and Vox Media

OpenAI has secured licensing agreements with The Atlantic and Vox Media, expanding its partnerships with publishers to enhance its AI products. These deals allow OpenAI to display news from these outlets in products like ChatGPT and use their content to train its AI models. Although financial terms were not disclosed, this move follows similar agreements with major publishers like News Corp., Dotdash Meredith, and The Financial Times.

Executives from The Atlantic and Vox Media emphasised that these partnerships will help readers discover their content more easily. Nicholas Thompson, CEO of The Atlantic, highlighted the importance of AI in future web navigation and expressed enthusiasm for making The Atlantic’s stories more accessible through OpenAI’s platforms.

Additionally, these agreements will provide the publishers access to OpenAI’s technology, aiding them in developing new AI-powered products. For instance, The Atlantic is working on Atlantic Labs, an initiative focused on creating AI-driven solutions using technology from OpenAI and other companies.

Human rights groups protest Meta’s alleged censorship of pro-Palestinian content

Meta’s annual shareholder meeting on Wednesday sparked online protests from human rights groups calling for an end to what they describe as systemic censorship of pro-Palestinian content on the company’s platforms and within its workforce. Nearly 200 Meta employees have recently urged CEO Mark Zuckerberg to address alleged internal censorship and biases on public platforms, advocating for greater transparency and an immediate ceasefire in Gaza.

Activists argue that after years of pressing Meta and other platforms for fairer content moderation, shareholders might exert more influence on the company than public pressure alone. Nadim Nashif, founder of the social media watchdog group 7amleh, highlighted that despite a decade of advocacy, the situation has deteriorated, necessitating new strategies like shareholder engagement to spur change.

Earlier this month, a public statement from Meta employees followed a 2023 internal petition that gathered over 450 signatures and whose author was investigated by HR for allegedly violating company rules. The latest letter condemns Meta’s actions as creating a ‘hostile and unsafe work environment’ for Palestinian, Arab, Muslim, and ‘anti-genocide’ colleagues, with many employees claiming censorship and dismissiveness from leadership.

During the shareholder meeting, Meta focused on its AI projects and managing disinformation, sidestepping the issue of Palestinian content moderation. Despite external audit findings and a letter from US Senator Elizabeth Warren criticising Meta’s handling of pro-Palestinian content, the company did not immediately address the circulating letters and petitions.

AI tools deployed to counter cyber threats at 2024 Olympics

In just over two months, Paris will host the eagerly awaited 2024 Summer Olympics, welcoming athletes from around the globe. These athletes had a condensed preparation period due to the COVID-related delay of the 2020 Summer Olympics, which took place in Tokyo in 2021. While athletes hone their skills for the upcoming games, organisers are diligently fortifying their defences against cybersecurity threats.

As cyber threats become increasingly sophisticated, there’s a growing focus on leveraging AI to combat them. Blackbird.AI has developed Constellation, an AI-powered narrative intelligence platform that identifies and analyses disinformation-driven narratives. By assessing the risk and adding context to these narratives, Constellation equips organisations with invaluable insights for informed decision-making.

The platform’s real-time monitoring capability allows for early detection and mitigation of narrative attacks, which can inflict significant financial and reputational damage. With the ability to analyse various forms of content across multiple platforms and languages, Constellation offers a comprehensive approach to combating misinformation and safeguarding against online threats.

Meanwhile, the International Olympic Committee (IOC) is also embracing AI, recognising its potential to enhance various aspects of sports. From talent identification to improving judging fairness and protecting athletes from online harassment, the IOC is leveraging AI to innovate and enhance the Olympic experience. With cybersecurity concerns looming, initiatives like Viginum, spearheaded by French President Emmanuel Macron, aim to counter online interference and ensure the security of major events like the Olympics.

Telegram’s user base monitored by EU for stricter DSA compliance

EU tech regulators are closely monitoring the messaging app Telegram as it approaches a significant usage milestone that could subject it to stricter regulations under the EU’s Digital Services Act (DSA). The DSA, which came into effect last year, imposes tougher obligations on major tech companies to control illegal and harmful content on their platforms.

Telegram reported having 41 million EU users in the six months leading up to February, just shy of the 45 million threshold that would categorise it as a very large online platform (VLOP). If it reaches this threshold, Telegram will need to comply with more stringent regulations. The European Commission has confirmed ongoing communication with Telegram to monitor its user growth and compliance.

Currently, 18 online platforms, including giants like Google, Amazon, Apple, Meta, and TikTok, are already classified as VLOPs under the DSA. These platforms are required to adhere to rigorous standards to ensure safer online environments, reflecting the EU’s commitment to mitigating online risks and safeguarding digital spaces.

EU regulators work with tech giants on AI rules

According to Ireland’s Data Protection Commission, leading global internet companies are working closely with EU regulators to ensure their AI products comply with the bloc’s stringent data protection laws. This body, which oversees compliance for major firms like Google, Meta, Microsoft, TikTok, and OpenAI, has yet to exercise its full regulatory power over AI but may enforce significant changes to business models to uphold data privacy.

AI introduces several potential privacy issues, such as whether companies can use public data to train AI models and what legal basis justifies using personal data. AI operators must also guarantee individuals’ rights, including the right to have their data erased, and must address the risk of AI models generating incorrect personal information. Significant engagement has been noted from tech giants seeking guidance on their AI innovations, particularly large language models.

Following consultations with the Irish regulator, Google has already agreed to delay and modify its Gemini AI chatbot. While Ireland leads regulation due to many tech firms’ EU headquarters being located there, other EU regulators can influence decisions through the European Data Protection Board. AI operators must comply with the new EU AI Act and the General Data Protection Regulation, which imposes fines of up to 4% of a company’s global turnover for non-compliance.

Why does it matter?

Ireland’s broad regulatory authority means that companies failing to perform due diligence on new products could be forced to alter their designs. As the EU’s AI regulatory landscape evolves, these tech firms must navigate both the AI Act and existing data protection laws to avoid substantial penalties.

USA urges tech giants to tackle antisemitic content

The Biden administration is pressing major technology companies to intensify their efforts to reduce antisemitic content on their platforms. Representatives from Alphabet, Meta, Microsoft, TikTok, and X met with US special envoy Deborah Lipstadt to discuss strategies for monitoring and combating antisemitism. Lipstadt emphasised the need for each company to assign a policy team member to address the issue, conduct specialised training to identify antisemitism, and publicly report trends in anti-Jewish content.

TikTok supported the meeting, highlighting their ongoing efforts and commitment to learning from experts. However, Alphabet, Microsoft, Meta, and X have yet to respond to requests for comment on the matter. The US administration is also calling for enhanced training to help platform staff recognise subtle antisemitic messages and distinguish between legitimate criticism of the Israeli government and hate speech directed at Jews.

The push from the administration comes amid a global increase in antisemitism following the 7 October attack by Hamas on southern Israel and the subsequent Israeli military response in Gaza. While the tech companies have not yet committed to voluntary measures, Lipstadt remains hopeful that they will take action soon to address this pressing issue.

Noyb files a complaint against OpenAI for ChatGPT inaccuracies

The European Centre for Digital Rights, or Noyb, has filed a complaint against OpenAI, claiming that ChatGPT fails to provide accurate information about individuals. According to Noyb, the General Data Protection Regulation (GDPR) mandates that information about individuals be accurate and that they have full access to this information, including its sources. However, OpenAI admits it cannot correct inaccurate information on ChatGPT, citing that factual accuracy in large language models remains an active research area.

Noyb highlights the potential dangers of ChatGPT’s inaccuracies, noting that while such errors may be tolerable for general uses like student homework, they are unacceptable when they involve personal information. The organisation cites a case where ChatGPT provided an incorrect date of birth for a public figure, and OpenAI refused to correct or delete the inaccurate data. Noyb argues this refusal breaches the GDPR, which grants individuals the right to rectify incorrect data.

Furthermore, Noyb points out that EU law requires all personal data to be accurate, and that ChatGPT’s tendency to produce false information, known as ‘hallucinations’, constitutes another violation of the GDPR. Data protection lawyer Maartje de Graaf emphasises that the inability to ensure factual accuracy can have serious consequences for individuals, making it clear that current chatbot technologies like ChatGPT are not compliant with EU law on the processing of personal data.

Noyb has requested that the Austrian data protection authority (DSB) investigate OpenAI’s data processing practices and enforce measures to ensure compliance with the GDPR. The organisation also seeks a fine against OpenAI to promote future adherence to data protection regulations.

Meta introduces tools to fight disinformation ahead of EU elections

The European Commission announced on Tuesday that Meta Platforms has introduced measures to combat disinformation ahead of the EU elections. Meta has launched 27 real-time visual dashboards, one for each EU member state, to enable third-party monitoring of civic discourse and election activities.

This development comes after the European Commission investigated Meta last month for allegedly breaching EU online content regulations. The investigation highlighted concerns over Meta’s Facebook and Instagram platforms failing to address disinformation and deceptive advertising adequately.

While the formal procedures against Meta continue, the European Commission stated that it would closely monitor the implementation of these new features to ensure their effectiveness in curbing disinformation.

OpenAI CEO leads safety committee for AI model training

OpenAI has established a Safety and Security Committee to oversee the training of its next AI model, the company announced on Tuesday. CEO Sam Altman will lead the committee alongside directors Bret Taylor, Adam D’Angelo, and Nicole Seligman. The committee will make safety and security recommendations to OpenAI’s board.

The committee’s initial task is to review and enhance OpenAI’s existing safety practices over the next 90 days, after which it will present its findings to the board. Following the board’s review, OpenAI plans to share the adopted recommendations publicly. This move follows the disbanding of OpenAI’s Superalignment team earlier this month, which led to the departure of key figures like former Chief Scientist Ilya Sutskever and Jan Leike.

Other members of the new committee include technical and policy experts Aleksander Madry and Lilian Weng, as well as head of alignment science John Schulman. Newly appointed Chief Scientist Jakub Pachocki and head of security Matt Knight will also be part of the committee, contributing to the safety and security oversight of OpenAI’s projects and operations.

Report reveals surge in fake accounts on X targeting US presidential election

Fake accounts discussing the US presidential election are increasing on the social media platform X, according to a report by Cyabra, an Israeli tech company specialising in AI-driven analysis. The report found that 15% of accounts praising former President Donald Trump and criticising President Joe Biden are fake, while 7% of accounts praising Biden and criticising Trump are fake.

Cyabra’s study analysed posts on X over two months, starting 1 March, focusing on popular hashtags and sentiment. The analysis showed a tenfold increase in fake accounts during March and April. Specifically, 12,391 out of 94,363 pro-Trump accounts and 803 out of 10,065 pro-Biden accounts were found to be bogus.

The report also noted that fake pro-Trump accounts appear to be part of a coordinated campaign pushing messages like ‘Vote for Trump’ and ‘Biden is the worst president the US has ever had,’ while fake pro-Biden accounts did not show coordinated activity.

Although X did not respond to requests for comments, Elon Musk recently announced efforts to purge bots and trolls from the platform, including testing a ‘Not a Bot’ program in New Zealand and the Philippines.

Why does it matter?

Since Russia’s interference in the 2016 election, social media platforms have faced increased scrutiny. With the upcoming election on 5 November, Cyabra’s findings on fake accounts are a cause for alarm among election officials and misinformation experts. The situation is even more concerning given X’s history of downplaying the presence of fake accounts on its platform. According to Reuters, in May 2022, Twitter claimed that fewer than 5% of its daily active users were ‘false or spam’ based on an internal review. However, Cyabra estimated that 13.7% of Twitter profiles were inauthentic.