The EU’s privacy watchdog task force has raised concerns over OpenAI’s ChatGPT chatbot, stating that the measures taken to ensure transparency are insufficient to comply with data accuracy principles. In a report released on Friday, the task force emphasised that while efforts to prevent misinterpretation of ChatGPT’s output are beneficial, they do not fully address concerns regarding data accuracy.
The task force was established by Europe’s national privacy watchdogs following concerns raised by authorities in Italy regarding ChatGPT’s usage. Despite ongoing investigations by national regulators, a comprehensive overview of the results has yet to be provided. The findings presented in the report represent a common understanding among national authorities.
Data accuracy is a fundamental principle of the data protection regulations in the EU. The report highlights the probabilistic nature of ChatGPT’s system, which can lead to biased or false outputs. Furthermore, the report warns that users may perceive ChatGPT’s outputs as factually accurate, regardless of their actual accuracy, posing potential risks, especially concerning information about individuals.
Leading European research labs will receive €2.5 billion under the European Chips Act to establish a pilot line for developing and testing future generations of advanced computer chips, according to Belgium’s IMEC. The initiative is part of the EU’s €43 billion Chips Act, launched in 2023 to bolster domestic chipmaking in response to global shortages during the COVID-19 pandemic.
The pilot line, hosted by Leuven-based research hub IMEC, will focus on sub-2 nanometre chips. This facility aims to provide European industry, academia, and start-ups access to cutting-edge chip manufacturing technology, which would otherwise be prohibitively expensive. Top chipmakers like TSMC, Intel, and Samsung are already advancing 2-nanometre chips in commercial plants costing up to €20 billion.
The European R&D line will be equipped with technology from European and global firms and is designed to support the development of even more advanced chips in the future. IMEC CEO Luc Van den Hove stated that this investment will double volumes and learning speed, enhancing the European chip ecosystem and driving economic growth across various industries, including automotive, telecommunications, and health.
Funding for this project includes €1.4 billion from several EU programs and the Flanders government, with an additional €1.1 billion from industry players, including equipment maker ASML. Other participating research labs include CEA-Leti from France, Fraunhofer from Germany, VTT from Finland, CSSNT from Romania, and the Tyndall Institute from Ireland. While aid under the EU plan has been slower than in other regions, with only STMicroelectronics approved for €2.9 billion in aid from France, Intel and TSMC still await approval for substantial funding to build plants in Germany.
The EU enforcers responsible for overseeing the Digital Services Act (DSA) are intensifying their scrutiny of disinformation campaigns on X, formerly known as Twitter and owned by Elon Musk, in the aftermath of the recent shooting of Slovakia’s prime minister, Robert Fico. X has been under formal investigation since December over its dissemination of disinformation and the efficacy of its content moderation tools, particularly its ‘Community Notes’ feature. Despite ongoing investigations, no penalties have been imposed thus far.
Elon Musk’s personal involvement in amplifying a post by right-wing influencer Ian Miles Cheong linking the shooting to Robert Fico’s purported rejection of the World Health Organization’s pandemic prevention plan has drawn further attention to X’s role in spreading potentially harmful narratives. In response to inquiries during a press briefing, EU officials confirmed they are closely monitoring content on the platform to assess the effectiveness of X’s measures in combating disinformation.
In addition to disinformation concerns, X’s introduction of its generative AI chatbot, Grok, in the EU has raised regulatory eyebrows. Grok, known for its politically incorrect responses, has had aspects of its rollout delayed until after the upcoming European Parliament elections due to perceived risks to civic discourse and election integrity. The EU is in close communication with X regarding the rollout of Grok, indicating the regulatory scrutiny surrounding emerging AI technologies and their potential impact on online discourse and democratic processes.
The EU regulators announced on Thursday that Meta Platforms’ social media platforms, Facebook and Instagram, will undergo investigation for potential violations of EU online content rules on child safety, potentially resulting in significant fines. The scrutiny follows the EU’s implementation of the Digital Services Act (DSA) last year, which places greater responsibility on tech companies to address illegal and harmful content on their platforms.
The European Commission has expressed concerns that Facebook and Instagram have not adequately addressed risks to children, prompting an in-depth investigation. Issues highlighted include the potential for the platforms’ systems and algorithms to promote behavioural addictions among children and facilitate access to inappropriate content, leading to what the Commission refers to as ‘rabbit-hole effects’. Additionally, concerns have been raised regarding Meta’s age assurance and verification methods.
Why does it matter?
Meta, formerly known as Facebook, is already under EU scrutiny over election disinformation, particularly concerning the upcoming European Parliament elections. Violations of the DSA can result in fines of up to 6% of a company’s annual global turnover, indicating the seriousness with which the EU regulators are approaching these issues. Meta’s response to the investigation and any subsequent actions will be closely monitored as the EU seeks to enforce stricter regulations on tech giants to protect online users, especially children, from harm.
The European Commission announced on Monday that it has classified Booking as a ‘gatekeeper’ under the Digital Markets Act (DMA), signifying its strong market influence. At the same time, the Commission has initiated a market investigation into the regulatory status of social media network X to delve deeper into its market dominance. Meanwhile, the Commission concluded that online advertising services such as X Ads and TikTok Ads do not qualify for gatekeeper designation.
In March, the European Commission identified Elon Musk’s X, TikTok’s parent company ByteDance, and Booking.com as potential candidates for gatekeeper status, subjecting them to stringent tech regulations. While Booking has been officially designated as a gatekeeper, a market investigation has been initiated to address X’s opposition to such a classification. ByteDance was previously labelled as a gatekeeper in July last year, but TikTok has contested this designation at the EU’s second-highest court.
Why does it matter?
The Digital Markets Act (DMA) represents a significant step towards regulating the market dominance of large tech companies. It imposes stricter obligations on these firms, compelling them to ensure fair competition and facilitate consumer choice by making it easier to switch between services. As the EU continues to navigate the complexities of digital market regulation, the classification of gatekeepers and subsequent investigations serve as crucial measures to promote fair competition and protect consumers’ interests in the digital sphere.
The European Commission has taken a significant step in its investigation of X under the Digital Services Act (DSA). On 8 May 2024, the Commission sent a request for information (RFI) to X, seeking detailed insights into its content moderation practices, particularly in light of a recent Transparency report highlighting a nearly 20% reduction in X’s content moderation team since October 2023. The reduction has diminished linguistic coverage within the EU from 11 languages to 7.
Furthermore, the European Commission is keen on understanding X’s risk assessments and mitigation strategies concerning generative AI tools, especially their potential impact on electoral processes, dissemination of illegal content, and protection of fundamental rights. The investigation follows formal proceedings initiated against X in December 2023, examining potential breaches of the DSA related to risk management, content moderation, dark patterns, advertising transparency, and data access for researchers.
The request for information is part of an ongoing investigation, building upon prior evidence gathering and analysis, including X’s Transparency report released in March 2024 and its responses to previous information requests. X has been given deadlines to provide the requested information, with 17 May 2024 set for content moderation resources and generative AI-related data and 27 May 2024 for remaining inquiries. Failure to comply could result in fines or penalties imposed by the Commission, as stipulated under Article 74(2) of the DSA.
The EU regulators are gearing up to launch an investigation into Meta Platforms amid concerns regarding the company’s efforts to combat disinformation, mainly from Russia and other nations. According to a report by the Financial Times, the EU regulators are alarmed by Meta’s purported inadequacy in curbing the spread of political advertisements that could undermine the integrity of electoral processes. Citing sources familiar with the matter, the report suggests that Meta’s content moderation measures may be insufficient to address this issue effectively.
While the investigation is expected to be initiated imminently, the European Commission is anticipated to refrain from explicitly targeting Russia in its official statement. Instead, the focus will be on the broader problem of foreign actors manipulating information. Meta Platforms and the European Commission have yet to respond to requests for comment, underscoring the gravity and sensitivity of the impending probe.
Why does it matter?
The timing of the investigation coincides with a significant year for elections across the globe, with numerous countries, including the UK, Austria, and Georgia, preparing to elect new leaders. Additionally, the European Parliament elections are slated for June, heightening the urgency for regulatory scrutiny over platforms like Meta. This development underscores the growing concern among regulators regarding the influence of disinformation on democratic processes, prompting concerted efforts to address these challenges effectively.
ByteDance, the company behind TikTok, has submitted a long-awaited risk assessment for its TikTok Lite service, recently launched in France and Spain, following regulatory threats of fines and potential bans from the European Commission. Regulators are concerned about the addictive nature of TikTok Lite, particularly its rewards system for users, and claim ByteDance didn’t complete a full risk assessment on time.
ByteDance now has until 24 April to defend itself against regulatory action, including possibly suspending the rewards program. Failure to comply with regulations could result in fines of up to 1% of its total annual income or periodic penalties of up to 5% of its average daily income under the Digital Services Act (DSA).
Meanwhile, in the US, legislation is swiftly advancing through Congress, requiring ByteDance, the Chinese company that owns TikTok, to divest its ownership within a year or face a US ban. The Senate has passed this measure as part of a foreign aid package, sending it to President Joe Biden for his expected approval. ByteDance will have nine months initially, with a possible three-month extension, to complete the sale, though legal challenges could cause delays.
The European Commission warned TikTok on Thursday that it may suspend a key feature of TikTok Lite in the European Union if the company fails to address concerns regarding its impact on users’ mental health. This action is being taken under the EU’s Digital Services Act (DSA), which mandates that large online platforms take action against harmful content or face fines of up to 6% of their global annual turnover.
Thierry Breton, the EU industry chief, emphasised the Commission’s readiness to implement interim measures, including suspending TikTok Lite, if TikTok does not provide compelling evidence of the feature’s safety. Breton highlighted concerns about potential addiction generated by TikTok Lite’s reward program.
The TikTok Lite app, recently launched in France and Spain, includes a reward program where users earn points by engaging in specific tasks on the platform. However, TikTok failed to submit a risk assessment report before the app’s launch, as the DSA requires. The Commission remains firm on enforcing regulations to protect users’ well-being amidst the growing influence of digital platforms.
Three major adult content platforms, Pornhub, Stripchat, and XVideos, are required to conduct risk assessments and implement measures to address systemic risks associated with their services under new EU online content rules announced by the European Commission. These companies were classified as very large online platforms in December under the Digital Services Act (DSA), which demands heightened efforts to remove illegal and harmful content from their platforms.
The EU executive specified that Pornhub and Stripchat must comply with these rigorous DSA obligations by 21 April, while XVideos has until 23 April to do the same. These obligations include submitting risk assessment reports to the Commission and implementing mitigation measures to tackle systemic risks linked to their services. Additionally, the platforms are expected to adhere to transparency requirements related to advertisements and provide researchers with data access.
Failure to comply with the DSA regulations could lead to significant penalties, with companies facing fines of up to 6% of their global annual turnover for breaches. The European Commission’s actions underscore its commitment to ensuring that large online platforms take proactive steps to address illegal and harmful content, particularly within the context of adult content services. These measures are part of broader efforts to enhance online safety and accountability across digital platforms operating within the EU.