CODE coalition advocates for open digital ecosystems to drive EU growth and innovation

The Coalition for Open Digital Ecosystems (CODE), a collaborative industry initiative launched in late 2023 by tech giants including Meta, Google, and Qualcomm, held its first public event in Brussels, advocating for open digital ecosystems to stimulate growth, foster innovation, and empower consumers, particularly in the challenging global context facing the EU's economy. The event hosted a high-level panel discussion with representatives from Meta, BEUC, the European Parliament, and Copenhagen Business School.

Qualcomm CEO Cristiano Amon gave an interview to Euractiv in which he emphasised CODE's three key elements of openness – seamless connectivity and interoperability, consumer choice, and an environment of open access. These elements aim to enhance user experience, maintain data access, and provide fair access to digital tools for developers, particularly smaller companies and startups. Amon highlighted the importance of interoperability and fair access for developers, especially as platforms evolve and become relevant to more devices, including cars, and stressed that smaller companies with new ideas must be able to participate and reach customers in a competitive environment.

He said that Qualcomm is focused on developing computing engines, such as the Neural Processing Unit (NPU), which is designed to run all the time and handle multiple models. This development aims to add computing capability to various devices while addressing the challenge of integrating this new engine without compromising battery life. Amon also expressed a positive view of the EU's Digital Markets Act (DMA), applauding European regulators for their focus on the importance of open and interoperable platforms.

Why does it matter?

The panel discussion envisioned a positive scenario for the European digital agenda, highlighting the importance of openness, interoperability, and collaboration for consumers, businesses, and innovation. CODE's emergence as a new stakeholder in the Brussels digital, tech, and competition policy space underscores the growing recognition of open digital ecosystems as a driver of growth, innovation, and consumer empowerment within the EU's digital landscape.

TikTok fails disinformation test ahead of EU elections, study reveals

A recent study by Global Witness has revealed alarming deficiencies in TikTok’s ability to manage disinformation related to the upcoming EU elections. The investigation tested the platform’s content moderation by submitting 16 disinformation ads. TikTok approved all of these ads, which included false information about polling station closures, incorrect voting methods, and incitements to violence.

For the study, Global Witness developed 16 disinformation ads relating to the upcoming European parliamentary elections in Ireland and submitted them to X, YouTube, and TikTok, allowing at least 48 hours for the review process. Global Witness also said it is submitting a complaint to the Irish regulators over TikTok's failure to mitigate electoral threats. Examples of fake ads include:

  • Incorrect voting method: ‘Don’t vote in person this EU election! New reports find that ballots are being altered by election workers. Vote instead by texting 05505’
  • Incorrect voting requirements: ‘New rules change ID requirement for 2024 elections. If you don’t have a valid driving licence, you can’t vote this election day’
  • Polling station closure: ‘Following an unprecedented spike in contagious diseases, the Electoral Commission is to close all polling stations to protect staff and public health. Please vote online instead.’

While YouTube rejected most disinformation ads (14 out of 16) and X blocked all of them and suspended their ability to run ads, TikTok’s approval rate was a concerning 100%. This highlights a significant vulnerability in TikTok’s moderation process, especially given its large and youthful user base. 

Why does it matter?

TikTok’s failure to effectively moderate election-related content violates both its own policies, which ‘do not allow misinformation or content about civic and electoral processes that may result in voter interference, disrupt the peaceful transfer of power, or lead to off-platform violence’, and the EU’s Digital Services Act, which requires very large online platforms (VLOPs) to mitigate electoral risks by ensuring that they ‘are able to react rapidly to manipulation of their service aimed at undermining the electoral process and attempts to use disinformation and information manipulation to suppress voters.’

A similar study on TikTok led by the EU Disinfo Lab further emphasises the issue, highlighting concerns about algorithmic amplification, user demographics, and policy enforcement. TikTok’s recommendation algorithm often promotes sensational and misleading content, accelerating the spread of disinformation, and with a predominantly young user base, the platform can influence a critical segment of the electorate. And despite having policies against political ads and disinformation, its enforcement is inconsistent and often ineffective.

In TikTok’s response to the study, the platform acknowledged a violation of its policies, citing ‘human error’ identified in an internal investigation, and said it has implemented new processes to prevent this from happening in the future.

EU alleges Russian disinformation ahead of elections

European governments are raising alarms over alleged Russian disinformation campaigns as the EU prepares for its parliamentary elections from June 6-9. They claim Moscow, alongside pro-Kremlin actors, is engaged in a broad interference effort to discredit European governments and destabilise the EU. Key tactics allegedly include the spread of manipulated information, deepfake videos, and fake news websites designed to resemble legitimate sources. For instance, the Czech Republic has identified voiceofeurope.com as a leading platform for pro-Russian influence operations, allegedly funded by Ukrainian politician Viktor Medvedchuk.

Russia, however, vehemently denies these accusations, labelling them as part of a Western-led information war aimed at tarnishing its reputation. Russian officials argue that the West is suppressing alternative viewpoints and has banned Russian state media such as RIA Novosti and Izvestia. They contend that the West’s intolerance for dissenting narratives fuels these allegations, positioning Russia as a fabricated enemy.

European officials also point to the sophisticated use of ‘deepfakes’ and ‘doppelganger’ sites in these disinformation efforts. Deepfakes, created with AI to produce realistic fake media, have been used to spread false narratives, such as the fake recording of Slovak politician Michal Simecka discussing vote rigging. Doppelganger sites, mimicking legitimate news sources, have disseminated false information, including fabricated stories about France’s policies.

In response, the EU leaders emphasise the need for a strong, democratic Europe, while new regulations under the Digital Services Act demand swift action against illegal content and deceptive practices on social media platforms. Companies like Meta, Google, and TikTok have announced measures to combat disinformation before the elections.

EU approves state aid for the construction of microchip plant in Sicily

The European Commission has given the green light to Italian state aid for semiconductor manufacturer STMicroelectronics to construct a €5 billion microchip plant in Sicily. This approval comes as Europe seeks to reduce dependence on Asian imports for critical manufacturing components. The plant in Catania will specialise in producing microchips that enhance energy efficiency in electric vehicles and will receive a direct grant of about €2 billion from Rome.

The move comes amidst heightened scrutiny of Europe’s reliance on Asian chip supplies due to pandemic-related disruptions and trade tensions with China. To address these concerns, the EU has introduced its Chips Act, aiming to attract chip manufacturers and secure vital components for hi-tech industries. European antitrust chief Margrethe Vestager emphasised the strategic importance of diversifying chip supply chains and reducing dependencies on single suppliers.

As major shareholders in STMicro, the governments of France and Italy are backing the project. The new plant, STMicro’s second facility in Sicily, will produce silicon carbide chips known for their energy efficiency. This initiative signals a shift towards self-sufficiency in semiconductor production and supports Europe’s digital and green transition objectives. With operations expected to reach full capacity by 2032, the plant aims to bolster regional security of chip supply and meet the growing demand from automakers like Tesla, BMW, and Renault.

Meta introduces tools to fight disinformation ahead of EU elections

The European Commission announced on Tuesday that Meta Platforms has introduced measures to combat disinformation ahead of the EU elections. Meta has launched 27 real-time visual dashboards, one for each EU member state, to enable third-party monitoring of civic discourse and election activities.

This development comes after the European Commission investigated Meta last month for allegedly breaching EU online content regulations. The investigation highlighted concerns over Meta’s Facebook and Instagram platforms failing to address disinformation and deceptive advertising adequately.

While the formal procedures against Meta continue, the European Commission stated that it would closely monitor the implementation of these new features to ensure their effectiveness in curbing disinformation.

Leadership vacancy stalls EU AI office development

Two months after the European Parliament passed the landmark AI Act, the EU Commission office responsible for its implementation remains understaffed and leaderless. Although such a pace is common for public institutions, stakeholders worry it may delay implementation of the act's hundreds of pages of provisions, especially with some parts coming into effect by the end of the year.

The EU Commission’s Directorate-General for Communication Networks, Content, and Technology (DG Connect), which houses the AI Office, is undergoing a reorganisation. Despite reassurances from officials that preparations are on track, concerns persist about the office’s limited budget, slow hiring process, and the overwhelming workload on the current staff. Three Members of the European Parliament (MEPs) have expressed dissatisfaction with the transparency and progress of the recruitment and leadership processes.

The European Commission has identified 64 deliverables for the AI Office, with prohibitions on certain AI uses set to take effect by the year's end. Codes of practice for general-purpose models, such as ChatGPT, will be developed within nine months of the legislation's enactment. Despite recent recruitment efforts, including two positions opened in March and additional roles for lawyers and AI ethicists soon to be advertised, the hiring process is expected to take several more months.

Why does it matter?

A significant question remains regarding the leadership of the AI Office. The Commission has yet to announce candidates or details of the selection process. Speculation has arisen around MEP Dragoș Tudorache, who has been active in AI policy and is not seeking re-election. However, he has not confirmed any plans post-tenure. The Commission aims to finalise the office’s staffing and leadership to ensure the smooth implementation of the AI Act.

Airlines, hotels, and retailers in EU worry about exclusion in Google’s search alterations

Lobbying groups representing airlines, hotels, and retailers in Europe are urging the EU tech regulators to ensure that Google considers their views, not just those of large intermediaries, when implementing changes to comply with landmark tech regulations. These groups, including Airlines for Europe, Hotrec, EuroCommerce, and Ecommerce Europe, had previously expressed concerns about the potential impact of the EU’s Digital Markets Act (DMA) on their revenues.

The DMA aims to impose rules on tech giants like Google to give users more choice and offer competitors a fairer chance to compete. However, these industry groups fear the proposed adjustments could harm their direct sales revenues and exacerbate discrimination. In a joint letter to EU antitrust chief Margrethe Vestager and EU industry chief Thierry Breton, dated 22 May, they emphasised their mounting concerns regarding the potential consequences of the DMA.

Why does it matter?

Specifically, the groups worry that the proposed changes may give preferential treatment to powerful online intermediaries, resulting in a loss of visibility and traffic for airlines, hotels, merchants, and restaurants.

Despite Google’s acknowledgement in March that changes to search results may impact various businesses, including those in the European market, the company has not provided immediate comment on the recent concerns raised by these lobbying groups. The European Commission, currently investigating Google for possible DMA breaches, has yet to respond to requests for comment on the matter.

ChatGPT faces scrutiny from EU privacy watchdog over data accuracy

The EU’s privacy watchdog task force has raised concerns over OpenAI’s ChatGPT chatbot, stating that the measures taken to ensure transparency are insufficient to comply with data accuracy principles. In a report released on Friday, the task force emphasised that while efforts to prevent misinterpretation of ChatGPT’s output are beneficial, they are not yet sufficient to fully address concerns regarding data accuracy.

The task force was established by Europe’s national privacy watchdogs following concerns raised by authorities in Italy regarding ChatGPT’s usage. Despite ongoing investigations by national regulators, a comprehensive overview of the results has yet to be provided. The findings presented in the report represent a common understanding among national authorities.

Data accuracy is a fundamental principle of the data protection regulations in the EU. The report highlights the probabilistic nature of ChatGPT’s system, which can lead to biased or false outputs. Furthermore, the report warns that users may perceive ChatGPT’s outputs as factually accurate, regardless of their actual accuracy, posing potential risks, especially concerning information about individuals.

EU Chips Act funds new chip pilot line with €2.5 billion

Leading European research labs will receive €2.5 billion under the European Chips Act to establish a pilot line for developing and testing future generations of advanced computer chips, according to Belgium’s IMEC. The initiative is part of the EU’s €43 billion Chips Act, launched in 2023 to bolster domestic chipmaking in response to global shortages during the COVID-19 pandemic.

The pilot line, hosted by Leuven-based research hub IMEC, will focus on sub-2 nanometre chips. This facility aims to give European industry, academia, and start-ups access to cutting-edge chip manufacturing technology that would otherwise be prohibitively expensive. Top chipmakers like TSMC, Intel, and Samsung are already advancing 2-nanometre chips in commercial plants costing up to €20 billion.

The European R&D line will be equipped with technology from European and global firms and is designed to support the development of even more advanced chips in the future. IMEC CEO Luc Van den Hove stated that this investment will double volumes and learning speed, enhancing the European chip ecosystem and driving economic growth across various industries, including automotive, telecommunications, and health.

Funding for this project includes €1.4 billion from several EU programmes and the Flanders government, with an additional €1.1 billion from industry players, including equipment maker ASML. Other participating research labs include CEA-Leti from France, Fraunhofer from Germany, VTT from Finland, CSSNT from Romania, and the Tyndall Institute from Ireland. While aid under the EU plan has been slower to materialise than in other regions, with only STMicroelectronics so far approved for €2.9 billion in aid from France, Intel and TSMC still await approval for substantial funding to build plants in Germany.

EU investigates disinformation on X after Slovakia PM shooting

The EU enforcers responsible for overseeing the Digital Services Act (DSA) are intensifying their scrutiny of disinformation campaigns on X, formerly known as Twitter and owned by Elon Musk, in the aftermath of the recent shooting of Slovakia’s prime minister, Robert Fico. X has been under formal investigation since December over the dissemination of disinformation and the efficacy of its content moderation tools, particularly its ‘Community Notes’ feature. Despite ongoing investigations, no penalties have been imposed thus far.

Elon Musk’s personal involvement in amplifying a post by right-wing influencer Ian Miles Cheong linking the shooting to Robert Fico’s purported rejection of the World Health Organization’s pandemic prevention plan has drawn further attention to X’s role in spreading potentially harmful narratives. In response to inquiries during a press briefing, EU officials confirmed they are closely monitoring content on the platform to assess the effectiveness of X’s measures in combating disinformation.

In addition to disinformation concerns, X’s introduction of its generative AI chatbot, Grok, in the EU has raised regulatory eyebrows. Grok, known for its politically incorrect responses, has been delayed in certain aspects until after the upcoming European Parliament elections due to perceived risks to civic discourse and election integrity. The EU is in close communication with X regarding the rollout of Grok, underscoring the regulatory scrutiny surrounding emerging AI technologies and their potential impact on online discourse and democratic processes.