Wolfspeed delays $3 billion chip plant in Germany as EU semiconductor push stalls

Wolfspeed has delayed its $3 billion chip plant project in Germany, highlighting the European Union’s challenges in boosting semiconductor production. Originally set to begin construction this year, the plant in Saarland is now postponed to mid-2025. Wolfspeed, under pressure from an activist investor due to a significant drop in stock value, is focusing on ramping up production in New York instead.

The delay reflects broader issues within the EU’s efforts to enhance its semiconductor industry through the 2022 Chips Act, which aimed to mobilise €43 billion in investment. Despite ambitious plans from companies like Intel, TSMC, and Infineon, many projects have yet to receive necessary EU state aid approval, crucial for their financial viability. The region’s goal to capture 20% of the global semiconductor market by 2030 appears increasingly unattainable.

Why does it matter?

Germany, a major player in these plans, faces a budget crisis, casting doubt on its infrastructure commitments, though officials claim semiconductor funding remains secure. Meanwhile, European political shifts could threaten support for key projects, complicating efforts to reduce reliance on Asian chip producers. Despite these setbacks, some projects, like TSMC’s in Dresden and STMicroelectronics’ plant in Italy, are progressing, with EU approval secured and construction ongoing.

Meta halts AI launch in Europe after EU regulator ruling

Meta’s main EU regulator, the Irish Data Protection Commission (DPC), requested that the company delay the training of its large language models (LLMs) on content published publicly by adults on the company’s platforms. In response, Meta announced it would not be launching its AI in Europe for the time being.

The main reason behind the request is Meta’s plan to use this data to train its AI models without explicitly seeking consent. The company claims it must do so or else its AI ‘won’t accurately understand important regional languages, cultures or trending topics on social media’, and notes that it is already developing continent-specific AI technology. Another cause for concern is Meta’s use of information belonging to people who do not use its services. In a message to its Facebook users, it said that it may process information about non-users if they appear in an image or are mentioned on its platforms.

The DPC welcomed Meta’s decision to delay its implementation. The commission is leading the regulation of Meta’s AI tools on behalf of EU data protection authorities (DPAs), 11 of which received complaints from advocacy group NOYB (None Of Your Business). NOYB argues that the GDPR is flexible enough to accommodate this AI, as long as Meta asks for users’ consent. The delay comes right before Meta’s new privacy policy comes into force on 26 June.

Beyond the EU, the executive director of the UK’s Information Commissioner’s Office was pleased with the delay, and added that ‘in order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset.’

EU charges Apple and Meta for non-compliance

Apple and Meta Platforms are set to face charges from the European Commission for failing to comply with the EU’s Digital Markets Act (DMA) before the summer. The DMA aims to curb the dominance of Big Tech by ensuring fair competition and making it easier for users to switch between competing services. Apple and Meta are the Commission’s priority cases, with Apple expected to be charged first, followed by Meta.

Apple’s charges will focus on its App Store policies, which allegedly restrict app developers from informing users about alternative offers and impose new fees. Additionally, a separate investigation into Apple’s Safari web browser is expected to take more time. Meta’s charges will centre on its recent ‘pay or consent’ model for Facebook and Instagram, which requires users to either pay for an ad-free experience or consent to targeted advertising.

Both companies have the opportunity to address the concerns before the final decision, which could result in fines of up to 10% of their global annual turnover. Apple stated in March that it believes its plans comply with the DMA and is engaging constructively with the Commission. Meta and the Commission declined to comment on the ongoing investigations.

European groups urge fairness in EU cybersecurity label for Big Tech

A proposed cybersecurity certification scheme (EUCS) for cloud services has raised concerns among 26 industry groups across Europe, who caution against potential discrimination towards major US tech firms like Amazon, Alphabet’s Google, and Microsoft. The European Commission, EU cybersecurity agency ENISA, and EU countries are set to discuss the scheme, which has seen multiple revisions since its draft release in 2020. The EUCS aims to help governments and businesses select secure and reliable cloud vendors, a critical consideration in the rapidly growing global cloud computing industry.

The latest version of the scheme, updated in March, removed stringent sovereignty requirements that would have forced US tech giants to form joint ventures or collaborate with EU-based companies to handle data within the bloc, a criterion for earning the highest EU cybersecurity label. In a joint letter, the industry groups argued for a non-discriminatory EUCS that fosters the free movement of cloud services across Europe, aligning with industry best practices and supporting Europe’s digital goals and security resilience.

The signatories, which include various chambers of commerce and industry associations from several European countries, emphasised the importance of diverse and resilient cloud technologies for their members to compete globally. They welcomed the removal of ownership controls and specific data protection requirements, arguing that these changes would ensure cloud security improvements without discriminating against non-EU companies.

EU cloud vendors like Deutsche Telekom, Orange, and Airbus have advocated for sovereignty requirements, fearing non-EU government access to European data under foreign laws. However, the industry groups contend that the inclusive approach of the revised EUCS will better serve Europe’s digital and security needs while promoting a competitive market environment.

India’s EU-inspired antitrust law raises concerns among tech giants

India’s recent legislative push to implement antitrust laws modelled on those in the EU has stirred significant concern among technology giants operating within the country, including Google, Meta, Apple, and Amazon. The move, aimed at curbing the dominance of big tech companies and fostering a more competitive market environment, has met a mixed reception, particularly within the technology sector.

The proposed antitrust law draws inspiration from the regulatory framework of the EU, which has been at the forefront of global antitrust enforcement. The EU’s regulations are known for their rigorous scrutiny of large tech corporations, often resulting in major fines and operational restrictions for companies that violate competition laws. The adoption of this model in India signals a shift towards more assertive regulatory practices in the tech industry.

The Indian government is examining a panel’s report proposing a new ‘Digital Competition Bill’ to complement existing antitrust laws. The law would target ‘systemically significant digital’ companies with a domestic turnover exceeding $480 million or a global turnover over $30 billion, along with a local user base of at least 10 million for their digital services. Companies would be required to operate in a fair and non-discriminatory manner, with the bill recommending a penalty of up to 10% of a company’s global turnover for violations, mirroring the EU’s Digital Markets Act. Big digital companies would be prohibited from exploiting non-public user data and from favouring their own products or services on their platforms. Additionally, they would be barred from restricting users’ ability to download, install, or use third-party apps in any way, and must allow users to select default settings freely.
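The designation criteria reported above amount to a simple threshold test. As an illustrative sketch only (the thresholds are as reported, but the function name, parameters, and the way the financial and user-base tests combine are assumptions for this example, not the bill’s actual legal test):

```python
# Hypothetical sketch of the reported 'systemically significant digital'
# designation thresholds: domestic turnover above $480M OR global turnover
# above $30B, combined with a local user base of at least 10 million.
# Names and the AND/OR combination logic are illustrative assumptions.

def meets_reported_thresholds(domestic_turnover_usd: float,
                              global_turnover_usd: float,
                              local_users: int) -> bool:
    financial_test = (domestic_turnover_usd > 480e6
                      or global_turnover_usd > 30e9)
    user_test = local_users >= 10_000_000
    return financial_test and user_test

# A firm with $1B domestic turnover and 50M local users would qualify:
print(meets_reported_thresholds(1e9, 5e9, 50_000_000))  # True
# A firm below both turnover thresholds would not, regardless of users:
print(meets_reported_thresholds(1e8, 1e9, 50_000_000))  # False
```

The sketch only captures the quantitative criteria described in the report; actual designation would of course involve regulatory judgment beyond any numeric test.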

Both domestic and international tech firms have voiced concerns about the potential impact of these regulations on their operations. A key US lobby group has already opposed the move, fearing its business impact. The primary worry is that the new laws could stifle innovation and place difficult compliance burdens on companies. That sentiment echoes the broader global debate on the balance between regulation and innovation in the tech sector.

Why does it matter?

  • Market Dynamics: These laws could significantly alter the competitive landscape in India’s tech industry, making it easier for smaller companies to challenge established giants.
  • Consumer Protection: Robust antitrust regulations are designed to protect consumers from monopolistic practices that can lead to higher prices, reduced choices, and stifled innovation. Ensuring fair competition can enhance consumer welfare.
  • Global Influence: By aligning its regulatory framework with that of the EU, India could influence how other emerging markets approach antitrust issues.
  • Investment Climate: Clear and consistent regulatory standards can attract foreign investment by providing a predictable business environment. However, the perceived stringency of these laws could also deter some investors concerned about compliance costs and regulatory risks.

LinkedIn disables targeted ads tool to comply with EU regulations

In a move to align with the EU’s technology regulations, LinkedIn, the professional networking platform owned by Microsoft, has disabled a tool that facilitated targeted advertising. The decision comes in adherence to the Digital Services Act (DSA), which imposes strict rules on tech companies operating within the EU.

The move by LinkedIn followed a complaint by several civil society organisations, including European Digital Rights (EDRi), Gesellschaft für Freiheitsrechte (GFF), Global Witness, and Bits of Freedom, to the European Commission. These groups raised concerns that LinkedIn’s tool might allow advertisers to target users based on sensitive personal data such as racial or ethnic origin, political opinions, and other personal details due to their membership in LinkedIn groups.

In March, the European Commission had sent a request for information to LinkedIn after these groups highlighted potential violations of the DSA. The DSA requires online intermediaries to provide users with more control over their data, including an option to turn off personalised content and to disclose how algorithms impact their online experience. It also prohibits the use of sensitive personal data, such as race, sexual orientation, or political opinions, for targeted advertising. In recent years, the EU has been at the forefront of enforcing data privacy and protection laws, notably with the GDPR. The DSA builds on these principles, focusing more explicitly on the accountability of online platforms and their role in shaping public discourse.

A LinkedIn spokesperson emphasised that the platform remains committed to supporting its users and advertisers, even as it navigates these regulatory changes. “We are continually reviewing and updating our processes to ensure compliance with applicable laws and regulations,” the spokesperson said. “Disabling this tool is a proactive step to align with the DSA’s requirements and to maintain the trust of our community.” EU industry chief Thierry Breton commented on LinkedIn’s move, stating, “The Commission will monitor the effective implementation of LinkedIn’s public pledge to ensure full compliance with the DSA.” 

Why does it matter?

The impact of LinkedIn’s decision extends beyond its immediate user base and advertisers. Targeted ads have been a lucrative source of income for social media platforms, allowing advertisers to reach niche markets with high precision. By disabling this tool, LinkedIn is setting a precedent for other tech companies to follow, highlighting the importance of regulatory compliance and user trust.

Meta faces EU complaints over AI data use

Meta Platforms is facing 11 complaints over proposed changes to its privacy policy that could violate EU privacy regulations. The changes, set to take effect on 26 June, would allow Meta to use personal data, including posts and private images, to train its AI models without user consent. Advocacy group NOYB has urged privacy watchdogs to take immediate action against these changes, arguing that they breach the EU’s General Data Protection Regulation (GDPR).

Meta claims it has a legitimate interest in using users’ data to develop its AI models, which can be shared with third parties. However, NOYB founder Max Schrems contends that the European Court of Justice has previously ruled against Meta’s arguments for similar data use in advertising, suggesting that the company is ignoring these legal precedents. Schrems criticises Meta’s approach, stating that the company should obtain explicit user consent rather than complicating the opt-out process.

In response to the impending policy changes, NOYB has called on data protection authorities across multiple European countries, including Austria, Germany, and France, to initiate an urgent procedure to address the situation. If found in violation of GDPR, Meta could face strict fines.

CODE coalition advocates for open digital ecosystems to drive EU growth and innovation

The Coalition for Open Digital Ecosystems (CODE), a collaborative industry initiative launched in late 2023 by tech giants like Meta, Google and Qualcomm, held its first public event in Brussels advocating for open digital ecosystems to stimulate growth, foster innovation, and empower consumers, particularly within the challenging global context of the EU’s economy. The event hosted a high-level panel discussion with representatives from Meta, BEUC, the European Parliament and Copenhagen Business School. 

Qualcomm CEO Cristiano Amon gave an interview to Euractiv where he emphasised CODE’s three key elements of openness – seamless connectivity and interoperability, consumer choice, and an environment of open access. These elements aim to enhance user experience, maintain data access, and provide fair access to digital tools for developers, particularly smaller companies and startups. Amon highlighted the importance of interoperability and fair access for developers, especially as platforms evolve and become more relevant for various devices, including cars. He also stressed the need to provide fair access for smaller companies with new ideas to participate and reach customers in a competitive environment.

He said that Qualcomm is focused on developing computing engines, such as the Neural Processing Unit (NPU), which is designed to run all the time and handle multiple models. This development aims to add computing capability to various devices while addressing the challenge of integrating this new engine into devices without compromising battery life. Amon also expressed a positive view of the EU’s Digital Markets Act (DMA), applauding the European regulatory leadership for their focus on the importance of open and interoperable platforms. 

Why does it matter?

The panel discussion envisioned a positive scenario for the European digital agenda, highlighting the importance of openness, interoperability, and collaboration for consumers, businesses, and innovation. CODE’s emergence as a new stakeholder in the Brussels digital, tech, and competition policy space highlights the growing recognition of the importance of open digital ecosystems in fostering growth, innovation, and consumer empowerment within the EU’s digital landscape.

TikTok fails disinformation test ahead of EU elections, study reveals

A recent study by Global Witness has revealed alarming deficiencies in TikTok’s ability to manage disinformation related to the upcoming EU elections. The investigation tested the platform’s content moderation by submitting 16 disinformation ads. TikTok approved all of these ads, which included false information about polling station closures, incorrect voting methods, and incitements to violence.

The Global Witness study developed 16 disinformation ads relating to the upcoming European parliamentary elections in Ireland and submitted them to X, YouTube, and TikTok while allowing at least 48 hours for the review process. Additionally, Global Witness said it is submitting a complaint to the Irish regulator regarding TikTok’s failure to mitigate electoral threats. Examples of fake ads include:

  • Incorrect voting method: ‘Don’t vote in person this EU election! New reports find that ballots are being altered by election workers. Vote instead by texting 05505’
  • Incorrect voting requirements: ‘New rules change ID requirement for 2024 elections. If you don’t have a valid driving licence, you can’t vote this election day’
  • Polling station closure: ‘Following an unprecedented spike in contagious diseases, the Electoral Commission is to close all polling stations to protect staff and public health. Please vote online instead.’

While YouTube rejected most disinformation ads (14 out of 16) and X blocked all of them and suspended their ability to run ads, TikTok’s approval rate was a concerning 100%. This highlights a significant vulnerability in TikTok’s moderation process, especially given its large and youthful user base. 

Why does it matter?

TikTok’s failure to effectively moderate election-related content violates both its own policies, which ‘do not allow misinformation or content about civic and electoral process that may result in voter interference, disrupt the peaceful transfer of power, or lead to off-platform violence’, and the EU’s Digital Services Act, which requires very large online platforms (VLOPs) to mitigate electoral risks by ensuring that they ‘are able to react rapidly to manipulation of their service aimed at undermining the electoral process and attempts to use disinformation and information manipulation to suppress voters.’

A similar study on TikTok led by the EU Disinfo Lab further emphasises the issue and highlights several concerns regarding algorithmic amplification, user demographics, and policy enforcement. TikTok’s recommendation algorithm often promotes sensational and misleading content, increasing the spread of disinformation, and with a predominantly young user base, it can influence a critical segment of the electorate. Despite TikTok’s policies against political ads and disinformation, enforcement is inconsistent and often ineffective.

In TikTok’s response to the study, the platform acknowledged a violation of its policies, attributing it to ‘human error’, and said it has launched an internal investigation and implemented new processes to prevent this from happening in the future.

EU alleges Russian disinformation ahead of elections

European governments are raising alarms over alleged Russian disinformation campaigns as the EU prepares for its parliamentary elections from 6 to 9 June. They claim Moscow, alongside pro-Kremlin actors, is engaged in a broad interference effort to discredit European governments and destabilise the EU. Key tactics allegedly include the spread of manipulated information, deepfake videos, and fake news websites designed to resemble legitimate sources. For instance, the Czech Republic has identified voiceofeurope.com as a leading platform for pro-Russian influence operations, allegedly funded by Ukrainian politician Viktor Medvedchuk.

Russia, however, vehemently denies these accusations, labelling them as part of a Western-led information war aimed at tarnishing its reputation. Russian officials argue that the West is suppressing alternative viewpoints and has banned Russian state media such as RIA Novosti and Izvestia. They contend that the West’s intolerance for dissenting narratives fuels these allegations, positioning Russia as a fabricated enemy.

European officials also point to the sophisticated use of ‘deepfakes’ and ‘doppelganger’ sites in these disinformation efforts. Deepfakes, created with AI to produce realistic fake media, have been used to spread false narratives, such as the fake recording of Slovak politician Michal Simecka discussing vote rigging. Doppelganger sites, mimicking legitimate news sources, have disseminated false information, including fabricated stories about France’s policies.

In response, the EU leaders emphasise the need for a strong, democratic Europe, while new regulations under the Digital Services Act demand swift action against illegal content and deceptive practices on social media platforms. Companies like Meta, Google, and TikTok have announced measures to combat disinformation before the elections.