India’s EU-inspired antitrust law raises concerns among tech giants

India’s recent legislative push to implement antitrust rules modelled on those of the EU has stirred significant concern among technology giants operating in the country, such as Google, Meta, Apple, and Amazon. The move, aimed at curbing the dominance of big tech companies and fostering a more competitive market, has met with a mixed reception, particularly within the technology sector.

The proposed antitrust law draws inspiration from the regulatory framework of the EU, which has been at the forefront of global antitrust enforcement. The EU’s regulations are known for their rigorous scrutiny of large tech corporations, often resulting in major fines and operational restrictions for companies that violate competition laws. Adopting this model signals a shift towards more assertive regulation of India’s tech industry.

The Indian government is examining a panel’s report proposing a new ‘Digital Competition Bill’ to complement existing antitrust laws. The proposed law would target ‘systemically significant digital’ companies with a domestic turnover exceeding $480 million or a global turnover above $30 billion, along with a local user base of at least 10 million for their digital services. Companies would be required to operate in a fair and non-discriminatory manner, and the bill recommends a penalty of up to 10% of a company’s global turnover for violations, mirroring the EU’s Digital Markets Act. Big digital companies would be prohibited from exploiting non-public user data and from favouring their own products or services on their platforms. Additionally, they would be barred from restricting, in any way, users’ ability to download, install, or use third-party apps, and they would have to allow users to select default settings freely.
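To make the reported thresholds concrete, the minimal sketch below restates them as a simple eligibility check. It only illustrates the figures quoted in this report, not the bill’s actual designation test, and the constant and function names are hypothetical.

```python
# A minimal sketch of the reported thresholds; names and structure are
# hypothetical and do not come from the bill's text.

DOMESTIC_TURNOVER_USD = 480_000_000    # domestic turnover exceeding $480 million
GLOBAL_TURNOVER_USD = 30_000_000_000   # or global turnover over $30 billion
LOCAL_USERS = 10_000_000               # and at least 10 million local users
MAX_PENALTY_RATE = 0.10                # penalty of up to 10% of global turnover


def is_systemically_significant(domestic_turnover_usd: float,
                                global_turnover_usd: float,
                                local_users: int) -> bool:
    """Check a company against the reported turnover and user-base thresholds."""
    meets_turnover_test = (domestic_turnover_usd > DOMESTIC_TURNOVER_USD
                           or global_turnover_usd > GLOBAL_TURNOVER_USD)
    return meets_turnover_test and local_users >= LOCAL_USERS


def max_penalty_usd(global_turnover_usd: float) -> float:
    """Upper bound of the reported penalty: up to 10% of global turnover."""
    return MAX_PENALTY_RATE * global_turnover_usd


# Example: a company with $1bn domestic turnover and 25 million local users
# would fall within scope, with maximum exposure of 10% of its global turnover.
print(is_systemically_significant(1_000_000_000, 5_000_000_000, 25_000_000))  # True
print(max_penalty_usd(5_000_000_000))  # 500000000.0
```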

Both domestic and international tech firms have voiced concerns about the potential impact of these regulations on their operations. A key US lobby group has already opposed the move, fearing its business impact. The primary worry is that the new rules could stifle innovation and impose heavy compliance burdens on companies. That sentiment echoes the broader global debate on balancing regulation and innovation in the tech sector.

Why does it matter?

  • Market Dynamics: These laws could significantly alter the competitive landscape in India’s tech industry, making it easier for smaller companies to challenge established giants.
  • Consumer Protection: Robust antitrust regulations are designed to protect consumers from monopolistic practices that can lead to higher prices, reduced choices, and stifled innovation. Ensuring fair competition can enhance consumer welfare.
  • Global Influence: By aligning its regulatory framework with that of the EU, India could influence how other emerging markets approach antitrust issues.
  • Investment Climate: Clear and consistent regulatory standards can attract foreign investment by providing a predictable business environment. However, the perceived stringency of these laws could also deter some investors concerned about compliance costs and regulatory risks.

LinkedIn disables targeted ads tool to comply with EU regulations

In a move to align with the EU’s technology regulations, LinkedIn, the professional networking platform owned by Microsoft, has disabled a tool that facilitated targeted advertising. The decision was taken to comply with the Digital Services Act (DSA), which imposes strict rules on tech companies operating within the EU.

The move by LinkedIn followed a complaint by several civil society organizations, including European Digital Rights (EDRi), Gesellschaft für Freiheitsrechte (GFF), Global Witness, and Bits of Freedom, to the European Commission. These groups raised concerns that LinkedIn’s tool might allow advertisers to target users based on sensitive personal data such as racial or ethnic origin, political opinions, and other personal details due to their membership in LinkedIn groups.

In March, the European Commission had sent a request for information to LinkedIn after these groups highlighted potential violations of the DSA. The DSA requires online intermediaries to give users more control over their data, including an option to turn off personalised content, and to disclose how algorithms shape their online experience. It also prohibits the use of sensitive personal data, such as race, sexual orientation, or political opinions, for targeted advertising. In recent years, the EU has been at the forefront of enforcing data privacy and protection laws, notably with the GDPR. The DSA builds on these principles, focusing more explicitly on the accountability of online platforms and their role in shaping public discourse.

A LinkedIn spokesperson emphasised that the platform remains committed to supporting its users and advertisers, even as it navigates these regulatory changes. “We are continually reviewing and updating our processes to ensure compliance with applicable laws and regulations,” the spokesperson said. “Disabling this tool is a proactive step to align with the DSA’s requirements and to maintain the trust of our community.” EU industry chief Thierry Breton commented on LinkedIn’s move, stating, “The Commission will monitor the effective implementation of LinkedIn’s public pledge to ensure full compliance with the DSA.” 

Why does it matter?

The impact of LinkedIn’s decision extends beyond its immediate user base and advertisers. Targeted ads have been a lucrative source of income for social media platforms, allowing advertisers to reach niche markets with high precision. By disabling this tool, LinkedIn is setting a precedent for other tech companies to follow, highlighting the importance of regulatory compliance and user trust.

Meta faces EU complaints over AI data use

Meta Platforms is facing 11 complaints over proposed changes to its privacy policy that could violate EU privacy regulations. The changes, set to take effect on 26 June, would allow Meta to use personal data, including posts and private images, to train its AI models without user consent. Advocacy group NOYB has urged privacy watchdogs to take immediate action against these changes, arguing that they breach the EU’s General Data Protection Regulation (GDPR).

Meta claims it has a legitimate interest in using users’ data to develop its AI models, which can be shared with third parties. However, NOYB founder Max Schrems contends that the European Court of Justice has previously ruled against Meta’s arguments for similar data use in advertising, suggesting that the company is ignoring these legal precedents. Schrems criticises Meta’s approach, stating that the company should obtain explicit user consent rather than complicating the opt-out process.

In response to the impending policy changes, NOYB has called on data protection authorities across multiple European countries, including Austria, Germany, and France, to initiate an urgent procedure to address the situation. If found in violation of the GDPR, Meta could face fines of up to 4% of its global annual turnover.

CODE coalition advocates for open digital ecosystems to drive EU growth and innovation

The Coalition for Open Digital Ecosystems (CODE), a collaborative industry initiative launched in late 2023 by tech giants such as Meta, Google, and Qualcomm, held its first public event in Brussels, advocating for open digital ecosystems to stimulate growth, foster innovation, and empower consumers, particularly given the challenging global context facing the EU’s economy. The event hosted a high-level panel discussion with representatives from Meta, BEUC, the European Parliament, and Copenhagen Business School.

Qualcomm CEO Cristiano Amon gave an interview to Euractiv in which he emphasised CODE’s three key elements of openness: seamless connectivity and interoperability, consumer choice, and an environment of open access. These elements aim to enhance user experience, maintain data access, and provide fair access to digital tools for developers, particularly smaller companies and startups. Amon highlighted the importance of interoperability and fair access for developers, especially as platforms evolve and become relevant to a wider range of devices, including cars. He also stressed the need to give smaller companies with new ideas fair access to participate and reach customers in a competitive environment.

He said that Qualcomm is focused on developing computing engines, such as the Neural Processing Unit (NPU), which is designed to run continuously and handle multiple models. This development aims to add computing capability to a range of devices while addressing the challenge of integrating the new engine without compromising battery life. Amon also expressed a positive view of the EU’s Digital Markets Act (DMA), applauding European regulators for their focus on open and interoperable platforms.

Why does it matter?

The panel discussion envisioned a positive scenario for the European digital agenda, emphasising the importance of openness, interoperability, and collaboration for consumers, businesses, and innovation. CODE’s emergence as a new stakeholder in the Brussels digital, tech, and competition policy space reflects the growing recognition that open digital ecosystems can foster growth, innovation, and consumer empowerment within the EU’s digital landscape.

TikTok fails disinformation test ahead of EU elections, study reveals

A recent study by Global Witness has revealed alarming deficiencies in TikTok’s ability to manage disinformation related to the upcoming EU elections. The investigation tested the platform’s content moderation by submitting 16 disinformation ads. TikTok approved all of these ads, which included false information about polling station closures, incorrect voting methods, and incitements to violence.

The Global Witness study developed 16 disinformation ads relating to the upcoming European parliamentary elections in Ireland and submitted them to X, YouTube, and TikTok, allowing at least 48 hours for the review process. Additionally, Global Witness said that it is submitting a complaint to Irish regulators regarding TikTok’s failure to mitigate electoral threats. Examples of the fake ads include:

  • Incorrect voting method: ‘Don’t vote in person this EU election! New reports find that ballots are being altered by election workers. Vote instead by texting 05505’
  • Incorrect voting requirements: ‘New rules change ID requirement for 2024 elections. If you don’t have a valid driving licence, you can’t vote this election day’
  • Polling station closure: ‘Following an unprecedented spike in contagious diseases, the Electoral Commission is to close all polling stations to protect staff and public health. Please vote online instead.’

While YouTube rejected most of the disinformation ads (14 out of 16) and X blocked all of them and suspended the submitting account’s ability to run ads, TikTok’s approval rate was a concerning 100%. This highlights a significant vulnerability in TikTok’s moderation process, especially given its large and youthful user base.

Why does it matter?

TikTok’s failure to effectively moderate election-related content violates both its own policies, which ‘do not allow misinformation or content about civic and electoral process that may result in voter interference, disrupt the peaceful transfer of power, or lead to off-platform violence’, and the EU’s Digital Services Act, which requires very large online platforms (VLOPs) to mitigate electoral risks by ensuring that they ‘are able to react rapidly to manipulation of their service aimed at undermining the electoral process and attempts to use disinformation and information manipulation to suppress voters.’

A similar study on TikTok led by the EU DisinfoLab further underlines the issue, highlighting concerns around algorithmic amplification, user demographics, and policy enforcement. TikTok’s recommendation algorithm often promotes sensational and misleading content, increasing the spread of disinformation, and with a predominantly young user base, the platform can influence a critical segment of the electorate. Despite TikTok’s policies against political ads and disinformation, enforcement is inconsistent and often ineffective.

In TikTok’s response to the study, the platform acknowledged the violation of its policies, citing an internal investigation into the ‘human error’ and the implementation of new processes to prevent this from happening in the future.

EU alleges Russian disinformation ahead of elections

European governments are raising alarms over alleged Russian disinformation campaigns as the EU prepares for its parliamentary elections on 6-9 June. They claim Moscow, alongside pro-Kremlin actors, is engaged in a broad interference effort to discredit European governments and destabilise the EU. Key tactics allegedly include the spread of manipulated information, deepfake videos, and fake news websites designed to resemble legitimate sources. For instance, the Czech Republic has identified voiceofeurope.com as a leading platform for pro-Russian influence operations, allegedly funded by Ukrainian politician Viktor Medvedchuk.

Russia, however, vehemently denies these accusations, labelling them as part of a Western-led information war aimed at tarnishing its reputation. Russian officials argue that the West is suppressing alternative viewpoints and has banned Russian state media such as RIA Novosti and Izvestia. They contend that the West’s intolerance for dissenting narratives fuels these allegations, positioning Russia as a fabricated enemy.

European officials also point to the sophisticated use of ‘deepfakes’ and ‘doppelganger’ sites in these disinformation efforts. Deepfakes, created with AI to produce realistic fake media, have been used to spread false narratives, such as the fake recording of Slovak politician Michal Simecka discussing vote rigging. Doppelganger sites, mimicking legitimate news sources, have disseminated false information, including fabricated stories about France’s policies.

In response, EU leaders emphasise the need for a strong, democratic Europe, while new regulations under the Digital Services Act demand swift action against illegal content and deceptive practices on social media platforms. Companies like Meta, Google, and TikTok have announced measures to combat disinformation before the elections.

EU approves state aid for the construction of microchip plant in Sicily

The European Commission has given the green light to Italian state aid for semiconductor manufacturer STMicroelectronics to construct a €5 billion microchip plant in Sicily. This approval comes as Europe seeks to reduce dependence on Asian imports for critical manufacturing components. The plant in Catania will specialise in producing microchips that enhance energy efficiency in electric vehicles and will receive a direct grant of about €2 billion from Rome.

The move comes amidst heightened scrutiny of Europe’s reliance on Asian chip supplies due to pandemic-related disruptions and trade tensions with China. To address these concerns, the EU has introduced its Chips Act, aiming to attract chip manufacturers and secure vital components for hi-tech industries. European antitrust chief Margrethe Vestager emphasised the strategic importance of diversifying chip supply chains and reducing dependencies on single suppliers.

As major shareholders in STMicro, the governments of France and Italy are backing the project. The new plant, STMicro’s second facility in Sicily, will produce silicon carbide chips known for their energy efficiency. This initiative signals a shift towards self-sufficiency in semiconductor production and supports Europe’s digital and green transition objectives. With operations expected to reach full capacity by 2032, the plant aims to bolster regional security of chip supply and meet the growing demand from automakers like Tesla, BMW, and Renault.

Meta introduces tools to fight disinformation ahead of EU elections

The European Commission announced on Tuesday that Meta Platforms has introduced measures to combat disinformation ahead of the EU elections. Meta has launched 27 real-time visual dashboards, one for each EU member state, to enable third-party monitoring of civic discourse and election activities.

This development comes after the European Commission investigated Meta last month for allegedly breaching EU online content regulations. The investigation highlighted concerns over Meta’s Facebook and Instagram platforms failing to address disinformation and deceptive advertising adequately.

While the formal procedures against Meta continue, the European Commission stated that it would closely monitor the implementation of these new features to ensure their effectiveness in curbing disinformation.

Leadership vacancy stalls EU AI office development

Two months after the European Parliament passed the landmark AI Act, the EU Commission office responsible for its implementation remains understaffed and leaderless. Although such a pace is common for public institutions, stakeholders worry it may delay the implementation of the act’s hundreds of pages of provisions, especially with some parts coming into effect by the end of the year.

The EU Commission’s Directorate-General for Communication Networks, Content, and Technology (DG Connect), which houses the AI Office, is undergoing a reorganisation. Despite reassurances from officials that preparations are on track, concerns persist about the office’s limited budget, slow hiring process, and the overwhelming workload on the current staff. Three Members of the European Parliament (MEPs) have expressed dissatisfaction with the transparency and progress of the recruitment and leadership processes.

The European Commission has identified 64 deliverables for the AI Office, including prohibitions on certain AI uses that are set to take effect by the year’s end. Codes of practice for general-purpose models, such as ChatGPT, will be developed within nine months of the legislation’s enactment. Despite recent recruitment efforts, including two positions opened in March and additional roles for lawyers and AI ethicists soon to be advertised, the hiring process is expected to take several more months.

Why does it matter?

A significant question remains regarding the leadership of the AI Office. The Commission has yet to announce candidates or details of the selection process. Speculation has centred on MEP Dragoș Tudorache, who has been active in AI policy and is not seeking re-election; however, he has not confirmed any post-tenure plans. The Commission aims to finalise the office’s staffing and leadership to ensure the smooth implementation of the AI Act.

Airlines, hotels, and retailers in the EU worry about exclusion from Google’s search alterations

Lobbying groups representing airlines, hotels, and retailers in Europe are urging EU tech regulators to ensure that Google considers their views, not just those of large intermediaries, when implementing changes to comply with landmark tech regulations. These groups, including Airlines for Europe, Hotrec, EuroCommerce, and Ecommerce Europe, had previously expressed concerns about the potential impact of the EU’s Digital Markets Act (DMA) on their revenues.

The DMA aims to impose rules on tech giants like Google to give users more choice and offer competitors a fairer chance to compete. However, these industry groups fear the proposed adjustments could harm their direct sales revenues and exacerbate discrimination. In a joint letter to EU antitrust chief Margrethe Vestager and EU industry chief Thierry Breton, dated 22 May, they emphasised their mounting concerns regarding the potential consequences of the DMA.

Why does it matter?

Specifically, the groups worry that the proposed changes may give preferential treatment to powerful online intermediaries, resulting in a loss of visibility and traffic for airlines, hotels, merchants, and restaurants.

Despite Google’s acknowledgement in March that changes to search results may impact various businesses, including those in the European market, the company has not provided immediate comment on the recent concerns raised by these lobbying groups. The European Commission, currently investigating Google for possible DMA breaches, has yet to respond to requests for comment on the matter.