EU investigates Google-Samsung AI deal

EU antitrust regulators are investigating a deal between Google and Samsung under which Google’s chatbot, Gemini Nano, is embedded in Samsung’s Galaxy S24 smartphones. The European Commission wants to understand whether this multi-year generative AI deal restricts rival chatbots from being installed on Samsung devices, raising concerns about potential anti-competitive practices.

Regulators have sent a questionnaire to industry participants, asking whether the pre-installation of Gemini Nano, via the device or the cloud, limits the number of other AI systems that can be pre-installed. They are also examining whether the arrangement affects interoperability between Gemini Nano and other pre-installed apps on Samsung smartphones.

The investigation aims to determine if competitors have faced challenges in making deals with device manufacturers for the pre-installation of their chatbots and the reasons behind any rejections. Feedback from industry participants is crucial in shaping the EU’s stance on the matter.

Respondents have until this week to submit their responses to the eight-page questionnaire, which will play a key role in assessing the impact of the Google-Samsung deal on market competition.

ByteDance loses EU court challenge

ByteDance, the owner of TikTok, has lost its challenge against being designated a gatekeeper under the EU Digital Markets Act (DMA). The ruling strengthens antitrust regulators’ efforts to limit the influence of Big Tech. The DMA requires gatekeepers to make messaging apps interoperable, allow users to choose among pre-installed apps, and refrain from favouring their own services.

ByteDance argued that the designation could protect dominant companies from competition, but the Court of Justice of the EU (CJEU) ruled that the company had not substantiated its claims. The court highlighted TikTok’s significant growth, making it comparable to rivals such as Meta Platforms and Alphabet.

Judges noted that ByteDance met the DMA’s thresholds on global market capitalisation and the number of TikTok users within the EU. ByteDance expressed disappointment but said it had already taken steps to comply with the DMA’s obligations before last March’s deadline.

Other companies designated as gatekeepers include Alphabet, Amazon, Apple, Booking.com, Meta Platforms, and Microsoft. Apple and Meta are also contesting their gatekeeper labels, with Apple arguing against the classification of its App Stores and iOS operating system.

European Commission investigates DMA interoperability provisions

The European Commission intends to study interoperability provisions in the Digital Markets Act (DMA) through a new tender. This study, commissioned by the Directorate General for Communications Networks, Content, and Technology (DG CONNECT), aims to identify technical challenges and solutions for effective interoperability under the DMA, with a review expected by May 2026.

Under the DMA, gatekeepers must ensure their communication services, such as messaging apps, are interoperable with competitors’ platforms. The requirement is designed to protect competition by allowing users to switch between services more easily, addressing issues particularly noted with Apple’s App Store and Apple Pay. The commissioned study will evaluate the effectiveness of interoperability for number-independent messaging services, such as Facebook’s Messenger and WhatsApp, which do not require mobile number registration.

The evaluation will help determine whether these interoperability requirements should extend to online social networking services. The study will consider practical matters such as security, encryption, personal data collection, user interfaces, and content moderation. Companies like Apple have argued that interoperability could compromise privacy.

The tender for the study was published in the EU Official Journal of Tenders on 11 July. The company awarded the contract will play a crucial role in shaping the future of interoperability under the DMA.

EU AI Act published in Official Journal, initiating countdown to legal deadlines

The European Union has finalised its AI Act, a significant regulatory framework aimed at governing the use of AI within its member states. Published in the EU’s Official Journal, the law will officially come into effect on 1 August, with a phased implementation set to unfold over the next several years. By mid-2026, all provisions are expected to be fully applicable, marking a gradual rollout to accommodate various deadlines and compliance requirements.

Under the AI Act, different obligations are imposed on AI developers based on the perceived risk of their applications. Low-risk uses of AI will generally remain unregulated, while high-risk applications—such as biometric uses in law enforcement and critical infrastructure—will face stringent requirements around data quality and anti-bias measures. The law also introduces transparency requirements for developers of general-purpose AI models, like OpenAI’s GPT, ensuring that the most powerful AI systems undergo systemic risk assessments.

The phased approach begins with a list of prohibited AI uses becoming effective six months after the law’s enactment in early 2025. That includes bans on practices such as social credit scoring and unrestricted compilation of facial recognition databases. Subsequently, codes of practice for AI developers will be implemented nine months after the law takes effect to guide compliance with the new regulations. Concerns have been raised about the influence of AI industry players in shaping these guidelines, prompting efforts to ensure an inclusive drafting process overseen by the newly established EU AI Office.

By August 2025, transparency requirements will apply to general-purpose AI models, with additional time granted to comply with some high-risk AI systems until 2027. These measures reflect the EU’s proactive stance in balancing innovation with robust regulation to foster a competitive AI landscape while safeguarding societal values and interests.

Musk’s X faces EU investigation for DSA violations

According to preliminary findings by EU tech regulators, Elon Musk’s social media company, X, has breached the EU’s online content rules. The European Commission’s decision follows a seven-month investigation under the Digital Services Act (DSA), which requires large online platforms and search engines to tackle illegal content and address risks to public security. The Commission highlighted issues with X’s use of dark patterns, its lack of advertising transparency, and its restricted data access for researchers.

The investigation also found that X’s verified accounts, marked with a blue checkmark, do not adhere to industry standards, impairing users’ ability to verify account authenticity. X has likewise fallen short of the DSA requirement to provide a reliable, searchable advertisement repository, and stands accused of obstructing researchers’ access to its public data, in violation of the DSA.

Why does this matter?

X has several months to respond to the charges and could face a fine of up to 6% of its global turnover if the findings are confirmed. EU industry chief Thierry Breton said that, should the findings be confirmed, the Commission will impose fines and demand significant operational changes.

Meanwhile, the European Commission continues separate investigations into the dissemination of illegal content on X and the measures the platform has taken to counter disinformation. Similar investigations are ongoing for other platforms, including ByteDance’s TikTok, AliExpress, and Meta Platforms.

Apple’s NFC technology will no longer be reserved for Apple Pay and Wallet in the EEA

The EU Commission has announced that Apple will open its near-field communication (NFC) technology to third-party developers, including competitors. Rival mobile wallet providers will now be able to use the technology, giving them access to a new market of users. Companies other than Apple will also be able to offer tap-and-go services that use NFC, meaning access to technologies for things like digital wallets, house and car keys, security badges, loyalty cards, and event tickets.

“We have offered commitments to provide third-party developers in the European Economic Area with an option that will enable their users to make NFC contactless payments from within their iOS apps, separate from Apple Pay and Apple Wallet,” Apple said in an emailed statement to Reuters. EU antitrust chief Margrethe Vestager noted that “consumers will have a wider range of safe and innovative mobile wallets to choose from.”

After the EU shared its concerns about Apple’s market dominance in May 2022, Apple decided to settle the case and offered a first set of commitments. The Commission market-tested these commitments between 19 January 2024 and 19 February 2024, consulting all interested third parties to verify whether they would resolve its competition concerns. After this process, Apple submitted a revised set of commitments, which the EU made legally binding. In this way, Apple avoided a finding that it had violated EU antitrust law, along with a fine.

Apple’s decision to settle the EU antitrust probe stands out given the company has pushed back against the EU competition watchdog on other occasions. Besides this case, it is currently facing a number of investigations under the Digital Markets Act (DMA) over its business practices. It recently received a €1.8 billion fine, which it is currently appealing.

EU’s AI Act influences New Zealand’s digital strategy

As governments worldwide grapple with AI regulation and digital identity strategies, many are looking to the EU for guidance. In New Zealand, the EU’s AI Act and EUDI wallet program serve as valuable models. Dr Nessa Lynch, an expert on emerging technology regulation, highlights the need for legal and policy safeguards to ensure AI development prioritises public interests over commercial ones. She argues that the EU’s AI Act, framed as product safety legislation, protects people from high-risk AI uses and promotes trustworthy AI. However, she notes the controversial exceptions for law enforcement and national security.

Lynch emphasises that regulation must balance innovation and trust. For New Zealand, adopting a robust regulatory framework is crucial for fostering public trust in AI. The current gaps in its privacy and data protection laws, along with unclear AI usage guidelines, could hinder innovation and public confidence. Lynch stresses the importance of a people-centred approach to regulation, ensuring AI is used responsibly and ethically.

Similarly, New Zealand’s digital identity strategy is evolving alongside its AI regulation. The recent launch of the New Zealand Trust Framework Authority aims to verify digital identity service providers. Professor Markus Luczak-Roesch from Victoria University of Wellington highlights the transformative potential of digital ID, which must be managed in line with national values. He points to Estonia and Norway as models for integrating digital ID with robust data infrastructure and ethical AI development, stressing the importance of avoiding technologies that may carry unethical components or incompatible values.

AI cybersecurity in devices deemed high-risk by European Commission

AI-based cybersecurity and emergency services components in internet-connected devices are expected to be classified as high-risk under the AI Act, according to a European Commission document seen by Euractiv. The document, which interprets the relationship between the 2014 Radio Equipment Directive (RED) and the AI Act, marks the first known instance of how AI-based safety components will be treated under the new regulations. The RED pertains to wireless devices, including those using Wi-Fi and Bluetooth, beyond traditional radios.

Under the AI Act, high-risk AI systems will be subject to extensive testing, risk management, security measures, and documentation. The Act includes a list of use cases where AI deployment is automatically considered high-risk, such as in critical infrastructure and law enforcement. It also sets criteria for categorising other high-risk products, requiring third-party conformity assessments in line with sector-specific regulations. AI cybersecurity and emergency services components meet these criteria under the RED, thus being classified as high-risk.

Even in cases where the RED allows for self-assessment compliance with harmonised standards, these AI-based components are still deemed high-risk. The AI Act references numerous sectoral regulations that could classify AI products as high-risk, extending beyond electronics to medical devices, aviation, heavy machinery, and personal watercraft. The preliminary interpretation suggests that self-assessment standards are insufficient to remove the high-risk classification from AI products in these industries.

The AI Act imposes significant requirements on high-risk AI systems, while those not in this category face only minor transparency obligations. The Commission’s document is a preliminary interpretation, and the full application of the AI Act, which spans over 500 pages, remains to be seen. Despite initial estimates that 5-15% of AI systems would be classified as high-risk, a 2022 survey of EU-based startups indicated that 33-50% of these startups consider their products high-risk. Further interpretive work is needed to understand how the AI Act will impact various sectors.

Why does it matter?

The interpretation highlights the European Commission’s stringent approach to regulating AI-based cybersecurity and emergency services components in internet-connected devices. By classifying these components as high-risk, the AI Act mandates rigorous testing, security measures, and documentation, ensuring robust safety standards. The move underscores the EU’s commitment to protecting critical infrastructure and sensitive data, and signals significant regulatory implications for various industries, potentially influencing global standards and practices in AI technology.

EU probes industry on China’s chip production

The European Commission has initiated a consultation with the semiconductor industry to gather views on China’s expanded production of older-generation computer chips, known as legacy chips. The effort precedes two voluntary surveys targeting the chip industry and major chip-using firms, which are due in September. The Commission aims to assess the role of legacy chips in supply chains and explore potential joint measures with the US to address dependency and market distortion.

The move comes amid rising tensions between the EU and China, as the European Union seeks to shield its industries from Chinese competition. Recently, the Commission imposed provisional tariffs of up to 37.6% on Chinese electric vehicles, signalling a potentially tougher stance towards Beijing. Chinese investment in legacy chip production, driven by state subsidies and US restrictions on advanced chips, has raised concerns in the West about long-term market implications and potential oversupply.

The Commission’s antitrust chief, Margrethe Vestager, hinted in April at a possible investigation into legacy chips after discussions with US officials. A detailed report released by the European Commission earlier this year highlighted the extensive support provided by the Chinese government to domestic firms across various sectors, including semiconductors. The new chip-focused surveys are broader in scope than previous US security-focused surveys, aiming to gather comprehensive data on chip sourcing, products, pricing, and competitive estimates.

EU asks Amazon for DSA compliance details

The European Commission has requested that Amazon provide detailed information regarding its measures to comply with the Digital Services Act (DSA) obligations. Specifically, the Commission is interested in the transparency of Amazon’s recommender systems. Amazon has been given a deadline of 26 July to respond.

The DSA mandates that major tech companies, like Amazon, take more responsibility in addressing illegal and harmful content on their platforms. The regulatory push aims to create a safer and more predictable online environment for users. Amazon stated that it is currently reviewing the EU’s request and plans to work closely with the European Commission.

A spokesperson for Amazon expressed support for the Commission’s objectives, emphasising the company’s commitment to a safe and trustworthy shopping experience. Amazon highlighted its significant investments in protecting its platform from bad actors and illegal content and noted that these efforts align with DSA compliance.