ByteDance’s TikTok has agreed to permanently withdraw its TikTok Lite rewards program from the EU to comply with the Digital Services Act (DSA), according to the European Commission. The TikTok Lite rewards program allowed users to earn points by engaging in activities like watching videos and inviting friends.
In April, the EU demanded that TikTok submit a risk assessment for the app shortly after its launch in France and Spain, citing concerns about its potential impact on children and users’ mental health. Under the DSA, large online platforms must report the potential risks of new features to the EU before launch and adopt measures to address those risks.
The available brain time of young Europeans is not a currency for social media—and it never will be ⏳🧠
We have obtained the permanent withdrawal of #TikTokLite “rewards” program, which could have had very addictive consequences.
TikTok has made legally binding commitments to withdraw the rewards program from the EU and not to launch any similar program that would bypass this decision. Breaching these commitments would violate the DSA and could lead to fines. Additionally, an investigation into whether TikTok breached online content rules aimed at protecting children and ensuring transparent advertising is ongoing, putting the platform at risk of further penalties.
The European Commission has initiated a public consultation to gather feedback on draft guidelines addressing exclusionary abuses of dominance. These guidelines cover predatory pricing, margin squeeze, exclusive dealing, and refusal to supply.
According to the Commission, these guidelines aim to enhance legal certainty, benefiting consumers, businesses, national competition authorities, and courts.
The world’s first comprehensive AI law, known as the EU AI Act, officially came into force on 1 August 2024, marking a significant step in regulating AI. This landmark legislation aims to ensure AI’s safe and trustworthy deployment across Europe by setting clear rules and guidelines. While the AI Act is now in effect, it will be fully applicable in two years, with specific provisions, such as bans on prohibited practices, taking effect sooner.
The AI Act establishes a legal framework to address the risks associated with AI while promoting innovation and investment in the technology. It gives AI developers precise requirements, especially for high-risk applications like critical infrastructure, education, and law enforcement. The regulation also includes measures to reduce administrative burdens for small and medium-sized enterprises, encouraging their participation in the AI sector.
Today, the Artificial Intelligence Act comes into force.
Europe's pioneering framework for innovative and safe AI.
It will drive AI development that Europeans can trust.
And provide support to European SMEs and startups to bring cutting-edge AI solutions to market.
A central aspect of the AI Act is its risk-based approach, categorising AI systems into different risk levels, from minimal to unacceptable. High-risk systems, such as those used in healthcare and law enforcement, face stringent obligations to ensure safety and compliance. Additionally, the Act mandates transparency for general-purpose AI models and requires robust risk management and oversight.
The European AI Office has been established to oversee the enforcement and implementation of the AI Act. This office will work with member states to create an environment that respects human rights and fosters AI innovation. As AI evolves, the regulation is designed to adapt to technological changes, ensuring that AI applications remain trustworthy and beneficial for society.
Hewlett Packard Enterprise (HPE) is anticipated to receive unconditional EU antitrust approval for its $14 billion acquisition of Juniper Networks, a leading networking gear maker. The acquisition, announced in January, highlights the industry’s urgency to innovate and develop new products in response to the surge in artificial intelligence-driven services.
The European Commission is set to decide on the deal by 1 August. Both HPE and Juniper have declined to comment on the matter. Sources suggest that HPE plans to emphasise the dominant market position of Cisco, Juniper’s main competitor, to mitigate any potential competition concerns from the EU.
In addition to the EU review, the deal is also under scrutiny by the UK’s antitrust authorities, with their decision expected by 14 August. The acquisition marks a significant move in the tech industry as companies strive to stay competitive in the rapidly evolving AI landscape.
Meta Platforms is facing its first EU antitrust fine for linking its Marketplace service with Facebook. The European Commission is expected to issue the fine within a few weeks, following an accusation over a year and a half ago that the company gave its classified ads service an unfair advantage by bundling it with Facebook.
Allegations include Meta abusing its dominance by imposing unfair trading conditions on competing classified ad services advertising on Facebook and Instagram. The potential fine could reach as much as $13.4 billion, or 10% of Meta’s 2023 global revenue, although such high fines are rarely imposed.
A decision is likely to come in September or October, before EU antitrust chief Margrethe Vestager leaves office in November. Meta has reiterated its stance, claiming the European Commission’s allegations are baseless and stating its product innovation is pro-consumer and pro-competitive.
In a separate development, Meta has been charged by the Commission for not complying with new tech rules due to its ‘pay or consent’ advertising model launched last November. Efforts to settle the investigation by limiting the use of competitors’ advertising data for Marketplace were previously rejected by the EU but accepted by the UK regulator.
Two European Parliament committees have formed a joint working group to oversee the implementation of the AI Act, according to sources familiar with the matter. The committees involved, Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE), are concerned about the transparency of the AI Office’s staffing and the role of civil society in the implementation process.
The European Commission’s AI Office is responsible for coordinating the implementation of the AI Act, which will come into force on 1 August. The Act prohibits certain AI applications, such as real-time biometric identification, with those bans applying six months after entry into force. Full implementation comes two years after the Act’s commencement, by which point the Commission must clarify key provisions.
Traditionally, the European Parliament has had a limited role in regulatory implementation, but MEPs focused on tech policy are pushing for greater involvement, especially with recent digital regulations. The Parliament already monitors the implementation of the Digital Services and Digital Markets Acts, aiming to ensure effective oversight and transparency in these critical areas.
The European Union and Singapore have finalised a digital trade agreement to facilitate cross-border data flows and establish global rules for digital trade. This new deal, which enhances the existing EU-Singapore free trade agreement from 2019, includes provisions for e-signatures, consumer protection, and limits on spam. It also addresses data access and transfer concerns, particularly regarding technology mandates from countries like China.
The agreement is expected to reduce business costs and boost services trade, benefiting both parties. Singapore, a major player in the EU’s services trade, saw its digital services trade reach 43 billion euros ($47 billion) in 2022. For the EU, this deal aligns with its goal to set global standards for digital trade, particularly in the Asia-Pacific region. The EU already has similar agreements with Britain, Chile, New Zealand, and Japan and is negotiating with South Korea.
The agreement, which must be ratified by Singapore, the EU’s national governments, and the European Parliament, reflects the growing importance of digitally delivered services, which have been rising at an average annual rate of 8.1% globally.
Top competition authorities from the EU, UK, and US have issued a joint statement emphasising the importance of fair, open, and competitive markets in developing and deploying generative AI. Leaders from these regions, including Margrethe Vestager of the European Commission, Sarah Cardell of the UK Competition and Markets Authority, Jonathan Kanter of the US Department of Justice, and Lina M. Khan of the US Federal Trade Commission, highlighted their commitment to ensuring effective competition and protecting consumers and businesses from potential market abuses.
The officials recognise the transformational potential of AI technologies but stress the need to safeguard against risks that could undermine fair competition. These risks include the concentration of control over essential AI development inputs, such as specialised chips and vast amounts of data, and the possibility of large firms using their existing market power to entrench or extend their dominance in AI-related markets. The statement also warns against partnerships and investments that could stifle competition by allowing major firms to co-opt competitive threats.
The joint statement outlines several principles for protecting competition within the AI ecosystem, including fair dealing, interoperability, and maintaining choices for consumers and businesses. The authorities are particularly vigilant about the potential for AI to facilitate anti-competitive behaviours, such as price fixing or unfair exclusion. Additionally, they underscore the importance of consumer protection, ensuring that AI applications do not compromise privacy, security, or autonomy through deceptive or unfair practices.
The EU’s Ecodesign for Sustainable Products Regulation (ESPR) comes into force today, mandating Digital Product Passports (DPPs) for most products (excluding food and medicine) by 2030. These passports will contain unique identifiers and machine-readable features to track a product’s lifecycle and offer recycling advice. This regulation aims to improve information exchange, boost recycling rates, and build trust between consumers and businesses.
An Ecodesign Forum is planned for late 2024 or early 2025 to create a comprehensive implementation plan by March 2025. The first products needing DPP compliance are batteries, which from 2027 must include data carriers like QR codes or barcodes linked to a DPP database. This initiative presents a significant challenge, requiring substantial IT infrastructure and data management to meet the Commission’s deadlines.
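The mechanism described above — a data carrier on the product that resolves to a lifecycle record in a central DPP database — can be sketched roughly as follows. Note that the Commission has not yet published the official DPP data model, so the field names, identifier scheme, and registry URL below are all illustrative assumptions rather than the actual format:

```python
import json

# Hypothetical sketch of a Digital Product Passport payload. Every field
# name and URL here is an illustrative assumption, not the official ESPR
# data model, which is still being specified.
def build_dpp_payload(product_id: str, category: str, materials: dict) -> str:
    """Return a JSON string suitable for encoding in a QR code or barcode,
    linking the physical product to a lifecycle record in a DPP registry."""
    record = {
        "dpp_id": f"urn:dpp:{category}:{product_id}",            # hypothetical unique identifier
        "registry_url": f"https://dpp.example.eu/{product_id}",  # assumed resolver endpoint
        "materials": materials,                                  # composition data for recycling advice
        "schema_version": "0.1",                                 # placeholder version marker
    }
    return json.dumps(record)

# Example: a battery record, the first product category required to comply.
payload = build_dpp_payload("BAT-0001", "battery",
                            {"lithium_pct": 2.1, "cobalt_pct": 8.5})
print(payload)
```

In practice, the QR code would encode only the identifier or resolver URL, with the full record served from the registry — the inline JSON above simply makes the linkage concrete.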
Businesses must ensure their systems are DPP-compatible to facilitate smooth information flow throughout the supply chain. Aligning member states and ensuring interoperability will test the DPP’s capabilities, and the transition period is expected to be challenging. However, stakeholders believe the economic and sustainability benefits will outweigh the difficulties.
For consumers, DPPs will encourage informed purchasing by providing detailed information on product disposal and repair, supporting the circular economy. With virgin materials becoming scarcer and more expensive, companies will likely introduce buy-back and reward schemes to improve resource efficiency, similar to initiatives like Apple’s Take Back program.
Meta will withhold its future multimodal AI models from customers in the EU due to a lack of clear regulatory guidance. This decision reflects a growing tension between US tech giants and EU regulators.
Meta plans to release its multimodal Llama model in the coming months, integrating video, audio, images, and text. However, these models will not be available in the EU, impacting both European companies and those offering products in the region.
The company’s larger, text-only Llama 3 model will be available in the EU. Meta’s concerns stem from compliance with the General Data Protection Regulation (GDPR), despite briefings with EU regulators and attempts to address their feedback.
The UK, whose data protection laws are similar to the EU’s, will receive the new model without regulatory delays. Meta argues that delays in Europe harm consumers and competitiveness, pointing out that other tech companies already use European data to train their models.