Ransomware group dismantled by global authorities

An international operation has dismantled the criminal ransomware group Radar/Dispossessor, which had been targeting companies across various sectors, including healthcare and transport. Authorities from the United States and Germany led the effort to bring down the group, which was founded in August 2023 and initially focused on the US before expanding its attacks globally.

The investigation has identified 43 companies as victims, spanning countries such as the UK, Germany, Brazil, and Australia. The group, led by an individual using the alias ‘Brain’, primarily targeted small to medium-sized enterprises. Many more companies are believed to have been affected, with some cases still under investigation.

Radar/Dispossessor exploited vulnerable computer systems, often through weak passwords and the absence of two-factor authentication, to hold data for ransom. Authorities successfully dismantled servers and domains associated with the group in Germany, the US, and Britain.

Twelve suspects have been identified, hailing from various countries, including Germany, Russia, Ukraine, and Kenya. Investigations are ongoing to identify further suspects and uncover more companies that may have been victimised.

X agrees to pause EU data use for AI amid legal dispute

Elon Musk’s social media platform, X, has agreed to pause using data from European Union users to train its AI systems until further court decisions are made. The agreement comes after Ireland’s Data Protection Commission (DPC) sought to suspend X’s processing of user data for AI development, arguing that the platform had started using this data without user consent.

X, formerly known as Twitter, introduced an option for users to opt out of data usage for AI training. However, this was only available from 16 July, despite data processing beginning on 7 May. This delay led the DPC to take legal action, with a court hearing revealing that X would refrain from using data collected between 7 May and 1 August until the issue is resolved.

X’s legal team is expected to file opposition papers against the DPC’s suspension order by 4 September. The platform defended its actions, calling the regulator’s order unwarranted and unjustified. This case follows similar scrutiny faced by other tech giants like Meta and Google, which have also faced regulatory challenges in the EU over their AI systems.

Apple adjusts EU policies amid regulatory pressure

Apple has revised its policies in the European Union, allowing app developers to communicate with customers outside of its App Store. This move follows the European Commission's accusations that the tech giant was breaching the bloc's technology rules. Previously, Apple permitted developers to direct users to a web page for transactions but did not allow broader communication or promotion from within the app.

Under the new policy, developers can promote offers that are available anywhere, not just on their websites. However, Apple has introduced two new fees: a 5% acquisition fee for new users and a 10% store services fee for sales made within 12 months of app installation on any platform. These fees will replace the reduced commission for digital goods and services sold through the App Store.

Spotify, a longtime critic of Apple’s in-app link policies, expressed concern over the new fees, suggesting they could undermine the Digital Markets Act (DMA), which aims to curb the power of big tech companies like Apple. The European Commission had previously criticised Apple’s fees, arguing they were excessive and unnecessary for fair remuneration.

The Commission will review Apple's policy changes to ensure compliance with the DMA, which allows fines of up to 10% of a company's global annual revenue for violations. The charge against Apple was the first issued under the new Digital Markets Act, highlighting the ongoing regulatory scrutiny of big tech companies operating in Europe.

DORA tightens its grip as banks and suppliers rush to meet EU regulations

Financial services firms across the European Union are preparing for new regulations under the Digital Operational Resilience Act (DORA). The law, which aims to enhance cybersecurity, will require banks and their technology suppliers to significantly strengthen their IT infrastructure by January 2025. The regulation mandates robust risk management, incident response, and resilience testing to ensure that financial institutions can withstand cyberattacks and other disruptions.

DORA’s scope extends beyond banks, placing stringent requirements on third-party technology providers. These suppliers must now undergo rigorous testing and reporting processes, as the regulation seeks to uncover dependencies within the digital supply chain. The new law represents a shift in focus towards the security of external tech partners, reflecting the growing reliance of financial institutions on digital services.

Non-compliance with DORA will result in severe penalties, with fines reaching up to 2% of global revenues for financial firms and 1% for IT providers. Individual managers could also face significant fines, further intensifying the pressure on firms to meet the new standards. Despite progress, many in the industry are concerned that not all companies will achieve full compliance by the deadline.

The European Union’s emphasis on cyber resilience highlights the evolving challenges faced by the financial sector. As banks and their suppliers scramble to meet the stringent requirements of DORA, the regulation underscores the critical importance of safeguarding digital infrastructure in an increasingly technology-dependent industry.

TikTok withdraws rewards program from EU to comply with DSA

ByteDance’s TikTok has agreed to permanently withdraw its TikTok Lite rewards program from the EU to comply with the Digital Services Act (DSA), according to the European Commission. The TikTok Lite rewards program allowed users to earn points by engaging in activities like watching videos and inviting friends.

In April, shortly after the app's launch in France and Spain, the EU demanded a risk assessment from TikTok, citing concerns about its potential impact on children and on users' mental health. Under the DSA, large online platforms must report the potential risks of new features to the EU before launching them and must adopt measures to address those risks.

TikTok has made legally binding commitments to withdraw the rewards program from the EU and not to launch any similar program that would bypass this decision. Breaching these commitments would violate the DSA and could lead to fines. Additionally, an investigation into whether TikTok breached online content rules aimed at protecting children and ensuring transparent advertising is ongoing, putting the platform at risk of further penalties.

EU seeks input on dominance abuse guidelines

The European Commission has initiated a public consultation to gather feedback on draft guidelines addressing exclusionary abuses of dominance. These guidelines cover predatory pricing, margin squeeze, exclusive dealing, and refusal to supply.

According to the Commission, these guidelines aim to enhance legal certainty, benefiting consumers, businesses, national competition authorities, and courts.

EU AI Act officially comes into force

The world’s first comprehensive AI law, known as the EU AI Act, officially came into force on 1 August 2024, marking a significant step in regulating AI. This landmark legislation aims to ensure AI’s safe and trustworthy deployment across Europe by setting clear rules and guidelines. While the AI Act is now in effect, it will be fully applicable in two years, with specific provisions, such as bans on prohibited practices, taking effect sooner.

The AI Act establishes a legal framework to address the risks associated with AI while promoting innovation and investment in the technology. It sets out precise requirements for AI developers, especially for high-risk applications in areas such as critical infrastructure, education, and law enforcement. The regulation also includes measures to reduce administrative burdens for small and medium-sized enterprises, encouraging their participation in the AI sector.

A central aspect of the AI Act is its risk-based approach, categorising AI systems into different risk levels, from minimal to unacceptable. High-risk systems, such as those used in healthcare and law enforcement, face stringent obligations to ensure safety and compliance. Additionally, the Act mandates transparency for general-purpose AI models and requires robust risk management and oversight.

The European AI Office has been established to oversee the enforcement and implementation of the AI Act. This office will work with member states to create an environment that respects human rights and fosters AI innovation. As AI evolves, the regulation is designed to adapt to technological changes, ensuring that AI applications remain trustworthy and beneficial for society.

HPE set to gain EU approval for $14 billion Juniper deal

Hewlett Packard Enterprise (HPE) is anticipated to receive unconditional EU antitrust approval for its $14 billion acquisition of Juniper Networks, a leading networking gear maker. The acquisition, announced in January, highlights the industry’s urgency to innovate and develop new products in response to the surge in artificial intelligence-driven services.

The European Commission is set to decide on the deal by 1 August. Both HPE and Juniper have declined to comment on the matter. Sources suggest that HPE plans to emphasise the dominant market position of Cisco, Juniper’s main competitor, to mitigate any potential competition concerns from the EU.

In addition to the EU review, the deal is also under scrutiny by the UK’s antitrust authorities, with their decision expected by 14 August. The acquisition marks a significant move in the tech industry as companies strive to stay competitive in the rapidly evolving AI landscape.

EU prepares hefty fine for Meta’s Marketplace practices

Meta Platforms is facing its first EU antitrust fine for linking its Marketplace service with Facebook. The European Commission is expected to issue the fine within a few weeks, following an accusation over a year and a half ago that the company gave its classified ads service an unfair advantage by bundling it with Facebook.

Allegations include Meta abusing its dominance by imposing unfair trading conditions on competing classified ad services advertising on Facebook and Instagram. The potential fine could reach as much as $13.4 billion, or 10% of Meta’s 2023 global revenue, although such high fines are rarely imposed.

A decision is likely to come in September or October, before EU antitrust chief Margrethe Vestager leaves office in November. Meta has reiterated its stance, claiming the European Commission’s allegations are baseless and stating its product innovation is pro-consumer and pro-competitive.

In a separate development, the Commission has charged Meta with failing to comply with new tech rules over its 'pay or consent' advertising model, launched last November. Efforts to settle the investigation by limiting the use of competitors' advertising data for Marketplace were previously rejected by the EU but accepted by the UK regulator.

European Parliament forms joint working group to monitor AI Act implementation

Two European Parliament committees have formed a joint working group to oversee the implementation of the AI Act, according to sources familiar with the matter. The committees involved, Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE), are concerned about the transparency of the AI Office’s staffing and the role of civil society in the implementation process.

The European Commission's AI Office is responsible for coordinating the implementation of the AI Act, which came into force on 1 August. The Act's bans on certain AI applications, such as real-time biometric identification, will be enforced six months later. Full implementation is set for two years after the Act's entry into force, by which time the Commission must clarify key provisions.

Traditionally, the European Parliament has had a limited role in regulatory implementation, but MEPs focused on tech policy are pushing for greater involvement, especially with recent digital regulations. The Parliament already monitors the implementation of the Digital Services and Digital Markets Acts, aiming to ensure effective oversight and transparency in these critical areas.