Google overturns €1.49 billion antitrust fine in EU court

Google secured a significant victory on Wednesday, overturning a €1.49 billion ($1.66 billion) fine imposed by the European Commission in 2019. The fine was levied for antitrust violations: the Commission accused Google of abusing its dominance in online search advertising by restricting websites from using advertising brokers other than its AdSense platform. These practices, deemed illegal by the Commission, were said to have spanned from 2006 to 2016.

The EU’s General Court in Luxembourg, while agreeing with most of the European Commission’s findings, annulled the hefty fine. The judges ruled that the Commission had not fully considered all relevant factors, particularly the duration of the unfair contractual clauses, an omission that proved decisive in overturning the penalty. Despite the annulment, the ruling upheld many of the Commission’s assessments; only the financial punishment fell.

The fine was one of three that have cost Google a combined total of €8.25 billion in antitrust penalties, triggered by complaints from rivals such as Microsoft. Google noted that it had already revised the contracts in question in 2016 before the Commission’s decision.

The legal victory for Google comes just a week after it lost a separate case involving a €2.42 billion fine for unfairly promoting its price comparison service. While the battle over its advertising practices may have seen a favourable outcome, the tech giant’s ongoing legal challenges in Europe reflect the broader scrutiny facing major digital platforms across the continent.

EU to fine Meta over anti-competitive practices

Facebook’s parent company, Meta, is bracing for a substantial fine from the European Union, according to sources familiar with the matter. The penalty stems from allegations that Meta is leveraging its dominance in social networking to stifle competition in the classified advertising sector. The company’s practice of tying its free Marketplace service to Facebook has raised concerns among EU regulators, who view the strategy as an attempt to edge out rivals.

The decision is expected as soon as next month, and it could be one of the final significant moves overseen by the EU’s current competition chief, Margrethe Vestager, before her departure. The investigation into Meta’s business practices marks a continuation of the EU’s broader efforts to crack down on the monopolistic behaviour of tech giants.

Currently, neither Meta nor the EU regulators have commented on the looming decision. However, this case could signal a more stringent approach to maintaining a level playing field in the digital marketplace, where tech companies have long held considerable power. The ruling could have substantial financial and operational consequences for Meta, potentially setting the tone for future regulatory actions in the tech industry.

Telegram’s Pavel Durov faces criminal probe in France under LOPMI law

France has taken a bold legal step with a new law targeting tech executives whose platforms enable illegal activities. The pioneering legislation, enacted in January 2023, puts France at the forefront of efforts to curb cybercrime. The law allows for criminal charges against tech leaders, like Telegram CEO Pavel Durov, for complicity in crimes committed through their platforms. Durov is under formal investigation in France, facing potential charges that could carry a 10-year prison sentence and a €500,000 fine. He denies that Telegram facilitates illegal transactions, stating that the platform complies with EU regulations.

The so-called LOPMI (Loi d’Orientation et de Programmation du Ministère de l’Intérieur) 2023-22 law, unique in its scope, is yet to be tested in court, making France the first country to target tech executives in this way directly. Legal experts point out that no similar laws exist in the US or elsewhere in the Western world.

While the US has prosecuted individuals like Ross Ulbricht, founder of the Silk Road marketplace, those cases required proof of active involvement in criminal activity. The French law, by contrast, seeks to hold platform operators accountable for illegal actions facilitated through their sites, even if they were not directly involved.

Prosecutors in Paris, led by Laure Beccuau, have praised the law as a powerful tool in their fight against organised cybercrime, including child exploitation, credit card trafficking, and denial-of-service attacks. The recent high-profile arrest of Durov and the shutdown of other criminal platforms like Coco highlight France’s aggressive stance in combating online crime. The J3 cybercrime unit overseeing Durov’s case has been involved in other relevant investigations, including the notorious case of Dominique Pelicot, who used the anonymous chat forum Coco to orchestrate heinous crimes.

While the law gives French authorities unprecedented power, legal and academic experts caution that its untested nature could lead to challenges in court. Nonetheless, France’s new cybercrime law seriously escalates the global battle against online criminal activity.

TikTok faces legal battle over potential US ban

TikTok and its parent company ByteDance are locked in a high-stakes legal battle with the US government to prevent a looming ban on the app, used by 170 million Americans. The legal confrontation revolves around a US law that mandates ByteDance divest its US assets by 19 January or face a complete ban. Lawyers for TikTok argue that the law violates free speech and is an unprecedented move that contradicts America’s tradition of fostering an open internet. A federal appeals court in Washington recently heard arguments from both sides, with TikTok’s legal team pushing for an injunction to halt the law’s implementation.

The US government, represented by the Justice Department, contends that TikTok’s Chinese ownership poses a significant national security threat, citing the potential for China to access American user data or manipulate the flow of information. This concern is at the core of the new legislation passed by Congress earlier this year, highlighting the risks of having a popular social media platform under foreign control. The White House, while supportive of curbing Chinese influence, has stopped short of advocating for an outright ban.

ByteDance maintains that divesting TikTok is neither technologically nor commercially feasible, casting uncertainty over the app’s future as it faces potentially severe consequences amid a politically charged environment.

The case comes at a pivotal moment in the US political landscape, with both presidential candidates, Donald Trump and Kamala Harris, actively using TikTok to engage younger voters. The judges expressed concerns over the complexities involved, especially with monitoring the massive codebase that powers TikTok, making it difficult to assess risks in real-time. As the legal wrangling continues, a ruling is expected by 6 December, and the case may eventually reach the US Supreme Court.

Brazil unfreezes Starlink and X accounts after fine payment

Brazil’s Supreme Court has lifted the freeze on the bank accounts of Starlink and X after 18.35 million reais ($3.3 million) in fines was transferred to the national treasury. The decision follows a legal standoff between Justice Alexandre de Moraes and billionaire Elon Musk, who owns X and 40% of Starlink’s parent company, SpaceX. The fines, initially imposed for noncompliance with court orders, have now been settled, prompting the unfreezing of the accounts.

The dispute began when Moraes ordered X, the social media platform formerly known as Twitter, to block certain accounts accused of spreading misinformation and hate speech, which he deemed a threat to Brazil’s democracy. Musk resisted these orders, labelling them as ‘censorship.’ In response, the court moved to freeze Starlink’s accounts, as X had failed to comply with the demands, including appointing a local legal representative as required by Brazilian law.

Despite the resolution of the fines, Moraes has not lifted his order to block access to X in Brazil, the platform’s sixth-largest market. The restriction is tied to the platform’s failure to meet other legal obligations, such as removing specific content and appointing a legal representative in the country.

As the legal tussle continues, Musk’s companies remain under scrutiny in Brazil, a key battleground in the global debate over the regulation of social media and the balance between free speech and public safety.

Elon Musk’s X may sidestep EU’s big tech regulations

Elon Musk’s social media platform, X, is likely to avoid being subjected to the EU’s stringent new tech regulations aimed at curbing the power of Big Tech. The company is expected to fall outside the scope of the Digital Markets Act (DMA), which imposes strict rules on firms that act as key intermediaries between businesses and consumers.

The European Commission opened an inquiry into X in May, exploring whether the platform met the criteria to be classified as a ‘gatekeeper’ under the DMA. To qualify, a company must have over 45 million monthly active users in the EU and a market capitalisation of at least €75 billion. Gatekeepers must open their messaging apps to rival services, allow users more control over pre-installed apps, and avoid giving preferential treatment to their own products.

X has argued that it does not serve as a critical gateway between businesses and consumers, distancing itself from the obligations set by the DMA. While the investigation remains ongoing, the Commission has not provided further comment on its findings.

However, X faces more pressing issues under the EU’s newly implemented Digital Services Act (DSA), which requires large platforms to actively combat harmful or illegal content or face significant fines—up to 6% of their global turnover. X is under scrutiny as part of several ongoing investigations related to its compliance with the DSA.

Meta revises AI labels on social media platforms to balance transparency and user experience

Meta’s decision to change how it labels AI-modified content on Instagram, Facebook, and Threads marks another shift in the company’s approach to generative AI. For content edited with AI tools, the ‘AI info’ label moves into the post’s menu, reducing the visibility of AI’s involvement and making it easier for users to overlook or miss the AI editing details in such posts.

However, for content fully generated by AI, Meta will continue to prominently display the label beneath the user’s name, ensuring that posts created entirely by AI prompts remain visibly marked. The distinction Meta is making here seems to reflect the varying degrees of AI involvement in content creation.

Meta says it aims to increase transparency about content labelling by specifying whether an AI designation comes from industry signals or from self-disclosure. The effort follows complaints and confusion over the previous ‘Made with AI’ label, particularly from photographers concerned that their real photos were being misrepresented.

This change may raise concerns about the potential for users to be misled, especially as AI editing tools become more sophisticated and the line between human and AI-created content continues to blur. It highlights the need for continued transparency as AI technology integrates more deeply into content creation across platforms.

Illegal gun parts from China seized by US authorities

US authorities have taken down over 350 websites selling Chinese-made gun silencers and parts used to convert semiautomatic pistols into fully automatic machine guns. The move follows an investigation that started in August 2023, targeting illegal sales of these dangerous devices.

Undercover operations revealed shipments from China, falsely labelled as items such as ‘necklaces’ or ‘toys’. Instead, these packages contained machine gun conversion devices, known as ‘switches’, and ‘silencers’, both banned under the National Firearms Act. Some websites even sold counterfeit goods, misusing the trademark of gun manufacturer Glock Inc.

Acting US Attorney Joshua Levy emphasised the importance of seizing these websites to halt the influx of illegal and dangerous contraband. Law enforcement has so far seized over 700 machine gun conversion devices, 87 illegal suppressors, 59 handguns, and 46 long guns.

Officials highlighted the growing problem of such devices being easily accessible, posing a serious threat to public safety. The seizures are part of a broader effort to tackle the illegal gun parts trade and protect communities.

Former Google exec reveals giant’s strategy to crush ad rivals

In 2009, Google’s goal was to ‘crush’ rival ad networks, according to remarks by a former executive highlighted in the ongoing US Department of Justice antitrust trial against the tech giant. The remarks, made by David Rosenblatt, Google’s former president of display advertising, surfaced as part of the prosecution’s argument that Google has sought to monopolise the online adtech market, dominating both publisher ad servers and advertiser ad networks.

The trial is gaining momentum and has introduced evidence of Google’s internal strategies since it acquired DoubleClick in 2008. Rosenblatt’s comments, referenced in court notes, underscored Google’s aim to control the digital advertising ecosystem. He compared the company’s adtech ambitions to those of major financial institutions, stating that Google wanted to achieve in display ads what it had already done with search ads.

Google has denied the allegations, asserting it faces strong competition from other major players like Microsoft, Amazon, and Meta. The company argues that its advertising tools are common in the industry. However, the prosecution contends that Google’s integrated ad services give it an unfair advantage, particularly by making it difficult for publishers to switch platforms, a challenge Rosenblatt described as a ‘nightmare.’

Should the court rule against Google, prosecutors have called for the company to sell off its Google Ad Manager, including its publisher ad server and ad exchange, to restore competition in the digital advertising market.

Australia targets social media giants over misinformation

Australia is stepping up its efforts to curb the spread of misinformation online with a new law that could see tech platforms fined up to 5% of their global revenue if they fail to prevent the dissemination of harmful content. The legislation, part of a broader crackdown on tech giants, aims to hold platforms accountable for falsehoods that threaten election integrity, public health, or critical infrastructure.

Under the proposed law, platforms must create codes of conduct detailing how they manage misinformation. These codes must be approved by a regulator, which can impose its own standards and penalties if platforms fail to comply. The government has emphasised the importance of addressing misinformation, warning of its risks to democracy and public safety. Communications Minister Michelle Rowland stressed that inaction would allow the problem to worsen, making it clear that the stakes are high for society and the economy.

The new legislation has sparked debate, with free speech advocates raising concerns about government overreach. A previous version of the bill was criticised for giving too much power to regulators to define what constitutes misinformation. However, the revised proposal includes safeguards, ensuring that professional news, artistic, and religious content are protected while limiting the regulator’s ability to remove specific posts or user accounts.

Tech companies, including Meta and X, have expressed reservations about the law. Meta, which serves a significant portion of Australia’s population, has remained tight-lipped on the legislation, while industry group DIGI has raised questions about its implementation. Meanwhile, X (formerly Twitter) has reduced its content moderation efforts, particularly following its acquisition by Elon Musk, adding another layer of complexity to the debate.

Australia’s stringent legal initiative is part of a global trend, with governments worldwide looking for ways to address the influence of tech platforms. As the country heads into an election year, leaders must ensure that foreign-controlled platforms do not undermine national sovereignty or disrupt the political landscape.