The European Central Bank (ECB) is keen to accelerate the creation of the digital euro, particularly following US President Donald Trump’s endorsement of stablecoins linked to the US dollar. ECB board member Piero Cipollone highlighted that Trump’s backing could push European lawmakers to fast-track the legislation for the digital euro. The ECB envisions the digital euro as a central bank-backed online wallet, offering an alternative to major US payment providers like Visa and PayPal.
Despite the European Commission’s proposal for digital euro legislation in June 2023, progress has been slow due to some scepticism in the political and banking sectors. Cipollone remains optimistic that recent developments, including the rise of US stablecoins, will prompt greater urgency from EU lawmakers. He expressed hope that the digital euro legislation could be finalised by summer, allowing for negotiations with the Commission to be wrapped up before November.
Cipollone also raised concerns over the growing use of US stablecoins in Europe, warning that it could lead to a shift of deposits from European banks to the US. He acknowledged bankers’ fears that a digital euro could have a similar effect, but reassured them that the ECB would likely cap the amount of digital euros users can hold, limiting any destabilising outflow of deposits. Several countries, including Nigeria and China, have already launched central bank digital currencies, while many others, such as Russia and Brazil, are in the testing phase.
China’s antitrust regulator is reportedly preparing to investigate Apple’s App Store policies and fees, including its 30% commission on in-app purchases and restrictions on external payment services. The move follows recent measures targeting US businesses, including Google and fashion brand Calvin Klein, and came just as new US tariffs on Chinese goods took effect. Apple’s shares fell 2.6% in premarket trading following the news.
The investigation, led by the State Administration for Market Regulation, comes after ongoing discussions between Chinese regulators, Apple executives, and app developers over the past year. While neither Apple nor the Chinese antitrust regulator has commented on the matter, the move is seen as part of broader scrutiny of US companies operating in China.
In a separate development, Google was also accused of violating China’s anti-monopoly laws, with experts speculating the probe could be linked to Google’s Android operating system and its influence over Chinese mobile manufacturers. Additionally, China’s Commerce Ministry added PVH Corp, the owner of brands like Calvin Klein, to its “unreliable entity” list.
ByteDance, the company behind TikTok, has introduced OmniHuman-1, an advanced AI system capable of generating highly realistic deepfake videos from just a single image and an audio clip. Unlike previous deepfake technology, which often displayed telltale glitches, OmniHuman-1 produces remarkably smooth and lifelike footage. The AI can also manipulate body movements, allowing for extensive editing of existing videos.
Trained on 19,000 hours of video content from undisclosed sources, the system has potential applications ranging from entertainment to more troubling uses, such as misinformation. The rise of deepfake content has already led to cases of political and financial deception worldwide, from election interference to multimillion-dollar fraud schemes. Experts warn that the technology’s increasing sophistication makes it harder to detect AI-generated fakes.
Despite calls for regulation, deepfake laws remain limited. While some governments have introduced measures to combat AI-generated disinformation, enforcement remains a challenge. With deepfake content spreading at an alarming rate, many fear that systems like OmniHuman-1 could further blur the line between reality and fabrication.
Alphabet is set to face investor scrutiny over its heavy spending on AI as it prepares to report earnings. Slower revenue growth in advertising and cloud services has raised concerns, especially as competition in AI intensifies. Chinese startup DeepSeek’s launch of low-cost AI models has fuelled worries about an industry price war. Alphabet’s capital expenditure, estimated at $50 billion for last year, is expected to rise further in 2025 to support AI-driven search features and cloud expansion.
Google Cloud’s growth is forecast to slow in the fourth quarter despite high expectations. Analysts suggest that while heavy investment continues, efficiency gains have helped maintain profits. The company’s search and advertising business remains strong, with an expected 11.2% increase in revenue, though this marks a slight slowdown from the previous quarter. Competition from Amazon and TikTok continues to challenge Alphabet’s dominance in search advertising.
Political advertising linked to the US presidential election may have boosted Google’s revenue, following a similar trend at Meta. However, Meta’s cautious outlook for the first quarter has raised concerns about broader ad market trends amid economic uncertainty. Alphabet’s shares have climbed 7% this year after a strong rally in 2023, largely driven by confidence in its AI strategy.
Investors will closely watch whether Alphabet faces the same cloud business challenges as Microsoft, whose Azure growth slowed due to a shift in AI priorities. Google Cloud revenue is expected to rise by 32% in the fourth quarter, slightly down from the 35% growth seen previously but still outpacing Microsoft and Amazon. Maintaining momentum in AI while balancing cloud growth remains a key challenge for Alphabet.
Google and Epic Games presented arguments before a US appeals court as Google attempted to overturn a jury verdict and a judge’s order requiring changes to its app store. Google’s lawyer argued that the trial judge made errors that unfairly benefited Epic, which had accused the company of monopolising access to apps on Android devices. A San Francisco jury previously ruled that Google had stifled competition.
The judge ordered Google to allow users to download rival app stores within its Play Store and to make its app catalogue available to competitors. Google’s appeal has put the ruling on hold, with its lawyer contending that the company faces strong competition from Apple’s App Store and was unfairly restricted from making that argument. Epic’s lawyer rejected Google’s claims, insisting that its dominance had harmed competition for years.
A judge on the appeals panel challenged Google’s position, highlighting key differences between Apple’s and Android’s business models. Google also argued that Epic’s case should not have gone before a jury, as it did not seek damages. Epic countered that the Play Store changes were necessary and disputed Google’s concerns about privacy and security.
The US Justice Department, Federal Trade Commission, and Microsoft have backed Epic in the case. A decision from the appeals court is expected later in the year, with the possibility of further escalation to the US Supreme Court.
Meta has introduced a new policy framework outlining when it may restrict the release of its AI systems due to security concerns. The Frontier AI Framework categorises AI models into ‘high-risk’ and ‘critical-risk’ groups, with the latter referring to those capable of aiding catastrophic cyber or biological attacks. If an AI system is classified as a critical risk, Meta will suspend its development until safety measures can be implemented.
The company’s evaluation process does not rely solely on empirical testing but also considers input from internal and external researchers. This approach reflects Meta’s belief that existing evaluation methods are not yet robust enough to provide definitive risk assessments. Despite its historically open approach to AI development, the company acknowledges that some models could pose unacceptable dangers if released.
By outlining this framework, Meta aims to demonstrate its commitment to responsible AI development while distinguishing its approach from other firms with fewer safeguards. The policy comes amid growing scrutiny of AI’s potential misuse, especially as open-source models gain wider adoption.
WhatsApp has identified an advanced hacking campaign targeting nearly 90 users across more than two dozen countries. The attack, linked to Israeli spyware firm Paragon Solutions, exploited a zero-click vulnerability, compromising victims’ devices without any need for them to open or interact with malicious files. The messaging platform, owned by Meta, has since taken steps to block the hacking attempts and has issued a cease-and-desist letter to Paragon.
While WhatsApp has not disclosed the identities of those targeted, reports indicate that journalists and members of civil society were among the victims. The company has referred affected users to Citizen Lab, a Canadian watchdog that investigates digital security threats. Law enforcement agencies and industry partners have also been alerted, though specifics remain undisclosed.
Paragon, which was recently acquired by US investment firm AE Industrial Partners, has not commented on the allegations. The company presents itself as a responsible player in the spyware industry, claiming to sell its technology only to governments in stable democracies. However, critics argue that the continued spread of surveillance tools increases the risk of human rights abuses, with spyware repeatedly found on the devices of activists, journalists, and officials worldwide.
Cybersecurity experts warn that the growing use of commercial spyware poses an ongoing threat to digital privacy. Despite claims of ethical safeguards, the latest revelations suggest that even companies with supposedly responsible practices may be engaging in questionable surveillance activities.
The European Union is preparing to introduce new regulations that would hold e-commerce platforms such as Temu, Shein, and Amazon Marketplace accountable for illegal or unsafe products sold online. Under the proposed customs reforms, online retailers will be required to provide data before goods arrive in the EU, allowing officials to inspect and monitor shipments more effectively.
Currently, consumers purchasing goods online are considered the official importers for customs purposes. The proposed changes would shift this responsibility to online platforms, making them liable for ensuring compliance with EU safety standards, as well as collecting duty and VAT. The reforms also include the creation of a central EU customs authority (EUCA) to oversee inspections and identify risks before shipments enter the bloc.
The draft proposal aims to improve consumer safety and close regulatory gaps in online commerce. E-commerce giants have not yet responded to the proposed changes, which could have significant financial and operational implications for their businesses.
Australia’s government recently passed laws banning social media access for children under 16, targeting platforms like TikTok, Snapchat, Instagram, Facebook, and X. However, YouTube was granted an exemption, with the government arguing that it serves as a valuable educational tool and is not a ‘core social media application’. That decision followed input from company executives and educational content creators, who argued that YouTube is essential for learning and information-sharing. While the government claims broad community support for the exemption, some experts believe this undermines the goal of protecting children from harmful online content.
Mental health and extremism experts have raised concerns that YouTube exposes young users to dangerous material, including violent, extremist, and addictive content. Despite being exempted from the ban, YouTube has been criticised for its algorithm, which researchers say can promote far-right ideologies, misogyny, and conspiracy theories to minors. Academic studies have shown that the platform can surface problematic content within minutes of a search query, including harmful videos on topics such as sex, COVID-19, and European history.
To test these claims, Reuters created child accounts and found that searches led to content promoting extremism and hate speech. Although YouTube removed some flagged videos, others remain on the platform. YouTube stated that it is actively working to improve its content moderation systems and that it has removed content violating its policies. However, critics argue that the platform’s algorithm still allows harmful content to thrive, especially among younger users.
Hewlett Packard Enterprise’s planned $14 billion acquisition of Juniper Networks faces a legal challenge from the US Department of Justice. Officials argue the deal would harm competition by leaving just two major players—HPE and Cisco—controlling over 70% of the US networking equipment market.
HPE had announced the all-cash acquisition over a year ago, aiming to strengthen its AI capabilities. Both companies defended the deal, saying their networking solutions complement each other and would enhance competition against global rivals. They criticised the DOJ’s market definition, calling it outdated.
Regulators noted that Juniper’s innovations forced HPE to lower prices and invest in new technology under its ‘Beat Mist’ campaign. Eliminating this competition, they claim, would reduce incentives for innovation and cost savings in the industry.
Legal proceedings could take up to eight months, with an October deadline for completion. Authorities in the UK and European Union have already approved the deal.