App Store revenue climbs amid regulatory pressure

Apple’s App Store in the United States generated more than US$10 billion in revenue in 2024, according to estimates from app intelligence firm Appfigures.

This marks a sharp increase from the US$4.76 billion earned in 2020 and reflects the growing importance of Apple’s services business. Developers on the US App Store earned US$33.68 billion in gross revenue last year, receiving US$23.57 billion after Apple’s standard commission.
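The reported split is consistent with Apple's standard commission. A quick back-of-the-envelope check, using only the figures cited above and assuming a flat 30% rate for simplicity (many smaller developers actually pay 15%, so this is a rough sketch, not Apple's full fee schedule):

```python
# Sketch: sanity-check the reported US App Store split for 2024.
# Figures are from the article; the flat 30% rate is a simplifying assumption.
GROSS_BILLION = 33.68     # gross developer billings, US App Store, 2024
COMMISSION_RATE = 0.30    # Apple's standard commission

apple_cut = GROSS_BILLION * COMMISSION_RATE
developer_net = GROSS_BILLION - apple_cut

print(f"Apple's cut:    ${apple_cut:.2f}B")     # ≈ $10.10B, matching the >$10B figure
print(f"Developers get: ${developer_net:.2f}B") # ≈ $23.58B, in line with the reported $23.57B
```

The small gap against the reported US$23.57 billion is explained by rounding and by the mix of 15% and 30% commission tiers.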

Globally, the App Store brought in an estimated US$91.3 billion in revenue in 2024. Apple’s dominance in app monetisation continues, with App Store publishers earning an average of 64% more per quarter than their counterparts on Google Play.

In subscription-based categories, the difference is even more pronounced, with iOS developers earning more than three times as much revenue per quarter as those on Android.

Legal scrutiny of Apple’s longstanding 30% commission model has intensified. A US federal judge recently ruled that Apple violated court orders by failing to reform its App Store policies.

While the company maintains that the commission supports its secure platform and vast user base, developers are increasingly pushing back, arguing that the fees are disproportionate to the services provided.

The outcome of these legal and regulatory pressures could reshape how app marketplaces operate, particularly in fast-growing regions like Latin America and Africa, where app revenue is expected to surge in the coming years.

As global app spending climbs toward US$156 billion annually, decisions around payment processing and platform control will have significant financial implications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe cracks down on Shein for misleading consumers

The European Commission and national consumer protection authorities have determined that online fashion giant Shein is in breach of six EU consumer laws, giving the company one month to bring its practices into compliance.

The findings, announced today, mark the latest in a string of regulatory actions against China-based e-commerce platforms, as the EU intensifies efforts to hold international marketplaces accountable for deceptive practices and unsafe goods.

Michael McGrath, EU Commissioner for Democracy, Justice, the Rule of Law and Consumer Protection, stated: ‘We will not shy away from holding e-commerce platforms to account, regardless of where they are based.’

The investigation, launched in February, identified violations such as fake discounts, high-pressure sales tactics, misleading product labelling, and hidden customer service contact details.

Authorities are also examining whether Shein’s product ranking and review systems mislead consumers, as well as the platform’s contractual terms with third-party sellers.

Shein responded by saying it is working ‘constructively’ with authorities and remains committed to addressing concerns raised during the investigation.

EU workshop gathers support and scrutiny for the DSA

A packed conference centre in Brussels hosted over 200 stakeholders on 7 May 2025, as the European Commission held a workshop on the EU’s landmark Digital Services Act (DSA).

The pioneering law aims to protect users online by obliging tech giants, designated as Very Large Online Platforms and Very Large Online Search Engines (VLOPs and VLOSEs), to assess and mitigate the systemic risks their services might pose to society at least once a year, rather than waiting for harmful outcomes to trigger regulation.

Rather than focusing on banning content, the DSA encourages platforms to improve internal safeguards and transparency. It was designed to protect democratic discourse from evolving online threats like disinformation without compromising freedom of expression.

Countries like Ukraine and Moldova are working closely with the EU to align with the DSA, balancing protection against foreign aggression with open political dialogue. Others, such as Georgia, raise concerns that similar laws could be twisted into tools of censorship instead of accountability.

The Commission’s workshop highlighted gaps in platform transparency, as civil society groups demanded access to underlying data to verify tech firms’ risk assessments. Some of these groups are even considering stepping away from such engagements until concrete evidence is provided.

Meanwhile, tech companies have already rolled back a third of their disinformation-related commitments under the DSA Code of Conduct, sparking further concern amid Europe’s shifting political climate.

Despite these challenges, the DSA has inspired interest well beyond EU borders. Civil society groups and international institutions like UNESCO are now pushing for similar frameworks globally, viewing the DSA’s risk-based, co-regulatory approach as a better alternative to restrictive speech laws.

The digital rights community sees this as a crucial opportunity to build a more accountable and resilient information space.

AI regulation fight heats up over US federal moratorium

The US House of Representatives has passed a budget bill containing a 10-year moratorium on the enforcement of state-level artificial intelligence laws. With broad bipartisan concern already surfacing, the Senate faces mounting pressure to revise or scrap the provision entirely.

While the provision claims to exclude generally applicable legislation, experts warn that its vague language could override a wide array of consumer protection and privacy rules in the US. The moratorium’s scope, targeting AI-specific regulations, has triggered alarm among critics.

Critics argue the measure may hinder states from addressing real-world harms posed by AI technologies, such as deepfakes, discriminatory algorithms, and unauthorised data use.

Existing and proposed state laws, ranging from transparency requirements in hiring and healthcare to protections for artists and mental health app users, may be invalidated under the moratorium.

Several experts noted that states have often acted more swiftly than the federal government in confronting emerging tech risks.

Supporters contend the moratorium is necessary to prevent a fragmented regulatory landscape that could stifle innovation and disrupt interstate commerce. However, analysts point out that general consumer laws might also be jeopardised due to the bill’s ambiguous definitions and legal structure.

Florida woman scammed by fake Keanu Reeves in AI-powered romance fraud

A Florida woman, Dianne Ringstaff, shared her painful story after falling victim to an elaborate online scam involving someone impersonating actor Keanu Reeves. The fraud began innocently when she received a message while playing a mobile game, followed by a video call confirming she was speaking with the Hollywood star.

The impostor cultivated a friendship through calls and messages for two and a half years, eventually gaining her trust. Things took a turn when the scammer began pleading for money, claiming Reeves was being sued and targeted by the FBI, which had supposedly frozen his assets.

Vulnerable after personal losses, Ringstaff was persuaded to help, ultimately taking out a home equity loan and selling her car. She sent around $160,000 in total, convinced she was aiding the beloved actor.

Authorities later informed her that not only had she been scammed, but her bank account had been used to funnel money from other victims as well. Devastated, Ringstaff broke down—but is now determined to reclaim her life and raise awareness.

She is speaking out to warn others about the growing threat of AI-powered ‘romance’ scams, where fraudsters use deepfake videos and cloned voices to impersonate celebrities and gain victims’ trust.

‘Don’t be naive,’ she cautions. ‘Do your research and don’t give out personal information unless you truly know who you’re dealing with.’

German court allows Meta to use Facebook and Instagram data

A German court has ruled in favour of Meta, allowing the company to use data from Facebook and Instagram to train AI systems. The Cologne court found that Meta had not breached EU law and deemed its AI development a legitimate interest.

According to the court, Meta is permitted to process public user data without explicit consent. Judges argued that training AI systems could not be achieved by other equally effective and less intrusive methods.

They noted that Meta plans to use only publicly accessible data and had taken adequate steps to inform users via its mobile apps.

Despite the ruling, the North Rhine-Westphalia Consumer Advice Centre remains critical, raising concerns about legality and user privacy. Privacy group Noyb also challenged the decision, warning it could take further legal action, including a potential class-action lawsuit.

AI regulation offers development opportunity for Latin America

Latin America is uniquely positioned to lead on AI governance by leveraging its social rights-focused policy tradition, emerging tech ecosystems, and absence of legacy systems.

According to a new commentary by Eduardo Levy Yeyati at the Brookings Institution, the region has the opportunity to craft smart AI regulation that is both inclusive and forward-looking, balancing innovation with rights protection.

Despite global momentum on AI rulemaking, Latin American regulatory efforts remain slow and fragmented, underlining the need for early action and regional cooperation.

The proposed framework recommends flexible, enforceable policies grounded in local realities, such as adapting credit algorithms for underbanked populations or embedding linguistic diversity in AI tools.

Governments are encouraged to create AI safety units, invest in public oversight, and support SMEs and open-source innovation to avoid monopolisation. Regulation should be iterative and participatory, using citizen consultations and advisory councils to ensure legitimacy and resilience through political shifts.

Regional harmonisation will be critical to avoid a patchwork of laws and promote Latin America’s role in global AI governance. Coordinated data standards, cross-border oversight, and shared technical protocols are essential for a robust, trustworthy ecosystem.

Rather than merely catching up, Latin America can become a global model for equitable and adaptive AI regulation tailored to the needs of developing economies.

Agentic AI could accelerate and automate future cyberattacks, Malwarebytes warns

A new report by Malwarebytes warns that the rise of agentic AI will significantly increase the frequency, sophistication, and scale of cyberattacks.

Since the launch of ChatGPT in late 2022, threat actors have used generative AI to write malware, craft phishing emails, and execute realistic social engineering schemes.

One notable case from January 2024 involved a finance employee who was deceived into transferring $25 million during a video call with AI-generated deepfakes of company executives.

Criminals have also found ways to bypass safety features in AI models using techniques such as prompt chaining, injection, and jailbreaking to generate malicious outputs.

While generative AI has already lowered the barrier to entry for cybercrime, the report highlights that agentic AI—capable of autonomously executing complex tasks—poses a far greater risk by automating time-consuming attacks like ransomware at scale.

Cyber scams use a three-letter trap

Staying safe from cybercriminals can be surprisingly simple. While AI-powered scams grow more realistic, some signs are still painfully obvious.

If you spot the letters ‘.TOP’ in any message link, it’s best to stop reading and hit delete. That single clue is often enough to expose a scam in progress.

Most malicious texts pose as alerts about road tolls, deliveries or account issues, using trusted brand names to lure victims into clicking fake links.

The worst of these is the ‘.TOP’ top-level domain (TLD), which has become infamous for its role in phishing and scam operations. Although launched in 2014 for premium business use, its low cost and lack of oversight quickly made it a favourite among cyber gangs, especially those based in China.

Today, nearly one-third of all .TOP domains are linked to cybercrime — far surpassing the criminal activity seen on mainstream domains like ‘.com’.

Despite repeated warnings and an unresolved compliance notice from internet regulator ICANN, abuse linked to .TOP has only worsened.

Experts warn that it is highly unlikely any legitimate Western organisation would ever use a .TOP domain. If one appears in your messages, the safest option is to delete it without clicking.
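The advice above amounts to a simple filter on a link’s top-level domain. As a minimal sketch using only Python’s standard library (the domain names below are hypothetical examples; a real filter would consult fuller blocklists rather than a single TLD):

```python
# Minimal sketch: flag links whose host ends in the .top TLD,
# the pattern the article warns about. Standard library only.
from urllib.parse import urlparse

def is_suspicious_tld(url: str, bad_tlds: tuple = (".top",)) -> bool:
    """Return True if the URL's host uses one of the flagged TLDs.

    Note: urlparse only extracts a hostname when a scheme (https://)
    is present; scheme-less strings fall through to False here.
    """
    host = urlparse(url).hostname or ""
    return host.endswith(bad_tlds)

print(is_suspicious_tld("https://toll-payment.example.top/pay"))  # True
print(is_suspicious_tld("https://www.example.com/login"))         # False
```

The tuple argument makes it easy to extend the check to other frequently abused TLDs if needed.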

Secret passwords could fight deepfake scams

As AI-generated images grow increasingly lifelike, a cyber security expert has warned that families should create secret passwords to guard against deepfake scams.

Cody Barrow, chief executive of EclecticIQ and a former US government adviser, says AI is making it far easier for criminals to impersonate others using fabricated videos or images.

Mr Barrow and his wife now use a private code to confirm each other’s identity if either receives a suspicious message or video.

He believes this precaution, simple enough for anyone regardless of age or digital skills, could soon become essential. ‘It may sound dramatic here in May 2025,’ he said, ‘but I’m quite confident that in a few years, if not months, people will say: I should have done that.’

The warning comes the same week Google launched Veo 3, its AI video generator capable of producing hyper-realistic footage and lifelike dialogue. Its public release has raised concerns about how easily deepfakes could be misused for scams or manipulation.

Meanwhile, President Trump signed the ‘Take It Down Act’ into law, making the creation of non-consensual deepfake pornography a criminal offence. The bipartisan measure provides prison terms for anyone producing or uploading such content, with First Lady Melania Trump stating it will ‘prioritise people over politics’.
