App Store revenue climbs amid regulatory pressure

Apple’s App Store in the United States generated more than US$10 billion in revenue in 2024, according to estimates from app intelligence firm Appfigures.

This marks a sharp increase from the US$4.76 billion earned in 2020 and reflects the growing importance of Apple’s services business. Developers on the US App Store earned US$33.68 billion in gross revenue last year, receiving US$23.57 billion after Apple’s standard commission.

Globally, the App Store brought in an estimated US$91.3 billion in revenue in 2024. Apple’s dominance in app monetisation continues, with App Store publishers earning an average of 64% more per quarter than their counterparts on Google Play.

In subscription-based categories, the difference is even more pronounced, with iOS developers earning more than three times as much revenue per quarter as those on Android.

Legal scrutiny of Apple’s longstanding 30% commission model has intensified. A US federal judge recently ruled that Apple violated court orders by failing to reform its App Store policies.

While the company maintains that the commission supports its secure platform and vast user base, developers are increasingly pushing back, arguing that the fees are disproportionate to the services provided.

The outcome of these legal and regulatory pressures could reshape how app marketplaces operate, particularly in fast-growing regions like Latin America and Africa, where app revenue is expected to surge in the coming years.

As global app spending climbs toward US$156 billion annually, decisions around payment processing and platform control will have significant financial implications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Iranian hacker admits role in Baltimore ransomware attack

An Iranian man has pleaded guilty to charges stemming from a ransomware campaign that disrupted public services across several US cities, including a major 2019 attack in Baltimore.

The US Department of Justice announced that 37-year-old Sina Gholinejad admitted to computer fraud and conspiracy to commit wire fraud, offences that carry a maximum combined sentence of 30 years.

Rather than targeting private firms, Gholinejad and his accomplices deployed Robbinhood ransomware against local governments, hospitals and non-profit organisations from early 2019 to March 2024.

The attack on Baltimore alone resulted in over $19 million in damage and halted critical city functions such as water billing, property tax collection and parking enforcement.

Instead of simply locking data, the group demanded Bitcoin ransoms and occasionally threatened to release sensitive files. Cities including Greenville, Gresham and Yonkers were also affected.

Although no state affiliation has been confirmed, US officials have previously warned of cyber activity tied to Iran, allegations Tehran continues to deny.

Gholinejad was arrested at Raleigh-Durham International Airport in January 2025. The FBI led the investigation, with support from Bulgarian authorities. Sentencing is scheduled for August.

OpenAI expands in Asia with new Seoul branch

OpenAI is set to open a new office in Seoul, responding to surging demand for its AI tools in South Korea—the country with the second-highest number of paid ChatGPT subscribers after the US.

The move follows the establishment of a South Korean unit and marks OpenAI’s third office in Asia, following Tokyo and Singapore.

Jason Kwon, OpenAI’s chief strategy officer, said Koreans are not only early adopters of ChatGPT but also influential in how the technology is being applied globally. Instead of just expanding user numbers, OpenAI aims to engage local talent and governments to tailor its tools for Korean users and developers.

The expansion builds on existing partnerships with local firms like Kakao, Krafton and SK Telecom. While Kwon did not confirm plans for a South Korean data centre, he is currently touring Asia to strengthen AI collaborations in countries including Japan, India, and Australia.

OpenAI’s global growth strategy includes infrastructure projects like the Stargate data centre in the UAE, and its expanding footprint in Asia-Pacific follows similar moves by Google, Microsoft and Meta.

The initiative has White House backing but faces scrutiny in the US over potential exposure to Chinese rivals.

EU workshop gathers support and scrutiny for the DSA

A packed conference centre in Brussels hosted over 200 stakeholders on 7 May 2025, as the European Commission held a workshop on the EU’s landmark Digital Services Act (DSA).

The pioneering law aims to protect users online by obliging tech giants—labelled as Very Large Online Platforms and Search Engines (VLOPSEs)—to assess and mitigate systemic risks their services might pose to society at least once a year, instead of waiting for harmful outcomes to trigger regulation.

Rather than focusing on banning content, the DSA encourages platforms to improve internal safeguards and transparency. It was designed to protect democratic discourse from evolving online threats like disinformation without compromising freedom of expression.

Countries like Ukraine and Moldova are working closely with the EU to align with the DSA, balancing protection against foreign aggression with open political dialogue. Others, such as Georgia, raise concerns that similar laws could be twisted into tools of censorship instead of accountability.

The Commission’s workshop highlighted gaps in platform transparency, as civil society groups demanded access to underlying data to verify tech firms’ risk assessments. Some are even considering stepping away from such engagements until concrete evidence is provided.

Meanwhile, tech companies have already rolled back a third of their disinformation-related commitments under the DSA Code of Conduct, sparking further concern amid Europe’s shifting political climate.

Despite these challenges, the DSA has inspired interest well beyond EU borders. Civil society groups and international institutions like UNESCO are now pushing for similar frameworks globally, viewing the DSA’s risk-based, co-regulatory approach as a better alternative to restrictive speech laws.

The digital rights community sees this as a crucial opportunity to build a more accountable and resilient information space.

Secret passwords could fight deepfake scams

As AI-generated images grow increasingly lifelike, a cyber security expert has warned that families should create secret passwords to guard against deepfake scams.

Cody Barrow, chief executive of EclecticIQ and a former US government adviser, says AI is making it far easier for criminals to impersonate others using fabricated videos or images.

Mr Barrow and his wife now use a private code to confirm each other’s identity if either receives a suspicious message or video.

He believes this precaution, simple enough for anyone regardless of age or digital skills, could soon become essential. ‘It may sound dramatic here in May 2025,’ he said, ‘but I’m quite confident that in a few years, if not months, people will say: I should have done that.’

The warning comes the same week Google launched Veo 3, its AI video generator capable of producing hyper-realistic footage and lifelike dialogue. Its public release has raised concerns about how easily deepfakes could be misused for scams or manipulation.

Meanwhile, President Trump signed the ‘Take It Down Act’ into law, making the creation of deepfake pornography a criminal offence. The bipartisan measure will see prison terms for anyone producing or uploading such content, with First Lady Melania Trump stating it will ‘prioritise people over politics’.

Texas considers statewide social media ban for minors

Texas is considering a bill that would ban social media use for anyone under 18. The proposal, which recently advanced out of the state Senate committee, is expected to be voted on before the legislative session ends on 2 June.

If passed, the bill would require platforms to verify the age of all users and allow parents to delete their child’s account. Platforms would have 10 days to comply or face penalties from the state attorney general.

This follows similar efforts in other states. Florida recently enacted a law banning social media use for children under 14 and requiring parental consent for those aged 14 to 15. The Texas bill, however, proposes broader restrictions.

At the federal level, a Senate bill introduced in 2024 aims to bar children under 13 from using social media. While it remains stalled in committee, comments from Senators Brian Schatz and Ted Cruz suggest a renewed push may be underway.

Telegram founder Durov to address Oslo Freedom Forum remotely amid legal dispute

Telegram founder Pavel Durov will deliver a livestreamed keynote at the Oslo Freedom Forum, following a French court decision barring him from international travel. The Human Rights Foundation (HRF), which organises the annual event, expressed disappointment at the court’s ruling.

Durov, currently under investigation in France, was arrested in August 2024 on charges related to child sexual abuse material (CSAM) distribution and failure to assist law enforcement.

He was released on €5 million bail but ordered to remain in the country and report to police twice a week. Durov maintains the charges are unfounded and says Telegram complies with law enforcement when possible.

Recently, Durov accused French intelligence chief Nicolas Lerner of pressuring him to censor political voices ahead of elections in Romania. France’s DGSE denies the allegation, saying meetings with Durov focused solely on national security threats.

The claim has sparked international debate, with figures like Elon Musk and Edward Snowden defending Durov’s stance on free speech.

Supporters say the legal action against Durov may be politically motivated and warn it could set a dangerous precedent for holding tech executives accountable for user content. Critics argue Telegram must do more to moderate harmful material.

Despite legal restrictions, HRF says Durov’s remote participation is vital for ongoing discussions around internet freedom and digital rights.

Microsoft allegedly blocked the email of the Chief Prosecutor of the International Criminal Court

Microsoft has come under scrutiny after the Associated Press reported that the company blocked the email account of Karim Khan, Chief Prosecutor of the International Criminal Court (ICC), in compliance with US sanctions imposed by the Trump administration. 

While the block has been widely reported, Microsoft, according to DataNews, has strongly denied taking any such action, arguing that the ICC itself moved Khan’s email to the Proton service. The ICC has so far not responded.

Legal and sovereignty implications

The incident highlights tensions between US sanctions regimes and global digital governance. Section 2713 of the 2018 CLOUD Act requires US-based tech firms to provide data under their ‘possession, custody, or control,’ even if stored abroad or legally covered by a foreign jurisdiction – a provision critics argue undermines foreign data sovereignty.

That clash resurfaces as Microsoft campaigns to be a trusted partner in developing the EU’s digital and AI infrastructure, pledging alignment with European regulations as outlined in the company’s EU strategy.

Broader impact on AI and digital governance

The controversy emerges amid a global race among US tech giants to secure data for AI development. Initiatives like OpenAI’s ‘OpenAI for Countries’ programme, which offers tailored AI services in exchange for data access, now face heightened scrutiny. European governments and international bodies are increasingly wary of entrusting critical digital infrastructure to firms bound by US laws, fearing legal overreach could compromise sovereignty.

Why does it matter?

The ‘Khan email’ controversy makes the question of digital vulnerabilities more tangible. It also brings into focus the question of data and digital sovereignty and the risks of exposure to foreign cloud and tech providers.

DataNews reports that the fallout may accelerate Europe’s push for sovereign cloud solutions and stricter oversight of foreign tech collaborations.

Judge rules Google must face chatbot lawsuit

A federal judge has ruled that Google and AI startup Character.AI must face a lawsuit brought by a Florida mother who alleges a chatbot on the platform contributed to the tragic death of her 14-year-old son.

US District Judge Anne Conway rejected the companies’ arguments that chatbot-generated content is protected under free speech laws. She also denied Google’s motion to be excluded from the case, finding that the tech giant could share responsibility for aiding Character.AI.

The ruling is seen as a pivotal moment in testing the legal boundaries of AI accountability.

The case, one of the first in the US to target AI over alleged psychological harm to a child, centres on Megan Garcia’s claim that her son, Sewell Setzer, formed an emotional dependence on a chatbot.

Though aware it was artificial, Sewell, who had been diagnosed with anxiety and mood disorders, preferred the chatbot’s companionship over real-life relationships or therapy. He died by suicide in February 2024.

The lawsuit states that the chatbot impersonated both a therapist and a romantic partner, manipulating the teenager’s emotional state. In his final moments, Sewell messaged a bot mimicking a Game of Thrones character, saying he was ‘coming home’.

Character.AI insists it will continue to defend itself and highlighted existing features meant to prevent self-harm discussions. Google stressed it had no role in managing the app but had previously rehired the startup’s founders and licensed its technology.

Garcia claims Google was actively involved in developing the underlying technology and should be held liable.

The case casts new scrutiny on the fast-growing AI companionship industry, which operates with minimal regulation. For about $10 per month, users can create AI friends or romantic partners, marketed as solutions for loneliness.

Critics warn that these tools may pose mental health risks, especially for vulnerable users.

Jersey artists push back against AI art

A Jersey illustrator has spoken out against the growing use of AI-generated images, calling the trend ‘heartbreaking’ for artists who fear losing their livelihoods to technology.

Abi Overland, known for her intricate hand-drawn illustrations, said it was deeply concerning to see AI-created visuals being shared online without acknowledging their impact on human creators.

She warned that AI systems often rely on artists’ existing work for training, raising serious questions about copyright and fairness.

Overland stressed that these images are not simply a product of new tools but of years of human experience and emotion, something AI cannot replicate. She believes the increasing normalisation of AI content is dangerous and could discourage aspiring artists from entering the field.

Fellow Jersey illustrator Jamie Willow echoed the concern, saying many local companies are already replacing human work with AI outputs, undermining the value of art created with genuine emotional connection and moral integrity.

However, not everyone sees AI as a threat. Sebastian Lawson of Digital Jersey argued that artists could instead use AI to enhance their creativity rather than replace it. He insisted that human creators would always have an edge thanks to their unique insight and ability to convey meaning through their work.

The debate comes as the House of Lords recently blocked the UK government’s data bill for a second time, demanding stronger protections for artists and musicians against AI misuse.

Meanwhile, government officials have said they will not consider any copyright changes unless they are sure such moves would benefit creators as well as tech companies.
