Epic Games dispute leads to changes in Google Play policies

Google has agreed to major changes to its Play Store policies after settling a long-running legal dispute with Epic Games, the developer behind the popular game Fortnite.

The agreement will reduce the commission Google charges on in-app purchases and introduce new options that make it easier for users to install alternative app stores on Android devices.

Under the new structure, Google will lower its standard commission to 20% on in-app purchases. Developers who choose to use Google’s billing system will pay an additional 5% fee. The company also announced that recurring subscription fees will drop to 10%.
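As a rough illustration of the reported tiers, the sketch below computes Google's cut on a transaction. The `play_fee` function and its parameters are hypothetical; the figures come from the article, and the assumption that the 5% billing add-on does not apply to subscriptions is ours, since the article does not specify.

```python
def play_fee(amount, uses_google_billing=True, subscription=False):
    """Estimate Google's cut under the reported fee tiers (illustrative only).

    Assumes the reported 5% Google-billing add-on applies to one-off
    purchases but not to subscriptions, which the article leaves unstated.
    """
    rate = 0.10 if subscription else 0.20  # 10% recurring subs, 20% standard
    if uses_google_billing and not subscription:
        rate += 0.05  # reported add-on for using Google's billing system
    return round(amount * rate, 2)

# Example: a $9.99 one-off purchase vs a $9.99 monthly subscription
print(play_fee(9.99))
print(play_fee(9.99, subscription=True))
```

On these assumptions, a one-off purchase via Google billing carries a 25% effective rate, while a recurring subscription carries 10%.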

The revised fee structure will begin rolling out in the United States, the European Economic Area and the United Kingdom by June 2026, with expansion to other regions over the following years.

The settlement also introduces a new initiative called the Registered App Stores programme. The programme aims to simplify the installation of alternative app stores on Android while maintaining certain security and quality standards.

Approved third-party stores will be able to offer apps through a more streamlined installation process, addressing long-standing developer complaints that warnings about sideloading discouraged users from installing legitimate alternative marketplaces.

As part of the agreement, Epic Games plans to bring Fortnite back to the Google Play Store globally while continuing to develop its own Epic Games Store for Android. Both companies described the settlement as a step toward a more competitive Android ecosystem.

The dispute between Epic Games and Apple over App Store policies continues separately, reflecting broader industry debates over platform control, developer fees and competition in digital marketplaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU watchdog urges limits on US data access

The European Union’s data protection watchdog has urged stronger safeguards as negotiations continue with the US over access to biometric databases. European Data Protection Supervisor Wojciech Wiewiórowski said limits must ensure Europeans’ data is used only for agreed purposes.

Talks between the EU and the US involve potential arrangements that would allow US authorities to query national biometric systems. Databases across the EU contain sensitive information, including fingerprints and facial recognition data.

Past transatlantic data-sharing agreements have faced legal challenges due to insufficient safeguards. European regulators are closely monitoring the Data Privacy Framework amid ongoing concerns about oversight.

Officials also warned that emerging AI technologies could create new surveillance risks linked to US data access. European authorities said they must negotiate as a unified bloc when dealing with the US.

Major crypto exchanges in South Korea face new ownership limits

South Korea’s ruling Democratic Party and the Financial Services Commission have agreed to cap major shareholder stakes in domestic crypto exchanges at 20%. Exceptions of up to 34% would apply to new businesses to support early-stage operators.

Large exchanges such as Upbit and Bithumb will have three years to comply, while smaller platforms will receive an additional three-year grace period.

Current ownership stakes already exceed the proposed cap: Upbit at 25.5%, Bithumb at 73.6% and Coinone at 53.4%. Korbit's pending acquisition would give Mirae Asset Consulting 92% ownership, highlighting the extent of concentrated holdings in the market.
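A minimal sketch of how the reported stakes compare with the proposed 20% cap (34% for qualifying new businesses). The figures and threshold are taken from the article; the script itself is purely illustrative.

```python
# Proposed cap on major shareholder stakes for existing exchanges
CAP_EXISTING = 0.20

# Reported stakes at major South Korean exchanges
stakes = {"Upbit": 0.255, "Bithumb": 0.736, "Coinone": 0.534}

for exchange, stake in stakes.items():
    excess = stake - CAP_EXISTING
    print(f"{exchange}: {stake:.1%} held, {excess:.1%} above the proposed cap")
```

Every listed exchange sits above the threshold, which is why the proposal pairs the cap with multi-year compliance periods.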

The cap seeks to curb governance risks from concentrated shareholding, following the FSC’s January 2026 proposal. The move gained urgency after Bithumb’s accidental $43 billion Bitcoin transfer, which raised concerns about internal controls.

The ownership limit will likely be included in South Korea’s upcoming Digital Asset Basic Act, alongside rules on stablecoins and crypto ETFs.

Sovereign AI becomes a strategic question for governments

Governments across the world are increasingly treating AI as a strategic capability that shapes economic development, public services and national security. Momentum behind the idea of ‘sovereign AI’ is growing as countries reassess who controls the chips, cloud infrastructure, data and models powering modern technology.

Complete control over the entire AI stack remains unrealistic for most economies because of the enormous financial and technological costs involved. Global infrastructure continues to rely heavily on US technology firms, which still operate a large share of data centres and AI systems worldwide.

Policy makers are therefore exploring different approaches to sovereignty across the AI ecosystem rather than pursuing total independence. Strategies range from building domestic computing capacity to adapting global AI models for national languages, regulations and public services.

Several countries already illustrate different approaches. The EU is investing billions in AI infrastructure, Canada protects sensitive computing resources while using global models, and India prioritises applications that serve its multilingual population through public digital systems.

Data centres’ expansion in London sparks energy and climate debate

London authorities are drafting new data centre policies amid concerns about their environmental impact and rising energy use. City Hall aims to balance the sector’s economic advantages with pressures on electricity, water, and emissions.

The Greater London Authority (GLA) estimates that 10 large data centres generate around 2.7 million tonnes of carbon emissions due to their high electricity consumption. Of the 100 data centres planned across the UK, about 60 will be in London.

Megan Life, assistant director for environment and energy at the GLA, told the London Assembly Environment Committee the new strategy aims to ‘keep hold of the kind of economic growth benefits that data centres offer’ while addressing some ‘quite challenging’ impacts linked to their energy use.

Deputy mayor for environment Mete Coban said the expansion of data centres brings both ‘big benefits’ and ‘massive challenges’ for the capital, particularly in terms of energy and water consumption. ‘It’s not just a London problem, it’s going to be a global problem,’ he said, adding: ‘It’s about making sure that our environment doesn’t suffer in the hands of a few global corporations who will take and not give back, so we want to make sure we equitably do this.’

Policymakers are assessing how data centre growth may affect climate goals and urban infrastructure. London Mayor Sadiq Khan has commissioned a study to forecast future expansion. At the same time, UK lawmakers have launched an inquiry into the environmental impact of the sector as demand for cloud computing and AI infrastructure grows.

TikTok rejects end-to-end encryption citing safety concerns

TikTok will not adopt end-to-end encryption for direct messages, explaining that the technology could hinder safety teams' and law enforcement's efforts to detect harmful content in private messages and could therefore make users less safe online.

Encrypted messaging ensures that only the sender and recipient can read a conversation and is widely used across the social media industry. Rivals including Facebook, Instagram, Messenger, and X have adopted the technology, saying protecting private communication is central to user privacy.

The issue has become more sensitive because the platform has long faced scrutiny over possible links between its parent company, ByteDance, and the government of the People’s Republic of China, something the company has repeatedly denied. Reflecting these concerns, earlier this year, US lawmakers ordered the separation of TikTok’s US operations from its global business.

The company told the BBC that encrypted messaging would make it impossible for police and platform safety teams to read direct messages when needed. TikTok emphasised that this decision was made to enhance user protection, with a particular focus on the safety of younger users, and that it sees monitoring capabilities as crucial for addressing harmful behaviour.

Industry analyst Matt Navarra said the platform’s decision to ‘swim against the tide’ is ‘notable’ but presents ‘challenging optics’. He noted, ‘Grooming and harassment risks are present in DMs [direct messages], so TikTok can state it is prioritising proactive safety over privacy absolutism,’ though he added that the decision ‘places TikTok out of alignment with global privacy expectations’.

Online privacy faces new pressures in the age of social media

Online privacy is eroding as digital services collect ever-growing personal data and surveillance becomes part of daily technology use. The debate has intensified as social media platforms, advertisers, and connected devices expand their ability to track behaviour, preferences, and habits.

Analysts say younger generations have adapted to this reality rather than resisting it. ‘In 2026, online privacy is a luxury, not a right,’ says Thomas Bunting, an analyst at the UK innovation think tank Nesta. He argues many people have grown up accepting data collection as a trade-off for access to online services, noting: ‘We’ve been taught how to deal with it.’

Advocates warn that the erosion of online privacy could have wider social consequences. Cybersecurity expert Prof Alan Woodward from the University of Surrey says the issue goes beyond personal privacy. ‘People should care about online privacy because it shapes who has power over their lives,’ he says, arguing that privacy is ‘about having something to protect: freedom of thought, experimentation, dissent and personal development without permanent surveillance.’

Despite a growing number of privacy tools and regulations, data exposure remains widespread. According to Statista, more than 1.35 billion people were affected by data breaches, hacks, or exposure in 2024 alone. At the same time, more than 160 countries now have privacy legislation, while users regularly encounter cookie consent prompts that govern how their data is collected online.

Experts say frustration with privacy controls reflects a broader ‘privacy paradox’, in which people express concern about data protection but rarely change their behaviour. Cisco’s Consumer Privacy Survey found that while 89% of respondents said they care about privacy, only 38% actively take steps to protect their data.

As philosopher Carissa Véliz notes, the challenge is not simply awareness but a sense of agency: ‘Mostly, people don’t feel like they have control.’ She argues that protecting privacy requires stronger regulation, responsible technology design, and cultural change, adding: ‘It’s about having [access to] the right tech, but also using it.’

Global AI race intensifies as China claims leadership in strategic technologies

China has asserted its position as the global leader in AI and strategic technology R&D, pledging to accelerate progress toward technological autonomy. The claim was prominently featured in government reports presented to the National People's Congress.

A National Development and Reform Commission report states that China leads international research, development, and implementation in AI, biomedicine, robotics, and quantum technology. The report also references advancements in domestic chip innovation as proof of progress.

Competition between China and the United States for dominance in advanced technologies has escalated. Washington imposed export controls on advanced chips, while Beijing retaliated with restrictions on rare earth resources, escalating trade tensions over strategic technologies.

The report also highlighted the country’s global leadership in open-source AI models and its expansion into emerging technology sectors, including industrial robots and drones. Authorities pledged to nurture future industries such as quantum technology, embodied AI, and 6G networks, while promoting large-scale AI deployment across key sectors.

Officials also plan to launch new data centres, coordinate nationwide computing capacity, and establish mechanisms to prevent AI security risks. The strategy places particular emphasis on embodied AI to boost productivity and performance across sectors. Although US firms command larger investment resources, Beijing is relying on supply chains, manufacturing capacity, and rapid R&D cycles to scale emerging industries despite questions about long-term growth.

ECB reports minor impact of AI on employment

AI has so far had only a small effect on employment across Europe, according to economists at the European Central Bank. A comparison of 5,000 firms, both AI users and non-users, showed no significant difference in job creation or reduction.

In fact, firms that use AI intensively were four percent more likely to hire new staff than average.

Economists noted that AI investment has not replaced existing jobs. In some cases, firms are hiring additional employees to develop and implement AI systems or to scale up operations more efficiently.

Only a minority of firms, around 15 percent, reported reducing labour costs as a motivation for AI adoption.

Despite limited impacts so far, the ECB cautioned that AI could have more significant effects as technology matures. Firms that specifically invest in AI to cut jobs may indeed reduce employment, and the long-term consequences for production processes and labour markets remain uncertain.

The findings come amid rising concern over AI-driven job losses, with companies such as Amazon and Allianz citing AI as a reason for recent cuts. Markets reacted negatively last week after a viral post predicted widespread layoffs, though current evidence shows only minor effects.

Growing risks from AI meeting transcription tools

Businesses across the US and Europe are confronting new privacy risks as AI transcription tools spread through workplaces. Tools that automatically record and transcribe meetings increasingly capture sensitive conversations without clear consent.

Privacy specialists say organisations had previously focused on rules controlling what employees upload into AI systems; governance efforts are now shifting towards monitoring what AI tools record during daily work.

AI services such as Otter, Zoom transcription and Microsoft Copilot can record discussions involving performance reviews, health information and legal matters, and companies face legal exposure when third-party platforms store recordings without strict controls.

Governance teams are being urged to introduce clear rules on meeting recordings and the retention of transcripts. Stronger policies may include consent requirements, limits on recording sensitive meetings and stricter oversight of data storage.
