Microsoft expands cloud footprint in Denmark

Microsoft has opened a new data centre region in Denmark, marking a major investment in cloud infrastructure and digital resilience. The Denmark East region spans multiple sites and aims to support secure, local data processing.

The project is expected to boost economic activity, with billions of dollars in projected spending and strong spillover effects for local technology firms. Organisations adopting cloud services are likely to rely on domestic partners across IT, cybersecurity, and software development.

Businesses and public sector users will gain access to advanced cloud and AI tools, alongside improved data sovereignty under EU rules. Local data storage and low-latency services are designed to strengthen compliance and operational efficiency.

Sustainability also plays a central role, with renewable energy use, zero-water-cooling systems, and waste-heat recovery supporting local Danish communities. Broader ambitions include reinforcing digital sovereignty while enabling innovation across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshittification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Enshittification, a term coined by journalist Cory Doctorow, describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

UK regulator targets misleading online reviews in new crackdown

The Competition and Markets Authority has launched new investigations into five companies as part of a wider crackdown on fake and misleading online reviews, targeting practices that shape consumer decisions rather than reflect genuine customer experiences.

The cases involve Autotrader, Feefo, Dignity, Just Eat and Pasta Evangelists, across sectors including car sales, food delivery and funeral services.

The CMA is examining whether negative reviews were suppressed, ratings inflated, or incentives offered in exchange for positive feedback without disclosure.

Concerns also extend to moderation practices and whether review systems provide a complete and accurate picture of customer experiences, rather than favouring reputational or commercial interests. No conclusions have yet been reached on whether consumer law has been breached.

Online reviews play a central role in consumer behaviour, influencing significant levels of spending across the UK economy.

Research indicates that a large majority of consumers rely on reviews when making purchasing decisions, raising concerns that misleading content can distort markets and undermine trust, particularly as AI makes it harder to detect fabricated reviews.

The investigations form part of a broader enforcement effort under the Digital Markets, Competition and Consumers Act 2024, which introduced stricter rules on fake and misleading reviews.

Authorities aim to improve transparency and accountability across digital platforms, with potential penalties reaching up to 10% of global turnover for companies found to have breached consumer protection laws.

Study highlights blind spots in digital regulation

A recent academic study argues that legal frameworks fail to fully capture the power of cloud infrastructure in the digital economy. The research suggests that regulators focus too heavily on services and outputs rather than on the underlying systems that shape markets.

Authors highlight how major cloud providers influence innovation, data flows and technological development. Existing laws are said to overlook the ability of these actors to structure markets and define how digital systems operate.

The paper links these gaps to fragmented legal approaches, specifically in the EU, that treat technology as a series of isolated issues. Such perspectives risk missing broader forms of control embedded in infrastructure and platform ecosystems.

Researchers call for a shift in legal thinking to better recognise infrastructure-level power and its societal impact. Stronger frameworks are seen as essential as global digital systems become increasingly central to economic and political life.

Lille proposed as EU customs hub

France has submitted a bid to host the future EU Customs Authority in Lille, positioning itself at the centre of efforts to modernise the customs union. The proposal highlights national expertise and a leading role in shaping recent reforms.

Authorities argue the new body will strengthen internal market security, improve oversight of e-commerce and enhance cooperation between member states. France has supported initiatives to tackle illicit trade and improve risk management.

Officials also point to strong operational experience, including international customs networks and the use of AI tools to screen postal shipments. Such capabilities are presented as key to supporting the authority from its launch, but questions are raised concerning the use of AI and its biases.

Lille is promoted as a strategic logistics hub with strong transport links and access to skilled workers. Its location near major European trade routes is expected to support recruitment and coordination across the bloc.

EU demands stronger age verification from adult websites

The European Commission has preliminarily found that several major adult platforms, including Pornhub, Stripchat, XNXX, and XVideos, may be in breach of the Digital Services Act for failing to adequately protect minors from accessing harmful content.

These findings reflect concerns that children can easily access such platforms despite the safeguards currently in place.

The Commission’s investigation indicates that the platforms’ risk assessments were insufficient. In several cases, companies focused on reputational or business risks instead of fully addressing societal harms to minors.

Authorities also raised concerns that some platforms did not adequately consider input from civil society organisations specialising in children’s rights and age-assurance technologies, undermining the reliability of their evaluations.

Regarding risk mitigation, the Commission found that existing measures are ineffective. Simple self-declaration systems, in which users confirm they are over 18, were deemed inadequate, while additional features such as warnings, labels, or blurred content failed to prevent minors from accessing content.

The Commission considers that stronger, privacy-preserving age-verification solutions are necessary to ensure meaningful protection of children’s rights and well-being online.

The companies involved now have the opportunity to respond and propose corrective measures, while consultations with the European Board for Digital Services continue.

If the preliminary findings are confirmed, the Commission may impose fines of up to 6 percent of global annual turnover, alongside periodic penalties to enforce compliance.

The case forms part of broader efforts to enforce the Digital Services Act and strengthen online safety across the EU, rather than relying on voluntary measures by platforms.

Europol warns legal gaps could weaken child abuse detection online

Efforts to combat online child sexual exploitation could be severely weakened, Europol has warned, if legal frameworks supporting detection and reporting are disrupted.

Executive Director Catherine De Bolle highlighted growing concerns over the increasing volume of harmful content online and stressed that protecting children remains a top priority for European law enforcement.

Authorities rely heavily on reports submitted by online service providers, which play a central role in identifying victims and supporting investigations.

Europol processed around 1.1 million CyberTips in a single year, many originating from the National Center for Missing & Exploited Children and shared across 24 European countries.

These CyberTips include critical evidence such as images, videos, and other digital data used to track criminal activity.

Europol cautioned that removing the legal basis allowing voluntary detection by platforms could significantly reduce the number of reports submitted to authorities. A decline in CyberTips would limit investigative leads, making it harder to identify victims and disrupt online criminal networks.

Such a development could undermine broader security efforts and weaken the protection of minors across the EU.

The agency emphasised that maintaining online service providers’ ability to detect and report suspected abuse is essential to effective law enforcement.

Ensuring continued cooperation between platforms and authorities remains a key factor in safeguarding children and addressing the growing threat of online exploitation.

EU court challenges French police data practices

The Court of Justice of the European Union has ruled that aspects of France’s biometric data collection system breach EU law. Judges found that taking fingerprints and photographs of suspects under broad conditions fails to meet strict proportionality standards.

The case examined rules allowing police to collect and store data in the French Traitement des antécédents judiciaires and the Fichier automatisé des empreintes digitales. The court said collection cannot be routine and must meet a threshold of absolute necessity.

Judges also criticised the lack of clear justification for data collection, stating that individuals should receive explanations to exercise their legal rights. Existing rules were found to lack safeguards to ensure the limited and proportionate use of sensitive biometric information in France.

The ruling requires national courts to reassess the framework and could lead to changes in policing practices. It also raises broader questions about large-scale data retention and the balance between security and privacy.

Zimbabwe advances AI national strategy with UNESCO support

Zimbabwe has launched a National Artificial Intelligence Strategy for 2026 to 2030, marking a significant step towards shaping its digital future.

Announced by President Emmerson Mnangagwa in Harare, the strategy sets out a national framework for the responsible use of AI to support innovation, improve public services, and expand economic opportunities across sectors such as agriculture, healthcare, education, finance, and public administration.

The strategy places strong emphasis on building digital infrastructure, developing AI skills, and strengthening research and innovation ecosystems.

Officials highlighted the importance of governance frameworks to ensure that AI systems remain transparent, ethical, and aligned with national priorities.

The initiative reflects a broader effort to position Zimbabwe within the evolving technological landscape of the fourth industrial revolution while promoting sustainable economic growth.

Development of the strategy was supported by UNESCO, working alongside national institutions and stakeholders from academia, industry, and civil society.

The process was informed by the Artificial Intelligence Readiness Assessment Methodology and aligned with the UNESCO Recommendation on the Ethics of Artificial Intelligence, promoting a human-centred approach that prioritises human rights, fairness, and transparency.

Regional initiatives across Southern Africa have also contributed to strengthening AI adoption readiness through similar assessment frameworks.

Looking ahead, Zimbabwe aims to translate the strategy into concrete investments in infrastructure, talent development, and innovation ecosystems.

International partners, including the UN, have expressed support for implementation efforts, emphasising the importance of inclusive growth and equitable access to digital opportunities.

By combining national leadership with international collaboration, Zimbabwe seeks to ensure that AI benefits communities across urban and rural areas rather than widening existing socioeconomic divides.

UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being.

The initiative, led by the Department for Science, Innovation and Technology and supported by Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns.

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

The programme aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, supporting policymakers in shaping future online safety measures.
