Microsoft expands cloud footprint in Denmark

Microsoft has opened a new data centre region in Denmark, marking a major investment in cloud infrastructure and digital resilience. The Denmark East region spans multiple sites and aims to support secure, local data processing.

The project is expected to boost economic activity, with billions of dollars in projected spending and strong spillover effects for local technology firms. Organisations adopting cloud services are likely to rely on domestic partners across IT, cybersecurity, and software development.

Businesses and public sector users will gain access to advanced cloud and AI tools, alongside improved data sovereignty under EU rules. Local data storage and low-latency services are designed to strengthen compliance and operational efficiency.

Sustainability also plays a central role, with renewable energy use, zero-water-cooling systems, and waste-heat recovery supporting local Danish communities. Broader ambitions include reinforcing digital sovereignty while enabling innovation across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshittification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Enshittification is a term coined by journalist Cory Doctorow that describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

EU investigates cyber attack targeting Commission websites

The European Commission has confirmed a cyber attack targeting its cloud infrastructure hosting Europa.eu services, with authorities acting swiftly to contain the incident and prevent disruption to public access.

The attack was identified on 24 March, prompting immediate mitigation measures to secure systems and maintain service continuity.

Preliminary findings indicate that some data may have been accessed from affected websites, although the full scope of the incident remains under investigation.

The Commission has begun notifying the relevant EU entities that may be affected, while continuing efforts to assess the extent of the breach and strengthen safeguards.

Officials confirmed that internal systems were not affected, limiting the overall impact of the attack.

Monitoring efforts remain ongoing, with additional security measures being implemented to protect data and infrastructure, rather than relying solely on existing defences. The Commission has also committed to analysing the incident to improve its cybersecurity capabilities.

The attack comes amid growing cyber and hybrid threats targeting European institutions and critical services.

Existing frameworks, including the NIS2 Directive and the Cyber Solidarity Act, aim to strengthen resilience and coordination across member states, supporting a more unified response to large-scale cyber incidents across the EU.

UK regulator targets misleading online reviews in new crackdown

The Competition and Markets Authority (CMA) has launched new investigations into five companies as part of a wider crackdown on fake and misleading online reviews, targeting practices that shape consumer decisions rather than reflect genuine customer experiences.

The cases involve Autotrader, Feefo, Dignity, Just Eat and Pasta Evangelists, spanning sectors including car sales, food delivery and funeral services.

The CMA is examining whether negative reviews were suppressed, ratings inflated, or incentives offered in exchange for positive feedback without disclosure.

Concerns also extend to moderation practices and whether review systems provide a complete and accurate picture of customer experiences, rather than favouring reputational or commercial interests. No conclusions have yet been reached on whether consumer law has been breached.

Online reviews play a central role in consumer behaviour, influencing significant levels of spending across the UK economy.

Research indicates that a large majority of consumers rely on reviews when making purchasing decisions, raising concerns that misleading content can distort markets and undermine trust, particularly as AI makes it harder to detect fabricated reviews.

The investigations form part of a broader enforcement effort under the Digital Markets, Competition and Consumers Act 2024, which introduced stricter rules on fake and misleading reviews.

Authorities aim to improve transparency and accountability across digital platforms, with potential penalties reaching up to 10% of global turnover for companies found to have breached consumer protection laws.

EU and Japan strengthen digital partnership in ICT Dialogue

The European Commission and Japan have reinforced their digital cooperation through the 31st EU–Japan ICT Dialogue, held in Tokyo, focusing on advancing shared priorities in emerging technologies instead of pursuing separate national strategies.

The meeting forms part of the broader EU–Japan Digital Partnership, which aims to deepen collaboration in key areas of the digital economy.

Discussions covered a wide range of topics, including AI, cybersecurity, and secure connectivity infrastructure such as submarine cables and Arctic networks.

Both sides also explored developments in 5G and 6G technologies, alongside emerging solutions like quantum key distribution, highlighting the importance of secure and resilient communication systems in an evolving digital landscape.

The dialogue also emphasised cooperation between the EU AI Office and Japan’s AI Safety Institute, as well as joint efforts in research, innovation, and international standardisation.

These initiatives aim to align regulatory approaches and technological development rather than create fragmented global frameworks.

By strengthening collaboration across critical digital sectors, the EU and Japan seek to enhance technological resilience and promote secure, interoperable systems.

The ongoing partnership reflects a shared commitment to shaping global digital standards while supporting innovation and economic growth in both regions.

Digital divide shapes AI job outcomes

A joint study by the International Labour Organization and the World Bank finds that AI will reshape labour markets unevenly across countries. Research covering 135 economies highlights growing risks for workers as automation expands.

Advanced economies show higher exposure to AI, particularly in clerical and professional roles. Lower-income regions face fewer direct impacts but lack the infrastructure and skills needed to capture productivity gains.

The digital divide plays a central role, with many vulnerable jobs already online and therefore exposed to automation. Workers in roles with potential benefits often lack reliable internet access, limiting opportunities.

The ILO’s findings suggest outcomes depend on infrastructure, skills and job design rather than technology alone. Policymakers are urged to improve connectivity, training and social protections to spread benefits more evenly.

IAPP updates US state breach notification resource as legal differences persist

The International Association of Privacy Professionals (IAPP) has updated its US State Breach Notification Chart, a resource that summarises state breach notification laws across the United States. In an analysis published on 26 March, the IAPP says the revised chart highlights both nationwide coverage and continuing variation in how states define personal information, apply harm thresholds, and trigger reporting duties.

According to the IAPP, all 50 states, the District of Columbia, Guam, Puerto Rico, and the US Virgin Islands now have breach notification laws. California enacted the first state law in 2002, which took effect in 2003, while Alabama was the last state to adopt such a law in 2018. The IAPP says the result is a de facto nationwide framework, but one marked by significant differences across jurisdictions.

A central point in the analysis is that breach notification laws generally use a narrower definition of personal information than more recent comprehensive privacy laws. The IAPP says the original purpose of breach notification was to alert people to the risks of identity theft and financial fraud after a data breach, so laws tend to focus on identifiers such as names combined with Social Security numbers, driver’s licence details, or financial account credentials.

The article contrasts narrower statutes with broader ones. Hawaii’s law is described as among the narrowest, while Illinois and California are presented as having broader definitions that can extend to medical information, health insurance details, biometric data, genetic data, and, in California’s case, some automated licence plate recognition data.

Even so, the IAPP says many state breach laws still do not cover large categories of digital information, such as browsing history, cookie data, IP addresses, cell phone numbers, purchasing records, or complete financial transaction histories where account credentials were not compromised.

Exemptions and scope also vary. The IAPP says most breach notification laws apply broadly to businesses and often to nonprofit organisations, while privacy laws tend to contain more exclusions. The article notes that some states cover state and local government entities directly, while California has a separate breach notification law for governmental bodies. The IAPP also says its chart is focused on laws applicable to the private sector.

Encryption safe harbours appear across the state laws, according to the analysis, with some states also recognising redaction or other protections that render data unreadable or unusable. Attorney general notification requirements also differ. The IAPP says 34 state laws require notice to the state attorney general once certain thresholds are met, with thresholds ranging from 250 affected residents in North Dakota and Oregon to 1,000 in many other states, while some states, such as Connecticut and New York, require notice regardless of the number affected.
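The threshold variation described above can be illustrated with a short sketch. Only the jurisdictions named in the analysis (Connecticut, New York, North Dakota, Oregon) are taken from the article; the function name, state codes, and the 1,000-resident default standing in for the "many other states" are hypothetical simplifications, not statements of any state's actual law:

```python
# Illustrative sketch only: attorney general notification rules modelled
# as a simple lookup. Values for CT, NY, ND, and OR come from the IAPP
# analysis; the 1,000 default is an assumed stand-in, and whether a
# statutory threshold is inclusive varies (treated as ">=" here).

NOTIFY_ALWAYS = {"CT", "NY"}  # AG notice required regardless of count

AG_THRESHOLDS = {
    "ND": 250,  # North Dakota
    "OR": 250,  # Oregon
}
DEFAULT_THRESHOLD = 1000  # assumed default for "many other states"


def ag_notice_required(state: str, affected_residents: int) -> bool:
    """Return True if state attorney general notice would be triggered."""
    if state in NOTIFY_ALWAYS:
        return True
    return affected_residents >= AG_THRESHOLDS.get(state, DEFAULT_THRESHOLD)


# The same incident can cross the threshold in one state but not another:
print(ag_notice_required("ND", 300))  # True: 300 meets North Dakota's 250
print(ag_notice_required("TX", 300))  # False under the assumed 1,000 default
```

Even this toy version shows the compliance problem the IAPP describes: an identical incident produces different reporting duties depending on where the affected residents live.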

Harm thresholds are another area of divergence. The IAPP says about 30 state laws include a harm standard, meaning notice may not be required unless the breach caused, or is likely to cause, harm to affected individuals.

The article describes substantial differences in wording across states, with some referring to ‘reasonable likelihood’ of harm, others to ‘material risk,’ ‘substantial economic loss,’ or misuse of the data, while some states, including California, Georgia, Illinois, Massachusetts, Minnesota, North Dakota, and Texas, require no harm showing at all.

The practical effect, the IAPP argues, is that organisations holding data on residents of multiple states face a complex compliance problem. A data element that triggers notice in one state may not do so in another, and the article says reconciling the different harm standards is effectively impossible. The analysis notes that some organisations may decide to notify if there is doubt, while others may choose to notify only where clearly required.

The IAPP concludes that the absence of a preemptive federal breach notification law leaves entities to navigate overlapping but inconsistent state rules. Its updated chart is presented as a tool to help practitioners track those differences and build awareness of how US state breach notification laws continue to evolve.

National security rules to prioritise UK contracts in AI, steel and shipbuilding

The UK government has announced new procurement guidance that will treat shipbuilding, steel, AI, and energy infrastructure as critical to national security, with departments directed to prioritise British businesses where necessary to protect national security. The press release was published on 26 March by the Cabinet Office and its Minister, Chris Ward.

According to the government, the new approach is intended to respond to recent supply-chain fragility and strengthen domestic capacity in sectors it describes as vital to national security. The guidance is presented as the first clear framework for how departments can protect the UK’s economic security and build resilience in the four named sectors.

Additional measures in the package go beyond sector prioritisation. The government says departments will either use British steel or provide a justification if steel is sourced from overseas, linking the change to the UK Steel Strategy launched the previous week. Officials also say the reforms support the government’s Modern Industrial Strategy and follow the publication of the National Security Strategy.

Procurement reform is another part of the package. Under a new Public Interest Test, departments will be asked to assess whether outsourced service contracts worth more than £1 million could be delivered more effectively in-house. The government says the test will cover more than 95% of central government contracts by value.

Community impact is also being built into the contracting framework. Departments will be required to publish and report annually on a specific social value goal for contracts above £5 million, which the government says will cover more than 90% of central government contracts by value. Companies bidding for public contracts are also being encouraged to include commitments on local jobs, skills, and apprenticeships.

The press release also says a new suite of AI tools has been developed to streamline the commercial process. Contract terms will be simplified, and additional business information will be integrated into a central platform, with the stated aim of reducing repeated submissions by smaller businesses bidding for multiple contracts.

Chris Ward said: ‘This Government is backing British businesses and the working people who power them. These reforms are about using the full weight of Government spending to support British jobs, protect our national security and grow our economy.’ He added: ‘Whether you make steel in Scunthorpe, build ships on the Clyde or run a small tech firm in the Midlands, this Government is on your side.’

India AI governance faces court, privacy and cyber pressures

An opinion article published by the International Association of Privacy Professionals says India’s data protection and AI governance environment is facing growing pressure as compliance work around the Digital Personal Data Protection Act (DPDPA) unfolds, court challenges continue, and regulators widen oversight into new sectors. The piece, published on 26 March, is labelled as an opinion article and includes an editor’s note stating that the IAPP is policy neutral and publishes contributed opinion pieces to reflect a broad spectrum of views.

The article says several legal and regulatory developments are unfolding simultaneously. One example cited is a public interest litigation filed before India’s Supreme Court by journalist Geeta Seshu and the Software Freedom Law Centre, India, challenging parts of the DPDPA on constitutional and rights-related grounds. According to the piece, the Supreme Court later issued a notice to the Government of India on 12 March.

Concerns outlined in the article include the absence of journalistic exemptions, the lack of compensation for data breach victims given that penalties are paid to the government, broad state powers to exempt departments from the law, and questions about the independence of the Data Protection Board given the government’s control over appointments. The article notes that similar petitions had already been filed, but says this was the first time the court issued notice to the government.

The article also turns to proceedings before the Kerala High Court involving privacy concerns about biometric and personal data collected through Digi Yatra, a not-for-profit foundation that operates airport passenger-processing infrastructure in India. According to the piece, a public interest litigation filed by C R Neelakandan asked for a temporary restraint on the sharing of collected personal data and its commercial use without proper authorisation.

The article says the Kerala High Court issued notice to the Digi Yatra Foundation and sought clarification from the government on whether the Data Protection Board had been established to oversee such matters.

Alongside the litigation, the opinion piece points to government efforts to show legal preparedness for AI-related risks. It says Electronics and Information Technology Minister Ashwini Vaishnaw outlined existing safeguards during the ongoing parliamentary session, referring to the Information Technology Act, the DPDPA, and subordinate rules, along with published guidelines on AI governance, toy safety, harmful content, awareness-building measures, and cyber safety.

Cybersecurity developments also feature in the article. It says the Indian Computer Emergency Response Team, working with the SatCom Industry Association, issued guidelines on 26 February for the space sector, including satellite communications. According to the piece, the framework is intended to strengthen resilience in India’s space ecosystem.

It applies to covered entities, including government agencies, satellite service providers, ground station operators, terminal equipment vendors, and private space entities. Incident reporting within six hours and annual audits are among the measures described.

A further section of the article draws on Thales’ 2026 Data Threat Report. The piece says 64% of surveyed organisations in India identified AI-driven transformation as their biggest security risk, while 55% said they had to deal with reputational damage caused by AI-generated misinformation. It also says 65% reported deepfake-driven attacks, while only 35% had a complete view of their data and 36% could fully classify their data.

Meta unveils TRIBE v2 brain modelling AI

TRIBE v2 is a next-generation AI model introduced by Meta, designed to simulate how the human brain responds to complex stimuli such as images, sounds and language. The system functions as a digital twin of neural activity, enabling high-speed and high-resolution predictions of brain responses.

Built on data from over 700 volunteers, TRIBE v2 analyses fMRI recordings to predict brain responses to media such as videos, podcasts, and text. The model improves significantly on previous approaches, offering higher accuracy and the ability to generalise across new subjects, tasks, and languages.

Meta says the system could enable brain studies without human participants in every experiment, potentially accelerating research into neurological conditions. The approach may also support future AI development by incorporating principles derived from neuroscience.

Alongside the launch, Meta has released a research paper, model code, and interactive demo under a non-commercial licence to encourage wider exploration and collaboration in neuroscience and AI research.
