Data centres’ expansion in London sparks energy and climate debate

London authorities are drafting new data centre policies amid concerns about their environmental impact and rising energy use. City Hall aims to balance the sector’s economic advantages with pressures on electricity, water, and emissions.

The Greater London Authority (GLA) estimates that the capital’s 10 large data centres account for around 2.7 million tonnes of carbon emissions, driven by their high electricity consumption. Of the 100 data centres planned across the UK, about 60 will be in London.

Megan Life, assistant director for environment and energy at the GLA, told the London Assembly Environment Committee the new strategy aims to ‘keep hold of the kind of economic growth benefits that data centres offer’ while addressing some ‘quite challenging’ impacts linked to their energy use.

Deputy mayor for environment Mete Coban said the expansion of data centres brings both ‘big benefits’ and ‘massive challenges’ for the capital, particularly in terms of energy and water consumption. ‘It’s not just a London problem, it’s going to be a global problem,’ he said, adding: ‘It’s about making sure that our environment doesn’t suffer in the hands of a few global corporations who will take and not give back, so we want to make sure we equitably do this.’

Policymakers are assessing how data centre growth may affect climate goals and urban infrastructure. London Mayor Sadiq Khan has commissioned a study to forecast future expansion. At the same time, UK lawmakers have launched an inquiry into the environmental impact of the sector as demand for cloud computing and AI infrastructure grows.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Online privacy faces new pressures in the age of social media

Online privacy is eroding as digital services collect ever-growing personal data and surveillance becomes part of daily technology use. The debate has intensified as social media platforms, advertisers, and connected devices expand their ability to track behaviour, preferences, and habits.

Analysts say younger generations have adapted to this reality rather than resisting it. ‘In 2026, online privacy is a luxury, not a right,’ says Thomas Bunting, an analyst at the UK innovation think tank Nesta. He argues many people have grown up accepting data collection as a trade-off for access to online services, noting: ‘We’ve been taught how to deal with it.’

Advocates warn that the erosion of online privacy could have wider social consequences. Cybersecurity expert Prof Alan Woodward from the University of Surrey says the issue goes beyond personal privacy. ‘People should care about online privacy because it shapes who has power over their lives,’ he says, arguing that privacy is ‘about having something to protect: freedom of thought, experimentation, dissent and personal development without permanent surveillance.’

Despite a growing number of privacy tools and regulations, data exposure remains widespread. According to Statista, more than 1.35 billion people were affected by data breaches, hacks, or exposure in 2024 alone. At the same time, more than 160 countries now have privacy legislation, while users regularly encounter cookie consent prompts that govern how their data is collected online.

Experts say frustration with privacy controls reflects a broader ‘privacy paradox’, in which people express concern about data protection but rarely change their behaviour. Cisco’s Consumer Privacy Survey found that while 89% of respondents said they care about privacy, only 38% actively take steps to protect their data.

As philosopher Carissa Véliz notes, the challenge is not simply awareness but a sense of agency: ‘Mostly, people don’t feel like they have control.’ She argues that protecting privacy requires stronger regulation, responsible technology design, and cultural change, adding: ‘It’s about having [access to] the right tech, but also using it.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global AI race intensifies as China claims leadership in strategic technologies

China asserted its position as the global leader in AI and strategic technology R&D, pledging to accelerate advancement toward technological autonomy. The assertion was prominently featured in government reports presented to the National People’s Congress.

A National Development and Reform Commission report states that China leads international research, development, and implementation in AI, biomedicine, robotics, and quantum technology. The report also references advancements in domestic chip innovation as proof of progress.

Competition between China and the United States for dominance in advanced technologies has escalated. Washington imposed export controls on advanced chips, while Beijing retaliated with restrictions on rare earth resources, escalating trade tensions over strategic technologies.

The report also highlighted the country’s global leadership in open-source AI models and its expansion into emerging technology sectors, including industrial robots and drones. Authorities pledged to nurture future industries such as quantum technology, embodied AI, and 6G networks, while promoting large-scale AI deployment across key sectors.

Officials also plan to launch new data centres, coordinate nationwide computing capacity, and establish mechanisms to prevent AI security risks. The strategy places particular emphasis on embodied AI to boost productivity and performance across sectors. Although US firms command larger investment resources, Beijing is relying on supply chains, manufacturing capacity, and rapid R&D cycles to scale emerging industries despite questions about long-term growth.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK to launch new lab for breakthrough AI research

Researchers in the UK will gain a new AI lab designed to drive transformational breakthroughs in healthcare, transport, science, and everyday technology, supported by government funding.

The lab will offer up to £40 million in funding over six years, alongside substantial access to large-scale computing resources, and will invite UK researchers to pitch their most ambitious ideas.

The Fundamental AI Research Lab will focus on tackling core AI challenges, including hallucinations, unreliable memory, and unpredictable reasoning.

The lab will support high-risk, blue-sky research rather than simply scaling existing systems. Its goal is to unlock entirely new capabilities that could improve medical diagnoses, infrastructure resilience, scientific discovery, and public services.

UK officials highlighted the country’s strength in world-class universities, AI talent, and a thriving sector attracting over £100 billion in private investment. Experts, including Raia Hadsell of Google DeepMind, will peer-review funding applications, prioritising bold, high-reward proposals.

The initiative is part of the UKRI AI Strategy, which is backed by £1.6 billion and aims to strengthen research and ensure AI benefits society and the economy. UK AI projects like RADAR for rail faults and the IXI Brain Atlas for Alzheimer’s research demonstrate the approach’s potential impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI adoption and jobs debated at India summit

Governments, companies and international organisations gathered in India in February for the AI Impact Summit to discuss the future of AI governance and adoption. Participants focused on economic impacts, labour market changes and sector-specific uses of AI.

Delegates also highlighted growing interest in international cooperation on AI governance. Ninety-one countries endorsed a declaration supporting shared tools, global collaboration and people-centred development of AI.

Language diversity became a central topic of discussion. India’s government announced eight foundation AI models designed to support generative AI across the country’s 22 recognised languages.

The debate also reflected the growing influence of the Global South in AI policy discussions. Policymakers and experts emphasised the infrastructure gaps, language diversity and local economic realities shaping AI adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ECB reports minor impact of AI on employment

AI has so far had only a small effect on employment across Europe, according to economists at the European Central Bank. A comparison of 5,000 firms, both AI users and non-users, showed no significant difference in job creation or reduction.

Some firms that use AI intensively were even four percent more likely to hire new staff than average.

Economists noted that AI investment has not replaced existing jobs. In some cases, firms are hiring additional employees to develop and implement AI systems or to scale up operations more efficiently.

Only a minority of firms, around 15 percent, reported reducing labour costs as a motivation for AI adoption.

Despite limited impacts so far, the ECB cautioned that AI could have more significant effects as technology matures. Firms that specifically invest in AI to cut jobs may indeed reduce employment, and the long-term consequences for production processes and labour markets remain uncertain.

The findings come amid rising concern over AI-driven job losses, with companies such as Amazon and Allianz citing AI as a reason for recent cuts. Markets reacted negatively last week after a viral post predicted widespread layoffs, though current evidence shows only minor effects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI tracks how AI shapes student performance over time

AI is increasingly shaping education, offering tools like ChatGPT that provide personalised learning support for students anywhere. Early studies suggest features such as study mode can enhance exam performance, yet understanding AI’s long-term effect on learning remains a challenge.

Traditional research often focuses on test scores, overlooking how students interact with AI over time in real-world settings.

OpenAI, in partnership with Estonia’s University of Tartu and Stanford’s SCALE Initiative, created the Learning Outcomes Measurement Suite to track longitudinal learning outcomes. The framework assesses interactions, engagement, cognitive growth, and alignment with pedagogical principles.

Large-scale trials involve tens of thousands of students, combining AI-driven insights with traditional classroom measures such as exams and observations.

Research shows that guided AI interactions can strengthen understanding, persistence, and problem-solving. Microeconomics students using study mode achieved around 15% higher exam scores than those relying on traditional online resources.

Beyond short-term results, the measurement suite evaluates deeper learning effects, including motivation, metacognition, and productive engagement, helping educators and developers optimise AI tools for meaningful outcomes.

The suite will be validated through ongoing studies and eventually made available to schools, universities, and education systems worldwide. OpenAI aims to share findings broadly to ensure AI contributes effectively to student learning and cognitive development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Council of Europe issues new guidance on AI and gender equality

Ahead of International Women’s Day on 8 March, the Council of Europe adopted two new recommendations addressing gender equality and the prevention of violence against women in the context of emerging technologies.

One recommendation targets the design and use of AI to prevent discrimination, while the other focuses on accountability for technology-facilitated violence against women and girls.

The AI recommendation advises member states on preventing discrimination throughout the lifecycle of AI systems, from development to deployment and retirement. It highlights risks like gender bias while promoting transparency, explainability, and safeguards.

Special attention is given to discrimination based on gender, race, and sexual orientation, gender identity, gender expression and sex characteristics (SOGIESC).

The second recommendation sets the first international standard for addressing technology-facilitated violence against women. It outlines strategies to overcome impunity, including clearer legal frameworks, accessible reporting systems, and victim-centred approaches.

Emphasis is placed on multistakeholder engagement, trauma-informed policies, and safety-by-design in technology products to prevent digital harm.

Both recommendations reinforce the importance of combining regulation, institutional support, and public awareness to ensure technology advances equality rather than perpetuating harm.

The formal launch is scheduled for 10 June 2026 at the Palais de l’Europe in Strasbourg during an event titled ‘From standards to action: making accountability for technology-facilitated violence against women and girls a reality.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini Canvas reaches millions as Google expands AI Search tools

Google has expanded access to the Canvas feature in Google Search’s AI Mode, making it available to all US users.

Canvas allows users to organise research, draft documents and develop small applications directly inside search.

Prompts can generate code, transform reports into webpages or quizzes, and produce audio summaries from uploaded material. The tool was previously introduced as part of experimental projects in Google Labs.

The feature builds on capabilities already available in Google Gemini and partly overlaps with NotebookLM, which supports research analysis and document processing.

Within Canvas, users can gather information from the web and the Google Knowledge Graph while refining projects through interaction with the Gemini model.

Competition is intensifying across AI development platforms. OpenAI and Anthropic offer similar tools, though their design approaches differ in how collaborative workspaces are triggered and used.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New UNESCO and CENIA agreement targets AI literacy and ethical standards

The UNESCO Regional Office in Santiago and the National Centre for Artificial Intelligence (CENIA) signed a cooperation agreement at the end of February 2026 to promote ethical AI in education across Chile and Latin America.

The framework supports joint initiatives aimed at strengthening digital skills, improving AI literacy and advancing people-centred development models for AI.

Projects under the partnership will focus on training programmes and educational resources designed for a wide range of audiences, including the general public, educators, technical specialists and policymakers.

Collaborative efforts will also encourage dialogue between institutions, governments and industry to support responsible innovation and reinforce regional ecosystems linked to emerging technologies.

An early outcome includes Latam-GPT, the first open large language model for Latin America and the Caribbean. The system will aid education ministries and the UNESCO Regional Observatory on AI, helping guide responsible adoption and monitor developments.

‘Artificial Intelligence represents a historic opportunity to transform our education and productive systems, but its development must be guided by clear ethical principles and a people-centred vision. This partnership with CENIA will enable us to support countries in building capacities and governance frameworks that ensure AI effectively contributes to the common good,’ stated Esther Kuisch Laroche, Director of the UNESCO Regional Office in Santiago.

‘At CENIA, we have been working consistently on applied research and capacity-building, advancing knowledge generation, technology transfer and scientific evidence.

‘This experience allows us to contribute from both a technical and training perspective to ensure that the development of Artificial Intelligence in the region is grounded in robust and ethical standards, thereby impacting education and productive development. We are convinced that technological progress must be accompanied by training, responsible frameworks and multi-sector collaboration.

‘For this reason, this agreement with UNESCO represents a strategic step towards strengthening capacity development and the ethical, people-centred adoption of Artificial Intelligence in Latin America and the Caribbean.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!