
2-9 January 2026
HIGHLIGHT OF THE WEEK
Looking ahead: Our annual AI and digital forecast
As we enter the new year, we begin this issue of the Weekly newsletter with our annual outlook on AI and digital developments, featuring insights from our Executive Director. Drawing on our coverage of digital policy over the past year on the Digital Watch Observatory, as well as our professional experience and expertise, we highlight the 10 trends and events we expect to shape the digital landscape in the year ahead.
Technologies. AI is becoming a commodity, affecting everyone—from countries competing for AI sovereignty to individual citizens. Equally important is the rise of bottom-up AI: in 2026, language models of all sizes, from small to large, will be able to run on corporate or institutional servers. Open-source development, a major milestone in 2025, is expected to become a central focus of future geostrategic competition.
Geostrategy. The good news is that, despite all geopolitical pressure, we still have an integrated global internet. However, digital fragmentation is accelerating, with continued filtering of social media and other services, and with developments increasingly clustering around three major hubs: the United States, China, and potentially the EU. Geoeconomics is becoming a critical dimension of this shift, particularly given the global footprint of major technology companies: any fragmentation, whether of trade or taxation, will inevitably affect them. Equally important is the role of “geo-emotions”: the growing disconnect between public sentiment and industry enthusiasm. While companies remain largely optimistic about AI, public scepticism is increasing, and this divergence may carry significant political implications.
Governance. The core governance dilemma remains whether national representatives—parliamentarians domestically and diplomats internationally—are truly able to protect citizens’ digital interests related to data, knowledge, and cybersecurity. While there are moments of productive discussion and well-run events, substantive progress remains limited. One positive note is that inclusive governance, at least in principle, continues through multistakeholder participation, though it raises its own unresolved questions.
Security. The adoption of the Hanoi Cybercrime Convention at the end of the year is a positive development, and substantive discussions at the UN continue despite ongoing criticism of the institution. While it remains unclear whether these processes are making us more secure, they are expanding the governance toolbox. At the same time, attention should extend beyond traditional concerns—such as cyberwarfare, terrorism, and crime—to emerging risks associated with the interconnection of AI systems through APIs. These points of integration create new interdependencies and potential backdoors for cyberattacks.
Human rights. Human rights are increasingly under strain, with recent policy shifts by technology companies and growing transatlantic tensions between the EU and the United States highlighting a changing landscape. While debates continue to focus heavily on bias and ethics, deeper human rights concerns—such as the rights to knowledge, education, dignity, meaningful work, and the freedom to remain human rather than optimised—receive far less attention. As AI reshapes society, the human rights community must urgently revisit its priorities, grounding them in the protection of life, dignity, and human potential.
Economy. The traditional three-pillar framework comprising security, development, and human rights is shifting toward economic and security concerns, with human rights being increasingly sidelined. Technological and economic issues, from access to rare earths to AI models, are now treated as strategic security matters. This trend is expected to accelerate in 2026, making the digital economy a central component of national security. Greater attention should be paid to taxation, the stability of the global trade system, and how potential fragmentation or disruption of global trade could impact the tech sector.
Standards. The lesson from social media is clear: without interoperable standards, users get locked into single platforms. The same risk exists for AI. To avoid repeating these mistakes, developing interoperable AI standards is critical. Ideally, individuals and companies should build their own AI, but where that isn’t feasible, at a minimum, platforms should be interoperable, allowing seamless movement across providers such as OpenAI, Claude, or DeepSeek. This approach can foster innovation, competition, and user choice in the emerging AI-dominated ecosystem.
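To make the idea of provider interoperability concrete, here is a minimal, purely illustrative Python sketch of a provider-agnostic chat interface. The adapter classes, their names, and their canned replies are hypothetical placeholders rather than real client libraries; the point is only that a shared abstraction would let users switch providers without rewriting their applications.

```python
# Illustrative sketch of provider-agnostic AI access (hypothetical adapters,
# not real client libraries): application code depends only on a shared
# interface, so swapping providers does not require rewriting it.
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """Common interface that any AI provider adapter would implement."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's reply to a single prompt."""


class OpenAILikeAdapter(ChatProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder: a real adapter would call the provider's API here.
        return f"[openai-like provider] reply to: {prompt}"


class DeepSeekLikeAdapter(ChatProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder: a real adapter would call the provider's API here.
        return f"[deepseek-like provider] reply to: {prompt}"


def ask(provider: ChatProvider, prompt: str) -> str:
    # The caller never touches provider-specific details,
    # so changing providers is a one-line change.
    return provider.complete(prompt)


if __name__ == "__main__":
    for provider in (OpenAILikeAdapter(), DeepSeekLikeAdapter()):
        print(ask(provider, "Summarise this week's digital policy news."))
```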
Content. The key issue for content in 2026 is the tension between governments and US tech companies, particularly regarding compliance with EU laws. At the core, countries have the right to set rules for content within their territories, reflecting their interests, and citizens expect their governments to enforce them. While media debates often focus on misuse or censorship, the fundamental question remains: can a country regulate content on its own soil? The answer is yes, and adapting to these rules will be a major source of tension going forward.
Development. Countries that are currently behind in AI aren’t necessarily losing. Success in AI is less about owning large models or investing heavily in hardware, and more about preserving and cultivating local knowledge. Small countries should invest in education, skills, and open-source platforms to retain and grow knowledge locally. Paradoxically, a slower entry into AI could be an advantage, allowing countries to focus on what truly matters: people, skills, and effective governance.
Environment. Concerns about AI’s impact on the environment and water resources persist. It is worth asking whether massive AI farms are truly necessary. Smaller AI systems could take over much of this work, or support training and education, reducing the need for energy- and water-intensive platforms. At a minimum, AI development should prioritise sustainability and efficiency, mitigating the risk of large-scale digital waste while still enabling practical benefits.

IN OTHER NEWS THIS WEEK
This week in AI governance
Italy. Italy’s antitrust authority has formally closed its investigation into the Chinese AI developer DeepSeek after the company agreed to binding commitments to make risks from AI hallucinations — false or misleading outputs — clearer and more accessible to users. Regulators stated that DeepSeek will enhance transparency, providing clearer warnings and disclosures tailored to Italian users, thereby aligning its chatbot deployment with local regulatory requirements. If these conditions aren’t met, enforcement action under Italian law could follow.
UK. Britain has escalated pressure on Elon Musk’s social media platform X and its integrated AI chatbot Grok after reports that the tool was used to generate sexually explicit and non‑consensual deepfake images of women and minors. UK technology officials have publicly demanded that X act swiftly to prevent the spread of such content and ensure compliance with the Online Safety Act, which requires platforms to block unsolicited sexual imagery. Musk, however, has suggested that users who enter such prompts should be held liable, a move criticised as shifting responsibility. Critics note that the platform should still be required to embed stronger safeguards.
Brussels bets on open-source to boost tech sovereignty
The European Commission is preparing a strategy to commercialise European open-source software to strengthen digital sovereignty and reduce reliance on foreign technology providers.
The upcoming strategy, expected alongside the Cloud and AI Development Act in early 2026, will prioritise community upscaling, industrial deployment, and market integration. Strengthening developer communities, supporting adoption across various sectors, and ensuring market competitiveness are key objectives. Governance reforms and improved supply chain security are also planned to address vulnerabilities in widely used open-source components, enhancing trust and reliability.
Financial sustainability will be a key focus, with public sector partnerships encouraged to ensure the long-term viability of projects. By providing stable support and fostering collaboration between government and industry, the strategy seeks to create an economically sustainable open-source ecosystem.
The big picture. Although EU funding has fostered innovation, commercial-scale success has often occurred outside the EU. By focusing on open-source solutions developed within the EU, Brussels aims to strengthen Europe’s technological autonomy, retain the benefits of domestic innovation, and foster a resilient and competitive digital landscape.
USA pulls out of several international bodies
In a new move, US President Trump issued a memorandum directing the US withdrawal from numerous international organisations, conventions, and treaties deemed contrary to the interests of the USA.
The list includes 35 non-UN entities (e.g. the GFCE and the Freedom Online Coalition) and 31 UN bodies (e.g. the Department of Economic and Social Affairs, the UN Conference on Trade and Development and the UN Framework Convention on Climate Change (UNFCCC)).
Why does it matter? The order was not a surprise, following the Trump administration’s 2025 retreat from the Paris Agreement, WHO and other international organisations focusing on climate change, sustainable development, and identity issues. Two initiatives in the technology and digital governance ecosystem are explicitly dropped: the Freedom Online Coalition (FOC) and the Global Forum on Cyber Expertise (GFCE). And there is also some uncertainty regarding the meaning and the implications of the US ‘withdrawal’ from UNCTAD and UN DESA, given the roles these entities play in relation to initiatives such as WSIS and Agenda 2030 follow-up processes, the Internet Governance Forum (IGF), and data governance.
LOOKING AHEAD

The year has just begun, and the digital policy calendar is still taking shape. To stay up to date with upcoming events and discussions shaping the digital landscape, we encourage you to follow our calendar of events at dig.watch/events.

