X’s Türkiye tangle: between freedom of speech, control, and digital defiance

In the streets of Istanbul and beyond, a storm of unrest swept Türkiye this past week, sparked by the arrest of Istanbul Mayor Ekrem İmamoğlu, whose detention has provoked nationwide protests. Amid these events, a digital battlefield has emerged, with X, the social media platform helmed by Elon Musk, thrust into the spotlight.

News reports indicate that X has suspended many accounts linked to activists and opposition voices sharing protest details. Yet, in a twist, X has also publicly rebuffed a Turkish government demand to suspend ‘over 700 accounts,’ vowing to defend free speech.

This clash between compliance and defiance offers a vivid example of the controversy around freedom of speech and content policy in the digital age, where global platforms, national power, and individual voices collide like tectonic plates on a restless earth.

The spark: protests and a digital crackdown

The unrest began with İmamoğlu’s arrest, a move many saw as a political jab by President Recep Tayyip Erdoğan’s government against a prominent rival. As tear gas clouded the air and chants echoed through Turkish cities, protesters turned to X to organise, share live updates, and amplify their dissent. University students, opposition supporters, and grassroots activists flooded the platform with hashtags and footage: raw, unfiltered glimpses of a nation at odds with itself. But this digital megaphone didn’t go unnoticed. Turkish authorities pinpointed 326 accounts for takedown, accusing them of ‘inciting hatred’ and destabilising order. X’s response? Partial compliance: the platform reportedly suspended many of the flagged accounts.

The case isn’t the first in which Turkish authorities have required platforms to take action. For instance, during the 2013 Gezi Park protests, Twitter (X’s predecessor) faced similar requests. Erdoğan’s administration has long wielded legal provisions like Article 299 of the Penal Code (insulting the president) as a basis for fining platforms that don’t align with government content policy. Freedom House’s 2024 report labels the country’s internet freedom as ‘not free,’ citing a history of throttling dissent online. Yet, X’s partial obedience here (selectively suspending accounts) hints at a tightrope walk: bowing just enough to keep operating in Türkiye while dodging a complete shutdown that could alienate its user base. For Turks, it’s a bitter pill: a platform they’ve leaned on as a lifeline for free expression now feels like an unreliable ally.

X’s defiant stand: a free speech facade?

Then came the curveball. Posts on X from users like @botella_roberto lit up feeds with news that X had rejected a broader Turkish demand to suspend ‘over 700 accounts,’ calling it ‘illegal’ and doubling down with a statement: ‘X will always defend freedom of speech.’ Such a stance paints X as a guardian of expression, a digital David slinging stones at an authoritarian Goliath.

One theory, whispered across X posts, is that X faced an ultimatum: suspend the critical accounts or risk a nationwide ban, a fate Twitter suffered in Türkiye in 2014.

By complying with a partial measure, X might be playing a calculated game: preserving its Turkish foothold while burnishing its free-speech credibility globally. Musk, after all, has built X’s brand on unfiltered discourse, a stark pivot from Twitter’s pre-2022 moderation-heavy days. Yet, this defiance rings hollow to some. Amnesty International’s Türkiye researcher noted that the suspended accounts (often young activists) were the very voices X claims to champion.

Freedom of speech: a cultural tug-of-war

This saga isn’t just about X or Türkiye; it reflects the global tussle over what ‘freedom of speech’ means in 2025. In some countries, it is enshrined in laws and fiercely debated on platforms like X, where Musk’s ‘maximally helpful’ ethos thrives. In others, it’s a fragile thread woven into cultural fabrics that prize collective stability over individual outcry. In Türkiye, the government frames dissent as a threat to national unity, a stance rooted in decades of political upheaval, including the coups of 1960 and 1980. Consequently, protesters saw X as a megaphone to challenge that narrative, but when the platform suspended some of their accounts, it was as if the rug had been yanked from under their feet, reinforcing an infamous sociocultural norm: speak too loudly and you’ll be hushed.

Posts on X echo a split sentiment: some laud X for resisting some of the government’s requests, while others decry its compliance as a betrayal. This duality brings us to the conclusion that digital platforms aren’t neutral arbiters in free cyberspace but chameleons, adapting to local laws while trying to project a universal image.

Content policy: the invisible hand

X’s content policy, or lack thereof, adds another layer to this sociocultural dispute. Unlike Meta or YouTube, which lean on thick rulebooks, X under Musk has slashed moderation, betting on user-driven truth over top-down control. Its 2024 transparency report, cited in X posts, shows a global takedown compliance rate of 80%, but Türkiye’s 86% suggests a higher deference to Ankara’s demands. Why? Reuters points to Türkiye’s 2020 social media law, which mandates that platforms appoint local representatives and comply with takedowns or face bandwidth cuts and fines. X’s Istanbul office, opened in 2023, signals its intent to play on Turkish ground, but the alleged refusal of broader government requests draws a line in the sand: comply, but not blindly.

This policy controversy isn’t unique to Türkiye. In Brazil, X faced a 2024 ban over misinformation, only to backtrack after appointing a local representative. In India, X is suing Modi’s government over content-removal orders in an escalating censorship fight. In the US, X fights court battles to protect user speech. In Türkiye, it bows (partly) to avoid exile. Each case underscores a sociocultural truth: content policy isn’t fixed; it’s a continuous negotiation between Big Tech, national power and the voice of the people.

Conclusions

As the protests simmer and X navigates Türkiye’s demands, the world watches a sociocultural experiment unfold. Will X double down on defiance, risking a ban that could cost it 20 million Turkish users (per 2024 Statista data)? Or will it bend further, cementing its role as a compliant guest in Ankara’s house? The answer could shape future digital dissent and the global blueprint for free speech online. For now, it is a standoff: X holds a megaphone in one hand, a gag in the other, while protesters shout into the fray.

America’s Bitcoin gamble: A power play for financial dominance 

For years, the US government has maintained a cautious stance on cryptocurrency, often treating it as a regulatory challenge rather than an economic opportunity. Recent policy moves under President Donald Trump suggest that a dramatic shift is underway—one that could redefine the nation’s role in the digital asset space. During his pre-election campaign, Trump promised to create a Strategic Bitcoin Reserve, a move that generated significant excitement among crypto advocates. In the post-election period, a series of measures have been introduced, reflecting a deeper recognition of cryptocurrency’s growing influence. But are these actions bold steps towards financial innovation, or simply political manoeuvres designed to capture a rising economic trend? The answer may lie in how these policies unfold and whether they translate into real, lasting change for Bitcoin and the broader crypto ecosystem.

Digital Asset Stockpile: Has the promise of Bitcoin as a reserve been betrayed?

The first major step in this shift came on 23 January, when Trump signed an executive order promoting cryptocurrency and paving the way for the establishment of the US Digital Asset Stockpile. At first glance, this move appeared to be a groundbreaking acknowledgement of cryptocurrencies as valuable national assets. However, a closer look revealed that the stockpile was not focused on Bitcoin alone but included a mix of digital assets, all sourced from government seizures in criminal and civil proceedings. This raised immediate concerns among Bitcoin advocates, who had expected a more direct commitment to Bitcoin as a reserve asset, as promised. Instead of actively purchasing Bitcoin to build a strategic reserve, the US government chose to rely solely on confiscated funds, raising questions about the long-term sustainability and intent behind the initiative. Was this a step towards financial innovation, or simply a way to repurpose seized assets without committing to a larger crypto strategy?

The ambiguity surrounding the Digital Asset Stockpile led many to doubt whether the US government was serious about adopting Bitcoin as a key financial instrument. If the goal was to establish a meaningful reserve, why not allocate funds to acquire Bitcoin on the open market? By avoiding direct investment, the administration sent mixed signals—recognising digital assets’ importance while hesitating to commit real capital. This move, while significant, seemed to fall short of the expectations set by previous pro-crypto rhetoric. 

America’s bold Bitcoin strategy could set off a global wave, reshaping the future of digital finance and economic power.

Strategic Bitcoin Reserve: A step towards recognising Bitcoin’s unique role

Just when it seemed the US was backing away from its promises to the crypto community, a new executive order emerged, offering a glimmer of hope. Many had been disillusioned by the creation of the Strategic Bitcoin Reserve from confiscated assets rather than fresh, direct purchases of Bitcoin, an approach that seemed more focused on repurposing seized funds than on committing to Bitcoin’s long-term role in the financial system. The follow-up order, however, signalled a shift in US policy, opening the door to broader recognition of Bitcoin’s potential. While it did not meet the bold expectations set by early promises, it was still a significant step towards integrating cryptocurrency into national and global financial strategies. More importantly, it moved beyond treating all cryptocurrencies as the same, acknowledging Bitcoin’s unique position as a digital asset with transformative potential and marking a pivotal moment in the evolution of digital finance.

White House Crypto Summit: Bringing legitimacy to the table

As these initiatives unfolded, the White House Crypto Summit added another layer to the evolving policy landscape. As the first event of its kind, it brought together industry leaders and policymakers in an unprecedented dialogue between government officials and crypto giants. This move was not just about discussing regulations; it was a strategic effort to strengthen the foundation for future pro-crypto actions. Consulting industry insiders offered a crucial opportunity to grasp the true nature of cryptocurrency before finalising legislative measures, ensuring that policies would be informed rather than reactive and shaped by those who understand the technology and its potential. It was a calculated step towards framing future policies as collaborative rather than unilateral, fostering a more balanced approach to crypto regulation.

A new memecoin, Everything is Computer (EIC), has emerged following Trump’s viral comment, recording over $15 million in trading volume in a single day.

Bitcoin Act Unveiled: America is ready to HODL

And then, the moment the crypto community had been anticipating finally arrived—a decisive move that could reshape global crypto adoption. Senator Cynthia Lummis reintroduced the Bitcoin Act, a proposal to solidify Bitcoin’s place within the US financial system. Unlike executive orders that can be overturned by future administrations, this bill aimed to establish a permanent legal framework for Bitcoin’s adoption.

What made this proposal even more historic was its bold mandate: the US government would be required to purchase one million BTC over the next five years, a colossal investment worth around $80 billion at the time. To finance this, a portion of the Federal Reserve’s net earnings would be allocated, minimising the burden on taxpayers. Additionally, all Bitcoin acquired through the programme would be locked away for at least 20 years before any portion could be sold, ensuring a long-term commitment rather than short-term speculation. It seems like America is ready to HODL!
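As a back-of-the-envelope illustration of the sums involved (a minimal sketch: the even five-year purchase schedule and the roughly $80 billion valuation are taken from the figures above and treated here as illustrative assumptions, not details of the bill):

```python
# Rough arithmetic implied by the Bitcoin Act's reported targets.
# Assumptions (illustrative only): purchases spread evenly over five
# years, and the ~$80 billion total valuation cited in the article.

TARGET_BTC = 1_000_000      # total mandated purchase
YEARS = 5                   # purchase window
TOTAL_COST_USD = 80e9       # approximate valuation at the time

implied_price = TOTAL_COST_USD / TARGET_BTC   # USD per BTC (~$80,000)
annual_btc = TARGET_BTC / YEARS               # BTC bought per year
annual_outlay = TOTAL_COST_USD / YEARS        # USD spent per year

print(f"Implied BTC price: ${implied_price:,.0f}")
print(f"Annual purchases: {annual_btc:,.0f} BTC (~${annual_outlay / 1e9:.0f} billion/year)")
```

On those assumptions, the mandate works out to roughly 200,000 BTC, or about $16 billion, per year, which puts the scale of the proposed Federal Reserve earnings allocation in context.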

Trump’s crypto plan: Bringing businesses back to the US

Beyond the reserve itself, President Trump revealed plans to sign an executive order reversing Biden-era crypto debanking policies, a move that could significantly reshape the regulatory landscape if enacted. Those policies made it increasingly difficult for crypto businesses to access banking services, effectively cutting them off from the traditional financial system and driving many firms to relocate offshore.

If implemented, the reversal could have profound repercussions. By removing banking restrictions, the USA could become a more attractive destination for blockchain companies, potentially bringing back businesses that left due to regulatory uncertainty. Easier access to banking would give crypto businesses the stability they need, cutting out the risky loopholes they have had to rely on and making the industry more transparent.

For now, this remains a plan, but its announcement alone has already garnered strong support from the crypto community, which sees it as a critical step towards re-establishing the USA as a leader in digital asset innovation. Senator Cynthia Lummis stated, ‘By transforming the president’s visionary executive action into enduring law, we can ensure that our nation will harness the full potential of digital innovation to address our national debt while maintaining our competitive edge in the global economy.’


Global impact: How US measures could accelerate worldwide crypto adoption

This is not just a story about the USA; the effect of these measures goes beyond American borders. By officially recognising Bitcoin as a strategic asset and rolling back restrictive banking policies, the USA is setting an example that other nations may follow. If the world’s largest economy begins accumulating Bitcoin and incorporating it into its financial framework, it will solidify Bitcoin’s standing as a global reserve asset. This could prompt other countries to rethink their positions, fostering broader institutional adoption and possibly triggering a wave of regulatory clarity worldwide. Moreover, the return of crypto businesses to the USA might spark competition among nations to establish more attractive regulatory environments, speeding up innovation and mainstream adoption.

Simultaneously, these moves send a strong signal to global markets: the uncertainty surrounding the role of Bitcoin in the financial system is decreasing. With the USA taking the lead, institutional investors who were once cautious may gain more confidence to allocate substantial funds to Bitcoin and other digital assets. This could drive broader financial integration, positioning Bitcoin not just as a hedge against inflation or a speculative investment, but as a central element of future financial systems.

As nations compete to define the future of money, the true test will be whether the world can embrace a decentralised financial system or whether it will ultimately remain tethered to traditional power structures. One thing is certain: who holds power in the rise of cryptocurrency will shape the economic relations of the future.

For more information on these topics, visit diplomacy.edu.

The future of digital regulation between the EU and the US

Understanding the DMA and DSA regulations

The Digital Markets Act (DMA) and the Digital Services Act (DSA) are two major regulatory frameworks introduced by the EU to create a fairer and safer digital environment. While both fall under the broader Digital Services Act package, they serve distinct purposes.

The DMA focuses on ensuring fair competition by regulating large online platforms, known as gatekeepers, which have a dominant influence on digital markets. It prevents these companies from engaging in monopolistic practices, such as self-preferencing their own services, restricting interoperability, or using business data unfairly. The goal is to create a more competitive landscape where smaller businesses and consumers have more choices.

On the other hand, the DSA is designed to make online spaces safer by holding platforms accountable for illegal content, misinformation, and harmful activities. It imposes stricter content moderation rules, enhances transparency in digital advertising, and ensures better user rights protection. Larger platforms with significant user bases face even greater responsibilities under this act.


The key difference in regulation is that the DMA follows an ex-ante approach, meaning it imposes strict rules on gatekeepers before unfair practices occur. The DSA takes an ex-post approach, requiring platforms to monitor risks and take corrective action after problems arise. This means the DMA enforces competition while the DSA ensures online safety and accountability.

A key component of the DSA package is its emphasis on transparency and user rights. Under the DSA, platforms must explain how their algorithms curate content, may not use sensitive data for targeted advertising, and are prohibited from manipulative design practices such as misleading cookie banners. The most powerful platforms, classified as Very Large Online Platforms (VLOPs) or Very Large Online Search Engines (VLOSEs), are also required to assess and report on ‘systemic risks’ linked to their services, including threats to public safety, democratic discourse, and mental well-being. However, these reports often lack meaningful detail, as illustrated by TikTok’s inadequate assessment of its role in election-related misinformation.

Enforcement is critical to the success of the DSA. While the European Commission directly oversees the largest platforms, national regulators, known as Digital Services Coordinators (DSCs), play a key role in monitoring compliance. However, enforcement challenges remain, particularly in countries like Germany, where understaffing raises concerns about effective regulation. Across the EU, over 60 enforcement actions have already been launched against major tech firms, yet Silicon Valley’s biggest players are actively working to undermine European rules.

Together, the DMA and the DSA reshape how Big Tech companies operate in the EU, fostering competition and ensuring a safer and more transparent digital ecosystem for users.

Trump and Silicon Valley’s fight against EU regulations

The close relationship between Donald Trump and the Silicon Valley tech elite has significantly influenced US policy towards European digital regulations. Since Trump’s return to office, Big Tech executives have actively lobbied against these regulations and urged the new administration to defend tech firms from what they call EU ‘censorship.’


Joel Kaplan, Meta’s chief lobbyist, has gone as far as to equate EU regulations with tariffs, a stance that aligns with the Trump administration’s broader trade war strategy. The administration sees these regulations as barriers to US technological dominance, arguing that the EU is trying to tax and control American innovation rather than foster its own competitive tech sector.

Figures like Elon Musk and Mark Zuckerberg have aligned themselves with Trump, leveraging their influence to oppose EU legislation such as the DSA. Meta’s controversial policy changes and Musk’s X platform’s lax approach to content moderation illustrate how major tech firms are resisting regulatory oversight while benefiting from Trump’s protectionist stance.

The White House and the House Judiciary Committee have raised concerns that these laws unfairly target American technology companies, restricting their ability to operate in the European market.

Brendan Carr, chairman of the FCC, has recently voiced strong concerns regarding the DSA, which he argues could clash with America’s free speech values. Speaking at the Mobile World Congress in Barcelona, Carr warned that its approach to content moderation might excessively limit freedom of expression. His remarks reflect a broader criticism from US officials, as Vice President JD Vance had also denounced European content moderation at a recent AI summit in Paris, labelling it as ‘authoritarian censorship.’

These officials argue that the DMA and the DSA create barriers that limit American companies’ innovation and undermine free trade. In response, the House Judiciary Committee has formally challenged the European Commission, stating that certain US products and services may no longer be available in Europe due to these regulations. Notably, the Biden administration had also directed its trade and commerce departments to investigate whether these EU laws restrict free speech and to recommend countermeasures.

Recently, US President Donald Trump has escalated tensions with the EU, threatening tariffs in retaliation for what he calls ‘overseas extortion.’ The memorandum Trump signed on 21 February 2025 directs the administration to review EU and UK policies that might force US tech companies to develop or use products that ‘undermine free speech or foster censorship.’ The memo also takes aim at Digital Services Taxes (DSTs), claiming that foreign governments unfairly tax US firms ‘simply because they operate in foreign markets.’


EU’s response: Digital sovereignty at stake

However, the European Commission insists that these taxes are applied equally to all large digital companies, regardless of their country of origin, ensuring fair contributions from businesses profiting within the EU. It has also defended its regulations, arguing that they promote fair competition and protect consumer rights.

EU officials see these policies as fundamental to Europe’s digital sovereignty, ensuring that powerful tech firms operate transparently and fairly in the region. As they push back against what they see as US interference and tensions rise, the dispute over how to regulate Big Tech could shape the future of digital markets and transatlantic trade relations.

Ultimately, this clash could lead to a new wave of trade conflicts between the USA and the EU, with potential economic and geopolitical consequences for the global tech industry. With figures like JD Vance and Jim Jordan also attacking the DSA and the DMA, and Trump himself framing EU regulations as economic warfare, Europe faces mounting pressure to weaken its tech laws. Additionally, the withdrawal of the EU Artificial Intelligence Liability Directive (AILD) following the Paris AI Summit and JD Vance’s refusal to sign a joint AI statement have raised further concerns about Europe’s ability to resist external pushback. The risk that Trump will use economic and security threats, including NATO involvement, as leverage against EU enforcement underscores the urgency of a strong European response.

Another major battleground is AI regulation. The EU’s AI Act is one of the world’s first comprehensive AI laws, setting strict guidelines for AI transparency, risk assessment, and data usage. Meanwhile, the USA has taken a more industry-led approach, with minimal government intervention.


This regulatory gap could create further tensions as European lawmakers demand compliance from American AI firms. The recent withdrawal of the EU Artificial Intelligence Liability Directive (AILD) under US pressure highlights how external lobbying can influence European policymaking.

However, if the EU successfully enforces its AI rules, it could set a global precedent, forcing US firms to comply with European standards if they want to operate in the region. This scenario mirrors what happened with the GDPR (General Data Protection Regulation), which led to global changes in privacy policies.

To counter the growing pressure, the EU remains steadfast in enforcing the DSA, the DMA, and the AI Act, ensuring that regulatory frameworks are not compromised under US influence. Beyond regulation, Europe must also bolster its digital industrial capabilities to keep pace. The EUR 200 billion AI investment is a step in the right direction, but Europe requires more resilient digital infrastructures, stronger back-end technologies, and better support for its tech companies.

Currently, the EU is doubling down on its push for digital sovereignty by investing in:

  • Cloud computing infrastructure to reduce reliance on US providers (e.g., AWS, Microsoft Azure)
  • AI development and semiconductor manufacturing (through the European Chips Act)
  • Alternative social media platforms and search engines to challenge US dominance

These efforts aim to lessen European dependence on US Big Tech and create a more self-sufficient digital ecosystem.

The future of digital regulations

Despite the escalating tensions, both the EU and the USA recognise the importance of transatlantic tech cooperation. While their regulatory approaches differ significantly, there are areas where collaboration could still prevail. Cybersecurity remains a crucial issue, as both sides face growing threats from several countries. Strengthening cybersecurity partnerships could provide a shared framework for protecting critical infrastructure and digital ecosystems. Another potential area for collaboration is the development of joint AI safety standards, ensuring that emerging technologies are regulated responsibly without stifling innovation. Additionally, data-sharing agreements remain essential to maintaining smooth digital trade and cross-border business operations.

Past agreements, such as the EU-US Data Privacy Framework, have demonstrated that cooperation is possible. However, whether similar compromises can be reached regarding the DMA, the DSA, and the AI Act remains uncertain. Fundamental differences in regulatory philosophy continue to create obstacles, with the EU prioritising consumer protection and market fairness while the USA maintains a more business-friendly, innovation-driven stance.

Looking ahead, the future of digital regulations between the EU and the USA is likely to remain contentious. The European Union appears determined to enforce stricter rules on Big Tech, while the United States—particularly under the Trump administration—is expected to push back against what it perceives as excessive European regulatory influence. Unless meaningful compromises are reached, the global internet may further fragment into distinct regulatory zones. The European model would emphasise strict digital oversight, strong privacy protections, and policies designed to ensure fair competition. The USA, in contrast, would continue to prioritise a more business-led approach, favouring self-regulation and innovation-driven policies.


As the digital landscape evolves, the coming months and years will be crucial in determining whether the EU and the USA can find common ground on tech regulation or whether their differences will lead to deeper division. The stakes are high, affecting not only businesses but also consumers, policymakers, and the broader future of the global internet. The path forward remains uncertain, but the decisions made today will shape the structure of the digital world for generations to come.

Ultimately, the outcome of this ongoing transatlantic dispute could have wide-reaching implications, not only for the future of digital regulation but also for global trade relations. While the US government and the Silicon Valley tech elite are likely to continue their pushback, the EU appears steadfast in its determination to ensure that its digital regulations are enforced to maintain a fair and safe digital ecosystem for all users. As this global battle unfolds, the world will be watching as the EU and USA navigate the evolving landscape of digital governance.

OEWG’s tenth substantive session: Entering the eleventh hour

The UN Open-Ended Working Group (OEWG) on the security of and in the use of information and communications technologies in 2021–2025 held its tenth substantive session on 17-21 February 2025. 

Some of the main takeaways from this session are:

  • Ransomware, AI and threats to critical infrastructure remain countries’ biggest concerns regarding the threat landscape. Even though countries don’t agree on an exhaustive list of threats or their sources, there is a strong emphasis on collective and cooperative responses, such as capacity building and knowledge sharing, to reduce, mitigate, and manage these threats.
  • The long-standing debate between implementing existing norms and developing new ones continued. However, this session saw ASEAN countries take a more pragmatic approach, emphasising concrete steps toward implementing agreed norms while maintaining openness to discussing new ones in parallel. At the same time, the call from developing countries for greater capacity-building gained momentum, underscoring the challenge of implementing norms without sufficient resources and support.
  • The discussions on international law have shown little progress in narrowing the gap between states’ positions: there is still no consensus on the necessity of new legally binding regulations for cyberspace. There is also discord on how to proceed with discussing international law in the future permanent UN mechanism on cybersecurity.
  • Discussions on confidence-building measures (CBMs) were largely subdued: few new CBMs were introduced, and states didn’t go into detail about their POC Directory experience. Many states described their CBM implementation, often linked to regional initiatives and best practices, showing eagerness to operationalise CBMs. It seems that states now expect the future permanent mechanism to serve as the forum for detailed CBM discussions.
  • The Voluntary Fund and the Capacity-Building Portal have increasingly been regarded as key deliverables of the OEWG process. However, states remain cautious about the risk of duplicating existing global and regional initiatives, and a clear consensus has yet to emerge regarding the objectives of these deliverables.
  • States are still grappling with the questions of thematic groups and non-state stakeholder engagement in the future permanent mechanism. The Chair’s upcoming reflections and town halls are likely to get the ball rolling on finding elements for the future permanent mechanism acceptable to all delegations.

As negotiations enter the eleventh hour ahead of the OEWG’s eleventh session, consensus remains elusive. Tensions ran high from the first day, with attributions of cyberattacks, and rights of reply denouncing those attributions, taking centre stage. States held tightly to their positions, largely unchanged since the last session in December 2024. The Chair pointed out that direct dialogue was lacking, with participants instead opting for a virtual town hall approach of circulating their positions and posting them on the portal, and reminded delegates that any decisions would be made by consensus, urging them to demonstrate flexibility.

Threats: Collective action is key

The discussions at this session revealed a range of national perspectives on cybersecurity threats. Malicious use of AI, critical infrastructure attacks and ransomware remained central concerns.

Collective solutions for cyber threats

The consensus remained clear throughout the discussions: cyber threats are a shared challenge requiring collective solutions. 

Nigeria underscored the importance of a comprehensive international framework to harmonise responses to cyber threats. Collaboration between state Computer Emergency Response Teams (CERTs), strategic planning, and continuous monitoring of emerging threats were highlighted as essential components. Albania reinforced the value of cooperative approaches in incident management, warning that cyberattacks could escalate tensions if misattributed. Albania also advocated for robust diplomatic dialogue through strengthened communication channels among CERTs and intelligence-sharing agreements. Uruguay and Argentina underscored the need for knowledge transfer and shared expertise in identifying and responding to cyber threats. Malaysia and South Africa further emphasised that fostering collaboration among technical experts, academia, and government officials would enhance cybersecurity preparedness. Bosnia and Herzegovina emphasised resilience-building through strategic communication and public awareness. 

Capacity building remained a priority for developing nations. Mauritius and Malawi stressed the urgent need for technical assistance, funding, and training to strengthen cybersecurity frameworks in regions facing resource constraints. Indonesia echoed this sentiment, advocating for increased knowledge sharing and technical cooperation to collectively address evolving threats. Nigeria advocated for capacity building in developing nations to reduce technological dependency and improve cybersecurity defences. Ghana called for greater investment in cybersecurity research and innovation to bolster national defences. 

Australia pointed to cyber sanctions as a means to deter malicious actors and impose tangible consequences on cyber criminals. Switzerland, focusing on the increasing threat of ransomware, stressed the need for states to uphold international law, reinforce resilience, and enhance international cooperation.

A particular concern was the spread of misinformation and disinformation, which Nigeria suggested should be countered through the circulation of accurate information without infringing on freedom of expression.

Final Report: How to best reflect discussions on threats

Several delegations emphasised key issues for inclusion in the OEWG final report. The EU, Croatia, New Zealand, and South Korea supported continued references to ransomware. 

China’s concerns for the final report included the risks of politicising cybersecurity and ICT, which it said threaten global cooperation and digital integrity. China also highlighted rising cyber tensions and conflict, particularly offensive strategies and attacks on critical infrastructure, stressed the importance of addressing false claims about cyber incidents, which harm trust between nations, and called for secure ICT supply chains and the prevention of backdoors in products.

China advocated for a comprehensive, evidence-based approach to data security in the AI era, focusing on data localisation and cross-border transfer issues. Malaysia supported China on the importance of addressing data security, which it said should be included in the final report.

El Salvador urged that the annual reports reflect the importance of safe and transparent data management throughout the whole life cycle, with practices that protect privacy, which is particularly relevant for generative AI models; Malaysia supported this.

El Salvador also believes it is essential that the report include a reference to the development of quantum-resistant cryptographic standards, a call Czechia echoed.

The future permanent mechanism: How to tackle discussions on threats

As discussions moved toward the future of global cybersecurity governance, the EU proposed a dedicated thematic group under the Program of Action (PoA) to systematically assess threats, enhance security, and coordinate capacity-building efforts. The USA and Portugal reinforced the urgency of this initiative, calling for a flexible yet permanent platform to address cyber threats, particularly ransomware.

Several countries stressed the importance of sector-specific security measures. Malaysia highlighted the need to tailor protections for different industries, while Mexico advocated for harmonised cybersecurity standards and multistakeholder cooperation across the digital supply chain. Mauritius and Malawi reaffirmed the importance of upholding international cyber norms, with Malawi emphasising continued dialogue within the UN Open-Ended Working Group (OEWG).

Australia and Canada pushed for linking emerging threats to responsible state behaviour under international law, with Canada calling for thematic groups to enable deeper discussions beyond general plenary meetings. Switzerland and Germany agreed, underscoring the need to first establish a shared understanding of threats before implementing coordinated responses. France called for shifting from merely identifying threats to actively developing solutions, proposing that expert briefings guide working group discussions.

AI security also emerged as a key concern. Malaysia stressed the role of AI developers in cybersecurity, while Argentina highlighted the private sector’s responsibility in addressing AI-related threats. Italy pointed to the recent Joint High-Level Risk Analysis on AI, which provides recommendations for securing AI systems and supply chains.

Norms, rules and principles: A near standstill in discussions

Need for new norms vs. implementation of existing ones

The divide persists between states that prioritise implementing the agreed norms (e.g. Japan, Switzerland, Australia, Canada, South Korea, Kazakhstan) and those advocating for new, legally binding rules (e.g. Russia, Pakistan, Cuba). The former group argued that introducing new norms without fully implementing current ones could dilute efforts, while the latter believes that voluntary norms lack accountability, particularly in crises. Italy specifically called for full implementation of existing cyber norms before introducing new ones. 

Among the new norms proposed, Kazakhstan put forward a norm on ‘zero trust’, emphasising continuous verification and access controls, although it acknowledged the need to prioritise implementing agreed norms. El Salvador repeated its proposal to update norm E regarding privacy and personal data. China highlighted that existing norms do not cover data security, while Vietnam called for new norms to address emerging technologies and the digital divide.

Some states didn’t propose new norms but sought fresh perspectives on existing ones. The UK suggested categorising the 11 norms into three themes: Cooperation (Norms A, D, H), Resilience (Norms G, J), and Stability (Norms B, C, E, F, I, K). France and the UK also reiterated the need for Norm I to address the non-proliferation of malicious tools. Portugal emphasised the importance of a common understanding of due diligence. Italy prioritised supply chain security, advocating for measures like ICT supply chain security assessments, Software Bills of Materials (SBOMs), national security evaluation centres, and cybersecurity certification schemes.

Some countries (e.g. Malaysia and Brazil) proposed a balanced approach, supporting both the implementation and development of norms. The EU and the USA stressed that negotiations on binding agreements could be resource-intensive and counterproductive. Iran counter-argued that a uniform approach to norm implementation is impractical due to each nation’s unique circumstances. Nicaragua and Pakistan contended that non-binding norms fail to address emerging threats effectively, while China pointed out that the 2015 UN GGE report allows for developing additional norms over time.

Capacity-building as a critical component for cyber norms implementation

Many states, particularly Singapore, Indonesia, Pakistan, and Mauritius, emphasised that implementing cyber norms requires bridging the technical gap between developed and developing nations. Iran and Cuba noted that resource constraints hinder developing countries. Kenya and South Africa advocated for integrating long-term capacity-building into the future UN cyber mechanism to improve norm implementation. Kenya highlighted the challenges posed by varying technical expertise among states: for example, Norm C, under which states should not allow their territory to be used for internationally wrongful acts, requires specific tools and skills not all countries possess.

Singapore argued that each norm has policy, operational, technical, legal, and diplomatic aspects, and developing the capacity to implement these norms is essential for identifying gaps and determining the need for new norms. In this context, the ASEAN-Singapore Cybersecurity Centre of Excellence will launch a series of capacity-building workshops called ‘Cyber Norms in Action.’ 

Voluntary checklists: Cyber norms implementation 

The voluntary checklist is broadly supported as a tool for operationalising agreed cyber norms. Countries (e.g. Colombia, Japan, and Malaysia) view it as a ‘living document’ that should evolve with the changing landscape of cyber threats. Kazakhstan suggested incorporating best practices for incident response and public-private collaboration.

Despite this support, some countries remain sceptical. Cuba and Iran cautioned against using the checklist as a de facto assessment tool for evaluating states’ cybersecurity performance. China insisted that the checklist remains within the UN information security framework to maintain neutrality. Iran proposed delaying negotiations on the checklist until a broader consensus is reached under a permanent UN cyber mechanism.

An important aspect of the checklist is its potential to promote inclusive cybersecurity governance. The UK, Brazil, and the Netherlands stressed the need to integrate a gender perspective, ensuring that the implementation of cyber norms considers the disproportionate impact on women and vulnerable communities. 

International law: Little progress made

The discussions on international law have shown little progress in narrowing the gap between positions. States made suggestions on how to capture the progress of the OEWG 2021-2025 in its Final Report and shared opinions on the structure and content of discussions on international law within the future permanent mechanism.

The persistent rift: The need for a new legally binding framework

In the substantive positions of states on international law, the rift remains between those that do not see a need for a new legally binding framework and those that do.

The majority of states (Sweden, the EU, the Republic of Korea, the UK and others) do not see the need for a new legally binding framework and emphasise the need to discuss the application of existing international law in cyberspace. In rushing to discuss new legally binding obligations, the UK sees the risk of undermining the application of core, foundational rules of international law, including the UN Charter.

Cuba, China, Russia, Pakistan, and the Islamic Republic of Iran reiterated their positions, stating that the new legally binding mechanism is necessary to prevent interstate conflicts in cyberspace and to contribute to strengthening cooperation in this area. China has supported the Russian Draft Convention on International Information Security as a good basis for discussions. At the same time, Pakistan and Iran stated that there are gaps in international law that need to be addressed by binding rules.

Despite the Chair’s December 2024 call for states to show flexibility, and despite time pressure, the statements on both sides repeated the positions voiced in past substantive sessions.

These differences directly translate to the language that the states were proposing to be included in the 2021-2025 OEWG Final Report, as well as positions on how to structure the Future Permanent Mechanism. 

Final Report: How to best reflect progress

States have discussed the proposals on how to best reflect the progress in the 2021-2025 OEWG on international law in its Final Report, as it will serve as a summary of the efforts, positions, and basis for the negotiations within the future permanent mechanism. 

The states predominantly concluded that the OEWG was a successful process and contributed to a greater understanding of international law in cyberspace. Specifically, states (Austria, Sweden, Brazil, Senegal, Canada, Thailand, Czechia, EU, Vanuatu, Switzerland, Australia, Germany and others) saw progress in a number of published national and regional positions on the applicability of international law in cyberspace in the course of the 2021-2025 OEWG. 

There were also specific wording suggestions for inclusion in the Final Report. The Joint Statement on International Law (Australia, Chile, Colombia, the Dominican Republic, El Salvador, Estonia, Fiji, Kiribati, Moldova, the Netherlands, Papua New Guinea, Thailand, Uruguay and Viet Nam) gained support from Czechia, Canada, Switzerland, United Kingdom, Republic of Moldova, Ireland, and others. The re-published paper, now with more co-sponsors, offers a convergence language for the Final Report that includes peaceful settlement of disputes, respect for international human rights obligations, the principle of state responsibility, and application of international humanitarian law to ICT activities during armed conflicts.

Another wave of proposals was focused on including a clear reference to the applicability of international humanitarian law and the fundamental legal principles of humanity, neutrality, necessity, proportionality, and distinction in the Final Report, supported by Sweden, the USA, the Republic of Korea, Malawi, Senegal, the EU, Tonga on behalf of the Pacific Island Forum, Australia, Germany, Republic of Moldova, Ireland, Ghana, Austria, and others. Just like in the 9th OEWG substantive session in December 2024, the Resolution on protection for the civilian population against the humanitarian consequences of the misuse of digital technologies in armed conflict within the framework of the 34th International Red Cross and Red Crescent Conference resonated with the states. 

Brazil referred explicitly to Operative Paragraph 4 of that Resolution (‘states recalled that in situations of armed conflict, international humanitarian law rules and principles serve to protect civilian populations and other protected persons and objects, including against the risks arising from ICT activities’), proposing its inclusion in the Final Report. Canada, France, the Netherlands, Czechia, and others supported this proposal.

Switzerland, which sees the inclusion of the applicability of international humanitarian law as a priority, has also proposed a specific wording for the Final Report that builds on the 34th ICRC resolution and includes medical and humanitarian facilities.

States also called for stronger wording on the applicability of human rights law (Australia, Albania, Malawi, Mexico, Mozambique, Moldova, North Macedonia, Senegal, Switzerland, Thailand, and Germany) in the Final Report.

Cuba and Iran believe the Final Report should include references to setting up a legally binding instrument, as well as definitions of terms and technical mechanisms.

The future permanent mechanism: How to tackle international law

States further discussed ways that the discussions on international law would be incorporated and framed within the Future Permanent Mechanism. 

States reflected on Annex C of the Chair’s Discussion Paper on Draft Elements on Stakeholder Modalities and Dedicated Thematic Groups of the Future Permanent Mechanism, which proposed a dedicated thematic group on rules, norms and principles of responsible state behaviour and on international law. Mexico, Colombia, Indonesia, and Algeria endorsed the thematic group dedicated both to norms and international law, as they see these as complementary and contributing to safety and security.

Others, such as Sweden, the EU, Czechia, Brazil, and the USA, did not support the Chair’s proposal to create a single thematic group for norms and international law: given the voluntary nature of norms and the binding nature of international law, combining these discussions risks conflating distinct legal and policy concepts and could hinder progress in both areas.

Canada proposed integrating international law into each of the first three thematic working groups set out in the Chair’s discussion paper (building resilience; enhancing cooperation in the management of ICT-related incidents, including through CBMs; and preventing conflict and increasing stability in the ICT sphere) to build common understandings of how international law applies to practical policy challenges. Thematic group meetings could include expert briefings on technical and legal topics and scenario-based discussions.

States have deepened discussions on the Program of Action proposed by France, which seeks to incorporate discussions on international law in a cross-cutting manner across three action-oriented thematic groups: building resilience, cooperation in the management of ICT-related incidents, and prevention of conflict and increasing stability in cyberspace. This approach was supported by Sweden, Portugal, Czechia, the UK, the EU, Albania, Australia, Germany, Ireland and others. The PoA also foresees the inclusion of non-state cybersecurity experts, which the EU and North Macedonia specifically supported.

In addition to the two proposals above, several states have voiced additional proposals.  

Switzerland generally supported the thematic and cross-cutting working groups as proposed by France but voiced concern that they might not be sufficient for in-depth discussions on international law. Switzerland considers it better for the discussion on the implementation of norms to occur in the cross-cutting working groups, while the discussion on the application of international law would benefit from a dedicated forum.

The USA believes that states are ready to integrate the discussion into practical, thematic working groups oriented toward addressing specific, real-world challenges to international peace and stability and focused on practical tools.

Senegal recalled the equal importance and relevance of the five pillars of the OEWG mandate and would be willing to discuss adding a pillar on the application of international law. 

Iran, China and Russia see as a priority within the future permanent mechanism to initiate a substantive discussion on developing legally binding obligations in the ICT field and have a dedicated thematic group on international law. These states do not support the participation of non-state experts in the discussions. 

Ireland does not consider a thematic group on international law necessary or desirable. Their concern would be that such a group could be stifled by being overly outcome-focused and that it would duplicate efforts and divert resources and attention from more dynamic engagement on legal issues within the other thematic groups. Conversely, Egypt sees the need for a dedicated platform on international law in the future permanent mechanism and is sure that the modalities, mandate, structure, and types of discussions can be agreed on by consensus. Egypt sees the discussion as reshaping the content of international law and underscores the need to have a place within the UN to have a multilateral conversation with the participation of stakeholders.

The role of capacity building in fostering a better understanding of states on how international law applies to cyberspace and contributes to promoting peace, security, and stability in cyberspace was underscored by Tonga on behalf of the Pacific Island Forum, Viet Nam, Kenya, Ghana, Canada, Thailand, UK, France, Colombia, and many others.

CBMs: Looking forward to the permanent mechanism

A more subdued CBMs discussion at this session seems to suggest that states now expect the future permanent mechanism to serve as the forum for detailed CBM discussions. Kazakhstan suggested that addressing subtopics, such as standardised incident response procedures, would be more effective within thematic groups engaged in detailed discussions rather than in plenary sessions. Some states voiced support for a cross-cutting approach to discussing CBMs more efficiently in the permanent mechanism, such as Germany’s proposal to address CBMs 3, 5 and 6 under the single umbrella of critical infrastructure resilience.

While the previous session had already seen a decline in the discussion of additional CBMs, only Iran circulated a working paper proposing a new CBM to ensure unhindered access to a secure ICT market for all, aiming to foster global trust and confidence. No other state engaged with this proposal; Germany merely remarked that it might be more appropriately framed as a norm, given its reference to expectations or obligations.

The deliberations on standardised templates further exemplify the subdued nature of this session’s CBM discussions. South Africa, with Brazil’s support, reiterated its proposal for a template encompassing a brief description of the assistance required, details of the cyber incident, acknowledgement of receipt by the requested state, and indicative response timeframes. Thailand emphasised the necessity for a flexible template, while Korea underscored that it should serve as a communication reference without imposing constraints on interactions. Finally, Kazakhstan reiterated its proposal to have specific templates for different scenarios, such as incident escalation, threat intelligence sharing and cyber capacity-building requests. The Secretariat is expected to produce such a standardised template by April 2025. In related matters, Mauritius proposed the development of secure communication platforms for exchanging information on cyber incidents.

This contrasts with the dynamic CBM landscape at the regional level, where numerous states shared their CBM implementations (the United Kingdom, Albania, Korea, Canada, Ethiopia, North Macedonia, Kenya, and the OSCE) often linked to regional initiatives and best practices (Tonga, Bosnia and Herzegovina, Thailand, Ghana, Brazil, Dominican Republic, Philippines). This further illustrates states’ eagerness to advance the operationalisation of CBMs.

The POC: Finally ripe for the picking?

As of the 10th session, 116 states have joined the Points of Contact (POC) Directory (an increase of five since December), registering nearly 300 diplomatic and technical POCs. The Secretariat shared conclusions from the December ping test and provided a detailed overview of the upcoming scenario-based exercise scheduled for 10–11 and 17–18 March 2025. The Russian Federation actively encouraged remaining member states to participate in the POC Directory, promoting its guidelines on designating UN technical POCs and supporting a UNIDIR seminar aimed at achieving universal participation in the directory.

While most states remained silent regarding the ping test outcomes and their experiences with the POC Directory, three nations expressed dissatisfaction. Russia reiterated concerns about the inactivity of certain POCs and the insufficient authority of some technical POCs, which hampers their ability to respond to Russian notifications—echoing points raised during the 9th session’s CBM discussions. Germany and France jointly addressed issues with a specific state’s use of the POC Directory, noting that their technical POCs received notifications about malicious cyber activities linked to IP addresses in their respective countries. They recommended redirecting these requests to appropriate national authorities; however, identical requests continued to be sent to their technical POCs. This behaviour, they argued, contradicts the principle that the POC Directory should complement existing CERT-to-CERT channels designed for such requests. 

Without directly referencing these situations, China observed that, given the voluntary nature of the POC Directory, member states are free to determine the functions of their POCs, as well as the types and channels of messages they handle. This scenario highlights a broader lack of clear, consensual understanding regarding the POC Directory’s intended use. Mauritius emphasised the need to define clear thresholds for reportable incidents, while Cuba stressed the importance of detailing circumstances under which information exchange should occur. On a side note, the EU proposed that the private sector could participate in the POC directory.

Towards a more integrated approach: CBMs and capacity-building

Most states reaffirmed that capacity-building is a prerequisite for CBM implementation (Kazakhstan, Tonga, Russia, Thailand, Malawi, Laos, Ghana). Cuba and India voiced their interest in integrating the POC Directory into the global portal for capacity-building, as a central access point and a core knowledge hub for resources. Pakistan argued that the POC Directory goes beyond crisis management and provides a foundation for broader collaboration, including capacity-building.

Capacity building: Positive feedback but uncertain objectives

Just like CBMs, the capacity-building agenda item is resolutely oriented towards pragmatic discussions, and the 10th session again proved to be a privileged forum for member states to share their national and regional practices and initiatives (the EU, Colombia, Singapore, Bosnia and Herzegovina, Poland, Korea, Thailand, Canada, Israel, Albania, Japan, Morocco, Oman, Ukraine, Russia).

Among these experiences, a significant number of states specifically highlighted the benefits of various fellowships (Kuwait, Iran), including the Women in International Security and Cyberspace Fellowship (Mauritius, Ghana, Albania, Kazakhstan, Democratic Republic of Congo, Samoa, Paraguay, El Salvador) and the UN-Singapore Fellowship (Mauritius, Ghana, Albania, Nigeria, Democratic Republic of Congo). In that vein, Nigeria and Kuwait proposed holding new fellowship programmes under the auspices of the UN, similar to other UN fellowships related to international security matters.

Cyber-capacity building on a budget

One main discussion item was the Secretariat’s paper on the Voluntary Fund. A significant number of states expressed their support for the fund (El Salvador, Colombia, South Africa, Rwanda, Morocco, Zimbabwe, Brazil, Kiribati, Cote d’Ivoire, Ecuador, Fiji and the Democratic Republic of Congo), and consensus largely emerged on the need not to duplicate existing funding initiatives and to reflect on the fund’s link with the World Bank Multi-Donor Trust Fund (Germany, European Union, Kuwait, Australia). France specifically questioned whether the UN was a fit structure to support such capacity-building activities, arguing that the organisation could be better positioned to play a role in linking existing initiatives.

Western countries shared their capacity-building initiatives and specifically addressed the issue of costs. The Netherlands voiced the need to consider the cost efficiency of the initiative, and Canada asked for a more detailed budget, given that the costs presented are higher than those for similar activities that Canada usually finances. Australia reminded the audience that a new trust fund does not mean new money and stated that it could not support the proposal in its current formulation.

A large share of countries nevertheless positioned themselves in favour of open contributions from interested stakeholders other than member states, such as the private sector, NGOs, academia or philanthropic foundations (Argentina, Paraguay, Malawi, Mauritius, Nigeria, Mexico). Yet Russia voiced its wariness about NGOs and companies sponsoring the fund, as they may attempt to exert pressure.

Cuba and Iran warned against the constraining aspect of the fund. Iran specified that the principles guiding capacity-building mentioned in paragraph 10 did not enjoy consensus among member states and warned against attempts to condition capacity-building activities on the adoption of norms.

A portal, sure – but what for?

A second pivotal discussion item was the Secretariat’s paper on the development of a dedicated portal for cooperation and capacity-building, based on a proposal made by India and on member states’ views. Again, positions were consensual on the idea of a portal (Colombia, United Kingdom, Morocco, Oman, Zimbabwe, Ecuador, Nigeria, El Salvador, South Africa, Rwanda). Consensus also emerged around the fact that it should not duplicate already existing portals and initiatives, such as the UNIDIR Cyber Policy Portal and the Global Forum on Cyber Expertise (GFCE) Cybil Knowledge Portal (Fiji, Mexico, Tonga, Latvia, Mauritius, Germany, France, Samoa, Indonesia, Switzerland, Brazil, Argentina, the Netherlands).

Some delegations tackled the issue in a very pragmatic way. Korea questioned whether simply including direct links to existing portals was appropriate (supported by the UK) and proposed a technical review of integrating the portal, including the POC Directory, into a single integrated platform (backed by Malaysia). Latvia reflected on potential administrative limitations and UN procurement rules concerning linkages with other websites, based on a previous IGF experience.

The Secretariat wrapped up this discussion by specifying that the sections pertaining to the technical and administrative requirements were coordinated with the ICT office in charge of UN-hosted platforms and websites, and encouraged member states to take a closer look at these sections. Still on pragmatic questions, Mauritius and India proposed that the portal be multilingual.

The level of publicity of the portal was also discussed. Korea and Kazakhstan proposed that the portal remain fully accessible to the public, while other states introduced nuance. The Netherlands asked for the POC Directory to remain accessible to member states only, whereas Cote d’Ivoire proposed that only modules 1 and 5 (respectively, the repository of documents and resources, and the platform for exchange of information, including the potential participation of non-governmental entities) be made public. India further suggested three levels of access: member states, stakeholders, and the general public.
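
India’s three-tier proposal can be pictured as a simple role-based access model. The sketch below is hypothetical: the module names, the tier ordering, and the mapping (modules 1 and 5 public, per Cote d’Ivoire; the POC Directory restricted to member states, per the Netherlands) merely illustrate how the proposals above could combine.

```python
from enum import IntEnum

class AccessLevel(IntEnum):
    PUBLIC = 1
    STAKEHOLDER = 2
    MEMBER_STATE = 3

# Hypothetical mapping of portal modules to the minimum access level required,
# loosely following the module numbering mentioned in the discussion.
MODULE_ACCESS = {
    "module_1_document_repository": AccessLevel.PUBLIC,
    "module_5_information_exchange": AccessLevel.PUBLIC,
    "poc_directory": AccessLevel.MEMBER_STATE,
    "capacity_building_calendar": AccessLevel.STAKEHOLDER,
}

def can_view(user_level: AccessLevel, module: str) -> bool:
    """A user may view a module if their tier meets the module's minimum."""
    return user_level >= MODULE_ACCESS[module]

# A stakeholder can read the public modules but not the POC Directory.
assert can_view(AccessLevel.STAKEHOLDER, "module_1_document_repository")
assert not can_view(AccessLevel.STAKEHOLDER, "poc_directory")
```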

A major point of contention remains the exact content of the portal. Some states reaffirmed an incremental approach to its content (Kazakhstan, the EU, Australia), starting with basic functionalities, without necessarily specifying what those basic functionalities should be. China and Russia specifically warned against using the portal to facilitate information sharing on responses to threats and incidents.

Indonesia suggested a dedicated section for stakeholders to share their own best practices, research papers and the like, whereas Russia asked for NGO contributions to be published for states’ information only. On a side note, Cote d’Ivoire proposed publishing an indicative quarterly or annual calendar alongside the monthly publication of capacity-building initiatives and events.

The future permanent mechanism: How to tackle capacity building

States also tackled how capacity-building discussions should be structured within the future permanent mechanism. Iran, Argentina, Brazil and Paraguay supported the proposal for a dedicated working group on capacity-building, as circulated in the Chair’s discussion paper. A vast majority of states defended a cross-cutting approach, with this agenda item being discussed across thematic groups (Tonga, Vanuatu, Canada, Kazakhstan, Kiribati, Ireland, Ukraine, Fiji).

Some delegates proposed mixed approaches. The EU and Australia shared the view that thematic groups can help identify gaps and specific challenges pertaining to capacity-building, and that these reflections can feed a horizontal capacity-building discussion in plenary. Indonesia suggested that the thematic groups were the place to focus on technical recommendations rather than duplicating high-level policy discussions, and also suggested establishing terms of reference to frame these discussions.

Finally, states expressed their support for the organisation of high-level panels such as the Global Roundtable on ICT Capacity Building held in May 2024 (the UK, Morocco, Zimbabwe, Kazakhstan, Ukraine, Germany). Thailand recommended that such high-level panels be held on a biannual basis, and Australia suggested considering them a ‘capacity-building exposition’. Canada argued that they should be held at levels other than ministerial to distinguish them from plenary work, and further proposed that they could be a venue for beneficiaries to meet organisations deploying capacity-building activities. The Chair recalled the initial scepticism around this initiative but recommended that the Final Report include a decision on the next Global Roundtable.

Regular institutional dialogue: Consensus distant

The agenda item on regular institutional dialogue captured the most attention at the 10th substantive session: more than 60 delegations spoke on this issue. This is not surprising: the current OEWG’s mandate ends in July, and the Chair still has no sense of a general consensus on what the future permanent mechanism will be. Countries’ statements showed that few states are ready to make concessions and be flexible in discussing the modalities of multistakeholder participation in the future permanent mechanism, as well as its architecture.

As delegations began to repeat their positions from last year, a sharp intervention from the Chair warned them that very little time is left until the OEWG’s mandate ends, and that if states do not want to disrupt a process that has been going on for more than 20 years, they must make an effort and consider where their positions can be flexible.

The Chair also cautioned against equating the future permanent mechanism with either the OEWG or the PoA, noting that some participants remain attached to these frameworks. Instead, the future permanent mechanism should be seen as a synthesis of various proposals, including elements from both the OEWG and the PoA. The Chair pointed out the high risk of not having a consensus on the future permanent mechanism in the end and ‘the risk is even higher than ever before in this 5-year process’. 

The long-running issue of multistakeholder modalities 

The problem of stakeholder participation remained the hottest one. Many European and South American states, as well as Canada, put together a joint proposal to make accreditation a more transparent process, with disclosure of the basis for objections and mechanisms to allow as many stakeholders as possible to participate. The main principle is ‘to have a voice, not a vote’. Their argument was that stakeholders can serve as experts, especially in thematic groups whose work requires a deeper dive into the issues on the table. Some states advocated giving stakeholders the floor during plenaries too.

On the contrary, Russia and other like-minded states insisted on keeping the already agreed OEWG modalities. The non-objection rule must remain in place, and this group of states sees the option of disclosing the reasons for an objection as a violation of a state’s sovereign right. They also oppose letting the Chair discuss the accreditation of a particular stakeholder with other states to overcome a veto by voting or any other procedure, and they dislike the idea of designating stakeholders who have received objections as provisional participants.

Another suggestion was to draw on already existing modalities for participation, with states recalling the Ad Hoc Committee on Cybercrime; Iran, however, said this model was not suitable, since the committee was a temporary body with a specific mandate and a limited working period.

The many proposals for thematic groups

The topic that brought the most variations to the discussion was the number and scope of dedicated thematic groups. Some of the proposals were:

  • to keep the ‘OEWG pillars’ structure and have the same groups, although this raised concerns about duplicating the plenaries;
  • to merge some groups and introduce new ones (the Chair’s proposal);
  • to have three cross-cutting thematic groups on resilience, cooperation, and stability (France);
  • to have three groups on threat prevention and response, the application of international law and existing and future norms, and capacity-building (the African group of states).

The majority of states favoured either a dedicated group on capacity-building or practical capacity-building discussions across the other groups to be created.

There was also a discussion on whether to create a dedicated group on international law or to combine international law with norms. The idea of combining them was criticised by the USA, Russia, Israel, and Germany, since it merges two distinct areas of binding and voluntary regulation. Switzerland suggested discussing international law as a cross-cutting issue through all groups, similar to capacity-building.

Additionally, there were proposals to create a dedicated group on the prevention of conflicts and a dedicated group on critical infrastructure, but neither attracted many supporters.

As for the French proposal, which was upheld by EU member states, the ‘cross-cutting policy-issue-focused working groups’ would go deeper into each OEWG pillar in a balanced way and then feed the results back into the plenary, which would be structured the same way as the current OEWG.

The Chair intervened in the middle of the discussion, asking delegates to stop thinking in a binary way (either a pillars approach or a cross-cutting one for the thematic groups) and to contemplate how to combine them.

Some states, as well as the Chair, recalled that the thematic groups do not have to be cemented right now: there is the option of shifting agendas, of creating ad hoc groups, and of rearranging the groups after the first review conference of the future permanent mechanism.

Overall, the general impression is that states are inclined to have three groups rather than five to meet the concerns of smaller delegations. 

The format of thematic groups: hybrid or in-person

Delegations also expressed concerns about whether the format will be hybrid or in-person only. Both options have advantages, but some states are worried about delegations’ limited resources for attending group meetings and plenaries in New York, while others question whether a hybrid format would be suitable for formal meetings and would allow for closer bilateral and group engagement.

What’s next?

With regular institutional dialogue remaining the most pressing and complex issue on the OEWG’s agenda, the coming two months will require heavy lifting from the Chair and his secretariat. In March and April, the Chair will reflect on the thematic groups and prepare a revised set of modalities, followed by a town hall meeting to discuss them. The Chair will also reflect on modalities for stakeholder participation, followed by a separate town hall on that topic.

The zero draft of the Final Report will be made available in May, after which one or more virtual town hall meetings to discuss it will be held. The OEWG is expected to adopt its Final Report at its eleventh substantive meeting in July.

We used our DiploAI system to generate reports and transcripts from the session. Browse them on the dedicated page.

Interested in more OEWG? Visit our dedicated page:

UN Open-ended Working Group (OEWG)
This page provides detailed and real-time coverage of cybersecurity, peace and security negotiations at the UN Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025.

Basketball spirit through cutting-edge technology: What did the NBA Tech Summit deliver?

On Valentine’s Day in San Francisco, the NBA Tech Summit took place ahead of the NBA All-Star weekend, showcasing the latest trends in sports, media, and technology. With the help of NVIDIA CEO Jensen Huang and legendary Golden State Warriors coach Steve Kerr, the audience was introduced to the evolution of event broadcasting, companies set to make significant investments in the coming years, and the future of basketball as a sport.

The panels also included renowned basketball experts, media figures, and former NBA players. A common consensus emerged: robotics and AI will reshape the sport as we know it and significantly help athletes achieve far better results than ever before.

However, this is not exactly a novelty, as many innovations were presented ahead of the Paris Olympics, where certain programmes helped analysts and audiences follow their favourite events in greater detail.

The future of the NBA and the role of fans during matches

The same idea applies to the NBA, particularly with the integration of augmented reality (AR) and a feature called ‘Tabletop’, which allows the display of a virtual court with digital avatars tracking player movements in real time.

This feature generated the most interest from the audience, as it enables viewers to watch matches from various angles, analyse performances in real time, access interactive player data, and simulate alternative outcomes, essentially exploring how the game would have unfolded had different decisions been made on the court.

An important aspect of these innovations is that fans have the opportunity to vote for competition participants, ask real-time questions, and take part in virtual events designed to keep them engaged during and after match broadcasts.

AI plays a crucial role in these systems, primarily by analysing strategies and performances, allowing coaches and players to make better-informed decisions in key moments of the game.

Player health as a priority

With a packed schedule of matches, additional tournaments, and extensive travel, professional basketball players face daily physical challenges. To help preserve their health, new technologies aim to minimise potential injuries.

Wearable health-tracking sensors, embedded in equipment to collect data on physical parameters, were introduced at the NBA Summit. This technology provides medical teams with real-time insights into players’ conditions, helping prevent potential injuries.

Biometric sensors, motion-analysis accelerometers, injury-prevention algorithms, dehydration and fatigue tracking, and shoe sensors for load analysis are just some of the innovations in this field.
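
As a toy illustration of how such sensor streams might feed an injury-prevention alert, consider the sketch below; the field names and thresholds are invented for illustration, and real systems would be calibrated per player.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    player_id: str
    heart_rate_bpm: int     # wearable biometric sensor
    load_newtons: float     # shoe sensor measuring impact load
    hydration_pct: float    # estimated hydration level (0.0-1.0)

def injury_risk_flags(r: SensorReading) -> list[str]:
    """Return human-readable warnings when illustrative thresholds are crossed."""
    flags = []
    if r.heart_rate_bpm > 190:
        flags.append("fatigue: sustained heart rate above threshold")
    if r.load_newtons > 2500:
        flags.append("load: impact forces suggest overuse risk")
    if r.hydration_pct < 0.90:
        flags.append("dehydration: below safe hydration level")
    return flags

reading = SensorReading("player-23", heart_rate_bpm=195, load_newtons=2600, hydration_pct=0.88)
for warning in injury_risk_flags(reading):
    print(warning)
```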

Ultra cameras, ultra broadcasts, ultra experience

For fans of high-resolution, interactive broadcasts, the latest technological advancements offer new viewing experiences. While some of these are still in the final stages of development, fans can already enjoy Ultra HD 8K and 360-degree cameras, along with the highly anticipated ‘player cam’ perspective, which allows for close-up tracking of individual players.

It is also possible to independently control the camera during matches, offering a complete view of the court and arena from every possible angle. Additionally, matches can be broadcast as holograms, providing a new dimension in 3D space on specialised platforms.

The integration of 5G technology ensures faster and more stable transmissions, enabling high-resolution streaming without delays.

Fewer mistakes, less stress

Refereeing mistakes have always been part of the sport, influencing match outcomes and shaping the history of one of the world’s most popular games. In response, the NBA has sought to minimise errors through Hawk-Eye technology for ball and boundary tracking.

A multi-camera system monitors the ball to determine whether it has crossed the line, touched the boundary, or was released before the shot clock expired. AI also analyses player contact in real time, suggesting potential fouls for referees to review.

Beyond these features, the NBA now operates a centralised Replay Centre, offering detailed analysis of controversial situations, where AI plays a crucial role in providing recommendations for quicker decision-making. Additional innovations include hoop sensors, audio analysis for detecting unsportsmanlike fouls, and more.

Environmental sustainability and awareness

As an organisation reliant on cutting-edge technology, the NBA is also focused on environmental awareness, which is increasingly becoming a key aspect of the league. Modern arenas utilise solar energy, energy-efficient lighting, and water recycling systems, reducing electricity consumption and waste.

Digital tickets and contactless payments contribute to sustainability efforts, particularly through apps that enable quicker and more eco-friendly entry to arenas and access to various services.

Partnerships with environmental organisations are a crucial part of the NBA’s sustainability initiatives, with collaborations including the Green Sports Alliance and the NRDC. These efforts aim to reduce the environmental impact of events while enhancing the fan experience.

For basketball fans (and followers of other sports adopting similar advancements), the most important takeaway is that the fundamental rules and essence of the game will remain unchanged. Despite the inevitable technological progress, the core spirit of basketball, established in Springfield in 1891, will continue to be preserved.

Overview of AI policy in 15 jurisdictions

1. CHINA

Summary

China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant data resources. Although no single, overarching AI law (akin to the EU AI Act) is in place, the country has introduced a multilayered regulatory framework – combining data protection, copyright, AI-specific provisions, and ethical guidelines – to balance technological innovation with national security, content governance, and social stability.

AI landscape 

China’s regulatory landscape for AI is anchored by several core laws and a growing portfolio of AI-specific rules. At the core of this framework are data protection and copyright laws, which provide the legal baseline for AI deployments. 

The Personal Information Protection Law (PIPL), enacted in 2021, serves as a direct parallel to the EU’s General Data Protection Regulation (GDPR) by placing strict obligations on how personal data is collected and handled. Significantly, and unlike the GDPR, it clarifies that personal information already in the public domain can be processed without explicit consent as long as such use does not unduly infringe on individuals’ rights or go against their explicit objections. The PIPL also addresses automated decision-making, explicitly barring discriminatory or exploitative algorithmic practices, such as charging different prices to different consumer groups without justification.

Copyright considerations further shape the development of AI. Under the Chinese Copyright Law, outputs generated entirely by AI, devoid of human originality, cannot be granted copyright protection. Yet courts have repeatedly recognised that when users meaningfully contribute creative elements through prompts, they can secure copyrights in the resulting works, as illustrated by rulings in cases like Shenzhen Tencent v Shanghai Yingxun. At the same time, developers of generative AI systems have faced legal liabilities when their algorithms inadvertently produce content that violates intellectual property or personality rights, exemplified by high-profile instances involving the unauthorised use of the Ultraman character and imitations of distinctive human voices.

Over the past few years, these broader legal anchors have been reinforced by regulations specifically tailored for algorithmic and generative AI systems. One of the most notable is the Provisions on the Management of Algorithmic Recommendations in Internet Information Services of 2021, which target services deploying recommendation algorithms for personalised news feeds, product rankings, or other user-facing suggestions. Providers deemed capable of shaping public opinion must register with authorities, disclose essential technical details, and implement robust security safeguards. These requirements extend to ensuring transparency in how content is recommended and offering users the option to disable personalisation altogether.

In 2022, China introduced the Provisions on the Administration of Deep Synthesis Internet Information Services to address AI-generated synthetic media. These requirements obligate service providers to clearly label media that has been artificially generated or manipulated, particularly when there is a risk of misleading the public. To facilitate accountability, users must undergo real-name verification, and any provider offering a service with a marked capacity to influence public opinion or mobilise society must conduct additional security assessments.

Interim Measures for the Management of Generative Artificial Intelligence Services, which came into effect on 15 August 2023, apply to a broad range of generative technologies, from large language models to advanced image and audio generators. Led by the Cyberspace Administration of China (CAC), these rules require compliance with existing data and intellectual property laws, including obtaining informed user consent for personal data usage and engaging in comprehensive data labelling. Providers must also detect and block illegal or harmful content, particularly anything that might jeopardise national security, contravene moral standards, or infringe upon IP rights, and are expected to maintain thorough complaint mechanisms and special protective measures for minors. 

Where public opinion could be swayed, providers are required to file details of their algorithms for governmental review and may face additional scrutiny if they are deemed highly influential.

Building on these interim measures, the Basic Safety Requirements for Generative AI Services, which came into effect in 2024, took a more granular approach to technical controls. Issued by the National Information Security Standardization Technical Committee (TC260), these requirements outline 31 risk categories ranging from content that undermines socialist core values to discriminatory or infringing materials.

Under these guidelines, training data must be meticulously checked via random spot checks of at least 4,000 items from the entire dataset, to ensure that at least 96 percent is free from illegal or unhealthy information, defined as information containing any of the 29 safety risks listed in the Annex.
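
As a back-of-the-envelope illustration of this spot-check rule, the sketch below samples 4,000 items and applies the 96 percent threshold; the toy classifier and corpus are, of course, our own assumptions.

```python
import random

SAMPLE_SIZE = 4_000      # minimum number of items to spot-check
PASS_THRESHOLD = 0.96    # at least 96 percent must be free of flagged content

def spot_check(dataset: list[str], is_unhealthy) -> bool:
    """Randomly sample the dataset and check that the share of clean
    items meets the threshold described in the TC260 requirements."""
    sample = random.sample(dataset, min(SAMPLE_SIZE, len(dataset)))
    clean = sum(1 for item in sample if not is_unhealthy(item))
    return clean / len(sample) >= PASS_THRESHOLD

# Toy corpus: 1% of items carry a placeholder marker for flagged content.
corpus = ["ordinary text"] * 9_900 + ["FLAGGED content"] * 100
print(spot_check(corpus, lambda item: "FLAGGED" in item))  # True: ~99% clean
```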

Providers are similarly obligated to secure explicit consent from individuals whose personal data might be used in model development. If a user prompt is suspected of eliciting unlawful or inappropriate outputs, AI systems must be capable of refusing to comply, and providers are expected to maintain logs of such refusals and accepted queries.
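
The refusal-and-logging expectation can likewise be sketched in a few lines. Everything here is illustrative: `looks_unlawful` stands in for whatever screening a provider actually runs, and `generate_response` for the model call.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-compliance")

def generate_response(prompt: str) -> str:
    # Stand-in for the actual model call.
    return f"[model output for: {prompt}]"

def handle_prompt(prompt: str, looks_unlawful) -> str:
    """Refuse prompts suspected of eliciting unlawful output and keep a
    log entry for refused and accepted queries alike."""
    refused = bool(looks_unlawful(prompt))
    log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "refused": refused,
    }))
    if refused:
        return "This request cannot be fulfilled."
    return generate_response(prompt)

print(handle_prompt("summarise today's weather", lambda p: "weapon" in p))
```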

Alongside these binding regulations, the Chinese government and local authorities have published a range of ethical and governance guidelines. The Ethical Norms for New Generation AI, released in 2021 by the National New Generation AI Governance Specialist Committee, articulate six guiding principles, including respect for human welfare, fairness, privacy, and accountability.

While these norms do not themselves impose concrete penalties, they have guided subsequent legislative efforts. In a more formal measure, the 2023 Measures for Scientific and Technological Ethics Review stipulate that institutions engaging in ethically sensitive AI research, particularly those working on large language models with the potential to sway social attitudes, must establish ethics committees.

These committees are subject to national registration, and violations can result in administrative, civil, or even criminal penalties. Local governments, such as those in Shenzhen and Shanghai, have further set up municipal AI ethics committees to oversee particularly high-risk AI projects, often requiring providers to conduct ex-ante risk reviews before introducing new systems.

Under the binding frameworks, providers can face financial penalties, service suspension, or even criminal proceedings if they fail to comply with content governance or user rights obligations.

In 2023, China’s State Council announced that it would draft an AI law. Since then, however, China has halted efforts to unify its AI legislation, opting instead for a piecemeal, sector-focused regulatory strategy that continues to evolve in response to emerging technologies.

2. AUSTRALIA

Summary

Australia takes a principles-based approach to AI governance, blending existing laws, such as privacy and consumer protection, with voluntary standards and whole-of-government policies to encourage both innovation and public trust. There is currently no single, overarching AI law; rather, the government has proposed additional, risk-based mandatory guardrails – especially for ‘high-risk’ AI uses – and issued a policy to ensure responsible adoption of AI across all federal agencies.

AI landscape

  • The voluntary AI Safety Standard (2024) introduces ten guardrails, such as accountability, transparency, and model testing, that guide organisations toward safe AI practices.

There is no single all-encompassing AI law in Australia. The government has pursued a flexible approach that builds upon privacy protections, consumer safeguards, and voluntary principles while moving steadily towards risk-based regulation of high-impact AI applications. 

At the core of Australia’s legal baseline is the Privacy Act of 1988, which has been under review to address emerging challenges, including AI-driven data processing and automated decision-making. Under updated guidance, the Privacy Act now clarifies that any personal information handled by an AI system, including inferred or artificially generated data, falls under the Australian Privacy Principles, meaning organisations must lawfully and fairly collect it (with consent for sensitive data), maintain transparency about AI usage, ensure accuracy, and uphold stringent security and oversight measures. Alongside the Privacy Act, the Consumer Data Right facilitates secure data sharing in sectors such as finance and energy, allowing AI-driven products to leverage richer data sets under strict consent mechanisms. 

From a consumer protection standpoint, the Australian Consumer Law, enforced by the Australian Competition and Consumer Commission (ACCC), prohibits misleading or unfair conduct. This has occasionally encompassed AI-driven pricing or recommendation algorithms, as exemplified in the ACCC v Trivago case involving deceptive hotel pricing displays.

Various sectors impose complementary rules. The Online Safety Act 2021 addresses harmful or exploitative content, which may include AI-generated deepfakes. The Copyright Act governs the permissible scope of AI training data, while the Corporations Act 2001 influences AI tools used in financial services, such as algorithmic trading and robo-advice.

The government has introduced several AI-specific guidelines and policies to add to these laws: 

  • The Voluntary AI Safety Standard (2024) was issued by the Department of Industry, Science and Resources (DISR) and covers accountability, data governance, model testing, and other risk management practices to help organisations innovate responsibly.

The proposed risk-based mandatory guardrails would distinguish two categories of systems:

Category 1 AI: foreseeable uses of AI with known but manageable risks.

Category 2 AI: more advanced or unpredictable AI systems with the potential for large-scale harm. Enforcement mechanisms include licensing, registration, or mandatory ex-ante approvals.

A variety of additional AI initiatives complement these policies, such as the Australian Framework for Generative Artificial Intelligence (AI) in Schools, which sets guidelines for safe generative AI adoption in education, covering transparency, user protection, and data security; the AFP Technology Strategy that sets guidelines for AI-based tools in federal law enforcement; and the Medical Research Future Fund that invests in AI-driven healthcare pilots, such as diagnostics for skin cancer and radiological screenings.

Internationally, Australia aligns with the Global Partnership on Artificial Intelligence (GPAI) and the OECD AI Principles, actively collaborating with global partners on AI policy and digital trade.

3. SWITZERLAND

Summary

Switzerland follows a sector-focused, technology-neutral approach to AI regulation, grounded in strong data protection and existing legal frameworks for finance, medtech, and other industries. Although the Federal Council’s 2020 Strategy on Artificial Intelligence sets ethical and societal priorities, there is no single, overarching AI law in force.

AI landscape

  • Current AI uses fall under traditional Swiss laws. Switzerland has not enacted an overarching AI law, relying instead on sectoral oversight and cantonal initiatives.
  • Oversight responsibilities are distributed among several federal entities and occasionally supplemented by cantonal authorities. For instance, the Federal Data Protection and Information Commissioner (FDPIC) addresses privacy concerns, while the Financial Market Supervisory Authority (FINMA) exercises administrative powers to regulate financial institutions, including the authority to revoke licenses for noncompliance. The Federal Council sets the AI policy agenda. Cantonal governments, for their part, may provide frameworks for local pilot programmes, fostering public-private partnerships and encouraging best practices in AI adoption.
  • The Strategy on Artificial Intelligence (2020) emphasises human oversight, data governance, and collaborative R&D to position Switzerland as an innovation hub for finance, medtech, robotics, and precision engineering.

At the core of Swiss data protection is the Revised Federal Act on Data Protection (FADP), which took effect in 2023. It imposes strict obligations on entities that process personal data, extending to AI-driven activities. Under Article 21, the FADP places particular emphasis on automated decision-making, urging transparency when significant personal or economic consequences may result. The FDPIC enforces the law, carrying out investigations and offering guidance, though it lacks broad direct penalty powers.

Beyond data privacy, AI solutions must comply with existing sectoral regulations. In healthcare, the Therapeutic Products Act and the corresponding Medical Devices Ordinance govern AI-based diagnostic tools, with Swissmedic classifying such systems as medical devices when applicable.

In finance, FINMA oversees AI applications in robo-advisory, algorithmic trading, and risk analysis, regularly issuing circulars and risk monitors that highlight expectations for transparency, reliability, and robust risk controls. Other domains, like autonomous vehicles and drones, fall under the jurisdiction of the Federal Department of the Environment, Transport, Energy, and Communications (DETEC), which grants pilot licenses and operational approvals through agencies such as the Federal Office of Civil Aviation (FOCA).

Liability, intellectual property, and non-discrimination matters are similarly addressed through existing legislation. The Product Liability Act, the Civil Code, and the Code of Obligations govern contracts and liability for AI products and services, while the Copyright Act and the Patents Act regulate AI training data usage and software IP rights. The Gender Equality Act and the Disability Discrimination Act may apply if AI outputs result in systematic bias or exclusion.

At a local level, several cantonal innovation hubs, such as Zurich’s Innovation Sandbox for Artificial Intelligence, support pilot projects and produce policy feedback on emerging technologies. The Swiss Supercomputer Project – a collaboration among national labs, Hewlett Packard Enterprise, and NVIDIA – provides high-performance computing resources to bolster AI research in areas ranging from precision engineering to climate simulations. In the same vein, the Swiss AI Initiative is a national effort led by ETH Zurich and the Swiss Federal Institute of Technology in Lausanne (EPFL), powered by the world’s most advanced GPU supercomputer and uniting experts across Switzerland to develop large-scale, domain-specific AI models.

The Digital Society Initiative at the University of Zurich focuses on interdisciplinary research and public engagement, exploring the ethical, social, and legal impacts of digital transformation. 

Switzerland engages with the OECD AI Principles and participates in the Council of Europe Committee on Artificial Intelligence. In November 2023, the Federal Council instructed the Federal Department of the Environment, Transport, Energy and Communications and the Federal Department of Foreign Affairs to produce an overview of potential regulatory approaches for artificial intelligence, emphasising transparency, traceability, and alignment with international standards such as the Council of Europe’s AI Convention and the EU’s AI Act.

In February 2025, the two departments presented a plan proposing sector-specific legislative changes in areas such as data protection and non-discrimination, along with non-binding measures such as self-declarations and industry-led solutions, to protect fundamental rights, bolster public trust, and support Swiss innovation.

4. TÜRKIYE

Summary

Türkiye strives to become a major regional AI hub by investing in industrial applications, defence innovation, and a rapidly growing tech workforce. While there is no single, overarching AI law at present, a Draft AI Bill introduced in June 2024 is under parliamentary review and, if enacted, will establish guiding principles on safety, transparency, accountability, and privacy, especially for high-risk AI like autonomous vehicles, medical diagnostics, and defence systems.

Existing sectoral legislation, privacy rules under the Law on the Protection of Personal Data, and the National Artificial Intelligence Strategy (2021–2025) shape responsible AI use across industries.

AI landscape

  • The National Artificial Intelligence Strategy (2021–2025) is a roadmap for talent development, data infrastructure, ethical frameworks, and AI hubs to spark local innovation.
  • The Draft AI Bill, proposed in June 2024, is pending parliamentary approval. The Draft proposes broad principles, such as safety, transparency, equality, accountability, and privacy, as well as a registration requirement for certain high-risk AI use cases.
  • The Personal Data Protection Law, overseen by KVKK, underpins AI-driven processing of personal data and mandates informed consent and data minimisation.

There is no single, overarching AI law; sectoral regulations play a key role. In banking and finance, the Banking Regulation and Supervision Agency (BRSA) supervises AI-driven credit scoring, risk analysis, and fraud detection, proposing rules mandating explicit consent and algorithmic fairness audits. The defence sector, led by state-owned enterprises such as TUSAŞ and ASELSAN, deploys autonomous drones and advanced targeting systems, although official details remain classified for national security reasons. The automotive industry invests in connected and self-driving vehicles – particularly through TOGG, Türkiye’s national electric car project – aligning with the National Artificial Intelligence Strategy’s push for advanced manufacturing.

The Law on Consumer Protection, the E-commerce Law, and the Turkish Criminal Code collectively impose transparency, fairness, and liability standards on AI-driven advertising, misinformation, and automated decision-making, while the Industrial Property Code governs the permissible use of copyrighted data for AI training and clarifies patentability criteria for AI-based innovations.

While not an EU member, Türkiye often harmonises regulations with EU norms to facilitate trade and ensure cross-border legal compatibility. It also engages in the Global Partnership on Artificial Intelligence (GPAI) and participates in the Council of Europe Committee on Artificial Intelligence.

5. MEXICO

Summary

Mexico does not have a single, overarching AI law or a fully institutionalised national strategy. The 2018 National AI Strategy – commissioned by the British Embassy in Mexico, developed by Oxford Insights and C Minds, and informed by government and expert input – has been influential in articulating principles for ethical AI adoption, open data, and talent development.

However, it has not been officially enforced as a national plan. AI adoption in the private sector remains limited, although the public sector ranks relatively high in Latin America for AI integration. Data protection laws were previously enforced by the National Institute for Transparency, Access to Information, and Personal Data Protection (INAI), which was eliminated in December 2024 due to budgetary constraints. These responsibilities now fall under the Secretariat of Anti-Corruption and Good Governance (SABG).

AI landscape

  • The 2018 National AI Strategy outlined fairness, accountability, and a robust AI workforce, but remains unevenly implemented.

At the heart of Mexico’s data governance is the Federal Law on the Protection of Personal Data Held by Private Parties (2010). This law imposes consent, transparency, and security obligations on any entity handling personal data, including AI-driven projects. Until December 2024, INAI enforced these rules; enforcement responsibilities have since been transferred to the Secretariat of Anti-Corruption and Good Governance. Although its powers were primarily focused on privacy, INAI periodically offered guidance on best practices for AI-based solutions, such as public chatbots and e-commerce platforms.

Beyond privacy, other laws – such as the Consumer Protection Law and the E-Commerce Law – can indirectly govern AI use, particularly when automated tools influence marketing, pricing, or other consumer-facing decisions.

Copyright and IP regulations apply to AI developers, especially regarding training data usage and patent filings. For training data, developers must obtain proper licences or use public-domain material to avoid potential copyright infringement when training AI models. Patents require a genuine technical solution and novelty, and AI cannot be named as the inventor. Mexico accounts for a significant share of AI patent applications in Latin America, alongside Brazil.

Mexico’s public sector ranks third in Latin America in terms of AI integration, with pilot projects in:

  • healthcare (AI-based triage and diagnostics);
  • agriculture (precision farming via drones);
  • municipal services (chatbots and data analytics tools).

Nonetheless, private-sector adoption remains modest, scoring below the regional average in the Latin American AI Index; critics attribute this to Mexico’s relatively low R&D spending, fragmented policy environment, and insufficient incentives for businesses.

6. INDONESIA

Summary

While no single, overarching AI law is in place, Indonesia’s Ministry of Communication and Digital Affairs has announced a forthcoming AI regulation. Currently, the Personal Data Protection Law (2022) provides an important legal foundation for AI-related personal data processing.

Key institutions, including the Ministry of Communication and Information Technology (Kominfo) and the National Research and Innovation Agency (BRIN), jointly shape AI policies and promote R&D initiatives, with further sector-specific guidelines emerging at both the national and provincial levels.

Indonesia envisions AI as a driver of national development, aiming to strengthen healthcare, education, food security, and public services through the 2020–2045 Masterplan for National Artificial Intelligence (Stranas KA).

AI landscape

  • The Stranas KA (2020–2045) is a long-term roadmap dedicated to setting ethical AI goals, boosting data infrastructure, cultivating local capacity, and encouraging global partnerships.
  • The Personal Data Protection Law (2022) establishes consent, transparency, and data minimisation requirements, backed by Kominfo’s authority to impose administrative fines or suspend services.

In January 2025, Indonesia’s Ministry of Communication and Digital Affairs announced a forthcoming AI regulation that will build on guidelines emphasising transparency, accountability, human rights, and safety. Minister Meutya Hafid assigned Deputy Minister Nezar Patria to draft this regulation, as well as to gather stakeholder input across sectors such as education, health, infrastructure, and financial services.

Currently, Indonesia’s AI governance is anchored by strategic planning under the Stranas KA (2020–2045) and the Personal Data Protection Law (2022). Existing regulations, along with provincial-level guidelines and interministerial collaboration, guide the adoption of AI systems across multiple industries.

To bolster cybersecurity and data protection, the National Cyber and Encryption Agency sets additional security standards for AI implementations in critical sectors.

The Stranas KA (2020–2045) provides short-term milestones (2025) and long-term goals (2045) aimed at constructing a robust data infrastructure, prioritising ethical AI, and building a large talent pool. Five national priorities structure these efforts:

  • AI solutions for telemedicine, remote diagnostics, and hospital administration;
  • automating public services with chatbots and data analytics;
  • upskilling and training for a domestic AI workforce;
  • precision agriculture, pest detection, and yield forecasting;
  • AI for traffic management, urban planning, and public safety.

The Stranas KA provides only broad principles rather than explicit, enforceable audit mandates, covering areas such as data handling, model performance, and ethical compliance, so formal requirements remain relatively limited.

Certain provincial governments have issued draft guidelines for AI usage in local services, including chatbots for administrative tasks and agritech solutions that support smallholder farmers. These guidelines typically incorporate privacy measures and user consent requirements, aligning with the Personal Data Protection Law.

Indonesia cooperates with ASEAN partners on cross-border digital initiatives. During its 2022 G20 presidency, Indonesia spotlighted AI as a tool for inclusive growth, focusing on bridging the digital divide.

7. EGYPT

Summary

Although there is no single, overarching AI law, the Egypt National Artificial Intelligence Strategy (2020) provides a roadmap for research, capacity development, and foreign investment in AI, while the Personal Data Protection Law No. 151 of 2020 governs personal data used by AI systems. In January 2025, President Abdel Fattah El-Sisi launched the updated 2025–2030 National AI Strategy, aiming to grow the ICT sector’s contribution to GDP to 7.7% by 2030, establish 250+ AI startups, and develop a talent pool of 30,000 AI professionals. The new strategy also announces the development of a national foundational model, including a large-scale Arabic language model, as a key enabler for Egypt’s AI ecosystem.

With multiple pilot projects, ranging from AI-assisted disease screening to smart city solutions, Egypt is laying the groundwork for broader AI deployment, with the Data Protection Authority providing oversight of AI-driven data processing. The Ministry of Communications and Information Technology (MCIT) spearheads AI policy, focusing on AI applications in healthcare, finance, agriculture, and education.

AI landscape

  • The Ministry of Communications and Information Technology (MCIT) leads Egypt’s AI efforts, coordinating with other ministries on digital transformation and legislative updates. The Data Protection Authority can levy fines or administrative measures against noncompliant entities, while the Central Bank of Egypt supervises AI-based credit scoring and fraud detection in financial services.
  • The 2020 National Artificial Intelligence Strategy established strategic goals for AI research, workforce development, and partnerships with global tech players, aligning with the Vision 2030 framework. The AI Strategy acknowledged non-discrimination and responsible usage, though enforcement mostly fell under existing data protection measures.
  • The newly introduced 2025–2030 National Artificial Intelligence Strategy builds on the first plan, with a focus on inclusive AI, domain-specific large language models, and stronger alignment with the Digital Egypt initiative.
  • The Personal Data Protection Law No. 151 of 2020 requires consent, data security, and transparency in automated processing, enforced by the Data Protection Authority.
  • Healthcare initiatives deploy AI-driven disease screening and telemedicine, expanded during public health emergencies. Agriculture pilots focus on yield prediction and irrigation optimisation. Smart cities apply AI in traffic management and public safety. Education reforms integrate AI curricula in universities, coordinated by MCIT and the Ministry of Higher Education.

As part of the 2025–2030 plan, Egypt is re-emphasising ethical AI, with additional guidelines under the Egyptian Charter for Responsible AI (2023) and plans for domain-specific AI regulations. The strategy also aims to strengthen AI infrastructure with next-generation data centres, robust 5G connectivity, and sustainable computing facilities.

AI adoption aligns closely with the overarching Egypt Vision 2030 framework, highlighting the role of AI in socio-economic reforms.

8. MALAYSIA

Summary

Malaysia aims to become a regional AI power through government-led initiatives such as the Malaysia Artificial Intelligence Roadmap (2021–2025) and the MyDIGITAL blueprint. While there is currently no single, overarching AI legislation, the National Guidelines on AI Governance and Ethics (2024) serve as a key reference point for responsible AI development. Established in December 2024, the National AI Office now centralises policy coordination and is expected to propose regulatory measures for high-stakes AI use cases.

AI landscape

  • The Malaysia Artificial Intelligence Roadmap (2021–2025) outlines talent building, ethical guidelines, and R&D priorities spanning sectors like finance, healthcare, and manufacturing.
  • The National Guidelines on AI Governance and Ethics (2024) promote seven key principles – fairness, reliability/safety, privacy/security, inclusiveness, transparency, accountability, and human well-being – and clarify stakeholder obligations for end users, policymakers, and developers.

In addition to these frameworks, sectoral bodies impose further requirements:

  • Bank Negara Malaysia (BNM) oversees AI in finance, emphasising fairness and transparency for credit scoring and fraud detection tools.

Major enterprises leverage AI for e-services, medical diagnostics, manufacturing optimisation, and real-time analytics.

9. NIGERIA

Summary

While no single, overarching AI law is in place, the Nigeria Data Protection Act (NDPA) provides an important legal foundation for AI-related personal data processing. Key institutions, including the Federal Ministry of Communications, Innovation and Digital Economy (FMCIDE) and the National Information Technology Development Agency (NITDA), shape Nigeria’s AI policy framework and encourage responsible adoption.

AI landscape

Nigeria’s NDPA applies to AI to some extent, as its provisions demand consent, data minimisation, and the possibility of human intervention for decisions with significant personal impact. The Nigeria Data Protection Commission (NDPC) has the authority to impose penalties on violators, and the Securities and Exchange Commission (SEC) requires robo-advisory firms to adopt safeguards against algorithmic errors or bias. The Nigerian Bar Association issued its Guidelines for the Use of Artificial Intelligence in the Legal Profession in Nigeria in 2024, emphasising data privacy, human oversight, and transparency in AI-driven decisions.

In 2023, Nigeria joined the Bletchley Declaration on AI, pledging to cooperate internationally on responsible AI development.

10. KENYA

Summary

While no single, overarching AI law is in place, Kenya’s Data Protection Act (2019) offers a foundational framework for AI-related personal data usage, while existing ICT legislation, sector-specific guidelines, and taskforce reports further shape AI governance. The Ministry of Information, Communications, and the Digital Economy (MoIC) steers national digital transformation, supported by the Kenya ICT Authority’s oversight of ICT projects and the Office of the Data Protection Commissioner (ODPC) enforcing privacy provisions. The National Artificial Intelligence (AI) Strategy (2025–2030) aims to consolidate these diverse efforts, focusing on risk management, ethical standards, and broader AI-driven economic growth.

AI landscape

  • Kenya’s National Artificial Intelligence Strategy for 2025–2030 aims to drive inclusive, ethical, and innovation-driven AI adoption across key sectors – agriculture, healthcare, education, and public services – by establishing robust infrastructure, governance, and talent-development frameworks to address national challenges and foster sustainable growth.
  • The Ministry of Information, Communications, and the Digital Economy (MoIC) sets high-level policy, reflecting AI priorities in Kenya’s broader digital agenda. The Kenya ICT Authority coordinates pilot projects, manages government ICT initiatives, and promotes AI adoption across sectors.
  • The Data Protection Act (2019) mandates consent, data minimisation, and user rights in automated decision-making. The Office of the Data Protection Commissioner (ODPC) enforces these rules, investigating breaches and imposing sanctions – particularly relevant to AI-driven digital lending and fintech solutions.
  • The Distributed Ledgers Technology and Artificial Intelligence Taskforce (2019) proposed ethics guidelines, innovation sandboxes, and specialised oversight for distributed ledger technology, AI, the internet of things, and 5G wireless technology. The taskforce aimed to balance consumer and human rights protection with promoting innovation and market competition.

The Data Protection Act (2019) remains central, requiring accountability and consent for AI-driven profiling – particularly in high-impact domains like micro-lending, where machine-learning models analyse creditworthiness. 

The MoIC has integrated AI objectives into national strategies for e-government, supporting pilot projects such as chatbot-based public services and resource allocation.

The National AI Strategy aims to harmonise Kenya’s diverse AI efforts, addressing potential algorithmic bias, auditing standards, and the practicalities of responsible AI, particularly in healthcare, agritech, and fintech. To achieve this, the strategy sets out a clear governance framework, establishes multistakeholder collaboration platforms, and develops robust guidelines that promote transparent, ethical, and inclusive AI development across these priority sectors.

The government collaborates with global organisations such as GIZ, the World Bank and UNDP, and regional partners such as Smart Africa, with the aspiration of becoming an AI hub in Africa.

11. ARGENTINA

Summary

While no single, overarching AI law is in place, Data Protection Law No. 25.326 (Habeas Data, 2000) provides an important baseline for AI-related personal data use, enforced by the Argentine Agency of Access to Public Information (AAIP). The government has developed a National AI Plan and has issued Recommendations for Trustworthy Artificial Intelligence (2023) to guide ethical AI adoption – especially within the public sector. Academic institutions, entrepreneurial tech clusters in Buenos Aires and Córdoba, and partnerships with multinational firms support Argentina’s growing AI ecosystem.

AI landscape

  • The National Artificial Intelligence Plan outlines high-level goals for ethical, inclusive AI development aligned with the country’s economic and social priorities.
  • The Data Protection Law No. 25.326 (Habeas Data, 2000) requires consent, transparency, and data minimisation in automated processing. The AAIP can sanction entities that misuse personal data, including through AI-driven profiling.
  • Recommendations for Trustworthy Artificial Intelligence (2023), approved by the Undersecretariat for Information Technologies, promote human-centred AI in public-sector projects, emphasising ethics, responsibility, and oversight.

Argentina’s AI governance relies on existing data protection rules plus emerging policy instruments, rather than a single, dedicated, and overarching AI law. Public institutions like the Ministry of Science, Technology, and Innovation (MINCyT) and the Ministry of Economy coordinate research and innovation, working with the AAIP to ensure privacy compliance. The government also supports pilot programmes that test practical AI solutions.

Argentina’s newly launched AI unit within the Ministry of Security, designed to predict and prevent future crimes, has sparked controversy over surveillance, data privacy, and ethical concerns, prompting calls for greater transparency and regulation.

12. QATAR


Summary

While no single, overarching AI law is in place, Law No. 13 of 2016 Concerning Privacy and Protection of Personal Data serves as a key legal framework for AI-related personal data processing. The Ministry of Communications and Information Technology (MCIT) leads Qatar’s AI agenda through the National Artificial Intelligence Strategy for Qatar (2019), focusing on local expertise development, ethical guidelines, and strategic infrastructure – aligned with Qatar National Vision 2030. Enforcement of data privacy obligations is handled by MCIT’s Compliance and Data Protection (CDP) Department, which can impose fines for non-compliance. Oversight in finance, Sharia-compliant credit scoring, and other sensitive domains is provided by the Qatar Financial Centre Regulatory Authority and the Central Bank.

AI landscape

  • The National Artificial Intelligence Strategy for Qatar (2019) sets goals for talent development, research, ethics, and cross-sector collaboration, supporting the country’s economic diversification.
  • Law No. 13 of 2016 Concerning Privacy and Protection of Personal Data enforces consent, transparency, and robust security for personal data usage in AI. MCIT’s Compliance and Data Protection (CDP) Department monitors data privacy compliance, imposing monetary penalties for violations.
  • The Qatar Financial Centre Regulatory Authority and the Central Bank regulate AI-driven financial services, ensuring consumer protection and adherence to Sharia principles.
  • Lusail City, which brands itself as the city of the future and one of the most technologically advanced cities in the world, leverages AI-based traffic management, energy optimisation, and advanced surveillance. 

Although Qatar has not enacted a single, overarching AI law, its National AI Strategy and the work of the Artificial Intelligence Committee provide a structured blueprint, prioritising responsible, culturally aligned AI applications.

Qatar’s AI market is projected to reach USD 567 million by 2025, driven by strategic investments and digital infrastructure development that are expected to boost economic growth, attract global partnerships, and support ongoing efforts to align national regulations with international standards.

13. PAKISTAN


Summary

While no single, overarching AI law is in place, the Ministry of Information Technology & Telecommunication (MoITT) spearheads AI policy through the Digital Pakistan Policy and the Draft National Artificial Intelligence Policy (2023), focusing on responsible AI adoption, skill-building, and risk management. Although the Personal Data Protection Bill is still pending, its adoption would introduce dedicated oversight for AI-driven personal data processing. In parallel, the proposed Regulation of Artificial Intelligence Act 2024 seeks to mandate human oversight of AI systems and impose substantial fines for violations.

AI landscape

  • The Ministry of Information Technology & Telecommunication (MoITT) drives Pakistan’s AI policy under the Digital Pakistan Vision, integrating AI across e-government services, education, and agritech.
  • The National Centre of Artificial Intelligence, under the Higher Education Commission, fosters research collaborations among universities.
  • Digital Pakistan Policy (2018) underscores AI’s role in public-sector digitalisation and workforce development. The Draft National Artificial Intelligence Policy (2023) emphasises ethically guided AI growth, job creation, and specialised training initiatives.
  • The Personal Data Protection Bill proposes establishing a data protection authority with enforcement powers over AI-related personal data misuse. 
  • The Regulation of Artificial Intelligence Act 2024 would fine violators up to PKR 2.5 billion (approximately USD 9 million), mandate transparent data collection, require human oversight in sensitive applications, and create a National AI Commission in Islamabad.
  • Pakistan uses AI to expedite citizen inquiries through chatbots, streamline government operations with digital ID systems, and address food security by optimising crop monitoring and yields. AI-based credit scoring broadens microfinance access but raises questions of fairness and privacy. 

Pakistan’s AI trajectory is propelled by the MoITT’s Digital Pakistan agenda, with the National Centre of Artificial Intelligence coordinating academic research in emerging fields like machine learning and robotics. 

Legislative initiatives are rapidly evolving. The Regulation of Artificial Intelligence Act 2024, currently under review by the Senate Standing Committee on Information Technology, aims to ensure responsible AI deployment, penalising misuse and unethical practices with high-value fines. Once enacted, the law would establish the National Artificial Intelligence Commission to govern AI adoption and uphold social welfare goals, with commissioners prohibited from holding public or political office. Parallel to this, the Personal Data Protection Bill would further strengthen consumer data rights by regulating AI-driven profiling.

Ongoing debates centre on balancing innovation with privacy, transparency, and accountability. As Pakistan expands international collaborations, particularly through the China-Pakistan Economic Corridor and broader Islamic cooperation forums, more concrete regulations are expected to emerge by the end of 2025.

14. VIETNAM


Summary

While Vietnam has not enacted a single, overarching AI law, the Law on Cyberinformation Security (2015) provides a basic legal framework that partially governs AI-driven data handling. Two ministries – the Ministry of Science and Technology (MOST) and the Ministry of Information and Communications (MIC) – jointly drive AI initiatives under Vietnam’s National Strategy on Research, Development and Application of AI by 2030, with an emphasis on AI education, R&D, and responsible use in manufacturing, healthcare, and e-governance. Although the national strategy references ethics and bias prevention, there is no single oversight body or binding ethical code for AI, prompting growing calls from civil society for greater transparency and accountability.

AI landscape

  • The Ministry of Science and Technology (MOST) allocates funds for AI research, supporting collaborations between universities, startups, and private enterprises.
  • The Ministry of Information and Communications (MIC) oversees the broader digital transformation agenda, sets cybersecurity standards, and can impose fines for data misuse under existing regulations.
  • The National Strategy on AI (2021–2030) aims to develop an AI-trained workforce (50,000 professionals), expand AI usage in public services through chatbots and digital government, and promote AI-based solutions in manufacturing, healthcare diagnostics, and city management. The strategy mentions ethical principles like bias mitigation and accountability but does not specify formal enforcement or an AI ethics board.
  • The Law on Cyberinformation Security (2015) outlines baseline data security measures for organisations, which partially apply to AI-related activities, as the law’s general data protection and system security requirements extend to AI systems that process or store personal or sensitive information. The MIC can impose fines or restrict services for cybersecurity breaches and unauthorised data processing. 
  • The State Bank of Vietnam can issue additional rules for AI deployments in finance or consumer lending.
  • Factories adopt AI for predictive maintenance, robotics, and supply-chain optimisation. AI-based diagnostics and imaging pilot projects are implemented in major hospitals, partially funded by MOST grants. AI chatbots reduce administrative backlogs. Ho Chi Minh City explores AI-driven traffic control and security systems. Tech hubs in Hanoi and Ho Chi Minh City foster AI-focused enterprises in fintech, retail analytics, and EdTech.

Vietnam’s push for AI is central to its ambition of enhancing economic competitiveness and digitising governance. However, comprehensive AI legislation remains absent. The National Strategy on AI acknowledges concerns around fairness, personal data rights, and possible algorithmic bias, but explicit regulatory mandates or ethics boards have yet to be instituted.

Vietnam collaborates with ASEAN on a regional digital masterplan and maintains partnerships with tech-leading countries, such as Japan and South Korea, for AI research and capacity development. The government is also formulating new regulations in the digital technology sector, including a draft Law on Digital Technology Industry, expected to be adopted in May 2025, which may introduce risk-based rules for AI and a sandbox approach for emerging technologies.

15. RUSSIA


Summary

Russia has adopted multiple AI-related policies – including an AI regulation framework, the National AI Development Strategy (2019–2030), the National Digital Economy Programme, and experimental legal regimes (ELRs) – to advance AI in tightly regulated environments. The recently enacted rules mandating liability insurance for AI developers in ELRs signal a shift toward stricter risk management.

AI landscape

  • The National AI Development Strategy (2019–2030), adopted via presidential decree, sets ambitious goals for AI R&D, talent growth, and widespread adoption in the healthcare, finance, and defence sectors.
  • Effective in 2025, Russia’s updated AI regulation framework prohibits AI in education if it simply completes student assignments (to prevent cheating), clarifies legal liability for AI-generated content, mandates accountability for AI-related harm, promotes human oversight, and focuses on national security through industry-specific guidelines.
  • Experimental Legal Regimes (ELRs) allow the testing of AI-driven solutions (e.g., autonomous vehicles in Moscow and Tatarstan). Federal Law 123-FZ adopted in 2024 now requires developers in ELRs to insure civil liability for potential AI-related harm.

Russia has no single, overarching AI law. Instead, authorities have relied on diverse initiatives, laws, and financial incentives to direct AI governance. The centrepiece remains the National AI Development Strategy, which focuses on technological sovereignty, deeper investment in research, and attracting talent.

Alongside it, the digital economy framework has bankrolled significant projects, from data centres to connectivity enhancements, enabling the preliminary deployment of advanced AI solutions.

In 2020, policymakers introduced the Conceptual Framework for the Regulation of AI and Robotics, identifying gaps in liability allocation among AI developers, users, and operators. As noted above, the regulation framework that grew out of this work took effect in 2025.

Technical Committee 164 under Rosstandart issues AI-related safety and interoperability guidelines. Personal data management is governed by Federal Law No. 152-FZ, complemented by updated biometric data regulations that organise the handling of facial and voice profiles. The voluntary AI Ethics Code, shaped in collaboration with governmental entities and technology companies, aims to curb risks such as algorithmic bias, discriminatory profiling, and the unchecked use of personal data.

AI adoption is especially visible in the following:

  • Companies like Yandex are conducting trials of self-driving cars in designated zones. Under the new insurance requirements, liabilities for potential accidents must be covered.
  • The Central Bank endorses AI-driven services for fraud prevention and credit analysis, ensuring providers remain responsible under established banking and consumer protection laws.
  • AI-assisted diagnostic tools and telemedicine applications go through a registration process akin to medical device approval, overseen by Roszdravnadzor. 
  • Russian authorities use AI-driven facial recognition in public surveillance, managed by biometric data policies and overseen by security services. Advocacy groups have voiced concerns regarding privacy and data retention practices.

Data Protection Day 2025: A new mandate for data protection

This analysis provides a detailed summary of Data Protection Day, highlighting the most relevant aspects of each session. The event welcomed attendees to Brussels, in person and virtually, to celebrate Data Protection Day 2025 together.

The tightly packed programme kicked off with opening remarks by the Secretary General of the European Data Protection Supervisor (EDPS), followed by a day of panels, speeches, and side sessions featuring some of the brightest minds in the data protection field.

Keynote speech by Leonardo Cervera Navas

Given the recent political turmoil in the EU, specifically the annulment of the Romanian presidential election a few months ago, it was no surprise that the first keynote speech addressed how algorithms are used to destabilise and threaten democracies. Navas explained how third-country algorithms are deployed against EU democracies to target their values.

He went on to discuss the significant power imbalance that arises when a few wealthy individuals and their companies dominate the tech world and end up violating our privacy. However, he turned towards a hopeful future, noting that the crisis in Europe is making Europeans stronger. ‘Our values are what unite us, and part of them are the data protection values the EDPB strongly upholds’, he emphasised.

He acknowledged the evident overlap of rules and regulations between different legal instruments but also highlighted the creation of tools that can help uphold our privacy, such as the Digital Clearing House 2.0.

Organiser’s panel moderated by Kait Bolongaro

This panel discussed a wide variety of data protection topics, such as the developments on the ground, how international cooperation played a role in the fight against privacy violations, and what each panellist’s priorities were for the upcoming years. That last question was especially interesting to hear given the professional affiliations of each panellist.

What is interesting about these panels is that the organisers spent a lot of time curating a diverse lineup, with speakers from academia, private industry, public bodies, and the EDPS itself. This ensures that a panel’s topic is discussed from more than one point of view, which makes it far more engaging.

Wojciech Wiewiorowski, the current European Data Protection Supervisor, reminded us of the important role that data protection authorities (DPAs) play in the effective enforcement of the GDPR. Matthias Kloth, Head of Digital Governance and Sport at the Council of Europe (CoE), offered a broader perspective. As his work centres on the modernised Convention 108, now known as Convention 108+, he shed some light on the effort to update past laws and bring them into the modern age.

Regarding international cooperation, each panellist had their own unique take on how to facilitate and streamline it. Wiewiorowski correctly stated that data has no borders and that cooperation with everyone is needed as a global effort. However, he reminded the audience that, in the age of cooperation, we cannot settle for the ‘lowest common denominator level of protection’.

Jo Pierson, Professor at the Vrije Universiteit Brussel and Hasselt University, said that international cooperation is very challenging, as a country’s values may change overnight, citing Trump’s recent re-election victory as an example.

Audience questions

A member of the audience posed a very relevant question regarding the legal field as a whole. He asked the panellists what they thought of the fact that enforcing one’s rights is a difficult and costly process. To provide context, he explained how a person must be legally literate and bear their own costs in order to litigate or file an appeal.

Wiewiorowski of the EDPS pointed out that changing the procedural rules of the GDPR is not a feasible way to tackle this issue. There is the option of small-scale procedural amendments, but he does not foresee the GDPR being reopened in the coming years.

However, Pierson had a more practical take on the matter and suggested that this is where individuals and civil society organisations can join forces. Individuals can approach organisations such as noyb, Privacy International, and EDRi for help or advice on the matter. But that raises the question: on whose shoulders should this burden rest?

One last question from the audience was about DeepSeek, the bombshell new Chinese AI model recently dropped onto the market. The panellists were asked whether this new AI is an enemy or a friend to Europeans. Each panellist avoided calling Chinese AI either, but they found common ground on the need for international cooperation and on the view that an open-source AI is not a bad thing if it can be trained by Europeans.

The last remark regarding this panel was Wiewiorowski’s comment on Chinese AI, which he compared to a ‘Sputnik Day’ (a reference to the 1950s space race between the United States and the USSR). Are we facing a new technological gap? Will non-Western allies and foes beat us in this digital arms race?

Data protection in a changing world: What lies ahead? Moderated by Anna Buchta

This session also had a series of interesting questions for high-profile panellists. The range of this panel was impressive as it regrouped opinions from the European Commission, the Polish Minister of Digital Affairs, the European Parliament, the UK’s Information Commissioner, and DIGITALEUROPE.

Notable was Marina Kaljurand of the LIBE Committee, with her passion for cyber matters. She revealed that many people in the European Parliament are not tech literate, while others are extremely well versed in how the technology is used. There seems to be a significant information asymmetry within the European Parliament that needs to be addressed if it is to vote on digital regulations.

She gave an important overview of the state of data transfers with the UK and the USA. The UK has in place an adequacy decision that has raised multiple flags in the European Parliament and is set to expire in June 2025.

The future of data transfers with the UK is very uncertain. As for the USA, she mentioned that difficult times lie ahead, as the actions of the recently re-elected President Trump are degrading US-EU relations. Regarding the child sexual abuse material regulation, she stressed how important it is to protect children: the debate is not about whether to protect them, but about how.

The currently proposed regulation risks placing too much strain on privacy, yet alternatives for protecting children are hard to find. This reflects how difficult regulating can be, even when everyone at the table shares the same goals.

Irena Moozova, the Deputy Director-General of DG JUST at the European Commission, said that her priorities for the upcoming years are to cut red tape, simplify guidelines so businesses can operate more easily, and support the compliance efforts of small and medium-sized enterprises. She mentioned the public consultations to be held this summer for the upcoming Digital Fairness Act.

John Edwards, the UK Information Commissioner, highlighted the transformative impact of emerging technologies, particularly Chinese AI, and how disruptive innovations can rapidly reshape markets. He discussed the ICO’s evolving strategies, noting their alignment with ideas shared by other experts. The organisation’s focus for the next two years includes key areas such as AI’s role in biometrics and tracking, as well as safeguarding children’s privacy. To address these priorities, the ICO has published an online tracking strategy and conducted research on children’s data privacy, including the development of systems tailored to protect young users.

Alberto Di Felice, Legal Counsel to DIGITALEUROPE, stressed the importance of simplifying regulations, stating repeatedly that there is too much bureaucracy and that too many actors are involved in regulation. For example, a company that wants to operate in the EU market may have to consult DPAs, the AI Act authorities, public-sector data authorities under the Data Governance Act, authorities overseeing manufacturers of digital products, and financial sector authorities.

He advocated for a single regulator and for reforms to streamline legal compliance, arguing that the quality of regulation in Europe is often poor and that regulations run too long; some AI Act articles are 17 lines long, with exceptions and sub-exceptions that even lawyers cannot make sense of.

Keynote speech by Beatriz de Anchorena on global data protection

Beatriz de Anchorena, Head of Argentina’s DPA and current Chair of the Convention 108+ Committee, delivered a compelling address on the importance of global collaboration in data protection. Representing a non-European perspective, she emphasised Argentina’s unique contribution to the Council of Europe (CoE).

Argentina was the first country outside Europe to receive an EU adequacy decision, which has since been renewed. Despite having data protection laws originating in the 2000s, Argentina remains a leader in promoting modernised frameworks.

Anchorena highlighted Argentina’s role as the 23rd state to ratify Convention 108+, noting that only seven more ratifications are needed for it to come fully into force. She advocated Convention 108+ as a global standard for data protection, capable of upgrading current data protection standards without demanding complete homogeneity. Instead, it offers common ground for nations to align on privacy matters.

What’s on your mind: Neuroscience and data protection moderated by Ella Mein

Marcello Ienca, a Professor of Ethics of AI and Neuroscience at the University of Munich, gave everyone in the audience a breakdown of how data and neuroscience intersect and the real-world implications for people’s privacy.

The brain, often described as the largest data repository in the world, presents a vast opportunity for exploration, and AI is acting as a catalyst in this process. Large language models are helping researchers decode the brain’s ‘hardware’ and ‘software’, although the full ‘language of thought’ remains unclear.

Neurotechnology raises real privacy and ethical concerns. For instance, the ability to biomark conditions like schizophrenia or dementia introduces new vulnerabilities, such as the risk of ‘neuro discrimination’, where predicting one’s illness might lead to stigmatisation or unequal treatment.

However, it is argued that understanding and predicting neurological conditions is important, as nearly every individual is expected to experience at least one neurological condition in their lifetime. As one panellist put it, ‘We cannot cure what we don’t understand, and we cannot understand what we don’t measure.’

This field also poses questions about data ownership and access. Who should have the ‘right to read brains’, and how can we ensure that access to such sensitive data, particularly emotions and memories unrelated to clinical goals, is tightly controlled? With the data economy in an ‘arms race’, there is a push to extract information directly from its source: the human brain.

As neurotechnology advances, balancing its potential benefits with safeguards will be important to ensure that innovation does not come at the cost of individual privacy and autonomy as mandated by law.

In addition to this breakdown, Jurisconsult Anna Austin explained the ECtHR’s legal background on the subject. A jurisconsult plays a key role in keeping the court informed, maintaining a network that monitors relevant case law from member states. Central to this discussion are questions of consent and waiver.

Current ECtHR case law holds that any waiver must be unequivocal and fully informed, with the person fully understanding its consequences, a standard that can be challenging to meet. This high bar exists to safeguard fundamental rights, such as protection from torture and inhuman treatment and the right to a fair trial. As it stands, she stated, there is no fully comprehensive waiver mechanism.

The right to a fair trial is an absolute right that needs to be understood in this context. One nuance is therapeutic necessity, where forced medical interventions can be justified under strict conditions, with safeguards to ensure proportionality.

Yet concerns remain regarding self-incrimination under Article 6, particularly in scenarios where reading one’s mind could improperly compel evidence, raising questions about the abuse of such technologies.

Alessandra Pierucci from the Italian DPA made a relevant case for examining whether new laws should be created for this matter or whether existing ones are sufficient. Within the context of her work, the Italian DPA is developing a mental privacy risk assessment.

Beyond privacy: unveiling the true stakes of data protection. Moderated by Romain Robert

Nathalie Laneret, Vice President of Government Affairs and Public Policy at Criteo, presented her viewpoint on the role of AI and data protection. Addressing the balance between data protection and innovation, Laneret explained that these areas must work together.

She stressed the importance of finding ways to use pseudonymised data and of clear codes of conduct for businesses, pointing out that innovation is high on the European Commission’s political agenda.

Laneret addressed concerns about sensitive data, such as children’s data, highlighting Criteo’s proactive approach. With an internal ethics team, the company anticipated potential regulatory challenges around sensitive data, ensuring it stayed ahead of ethical and compliance issues.

In contrast, Max Schrems, Chair of noyb, offered a more critical perspective on data practices. He pointed out the economic disparity in the advertising model: while advertisers generate minimal revenue per user annually, users are often charged huge fees for their data. Schrems highlighted the importance of individuals having the right to freely give up their privacy if they choose, provided that consent is genuinely voluntary.

Forging the future: reinventing data protection? Moderated by Gabriela Zanfir-Fortuna

In this last panel, Johnny Ryan from the Irish Council for Civil Liberties painted a stark picture of the societal challenges tied to data misuse. He described a crisis fuelled by external influence, misunderstandings, and data being weaponised against individuals.

However, Ryan argued that the core issue is not merely the problems themselves but the fact that the EU lacks an effective and immediate response strategy. He stressed the need for swift protective measures, criticising the current underuse of interim tools that could mitigate harm in real time.

Nora Ni Loideain, Lecturer and Director of the University of London’s Information Law and Policy Centre, discussed the impact of the GDPR on data protection enforcement. She explained how DPAs had limited powers in the past: in the Cambridge Analytica scandal, for example, the UK’s data protection authority could only fine Facebook £500,000 due to a lack of resources and authority.

The GDPR has since allowed DPAs to step up with independence, greater resources, and stronger enforcement capabilities, significantly improving their ability to hold companies accountable for privacy violations.

Happy Data Protection Day 2025!

Legacy media vs social media and alternative media channels

In today’s digital age, the rapid proliferation of information has empowered and complicated the way societies communicate and stay informed. At its best, this interconnectedness fosters creativity, knowledge-sharing, and transparency. However, it also opens the floodgates for misinformation, disinformation, and the rise of deepfakes, tools that distort truth and challenge our ability to distinguish fact from fiction. These modern challenges are not confined to the fringes of the internet; they infiltrate mainstream platforms, influencing public opinion, political decisions, and cultural narratives on an unprecedented scale.

The emergence of alternative media platforms like podcasts, social media networks, and independent streaming channels has disrupted the traditional gatekeepers of information. While these platforms offer voices outside the mainstream a chance to be heard, they often lack the editorial oversight of traditional media. This dynamic has created a complex media ecosystem where authenticity competes with sensationalism, and viral content can quickly overshadow fact-checking.

Content policy has become a battlefield, with platforms struggling to balance free expression and the need to curb harmful or deceptive narratives. The debate is further complicated by the increasing sophistication of deepfake technology and AI-generated content, which can fabricate convincing yet entirely false narratives. Whether it is a politician giving a speech they never delivered, a celebrity endorsing a product they have never used, or a manipulated video sparking social unrest, the stakes are high.

These challenges have sparked fierce debates among tech giants, policymakers, journalists, and users on who should bear responsibility for ensuring accurate and ethical content. Against this backdrop, recent high-profile incidents, such as Novak Djokovic’s response to perceived media bias and Joe Rogan’s defiance of traditional norms, or Elon Musk’s ‘Nazi salute’, highlight the tension between established media practices and the uncharted territory of modern communication channels. These case studies shed light on the shifting dynamics of information dissemination in an era where the lines between truth and fabrication are increasingly blurred.

Case study No. 1: The Djokovic incident, traditional media vs social media dynamics

The intersection of media and public discourse took centre stage during the 2025 Australian Open when tennis icon Novak Djokovic decided to boycott an on-court interview with Channel 9, the official broadcaster of the tournament. The decision, rooted in a dispute over comments made by one of its journalists, Tony Jones, highlighted the ongoing tension between traditional media’s content policies and the freedom of expression offered by modern social media platforms.

The incident

Namely, on 19 January 2025, following his victory over Jiri Lehecka in the fourth round of the Australian Open, Novak Djokovic, the 24-time Grand Slam champion, refused to engage in the customary on-court interview for Channel 9, a long-standing practice in tennis that directly connects players with fans. The reason was not due to personal animosity towards the interviewer, Jim Courier, but rather a response to remarks made by Channel 9 sports journalist Tony Jones. During a live broadcast, Jones had mocked Serbian fans chanting for Djokovic, calling the player ‘overrated’ and a ‘has-been,’ and even suggested they ‘kick him out’, a phrase that resonated deeply given Djokovic’s previous deportation from Australia over vaccine mandate issues in 2022.

The response and social media amplification

In his post-match press conference, Djokovic clarified his stance, saying that he would not conduct interviews with Channel 9 until he received an apology from both Jones and the network for what he described as ‘insulting and offensive’ comments. The incident quickly escalated beyond the tennis courts when Djokovic took to X (formerly Twitter) to share a video explaining his actions, directly addressing his fans and the broader public. 

What happened was both a protest against the Australian broadcaster and a strategic use of social media to bypass traditional media channels, which are often seen as gatekeepers of information with their own biases and agendas. The response was immediate: the video went viral, drawing comments from various quarters, including from Elon Musk, the owner of X. Musk retweeted Djokovic’s video with a critique of ‘legacy media’, stating, ‘It’s way better just to talk to the public directly than go through the negativity filter of legacy media.’ Djokovic’s simple reply, ‘Indeed’, underscored his alignment with this view, further fuelling the discussion about media integrity and control.

Content policy and misinformation

The incident brings to light several issues concerning content policy in traditional media. Traditional media like Channel 9 operate under strict content policies where editorial decisions are made to balance entertainment and journalistic integrity. However, remarks like those from Jones can blur this line, leading to public backlash and accusations of bias or misinformation.

The response from Channel 9, an apology after the public outcry, showcases the reactive nature of traditional media when managing content that might be deemed offensive or misinformative, often after significant damage has been done to public perception.

Unlike social media, where anyone can broadcast their viewpoint, traditional media has the infrastructure for fact-checking but can also be accused of pushing a narrative. The Djokovic case has raised questions about whether Jones’s comments were intended as humour or reflected a deeper bias against Djokovic or his nationality.

The role of social media

Social media platforms such as X enable figures like Djokovic to communicate directly with their audience, controlling their narrative without the mediation of traditional media. Direct public exposure can be empowering, but it can also bypass established journalistic checks and balances.

While this incident showcased the power of social media for positive storytelling, it also highlights the platform’s potential for misinformation. Without editorial oversight, messages can be amplified without context or correction, leading to public misinterpretation.

Case study No. 2: Alternative media and political discourse – The Joe Rogan experience

As traditional media grapples with issues of trust and relevance, alternative media platforms like podcasts have risen, offering new avenues for information dissemination. Joe Rogan’s podcast, ‘The Joe Rogan Experience’, has become a significant player in this space, influencing political discourse and public opinion, mainly through his interviews with high-profile figures such as Donald Trump and Kamala Harris.

Donald Trump’s podcast appearance

In 2024, Donald Trump’s appearance on Joe Rogan’s podcast was a pivotal moment, often credited with aiding his resurgence in the political arena and his election as the 47th President of the USA. The podcast format made room for an extended, unscripted conversation, allowing Trump to discuss his policies, personality, and plans without the usual media constraints.

Unlike traditional media interviews, where questions and answers are often tightly controlled, Rogan’s podcast allowed Trump to engage with audiences more authentically, potentially influencing voters who felt alienated by mainstream media.

Critics argue that such platforms can spread misinformation due to the lack of immediate fact-checking. Yet, supporters laud the format for allowing a deeper understanding of the candidate’s views without the spin of journalists.

Kamala Harris’s conditional interview

Contrastingly, Kamala Harris’s approach to the same platform was markedly different. She requested special conditions for her interview, including pre-approved questions, which Rogan declined. Harris then chose not to participate, highlighting a critical difference in how politicians view and interact with alternative media. Her decision reflects a broader strategy among some politicians to control their media exposure, preferring environments where the narrative can be shaped to their advantage, which is often less feasible in an open podcast format.

Some might see her refusal as avoidance of tough, unfiltered questions, potentially impacting her public image as less transparent than figures like Trump, who embraced the platform.

Vladimir Klitschko’s interview on ‘The Joe Rogan Experience’

Adding another layer to this narrative, former Ukrainian boxer and political figure Vladimir Klitschko appeared on Rogan’s show, discussing his athletic career and geopolitical issues affecting Ukraine. This interview showcased how alternative media like podcasts can give a voice to international figures, offering a different perspective on global issues that might be underrepresented or misrepresented in traditional media.

Rogan’s discussions often delve into subjects with educational value, providing listeners with nuanced insights into complex topics, something traditional news might cover in soundbites.

Analysing media dynamics

Content policy in alternative media: While Rogan’s podcast does not adhere to the same content policies as traditional media, it does have its own set of guidelines, which include a commitment to free speech and a responsibility not to platform dangerous misinformation.

Fact-checking and public accountability: Unlike traditional media, where fact-checking can be institutional, podcast listeners often take on this role, leading to community-driven corrections or discussions on platforms like Reddit or X.

The spread of disinformation: Like social media, podcasts can be vectors of misinformation if not moderated or if hosts fail to challenge or correct inaccuracies. However, Rogan’s approach often includes challenging guests, providing a counterbalance.

Impact on journalism: The rise of podcasts challenges traditional journalism by offering alternative narratives, sometimes at the cost of depth or accuracy but gaining in terms of directness and personal connection with the audience.

Case study No. 3: Elon Musk and the ‘Nazi salute’

The evolution of media consumption has been profound, with the rise of social media and alternative channels significantly altering the landscape traditionally dominated by legacy media. The signs of this evolution are poignantly highlighted in a tweet by Elon Musk, where he commented on the dynamics of media interaction:

‘It was astonishing how insanely hard legacy media tried to cancel me for saying “my heart goes out to you” and moving my hand from my heart to the audience. In the end, this deception will just be another nail in the coffin of legacy media.’ – Elon Musk, 24 January 2025, 10:22 UTC 

Legacy media: the traditional gatekeepers

Legacy media, encompassing print, television, and radio, has long been the public’s primary source of news and information. These platforms have established content policies to ensure journalistic integrity, fact-checking, and editorial oversight. However, as Musk’s tweet suggests, they are often perceived as inherently biased, sometimes acting as ‘negativity filters’ that skew public perception. This critique reflects a broader sentiment that legacy media can be slow to adapt, overly cautious, and sometimes accused of pushing an agenda, as seen in Musk’s experience of being ‘cancelled’ over a simple gesture interpreted out of context. The traditional model involves gatekeepers who decide what news reaches the audience, which can lead to a controlled narrative that might not always reflect the full spectrum of public discourse. 

Modern social media: direct engagement

In contrast, social media platforms like X (formerly Twitter) democratise information dissemination by allowing direct communication from individuals to the public, bypassing traditional media gatekeepers. Musk’s use of X to address his audience directly illustrates this shift. Social media provides an unfiltered stage where public figures can share their stories, engage in real time, and counteract what they see as biased reporting from legacy media. This directness enhances transparency and authenticity but also poses significant challenges. Without the same level of editorial oversight, misinformation can spread rapidly, as social media algorithms often prioritise engagement over accuracy, potentially amplifying falsehoods or sensational content.

Alternative media channels: a new frontier

Beyond social media, alternative channels like podcasts, independent streaming services, and blogs have emerged, offering even more diverse voices and perspectives. These platforms often operate with less stringent content policies, emphasising freedom of speech and direct audience interaction. For instance, podcasts like ‘The Joe Rogan Experience’ have become influential by hosting long-form discussions that delve deeper into topics than typical news segments. This format allows for nuanced conversations but lacks the immediate fact-checking mechanisms of traditional media, relying instead on the community or the host’s discretion to challenge or correct misinformation. The rise of alternative media has challenged the monopoly of legacy media, providing platforms where narratives can be shaped by content creators themselves, often leading to a richer, albeit sometimes less regulated, exchange of ideas. 

Content policy and freedom of expression

The tension between content policy and freedom of expression is starkly highlighted in Musk’s tweet. Legacy media’s structured approach to content can sometimes suppress voices or misrepresent intentions, as Musk felt with his gesture. On the other hand, social media and alternative platforms offer broader freedom of expression, yet this freedom comes with the responsibility to manage content that might be misleading or harmful. The debate here revolves around how much control should be exerted over content to prevent harm while preserving the open nature of these platforms. Musk’s situation underscores the need for a balanced approach where the public can engage with authentic expressions without the distortion of ‘legacy media’s negativity filter’. 

To summarise:

The juxtaposition of Djokovic’s media strategies and the political interviews on ‘The Joe Rogan Experience’ illustrates a shift in how information is consumed, controlled, and critiqued. Traditional media continues to wield considerable influence but is increasingly challenged by platforms offering less censorship, potentially more misinformation, and direct, unfiltered communication. 

Elon Musk’s tweet is another vivid example of the ongoing battle between legacy media’s control over narrative and the liberating yet chaotic nature of modern social media and alternative channels. These platforms have reshaped the way information is consumed, offering both opportunities for direct, unmediated communication and challenges in maintaining the integrity of information. 

As society continues to navigate this complex media landscape, the balance between ensuring factual accuracy, preventing misinformation, and respecting freedom of speech will remain a critical discussion point. The future of media lies in finding this equilibrium, where the benefits of both traditional oversight (perhaps through more stringent regulatory measures) and modern openness can coexist to serve an informed and engaged public.

DeepSeek: Speeding up the planet or levelling with ChatGPT?

Although the company’s name somewhat overlaps with that of Google DeepMind, which was launched earlier, the new player on the market has sparked a surge in attention and public interest, becoming one of the biggest AI surprises on the planet upon its launch.

DeepSeek, a company headquartered in China, enjoys significant popularity primarily because its most sought-after features keep pace with those of giants like OpenAI and Google, and because of the far-from-negligible stock market changes that followed its launch.

In the following points, we will explore these factors and what the future holds for this young company, particularly in the context of the dynamics between China and the US.

How did it start? Origins of DeepSeek

DeepSeek is a Chinese AI company based in Hangzhou, Zhejiang, founded by entrepreneur and businessman Liang Wenfeng. The company develops open-source LLMs and is owned by High-Flyer, a Chinese hedge fund.

It all started back in 2015, when Liang Wenfeng co-founded High-Flyer. At first, it was a startup, but in 2019, it grew into a hedge fund focused on developing and using AI trading algorithms. For the first two years, they used AI only for trading.

In 2023, High-Flyer founded a startup called DeepSeek, and Liang Wenfeng was appointed CEO. Two years later, on 10 January 2025, DeepSeek announced the release of its first free-to-use chatbot app. The app surpassed its main competitor, ChatGPT, as the most downloaded free app in the US in just 17 days, causing an unprecedented stir on the market.

Unprecedented impact on the market

Few missed the launch of the DeepSeek model, and the stock market quickly felt the impact, as did some of the biggest giants.

For instance, the value of Nvidia shares dropped by as much as 18%. Similar declines were experienced by giants like OpenAI and Google, as well as by other AI companies focused on small and medium-sized enterprises.

On top of this, there is justified concern among investors, who could quickly shift their focus and redirect their investments; if they do, the shares of the largest companies could fall even further.

Open-source approach

DeepSeek embraces an open-source philosophy, making its AI algorithms, models, and training details freely accessible to the public. The company stated that it is committed to transparency and fosters collaboration among developers and researchers worldwide. They also advocate for a more inclusive and innovative AI ecosystem.
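
Because the weights are openly released, anyone can experiment with them directly. Below is a minimal, hedged sketch using the Hugging Face transformers library in Python; the model identifier is illustrative and should be checked against DeepSeek’s current releases, and downloading the weights requires several gigabytes of disk space and memory.

# Minimal sketch: loading an openly released DeepSeek checkpoint with the
# Hugging Face `transformers` library. The model identifier below is
# illustrative; check the company's model hub page for current releases.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-llm-7b-base"  # assumed/illustrative checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion to confirm the weights are usable locally.
inputs = tokenizer("Open-source AI matters because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))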

Their strategy has the potential to reshape the AI landscape, as it empowers individuals and organisations to contribute to the evolution of AI technology. DeepSeek’s initiative highlights the importance of open collaboration in driving progress and solving complex challenges in the tech industry.


With the growing demand for ethical and transparent AI development, DeepSeek’s open-source model sets a precedent for the industry. The company paves the way for a future where AI breakthroughs are driven by collective effort rather than proprietary control.

Cheaper AI model that shook the market

By being cheaper than the competition, DeepSeek has opened the doors of the AI market to many other companies that do not have as much financial power. As Dr Jovan Kurbalija, executive director of Diplo, says in his blog post titled ‘How David outwits Goliath in the age of AI?’, ‘the age of David challenging Goliath has arrived in AI’.

For individuals, this means monthly costs are reduced by 30% to 50%, which can be, and often is, the biggest incentive for users looking to save.

The privileges once enjoyed by those with greater financial resources are now available to those who want to advance their small and medium-sized businesses.

Cyber threats and challenges faced by DeepSeek

Shortly after its launch, DeepSeek faced a significant setback when it was revealed that an error had exposed sensitive information to the public.

This raised alarms for many, especially as the app’s immense popularity (the AI Assistant had been downloaded from the App Store more times than OpenAI’s offering) meant a large amount of data was potentially accessible.

DeepSeek quickly secured the exposed information after being alerted, but experts have expressed concerns that others may have accessed the leaked data beforehand. The company has not yet commented on the incident, and such vulnerabilities give hacking groups a foundation to exploit.

DeepSeek for the top spot, ChatGPT defends the throne

The AI race is heating up as DeepSeek challenges industry leader ChatGPT, aiming to claim the top spot in AI. With its open-source approach, DeepSeek is rapidly gaining attention by publicly making its models and training methods available, fostering innovation and collaboration across the AI community.

The race was further spiced up by DeepSeek’s claim that it built an AI model on par with OpenAI’s ChatGPT for under $6 million (£4.8 million). In comparison, Microsoft, OpenAI’s main partner, plans to invest around $80 billion in AI infrastructure this year.

Meanwhile, OpenAI’s ChatGPT search tool reportedly faces risks of manipulation via hidden content, which can lead to biased or harmful outputs.

As DeepSeek pushes forward with its transparent and accessible model, the battle for AI supremacy intensifies. Whether openness will outmatch ChatGPT’s established presence remains to be seen, but one thing is sure—the AI landscape is evolving faster than ever.

Why is DeepSeek gaining popularity in 2025?

DeepSeek has emerged as a major player in AI by embracing an open-source philosophy, making its models and training methods freely available to developers. This transparency has fuelled rapid innovation, allowing researchers and businesses to build upon its technology and contribute to advancements in AI.

Unlike closed systems controlled by major tech giants, DeepSeek’s approach promotes accessibility and collaboration, attracting a growing community of AI enthusiasts. Its cost-effective development, reportedly achieving results comparable to top-tier models with significantly lower investment, has also drawn attention.

As the demand for more open and adaptable AI solutions rises, DeepSeek’s commitment to shared knowledge positions it as a strong contender in the industry. Whether this strategy will redefine the AI landscape remains to be seen, but its growing influence in 2025 is undeniable.

DeepSeek in the future: Development, features, and strategies

Now that it has experienced ‘overnight success,’ the Chinese company aims to push DeepSeek to the top and position it among the most powerful AI firms in the world.

Users can definitely expect many advanced features that will fuel a fierce battle with giants like DeepMind and ChatGPT.

Strategically, DeepSeek will attempt to break into the American market and offer more financially accessible solutions, forcing the key players to make significant price cuts.

DeepSeek is undoubtedly a real hit in the market, but it remains to be seen whether price is the only measure of its success.

Whether it will make a leap in its own technology and completely outpace the competition or remain shoulder to shoulder with the giants—or even falter—will be revealed in the near future.

One thing is sure: the Chinese company has seriously shaken up the market, which will need considerable time to recover.

Can quantum computing break cryptocurrency’s code?

The digital revolution has brought in remarkable innovations, and quantum computing is emerging as one of its brightest stars. As this technology begins to showcase its immense potential, questions are being raised about its impact on blockchain and cryptocurrency. With its ability to tackle problems thought to be unsolvable, quantum computing is redefining the limits of computational power.

At the same time, its rapid advancements leave many wondering whether it will bolster the crypto ecosystem or undermine its security and decentralised nature. Can this computing breakthrough empower crypto, or does it pose a threat to its very foundations? Let’s dive deeper. 

What is quantum computing? 

Quantum computing represents a groundbreaking leap in technology. Unlike classical computers that process data in binary (0s and 1s), quantum computers use qubits, capable of existing in multiple states simultaneously due to quantum phenomena such as superposition and entanglement.
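
Superposition can be made concrete with a few lines of code. Below is a minimal classical simulation in Python (assuming only the numpy library) of measuring a qubit prepared in an equal superposition; it illustrates the measurement statistics for intuition only and is not a quantum program.

# Toy classical simulation of one qubit, for intuition only.
import numpy as np

# A qubit state is a 2-vector of complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1. Equal superposition: both outcomes
# are equally likely on measurement.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

probabilities = np.abs(state) ** 2          # Born rule: |amplitude|^2
rng = np.random.default_rng(seed=42)
samples = rng.choice([0, 1], size=10_000, p=probabilities)

# Roughly half the measurements yield 0 and half yield 1.
print("P(0) ~", np.mean(samples == 0), " P(1) ~", np.mean(samples == 1))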

For example, Google’s new chip, Willow, is claimed to solve a problem in just five minutes—a task that would take the world’s fastest supercomputers approximately ten septillion years—highlighting the extraordinary power of quantum computing and fuelling further debate about its implications. 

These advancements enable quantum machines to handle problems with countless variables, benefiting fields such as electric vehicles, climate research, and logistics optimisation. While quantum computing promises faster, more efficient processing, its intersection with blockchain technology adds a layer of complexity, and here the story takes an interesting twist.


How does quantum computing relate to blockchain?

Blockchain technology relies on cryptographic protocols to secure transactions and ensure decentralisation. Cryptocurrencies like Bitcoin and Ethereum use elliptic curve cryptography (ECC) to safeguard wallets and transactions through mathematical puzzles that classical computers cannot solve quickly. 
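
To make the stakes concrete, here is a minimal sketch, assuming Python’s `cryptography` package, of the kind of elliptic curve signing that protects a crypto wallet; it illustrates the primitive itself and is not actual wallet code.

# Minimal illustration of ECC signing on SECP256K1 (the curve Bitcoin uses).
# Requires the `cryptography` package; this is not real wallet code.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())  # the wallet's secret
public_key = private_key.public_key()                  # shared with the network

transaction = b"pay 0.1 BTC to alice"
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Anyone can verify with the public key; only the private key can sign.
# A large quantum computer would threaten exactly this asymmetry by
# recovering the private key from the public key.
public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
print("signature verified")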

Quantum computers pose a significant challenge to these cryptographic foundations. Their advanced processing power could potentially expose private keys or alter transaction records, threatening the trustless environment that blockchain depends upon.

Opportunities: Can crypto benefit from quantum computing? 

While the risks are concerning, quantum computing offers several opportunities to revolutionise blockchain: 

  • Faster transactions: Quantum algorithms could significantly accelerate transaction validation, addressing scalability challenges. 
  • Enhanced security: Developers can leverage quantum principles to create stronger, quantum-secure algorithms. 
  • Smarter decentralisation: Quantum-powered computations could enhance the functionality of smart contracts and decentralised apps (DApps). 

By embracing quantum advancements, the blockchain industry could evolve to become more robust and scalable – hopefully great news for the crypto community, which is optimistic about the potential for progress.

How does quantum computing threaten cryptocurrency? 

Despite its potential benefits, quantum computing poses significant risks to the cryptocurrency ecosystem, depending on how it is used and who controls it: 

  1. Breaking public key cryptography
    Quantum computers running Shor’s algorithm could break ECC and RSA encryption: tasks that would take classical computers millennia could be accomplished by a quantum computer in mere hours (see the toy sketch after this list). This capability threatens to expose private keys, allowing hackers to access wallets and steal funds.
  2. Mining oligopoly 
    The mining process, vital for cryptocurrency creation and transaction validation, depends on computational difficulty. Quantum computers could dominate mining activities, disrupting the decentralisation and fairness fundamental to blockchain systems.
  3. Dormant wallet risks
    Wallets with exposed public keys, particularly older ones, are at heightened risk. A quantum attack could compromise these funds before users can adopt protective measures.
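
To see what Shor’s algorithm accelerates, consider the discrete logarithm problem that underpins schemes like ECC. The toy Python sketch below, using deliberately tiny and purely illustrative numbers, recovers a ‘private’ exponent by brute force; with real key sizes this search is astronomically expensive for classical machines, and that barrier is precisely what a sufficiently large quantum computer running Shor’s algorithm would remove.

# Toy discrete logarithm: find x such that g**x % p == target.
# Real cryptography uses numbers hundreds of bits long, making this
# brute-force search infeasible classically; Shor's algorithm would
# solve it in polynomial time on a large enough quantum computer.
p = 101          # tiny prime modulus (illustrative only)
g = 2            # generator of the multiplicative group mod p
secret_x = 57    # the "private key" we pretend not to know
target = pow(g, secret_x, p)   # the "public key"

def brute_force_dlog(g, target, p):
    """Try every exponent until one matches: O(p) classical work."""
    for x in range(p):
        if pow(g, x, p) == target:
            return x
    return None

print("recovered exponent:", brute_force_dlog(g, target, p))  # -> 57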

With projections suggesting that quantum computers capable of breaking current encryption standards could emerge within 10–20 years—or perhaps even sooner—the urgency to address these threats is intensifying.

Solutions: Quantum-resistant tokens and cryptography

 Baby, Person, Body Part, Finger, Hand

Where there is a challenge, there is a solution. The crypto industry is proactively addressing quantum threats with quantum-resistant tokens and post-quantum cryptography. Lattice-based cryptography, for example, creates puzzles too complex for quantum computers, with projects like CRYSTALS-Kyber leading the charge. Hash-based methods, such as QRL’s XMSS, ensure data integrity, while code-based cryptography, like the McEliece system, protects messages by hiding them in deliberately noisy encodings. Multivariate polynomial cryptography adds further defences through complex systems of equations.
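
Hash-based signatures are the easiest of these families to illustrate. The following minimal Python sketch implements a Lamport one-time signature, a simple ancestor of XMSS-style schemes: its security rests only on the hash function, which quantum algorithms weaken far less than they weaken ECC or RSA. It is a teaching toy (one key pair may sign only a single message), not a production scheme.

# Toy Lamport one-time signature: a simple ancestor of hash-based
# schemes like XMSS. Security rests only on the hash function, which
# Shor's algorithm does not break. Teaching sketch, not production code.
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # For each of the 256 message-digest bits, two random secrets
    # (one for bit value 0, one for bit value 1).
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(s) for s in pair] for pair in sk]  # public key = hashes of secrets
    return sk, pk

def bits(digest: bytes):
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message: bytes, sk):
    # Reveal, for each bit of H(message), the matching secret. One-time only!
    return [sk[i][b] for i, b in enumerate(bits(H(message)))]

def verify(message: bytes, signature, pk) -> bool:
    return all(H(s) == pk[i][b]
               for i, (b, s) in enumerate(zip(bits(H(message)), signature)))

sk, pk = keygen()
sig = sign(b"quantum-safe hello", sk)
print(verify(b"quantum-safe hello", sig, pk))   # True
print(verify(b"tampered message", sig, pk))     # False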

As we can see, promising solutions are already actively working to uphold blockchain principles. These innovations are crucial not only for securing crypto assets but also for maintaining the integrity of blockchain networks. Quantum-resistant measures ensure that transaction records remain immutable, safeguarding the trust and transparency that decentralised systems are built upon.

The quantum future for crypto 

Quantum computing holds tremendous promise for humanity, but it also brings challenges, particularly for blockchain and cryptocurrency. As its capabilities grow, the risks to existing cryptographic protocols become more apparent. However, the crypto community has shown remarkable resilience, with quantum-resistant technologies already being developed to secure the ecosystem. This cycle of threats and solutions is a perpetual motion—each technological advancement introduces new vulnerabilities, met with equally innovative defences. It is the inevitable price to pay for embracing the modern decentralised finance era and the transformative potential it brings. 

The future of crypto does not have to be at odds with quantum advancements. With proactive innovation, collaboration, and the implementation of quantum-safe solutions, blockchain can survive and thrive in the quantum era. So, is quantum computing a threat to cryptocurrency? The answer lies in our ability to adapt. After all, with great power comes great responsibility—and opportunity.