China’s tech firms’ growing influence

Big tech competition heats up

Chinese big tech companies have emerged as some of the most influential players in the global technology landscape, driving innovation and shaping industries across the board. These companies are deeply entrenched in everyday life in China, offering a wide range of services and products that span e-commerce, social media, gaming, cloud computing, AI, and telecommunications. Their influence is not confined to China: they also play a significant role in global markets, often competing directly with US tech giants.

The rivalry between China and the US has become one of the defining geopolitical struggles of the 21st century. The relationship oscillates between cooperation, fierce competition, and confrontation, shaped by regulatory policies, national security concerns, and shifting political priorities. The geopolitical pendulum of China-US tech relations, largely independent of the US election outcome, reflects the broader tensions between the two powers, with profound implications for global tech industries, innovation, and market dynamics.

China’s access to US technology will face further restrictions after the election.

The Golden Shield Project

In 2000, under Chairman Jiang Zemin’s leadership, China launched the Golden Shield Project to control media and information flow within the country. The initiative aimed to safeguard national security and restrict the influence of Western propaganda. As part of the Golden Shield, many American tech giants such as Google, Facebook, and Netflix were blocked by the Great Firewall for not complying with China’s data regulations, while companies like Microsoft and LinkedIn were allowed to operate.


At the same time, China’s internet user base grew dramatically, reaching 800 million netizens by 2018, with 98% using mobile devices. This rapid expansion provided a fertile ground for Chinese tech firms, which thrived without significant competition from foreign players. Among the earliest beneficiaries of this system were the BATX companies, which capitalised on China’s evolving internet landscape and rapidly established a dominant presence in the market.

The powerhouses of Chinese tech

The major Chinese tech companies, often referred to as the Big Tech of China, include Alibaba Group, Tencent, Baidu, ByteDance, Huawei, Xiaomi, JD.com, Meituan, Pinduoduo, and Didi Chuxing.


Alibaba Group is a global e-commerce and technology conglomerate, operating platforms such as Taobao and Tmall for e-commerce, AliExpress for international retail, and Alipay for digital payments. The company also has significant investments in cloud computing with Alibaba Cloud and logistics.

Tencent, a massive tech conglomerate, is known for its social media and entertainment services. It owns WeChat, a widely used messaging app that offers payment services, social media features, and more. Tencent also invests heavily in gaming, owning Riot Games outright and holding stakes in Epic Games and Activision Blizzard, as well as interests in financial services and cloud computing.

Baidu, often called China’s Google, is a leading search engine provider. In addition to its search services, Baidu has a strong presence in AI development, autonomous driving, and cloud computing, particularly focusing on natural language processing and autonomous vehicles.

ByteDance, the company behind TikTok, has made a name for itself in short-form video content and AI-driven platforms. It also operates Douyin, the Chinese version of TikTok, along with Toutiao, a popular news aggregation platform. ByteDance has expanded into gaming, e-commerce, and other AI technologies.

Huawei is a global leader in telecommunications equipment and consumer electronics, particularly smartphones and 5G infrastructure. The company is deeply involved in cloud computing and AI, despite facing significant geopolitical challenges.

Xiaomi is a leading smartphone manufacturer that also produces smart home devices, wearables, and a wide range of consumer electronics. The company is growing rapidly in the Internet of Things (IoT) space and AI-driven products.

JD.com, one of China’s largest e-commerce platforms, operates similarly to Alibaba, focusing on direct sales, logistics, and tech solutions. JD.com has also made significant strides in robotics, AI, and logistics technology.

Meituan is best known for its food delivery and local services platform, offering everything from restaurant reservations to hotel bookings. The company also operates in sectors like bike-sharing, travel, and ride-hailing.

Pinduoduo has rapidly grown in e-commerce by focusing on group buying and social commerce, particularly targeting lower-tier cities and rural markets in China. The platform offers discounted products to users who buy in groups.

Didi Chuxing is China’s dominant ride-hailing service, offering transportation services such as ride-hailing, car rentals, and autonomous driving technology.

But what are the BATX companies we mentioned earlier?

BATX

The term BATX refers to a group of the four dominant Chinese tech companies: Baidu, Alibaba, Tencent, and Xiaomi. These companies are central to China’s technology landscape and are often compared to the US ‘FAANG’ group (Facebook, Apple, Amazon, Netflix, Google) because of their major influence across a range of industries, including e-commerce, search engines, social media, gaming, AI, and telecommunications. Together, BATX companies are key players in shaping China’s tech ecosystem and have a significant impact on global markets.


China’s strategy for tech growth

China’s technology development strategy has proven effective in propelling the country to the forefront of several high-tech industries. This ambitious approach, which involves broad investments across both large state-owned enterprises and smaller private startups, has fostered significant innovation and created a competitive business environment. As a result, it has the potential to serve as a model for other countries looking to stimulate tech growth.

A key driver of China’s success is its diverse investment strategy, supported by government-led initiatives like ‘Made in China 2025’ and the ‘Thousand Talents Plan’. These programs offer financial backing and attract top talent from around the globe. This inclusive approach has helped China rapidly emerge as a global leader in fields like AI, robotics, and semiconductors. However, critics argue that the strategy may be overly aggressive, potentially stifling competition and innovation.


Some have raised concerns that China’s government support unfairly favours domestic companies, providing subsidies and other advantages that foreign competitors do not receive. Yet, this type of protectionist approach is not unique to China; other countries have implemented similar strategies to foster the growth of their own industries.

Another critique is that China’s broad investment model may encourage risky ventures and the subsidising of failures, potentially leading to a market that is oversaturated with unprofitable businesses. While this criticism holds merit in some cases, the overall success of China’s strategy in cultivating a dynamic and competitive tech landscape remains evident.

Looking ahead, China’s technology development strategy is likely to continue evolving. As the country strengthens its position on the global stage, it may become more selective in its investments, focusing on firms with the potential for global leadership.

In any case, China’s strategy has shown it can drive innovation and foster growth. Other nations hoping to advance their technological sectors should take note of this model and consider implementing similar policies to enhance their own competitive and innovative business environments.

But under what regulatory framework does Chinese tech policy ultimately operate? How does that framework shape the sector’s development? And does the tight state grip have negative effects?

China’s regulatory pyramid: Balancing control and consequences

China’s regulatory approach to its booming tech sector is defined by a precarious balance of authority, enforcement, and market response. Angela Zhang, author of High Wire: How China Regulates Big Tech and Governs Its Economy, proposes a “dynamic pyramid model” to explain the system’s intricate dynamics. This model highlights three key features: hierarchy, volatility, and fragility.


The top-down structure of China’s regulatory system is a hallmark of its hierarchy. Regulatory agencies act based on directives from centralised leadership, creating a paradox. In the absence of clear signals, agencies exhibit inaction, allowing industries to flourish unchecked. Conversely, when leadership calls for stricter oversight, regulators often overreach. A prime example of this is the drastic shift in 2020 when China moved from years of leniency toward its tech giants to implementing sweeping crackdowns on firms like Alibaba and Tencent.

This erratic enforcement underscores the volatility of the system. Chinese tech regulation is characterised by cycles of lax oversight followed by abrupt crackdowns, driven by shifts in political priorities. The 2020–2022 crackdown, which involved antitrust investigations and record-breaking fines, sent shockwaves through markets, wiping out billions in market value. While the government eased its stance in 2022, the uncertainty created by such pendulum swings has left investors wary, with many viewing the Chinese market as unpredictable and risky.

Despite its intentions to address pressing issues like antitrust violations and data security, China’s heavy-handed regulatory approach often results in fragility. Rapid interventions can undermine confidence, stifle innovation, and damage the very sectors the government seeks to strengthen. Years of lax oversight exacerbate challenges, leaving regulators with steep issues to address and markets vulnerable to overcorrection.

This model offers a lens into the broader governance dynamics in China. The system’s centralised control and reactive policies aim to maintain stability but often generate unintended economic consequences. As Chinese tech firms look to expand overseas amid domestic challenges, the long-term impact of these regulatory cycles remains uncertain, potentially influencing China’s ability to compete on the global stage.

The battle for tech supremacy between the USA and China

The incoming US President Donald Trump is expected to adopt a more aggressive, unilateral approach to counter China’s technological growth, drawing on his history of quick, broad measures such as tariffs. Under his leadership, the USA is likely to expand export controls and impose tougher sanctions on Chinese tech firms. Trump’s advisors predict a significant push to add more companies to the US Entity List, which restricts US firms from selling to blacklisted companies. His administration might focus on using tariffs (potentially up to 60% on Chinese imports) and export controls to pressure China, even if it strains relations with international allies.


The escalating tensions have been further complicated by China’s retaliatory actions. In response to US export controls, China has targeted American companies like Micron Technology and imposed its own restrictions on essential materials for chipmaking and electric vehicle production. These moves highlight the interconnectedness of both economies, with the US still reliant on China for critical resources such as rare earth elements, which are vital for both technology and defence.

This intensifying technological conflict reflects broader concerns over data security, military dominance, and leadership in AI and semiconductors. As both nations aim to protect their strategic interests, the tech war is set to continue evolving, with major consequences for global supply chains, innovation, and the international balance of power in technology.

UN Cybercrime Convention: What does it mean and how will it impact all of us?

After three years of negotiations on a process initiated by Russia in 2017, the UN member states at the Ad Hoc Committee (AHC) adopted the draft of the first globally binding legal instrument on cybercrime. This convention will be presented to the UN General Assembly for formal adoption later this year. The Chair emphasised that the convention represents a criminal justice legal instrument and that the aim is to combat cybercrime by prohibiting certain behaviours by natural persons rather than to regulate the behaviour of member states.

The convention’s adoption has proceeded despite significant opposition from human rights groups, civil society, and technology companies, who had raised concerns about the potential risks of increased surveillance. In July, DiploFoundation invited experts from various stakeholder groups to discuss their expectations before the final round of UN negotiations and to review the draft treaty. Experts noted an unprecedented alignment between industry and civil society on concerns with the draft, emphasising the urgent need for a treaty focused on core cybercrime offences, strengthened by robust safeguards and precise intent requirements.

Once formally adopted, how will the UN Cybercrime Convention (hereinafter ‘UN Convention’) impact the security of users in the cyber environment? What does this legal instrument actually state about cross-border cooperation in combating cybercrime? What human rights protections and safeguards does it provide?

We invited experts representing the participating delegations in these negotiations to provide us with a better understanding of the agreed draft convention and its practical implications for all of us. 

Below, we’re sharing the main takeaways, and if you wish to watch the entire discussion, please follow this link.

Overview of the treaty: What would change once the UN Convention comes into effect?

Irene Grohsmann, Political Affairs Officer, Arms Control, Disarmament and Cybersecurity at the Federal Department of Foreign Affairs FDFA (Switzerland), began by outlining what will change once the convention comes into force.

‘The Convention will be new in a sense that it provides a legal basis for the first time at UN level for states to request mutual legal assistance from each other and other cooperation measures to fight cybercrime. It will also provide, for the first time, a global legal basis for further harmonisation of criminal legal provisions, regarding cybercrime, between those future states parties to the convention.’

Irene Grohsmann, Political Affairs Officer, Arms Control, Disarmament and Cybersecurity at the Federal Department of Foreign Affairs FDFA (Switzerland)

At the same time, as Irene mentioned, much will remain the same: the convention does not alter the currently applicable standards (such as data protection and human rights safeguards) for fighting cybercrime in the context of law enforcement or cooperation measures. The new UN Convention upholds those existing standards rather than changing them.

UN Convention vs. the existing instruments: How would they co-exist?

Irene recalled that the UN Convention largely relies on, and was particularly inspired by, the Budapest Convention; it will therefore not exclude the application of other existing international or regional instruments, nor will it take precedence over them. It will instead exist side by side with other relevant legal frameworks, as explicitly stated in the convention’s preamble and Article 60. Furthermore, regional conventions are typically more concrete and thus remain highly relevant in combating cybercrime. Irene noted that when states are parties to both a regional convention and the UN Convention, they can opt for the regional one if it offers a more specific basis for cooperation. When states have ratified multiple conventions, they use key principles, such as specificity and favourability, to decide which to apply.

Andrew Owusu-Agyemang, Deputy Manager at the Cyber Security Authority (Ghana), agreed with Irene, highlighting the Malabo Convention’s specific provisions on data protection, cybersecurity, and national cybersecurity policy. Andrew noted that the Budapest Convention complements Malabo by covering procedural powers and international cooperation gaps, benefiting parties like Ghana, a member of both. The novelty in the UN Cybercrime Convention, however, is that the text introduces the criminalisation of the non-consensual dissemination of intimate images. Together, these instruments are complementary, each filling gaps left by the others.

‘All these treaties can coexist because they are complementary in nature and do not polarize each other. However, the novelty in the UN Cybercrime Convention is that it introduces the criminalization of the non-consensual dissemination of intimate images.’

Andrew Owusu-Agyemang, Deputy Manager at the Cyber Security Authority (Ghana)

Cross-border cooperation and access to electronic evidence: What does the UN Convention say about this, including Article 27?

Catalina Vera Toro, Alternate Representative, Permanent Mission of Chile to the OAS, Ministry of Foreign Affairs (Chile), addressed how the UN Cybercrime Convention, particularly Article 27, handles cross-border cooperation for accessing electronic evidence, allowing states to compel individuals to produce data stored domestically or abroad if they have access to it. However, this raises concerns over accessing data across borders without the host country’s consent—a contentious issue in cybercrime. The Convention emphasises state sovereignty and encourages cooperation through mutual legal assistance rather than unilateral actions, advising states to request data access through established frameworks. While Article 27 allows states to order individuals within their borders to provide electronic data, it does not provide for unilateral cross-border data access without the consent of the other state involved.

‘The fact that we have a convention is also a positive note on what diplomacy and multilateralism can achieve. This convention helps bridge gaps between existing agreements and brings in new countries that are not part of those instruments, making it an instrumental tool for addressing cybercrime. That’s another positive aspect to consider.’

Catalina Vera Toro, Alternate Representative, Permanent Mission of Chile to the OAS, Ministry of Foreign Affairs (Chile)

Catalina noted that this approach balances effective law enforcement with respect for sovereignty. Unlike the Budapest Convention, which raised sovereignty concerns, the UN Convention emphasises cooperation to address these fears. While some states worry it may bypass formal processes, the Convention’s focus on mutual assistance aims to respect jurisdictions while enabling cybercrime cooperation.

Briony Daley Whitworth, Assistant Secretary, Cyber Affairs & Critical Technology Branch, Department of Foreign Affairs and Trade (Australia), commented on the placement of this article in the convention: it pertains to law enforcement powers for investigating cybercrime within a state’s territory, distinct from cross-border data sharing. The article must be read alongside the jurisdiction chapter, which outlines the treaty’s provisions for investigating cybercrimes, including those linked to the territory of each state party. The sovereignty provisions set limits on enforcement powers, dictating where they apply. The article also includes procedural safeguards for data submission requests, such as judicial review. Importantly, ‘specified electronic data’ must be clarified: it covers data on personal devices as well as data controlled but not possessed by individuals, such as cloud-stored information. Legal entities, not just individuals, may be involved; for example, law enforcement would need to request data from a provider like Google rather than from the user. Briony highlighted that this framework in the UN Convention drew heavily on the Budapest Convention and stressed the importance of examining its existing interpretations, used by more than 76 countries, to guide how Article 27 might be applied, reinforcing that cross-border data access requires the knowledge of the state involved.

Does the convention clarify how individuals and entities can challenge data requests from law enforcement? Briony emphasised the need for clear conditions and safeguards, noting that the convention requires compliance with international human rights laws and domestic review mechanisms. Individuals can challenge orders through judicial review, and law enforcement must justify warrants with scope, duration, and target limitations. However, Briony cautioned that the treaty’s high-level language relies on countries implementing these safeguards domestically. Catalina added that the convention’s protections work best as an integrated framework, noting that countries with strong checks and balances, like Chile, already offer resources for individual rights protection.

‘Human rights protections were really at the forefront of a lot of the negotiations over the last couple of years. We managed to set a uniquely high bar in the general provisions on human rights protections for a UN convention, particularly a criminal convention. This convention not only affirms that human rights apply but also states that nothing in it can be interpreted to permit the suppression of human rights. Additionally, it includes an article on the protection of personal data during international transfers, which is rare for a UN crime convention. Objectively, this convention offers more numerous and robust safeguards than other UN conventions. One of our priorities was ensuring that this convention does not legitimise bad actions. While we cannot stop bad actors, we can ensure that this convention helps combat their actions without legitimising them, which we have largely achieved through the human rights protections.’

Briony Daley Whitworth, Assistant Secretary, Cyber Affairs & Critical Technology Branch, Department of Foreign Affairs and Trade (Australia)

How does the UN Convention define and protect ‘electronic data’?

Catalina noted that defining ‘electronic data’ was challenging throughout negotiations, with interpretations varying based on a country’s governance, which impacts legal frameworks and human rights protections. The convention defines electronic data broadly, covering all types of data stored in digital services, including personal documents, photos, and notes – regardless of whether that data has been communicated to anyone. Importantly, accessing electronic data generally has a lower threshold than accessing content or traffic data, which have more specific definitions within the convention.

This broader definition enables states to request access to electronic data, even if it contains private information intended to remain confidential. However, Catalina emphasised that domestic legal frameworks and other provisions within the convention are designed to protect human rights and safeguard individual privacy. 

Briony also clarified that ‘electronic data’ specifically refers to stored data, not actively communicated data. States differentiate electronic data from subscriber, traffic, and content data related to network communications. This definition is based on the Budapest Convention’s terminology for computer data, allowing for a wider interpretation of the types of data involved. She also emphasised that the UN Convention establishes a high standard for human rights protections, affirming their applicability and stating that it should not be interpreted to suppress rights. It includes provisions for protecting personal data during international transfers, reinforcing its commitment to human rights in electronic data contexts. However, Briony added that the convention has some flaws, noting that Australia wishes certain elements had been more thoroughly addressed. Nonetheless, the UN Convention is a foundational framework for building trust among states to combat cybercrime effectively while balancing human rights commitments.

Technology transfer: What are the main takeaways from the convention to facilitate capacity building?

Andrew highlighted that technical assistance and capacity development are fundamental to effectively implementing this convention. The UN Cybercrime Treaty lays a robust foundation for technical assistance and capacity development, offering practical mechanisms such as MOUs, personnel exchanges, and collaborative events to strengthen countries’ capacities in their fight against cybercrime. The convention’s technical assistance chapter encourages parties to enter multilateral or bilateral agreements to implement relevant provisions. These MOUs, in particular, can facilitate the development of the capacities of law enforcement agencies, judges, and prosecutors, ensuring that cybercrime is prosecuted effectively.

Implementation and additional protocols: Which mechanisms does the draft convention include for keeping up to date with the pace of technological developments?

Irene clarified that, although the UN Convention has been adopted at the AHC, some topics need further discussion among member states. Due to time constraints, these discussions were postponed, including which crimes should be included in the criminalisation chapter. Some states, like Switzerland, prefer a focused list of cyber-dependent crimes, while others advocate for a broader inclusion of both cyber-dependent and cyber-enabled crimes. Irene noted that resource considerations influence Switzerland’s perspective, emphasising the need to focus on ratification and implementation rather than dividing resources with a supplementary protocol. While a supplementary protocol will need discussion in the future, there is still time to determine its content or negotiation topics.

Irene emphasised that the convention uses technology-neutral language to keep the text up-to-date with technological developments, allowing it to focus on behaviour rather than specific technologies, similar to the successful Budapest Convention. Adopted in 2001, the Budapest Convention has remained relevant for over two decades, and we hope for the same with the UN Convention. Additionally, the convention allows for future amendments; once in force and the Conference of States Parties is established, member states can address any coverage inadequacies and consider amendments five years after implementation.

Ambassador Asoke Mukerji, India’s former ambassador to the United Nations in New York, who chaired India’s national multiple-stakeholder group on recommending cyber norms for India in 2018, noted that, despite initial scepticism about the feasibility of such a framework, the current momentum demonstrates that, with trust and commitment, it is possible to establish international agreements addressing cybercrime. He also praised the effectiveness of multistakeholder participation in addressing the evolving challenges in cyberspace. However, Ambassador Mukerji cautioned about challenges regarding technology transfer, referring to recent statements at the UN General Assembly that could restrict such efforts. He expressed hope that developing countries would receive the necessary flexibility to negotiate favourable terms.

‘The negotiations took place against a very difficult global environment, and our participation from India proved to be useful. It demonstrated that countries, committed to a functional multilateral system, can benefit from it, impacting our objectives of international cooperation. Additionally, the process highlighted the effectiveness of multistakeholder participation in cyberspace. The convention and its negotiation process validate our choice to use this model to address the new challenges facing multilateralism.’

Ambassador Asoke Mukerji, India’s former ambassador to the United Nations in New York

Concluding remarks

The panellists unanimously highlighted the indispensable role of human rights standards, emphasising that any practical international cooperation against cybercrime must prioritise these principles. Briony also pointed out that the increasingly complex cyber threat landscape demands a collective response to enhance cybersecurity resilience and capabilities. The treaty’s significant achievements, including protections against child exploitation and the non-consensual dissemination of intimate images, reflect a commitment to safeguarding both victims’ and offenders’ rights. Catalina highlighted that certain types of crimes, such as gender-based violence, were also included in the text, and this is another significant achievement.

All experts also agreed that the active involvement of civil society, NGOs, and the private sector is vital for ensuring that diverse expertise contributes meaningfully to the ratification and implementation processes. Public-private partnerships were specifically mentioned as essential for fostering collaboration in cybercrime prevention. Ultimately, the success of the Convention lies not only in its provisions but also in the collaborative spirit that must underpin its implementation. By working together, stakeholders can create a safer and more secure cyberspace for all.

We at Diplo invite you all to re-watch the online expert discussion and engage in a broader conversation about the impacts of this negotiation process. In the meantime, stay tuned! We’ll continue to provide updates and analysis on the UN Cybercrime Convention and related processes.

Trump vs Harris: The tech industry’s pivotal role in 2024

US Presidential elections

As the 5 November US presidential election approaches, all eyes are on the tight race between former President Donald Trump and current Vice President Kamala Harris. Polls show the candidates are neck and neck, making voter mobilisation critical for both sides. In this high-stakes environment, the backing of major business groups could be a game changer, with influential figures like Elon Musk stepping into the spotlight.


Musk, the owner of X and one of the world’s wealthiest individuals, has recently rallied support for Trump’s campaign, highlighting the significant role that Big Tech, particularly the so-called ‘Magnificent Seven’, could play in determining the election’s outcome. As both candidates vie for the favour of corporate America, their strategies will likely reflect the growing influence of these business leaders in shaping public policy and voter sentiment.

The Magnificent Seven

The term ‘Magnificent Seven‘ originated with the 1960 Western film The Magnificent Seven, directed by John Sturges. The film follows a group of seven gunslingers, led by Yul Brynner and Steve McQueen, who are hired to protect a Mexican village from bandits. Its legacy spans sequels, a remake in 2016, and cultural resonance, especially for themes of bravery and teamwork.

In finance, The Magnificent Seven is a group of large American tech companies – Apple, Microsoft, Amazon, Nvidia, Meta Platforms, Tesla, and Alphabet. These companies are celebrated for their significant impact on consumer habits, influence over technological advancements, and dominance in the stock market. Holding immense weight in indices like the S&P 500 and NASDAQ, they are seen as critical drivers of market growth and key indicators of economic trends in areas like AI, e-commerce, and social media.

So, it’s quite understandable why the support of these tech giants might be the key to Trump or Harris winning their contested electoral duel.

Trump and tech executives

Top executives from major tech companies are increasingly reaching out to Donald Trump as the presidential election approaches. With polls showing a tight race between Trump and Vice President Kamala Harris, figures like Apple CEO Tim Cook and Amazon CEO Andy Jassy have initiated conversations with the former president. Even Mark Zuckerberg has expressed admiration for Trump following the assassination attempt on Trump. This shift comes after a tumultuous relationship marked by Facebook’s ban on Trump following the 6 January Capitol riot, a ban that was lifted in 2023.

Trump noted on the Barstool Sports podcast that he appreciates Zuckerberg’s current approach, emphasising that Zuckerberg is staying out of the election. Meta has taken steps to reduce political content on its platforms, including changes to Instagram that limit political recommendations unless users opt-in. Zuckerberg has also stated that he will not endorse any candidates in the 2024 election and plans to avoid significant political engagement. Despite their past conflicts, including Trump’s characterisation of Facebook as an ‘enemy of the people,’ Zuckerberg praised Trump’s resilient response to a recent assassination attempt, calling it ‘badass.’

This comment reflects a complicated dynamic between the two, as Trump claimed Zuckerberg had expressed difficulty in voting for a Democrat in the upcoming election. However, Meta denied this, reiterating that Zuckerberg has not indicated how he intends to vote, nor endorsed anyone in the race.

Elon Musk’s relationship with Donald Trump has seen various phases, reflecting both support and criticism over the past years. Just two years ago, Musk voiced his disapproval of the former president, tweeting in 2022 that it was ‘time for Trump to hang up his hat & sail into the sunset.’ This tweet was in response to Trump publicly calling Musk a liar, accusing him of not being truthful about who he had voted for in past elections. Trump even doubted Musk’s then-pending purchase of Twitter, quipping to a rally crowd, ‘Elon is not going to buy Twitter.’ Of course, Musk did end up buying the platform, now called X, and has since made headlines for his shifting political alliances and increasingly public alignment with issues central to Trump’s campaign.

Musk’s stance on US politics was historically more progressive, with nearly exclusive support for Democrats. However, his views on President Biden have notably soured, particularly over unionisation efforts and Biden’s perceived lack of recognition of Tesla’s achievements. Notably, Tesla was not invited to Biden’s 2021 White House electric vehicle summit, despite its status as a major EV manufacturer. Musk’s frustration only grew as his companies faced federal investigations under the Biden administration, including scrutiny of Tesla’s autopilot feature and of his controversial acquisition of Twitter. By 2023, Musk was openly expressing dissatisfaction with the Biden administration, stopping short of endorsing Trump but hinting at his disapproval.

Since taking over Twitter, Musk has shifted noticeably to the right, aligning with Trump on issues like government censorship and criticisms of ‘woke’ ideology. He has lifted Trump’s previous ban on Twitter and frequently shares opinions that echo Trump’s base, from distrust of the media to concerns about unchecked immigration. Political analyst Ryan Broderick suggests that Musk’s stance has transformed drastically since 2018, noting that his earlier, more liberal ‘neoliberal, happy-go-lucky’ messages have given way to tweets that often appeal to the far-right, drawing criticism and sparking debates across the platform.

Trump has responded to this shift with a warmer stance toward Musk. Recently, he praised Musk at a news conference, lauding his patriotism and mutual concern for the country. Musk also seems to have cemented his support for Trump, especially after publicly endorsing him and calling for his recovery following the assassination attempt.

Additionally, Musk has committed $100 million to support Trump, and now, in a move stirring debate, he’s offering $1 million a day to selected voters who sign a petition supporting the First and Second Amendments. This campaign, led by Musk’s America PAC, is focused on registering Trump supporters and has been actively promoting the initiative in Pennsylvania, a key battleground state.

Musk’s financial support and giveaway campaign have raised concerns among election law experts. The PAC requires participants to be registered voters to be eligible for the million-dollar cheque, which some experts say may cross legal lines. UCLA Law professor Rick Hasen noted that while it is legal to pay people to sign petitions, tying eligibility to voter registration could violate laws against incentivising voter registration.

Kamala Harris and Silicon Valley

On the other hand, Kamala Harris’s presidential campaign has also garnered substantial support from Silicon Valley’s elite, signalling a strong connection between her candidacy and tech industry leaders. Harris’s relationship with Silicon Valley extends back over a decade, partly attributed to her tenure as California’s attorney general and her subsequent role as a US senator. This long-standing connection has led many tech leaders to believe she might adopt a friendlier stance towards the industry than the Biden administration. Notable figures like former Facebook COO Sheryl Sandberg, LinkedIn co-founder Reid Hoffman, philanthropist Melinda French Gates, and IAC chair Barry Diller are among those supporting Harris, and billionaire Laurene Powell Jobs, Steve Jobs’ widow, has been a close ally since 2013, when she hosted a fundraiser for Harris.

Beyond billionaires, Harris has also drawn support from a broad base of venture capitalists and tech workers. Employees at Alphabet, Amazon, and Microsoft have collectively contributed over $3 million to her campaign. Alphabet workers alone have donated $2.16 million, nearly 40 times their contribution to Trump. Amazon and Microsoft employees have also shown a strong preference for Harris, with their donations amounting to ten and twelve times those to Trump, respectively. While Meta and Apple employees have not reached the $1 million mark in contributions, their support for Harris also far exceeds what they have given to Trump.

Over 800 VCs have signed a ‘VCs For Kamala’ pledge, and a separate Tech4Kamala letter has gathered more than 1,200 signatures. Among her backers is Steve Spinner, a major Democratic fundraiser who has worked to consolidate Silicon Valley’s support behind Harris, arguing that the majority of the tech industry remains Democratic despite high-profile endorsements of Trump by figures like Elon Musk. Spinner emphasises that ‘for every one person who’s backing Trump, there’s 20 who are backing Kamala,’ dismissing pro-Trump tech figures as outliers in an overwhelmingly liberal industry.

However, this alignment is not without exceptions. David Marcus, former president of PayPal and CEO of the payment company Lightspark, has publicly shifted his allegiance from Democrats to Republicans, criticising what he sees as the Democratic leadership’s ‘hubris’ and its embrace of an ‘increasingly leftist ideology.’ His move underscores a divide within the tech sector, with some executives pulling away from a party they feel is distancing itself from the industry’s priorities.

Tech firms under scrutiny

A key point of focus is the regulatory scrutiny that Big Tech faces under President Joe Biden’s administration, specifically targeting companies like Apple and Google. Biden’s Department of Justice (DOJ) has pursued antitrust actions, arguing that Apple manipulates the smartphone market to limit competition and that Google’s practices resemble those of the AT&T monopoly that was dismantled in the 1980s. This intense scrutiny has created uncertainty for the tech giants, who face regulatory challenges both at home and abroad, including significant penalties imposed by the EU: $14.4 billion for Apple and $2.6 billion for Google.

In older statements, Trump expressed dissatisfaction with Google’s treatment of him, previously calling for maximum-level prosecution against the company for alleged bias. However, he recently noted a shift in Google’s stance, commenting that they appear ‘more inclined’ to support him.

He also mentioned discussing Apple’s European tax rulings with CEO Tim Cook, implying that such regulatory issues would be addressed more favourably under his leadership, and has hinted that regulatory hurdles for Big Tech might lessen if he is re-elected.

Trump’s tech policy

Donald Trump’s vision for tech policy includes reducing regulatory barriers to foster innovation and growth. Trump has expressed concern over what he sees as ‘illegal censorship’ by Big Tech, particularly social media platforms, which he claims display bias against conservative viewpoints. The Trump administration previously pursued antitrust actions against tech giants like Google and Meta, and he remains critical of companies he believes unfairly limit free speech online.

Trump favours a hands-off approach to AI and cryptocurrencies, arguing that these industries should be allowed time to develop without heavy government oversight. His policies suggest he would scale back initiatives such as the push for electric vehicles and roll back consumer protections implemented under the Biden administration. Trump’s tech policy largely reflects a belief that the market will regulate itself and that minimising government intervention will drive US competitiveness on the world stage. He has also promised favourable policies such as corporate tax cuts.

In general, Trump’s rhetoric suggests a friendlier approach to tech giants, framing his administration as one that would ‘set free’ companies burdened by regulation. This would represent a significant departure from Biden’s approach, which could lead to more extensive oversight, adding another layer of importance to the election’s outcome for these powerful tech companies.

Harris’s point of view

On the contrary, Kamala Harris was appointed by Biden as the AI czar, tasked with strengthening the regulation of AI technology as outlined in his executive order. During her tenure in this role, Harris collaborated with leaders from major tech firms like OpenAI, Microsoft, Alphabet, and Anthropic, emphasising a commitment to prioritising safety over corporate profits. She voiced concerns at the Global Summit on AI Safety last year, asserting that without robust government oversight, tech companies often prioritise profit at the expense of public well-being and democratic stability.

Harris’s approach has also encompassed data privacy and bias protection, with advocacy for legislation to mitigate potential harms associated with AI and emerging digital platforms.

A major achievement of the Biden-Harris administration is the CHIPS and Science Act of 2022, which invested in American semiconductor production and tech research and development. This legislation supports clean energy projects and green tech, aiming to secure the country’s tech independence and strengthen national security by bringing more tech manufacturing stateside. Harris’s policies have targeted consumer protection against data misuse and online misinformation, echoing the administration’s interest in strengthening net neutrality and advocating for clearer data privacy laws.

In that sense, experts predict that Harris will largely continue Biden’s current regulatory framework on technology and AI, with only minor adjustments.

However, Harris’s policy positions, particularly on issues crucial to the tech industry such as tax reform, immigration, and antitrust enforcement, remain largely unarticulated, prompting Silicon Valley to tread carefully. Although Harris’s long history in California politics has earned her a base of goodwill, her campaign must address these policy uncertainties to secure substantial financial and strategic backing from an industry navigating the political flux. This balancing act is particularly challenging as she vies to retain traditional Democratic support without alienating a tech sector that remains cautious in light of growing regulatory pressures under the Biden administration.

The future of the tech sector

In conclusion, as technology continues to shape the economy, both candidates’ policies reflect the broader economic vision they hope to achieve. Harris envisions an inclusive, equitable tech landscape where consumer protection and innovation go hand-in-hand, while Trump’s policies prioritise a market-driven model that incentivises growth with minimal intervention. These differences underscore the fundamental contrast in their governance styles and philosophies regarding the role of government in technology.

Ultimately, the next president’s approach to technology will play a crucial role in determining how Americans interact with the digital world, work in an AI-driven economy, and navigate issues of privacy and digital citizenship. As the candidates refine their platforms, voters will face a choice between competing visions of how to guide the nation through a transformative era in technology and innovation.

Just-in-time reporting from the UN Security Council: Leveraging AI for diplomatic insight

On 21 and 24 October, DiploFoundation provided just-in-time reporting from the UN Security Council sessions on scientific development and on women, peace, and security. Supported by Switzerland, this initiative aims to enhance the work of the UN Security Council and the broader UN system.

At the core of this effort is DiploAI, an advanced platform shaped by years of training on UN materials, which played a crucial role in unlocking the knowledge generated by the Security Council’s deliberations. This knowledge, often trapped in video recordings and transcripts, is now more accessible, providing valuable insights for diplomacy and global peace.

Unlocking the power of AI for peace and security

AI-supported reporting from the UN Security Council (UNSC) demonstrates the potential of combining cutting-edge technology with deep expertise in peace and security. This effort is part of ongoing work by DiploAI, which has been providing detailed reports on Security Council sessions in 2023-2024 and has covered the UN General Assembly (UNGA) for eight consecutive years. DiploAI is actively contributing to expanding the UN’s knowledge ecosystem.

Seamless interplay between experts and AI

The success of this initiative lies in the seamless interplay between DiploAI and security experts well-versed in UNSC procedures. The collaboration began with tailoring the AI system to the unique needs of the Council, using input from experts and diplomats to build a relevant knowledge base. Experts supplied key documents and session materials, which enhanced the AI’s contextual understanding. Feedback loops on keywords, topics, and focus areas ensured the AI’s output remained both accurate and diplomatically relevant.

A pivotal moment in this collaboration was the analysis of the New Agenda for Peace, where Security Council experts helped DiploAI identify over 400 critical topics, laying the foundation for a comprehensive taxonomy on peace and security at the UN. This expertise, combined with DiploAI’s technical capabilities, has resulted in an AI system attuned to the subtleties of diplomatic language and priorities. Furthermore, the project introduced a Knowledge Graph, a visual tool for displaying sentiment and relational analysis between statements and topics, which adds new depth to the analysis of Council sessions.
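The idea of a graph linking statements to topics with sentiment attached can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not DiploAI's actual implementation: all class names, statement identifiers, and sentiment values are invented for illustration. Each edge records that a statement addresses a topic with a sentiment score, which makes aggregate sentiment per topic straightforward to compute.

```python
# Minimal sketch of a statement-topic knowledge graph (illustrative only,
# not DiploAI's real implementation).
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # edges[statement_id] -> list of (topic, sentiment) pairs
        self.edges = defaultdict(list)

    def link(self, statement_id, topic, sentiment):
        """Record that a statement addresses a topic with a sentiment
        score between -1.0 (negative) and +1.0 (positive)."""
        self.edges[statement_id].append((topic, sentiment))

    def topic_sentiment(self, topic):
        """Average sentiment across all statements mentioning the topic,
        or None if the topic never appears."""
        scores = [s for pairs in self.edges.values()
                  for t, s in pairs if t == topic]
        return sum(scores) / len(scores) if scores else None

# Hypothetical sample data: two statements on the same agenda item.
kg = KnowledgeGraph()
kg.link("statement-UK-2024-10-21", "women, peace and security", 0.6)
kg.link("statement-RU-2024-10-21", "women, peace and security", -0.2)
avg = kg.topic_sentiment("women, peace and security")  # mean of 0.6 and -0.2
```

A real system would of course derive the sentiment scores from the transcripts themselves; the point of the sketch is only the relational structure that makes cross-session comparison possible.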

Building on this foundation, DiploAI developed a custom chatbot capable of moving beyond standard Q&A interactions. By integrating data from all 2024 sessions and associated documents, the chatbot allows users to interact conversationally with the content, providing in-depth answers and real-time insights. This evolution marks a significant leap forward in accessing and understanding diplomatic data—shifting from static reports to interactive exploration of session materials.

AI and diplomatic sensitivities

The development of DiploAI’s Q&A module, refined through approximately ten iterations with feedback from UNSC experts, underscores the value of human-AI collaboration. This module addresses essential diplomatic questions, with iterative refinements ensuring that responses meet the Council’s standards for accuracy and relevance. The result is an AI system capable of addressing critical inquiries while respecting the sensitivity required in diplomatic settings.

What’s new?

DiploAI’s suite of tools, including real-time meeting transcription and analysis, has transformed reporting and transparency at the UNSC. By integrating customised AI systems like retrieval-augmented generation (RAG) and knowledge graphs, DiploAI adds context, depth, and relevance to the extracted information. Trained on a vast corpus of diplomatic knowledge generated at Diplo over the last two decades, the AI system generates context-specific responses, providing comprehensive answers to questions about transcribed sessions.
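The core idea of retrieval-augmented generation, grounding an answer in retrieved source passages rather than in the model's memory alone, can be sketched in a few lines. The snippet below is a deliberately simplified stand-in (production RAG systems typically rank passages with vector embeddings rather than keyword overlap), and all sample sentences and function names are invented for illustration.

```python
# Illustrative RAG retrieval step (not DiploAI's actual code): rank
# transcript chunks by word overlap with the question, then assemble a
# context-grounded prompt for a language model.
import re

def tokenize(text):
    """Lowercase the text and split it into a set of words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, chunks, k=2):
    """Return the k transcript chunks sharing the most words with the question."""
    q = tokenize(question)
    ranked = sorted(chunks, key=lambda c: len(q & tokenize(c)), reverse=True)
    return ranked[:k]

def build_prompt(question, chunks):
    """Join the retrieved chunks into a prompt that constrains the model
    to answer from the supplied context only."""
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical transcript fragments standing in for real session records.
transcript = [
    "The delegate of Switzerland stressed scientific cooperation for peace.",
    "The Council discussed the women, peace and security agenda.",
    "Procedural matters were deferred to the next session.",
]
prompt = build_prompt("What did Switzerland say about science?", transcript)
```

The generated prompt would then be passed to a language model, which is what lets the chatbot cite what was actually said in a session instead of producing an unanchored answer.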

Such an approach has enabled DiploAI to go beyond the simple transcription of panels’ dialogues, allowing diplomats and the public to access detailed transcripts, insightful reports, and an AI-powered chatbot, where they can obtain answers to questions related to the UNSC deliberations.

Key numbers from UN Security Council reports

Here are some numbers from 10 UNSC meetings that took place between January 2023 and October 2024: 

[Infographic: key figures from 10 UNSC meetings, January 2023 to October 2024]

In conclusion…

DiploAI’s reporting from the Security Council, supported by Switzerland, shows how AI can enhance diplomacy while staying grounded in human expertise and practical needs. This blend of technical capability and domain-specific knowledge demonstrates how AI, when developed collaboratively, can contribute to more inclusive, informed, and impactful diplomacy.  

Comparative analysis: the Budapest Convention vs the UN Convention Against Cybercrime

This summer, the UN Member States reached a milestone by agreeing on a draft for the organisation’s first-ever international convention against cybercrime. While this marks a significant step, it has raised many questions among those closely following cybercrime issues. One of the key concerns is how this new UN convention will coexist with current frameworks, particularly the Budapest Convention of the Council of Europe, which has been ratified by 76 countries, and is considered by the Council of Europe as the first international framework to address cybercrime. What distinguishes the UN convention from the Budapest Convention, and how will the two interact moving forward?

In this analysis, we closely look at different chapters of both conventions to highlight the similarities and differences between the two documents. 

 Art, Graphics, Advertisement, Poster, City, Text

Status and parties

The ‘United Nations Convention Against Cybercrime; strengthening international cooperation for combating certain crimes committed by means of information and communications technology systems and for the sharing of evidence in electronic form of serious crimes’, or simply the UN Convention, has not yet been formally adopted: while the draft was adopted by the Ad Hoc Committee by consensus, the text will be further considered by the General Assembly. Once formally adopted, the convention will come into force after ratification by 40 UN Member States.

The Convention on Cybercrime, or Budapest Convention, is a legally binding treaty established by a regional organisation, the Council of Europe. The Convention has been ratified by 76 states, including both members and non-members of the Council of Europe.

The Convention includes two protocols, developed and adopted over time. The first protocol, on xenophobia and racism committed via computer systems, was opened for signature in 2003. The second protocol, on enhanced cooperation and the disclosure of electronic evidence, was finalised in 2022 and has so far been ratified only by Serbia and Japan. To come into force, the second protocol requires five ratifications.

The difference in the parties that negotiated the two treaties should also be noted: all UN Member States versus the 46 member states of the Council of Europe.

Purposes & Scope 

While both the Budapest Convention and the UN Convention share the overarching goal (which is to address cybercrime), their scopes are not exactly the same. 

The Budapest Convention primarily focuses on the criminalisation of specific offences (e.g. illegal access, data/system interferences, computer-related fraud, child sexual abuse material), procedural powers to address cybercrime, and fostering international cooperation, by offering an advanced framework for cross-border access to electronic evidence (e-evidence). 

The UN Convention’s aim is broader and its approach more comprehensive: it emphasises the need to prevent and combat cybercrime by strengthening international cooperation and by providing technical assistance and capacity building, particularly for developing countries.

In view of scope, the UN Convention offers a broader institutional and global cooperation framework, while the Budapest Convention covers a wider and more specific range of criminal offences and procedural powers related to cybercrime.

Specifically, the Budapest Convention and its Second Protocol apply to e-evidence related to any criminal offence, while the UN Convention limits its scope to offences with a serious crime threshold, defined in the treaty as those punishable by a maximum deprivation of liberty of at least four years or a more serious penalty. 

At the same time, the UN Convention is broader by addressing a wider range of issues, including the protection of state sovereignty, preventive measures, and provisions for technical assistance and information exchange, thus extending beyond the criminalisation and procedural focus of the Budapest Convention.

Definitions 

To a large extent, the definitions in the Budapest Convention have been replicated in the UN Convention. However, there are some significant differences, particularly reflecting the broader scope of the UN Convention.

The UN Convention specifically uses the terms ‘ICT’ and ‘ICT systems’ instead of ‘computer’ or ‘computer systems,’ broadening its applicability to a wider range of devices and technologies. This language has been a key point of criticism. Notably, in articles like 23(2)(b) and (c), and 35(1)(c), the reference to ‘any criminal offense’ extends beyond cybercrime, potentially allowing the collection of data for any crime as defined by national laws, raising concerns about overreach and the scope of its application. It also uses ‘electronic data’ instead of ‘computer data’ (as the Budapest Convention does) to encompass all forms of electronic data.

Specifically, article 2 defines ‘electronic data’ as ‘any representation of facts, information or concepts in a form suitable for processing in an information and communications technology system, including a program suitable to cause an information and communications technology system to perform a function’, a definition civil society criticised as taking too broad an approach to the terminology. The UN Convention also explicitly introduces ‘content data’ and ‘serious crime’, terms that are mentioned but not defined in the Budapest Convention; this too drew criticism from civil society, since the definition of ‘serious offences’ is left to domestic law and will therefore vary from country to country.

Criminalisation 

The UN Convention is broader in scope than the Budapest Convention, as it criminalises additional forms of conduct. While some offences, like illegal access, are defined similarly in both conventions, the UN treaty expands the range of criminalised activities, addressing areas beyond the cyber-dependent crimes covered by the Budapest Convention, for instance by criminalising money laundering. The UN Convention also gives a broader scope to similar offences; this broader approach can be seen, for instance, in the provisions related to child sexual abuse material (below).

While article 9 of the Budapest Convention criminalises actions related to child sexual abuse material, article 15 of the UN Cybercrime Convention extends beyond content and addresses solicitation, grooming, or making arrangements for the purpose of committing sexual offences against children. It thus focuses more on preventing sexual offences from occurring by targeting preparatory actions (solicitation or grooming), not just the possession or distribution of illegal content. However, it is important to note that both instruments refer to content-based crimes, with criticism focusing on the risk that victims may face prosecution simply for possessing certain types of content, particularly when real-time data collection is involved. This raises concerns about how such provisions might be misused to target individuals rather than the perpetrators of the crimes.

Both the Budapest Convention and the UN Convention address the integration of child protection into domestic legislation. However, neither makes reference to the Optional Protocol to the Convention on the Rights of the Child on the sale of children, child prostitution, and child pornography, which has been ratified by 176 countries and already contains this obligation. While both instruments touch on other treaties, they fail to incorporate or cite them directly in their text. The Budapest Convention is somewhat more comprehensive in this respect, as it explicitly references human rights treaties.

Offences related to child pornography (Art 9), the Budapest Convention

Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the following conduct: producing child sexual abuse material for the purpose of its distribution through a computer system; offering or making available child sexual abuse material through a computer system; distributing or transmitting child sexual abuse material through a computer system; procuring child sexual abuse material through a computer system for oneself or for another person; possessing child sexual abuse material in a computer system or on a computer-data storage medium.




Solicitation or grooming for the purpose of committing a sexual offence against a child (Art 15), the UN Convention

1. Each State Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law the act of intentionally communicating, soliciting, grooming, or making any arrangement through an information and communications technology system for the purpose of committing a sexual offence against a child, as defined in domestic law, including for the commission of any of the offences established in accordance with article 14 of this Convention.

2. A State Party may require an act in furtherance of the conduct described in paragraph 1 of this article.

3. A State Party may consider extending criminalization in accordance with paragraph 1 of this article in relation to a person believed to be a child.

4. States Parties may take steps to exclude the criminalization of conduct as described in paragraph 1 of this article when committed by children.

The Budapest Convention doesn’t contain specific provisions for critical infrastructure protection, while the UN Convention specifically addresses the need to protect critical information infrastructures in article 21. At the same time, the UN Convention omits offences related to copyright infringement, which are included in the Budapest Convention. 

It should also be noted that the Budapest Convention integrates its criminalisation provisions across different sections (compared to the UN Convention) and is more focused on core cybercrime offences such as illegal access, data interference, and system interference. This structure reflects a narrower focus on crimes directly involving computer systems and data, without expanding into broader cyber-enabled crimes. 

Procedural powers 

The UN Convention (Articles 23-30) has a broader scope than the Budapest Convention (Articles 14-21), as it incorporates additional measures from UNCAC and UNTOC, such as provisions for the confiscation of crime proceeds (e.g. article 31) and witness protection (articles 33 and 34), which are not covered in the Budapest Convention.

However, the core procedural powers in the two conventions are largely similar. Both outline comparable conditions and safeguards, though the UN Convention has faced significant criticism from civil society for its reliance on domestic laws to establish how these safeguards are applied, which can vary widely across countries. This variation can lead to inadequate protections in states whose local laws do not meet high human rights standards. A similar concern has been raised in relation to the Budapest Convention and its protocols, for failing to provide specific procedural protections for privacy and freedom of expression.

Conditions and Safeguards (Art 15), the Budapest Convention

1. Each Party shall ensure that the establishment, implementation and application of the powers and procedures provided for in this Section are subject to conditions and safeguards provided for under its domestic law, which shall provide for the adequate protection of human rights and liberties, including rights arising pursuant to obligations it has undertaken under the 1950 Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms, the 1966 United Nations International Covenant on Civil and Political Rights, and other applicable international human rights instruments, and which shall incorporate the principle of proportionality.

2. Such conditions and safeguards shall, as appropriate in view of the nature of the procedure or power concerned, inter alia, include judicial or other independent supervision, grounds justifying application, and limitation of the scope and the duration of such power or procedure. 

3. To the extent that it is consistent with the public interest, in particular the sound administration of justice, each Party shall consider the impact of the powers and procedures in this section upon the rights, responsibilities, and legitimate interests of third parties.









Conditions and safeguards (Art 24), the UN Convention

1. Each State Party shall ensure that the establishment, implementation and application of the powers and procedures provided for in this chapter are subject to conditions and safeguards provided for under its domestic law, which shall provide for the protection of human rights, in accordance with its obligations under international human rights law, and which shall incorporate the principle of proportionality.

2. In accordance with and pursuant to the domestic law of each State Party, such conditions and safeguards shall, as appropriate in view of the nature of the procedure or power concerned, include, inter alia, judicial or other independent review, the right to an effective remedy, grounds justifying application, and limitation of the scope and the duration of such power or procedure.

3. To the extent that it is consistent with the public interest, in particular the proper administration of justice, each State Party shall consider the impact of the powers and procedures in this chapter upon the rights, responsibilities and legitimate interests of third parties. 

4. The conditions and safeguards established in accordance with this article shall apply at the domestic level to the powers and procedures set forth in this chapter, both for the purpose of domestic criminal investigations and proceedings and for the purpose of rendering international cooperation by the requested State Party. 

5. References to judicial or other independent review in paragraph 2 of this article are references to such review at the domestic level.

International cooperation 

Firstly, the Budapest Convention and its Second Protocol allow international cooperation for the collection of electronic evidence related to any criminal offence. This broad scope means that countries can assist each other in investigations involving crimes beyond cyber-related activities, as long as electronic evidence is involved. The Budapest Convention emphasises cross-border cooperation through established networks and mechanisms like 24/7 contact points.

The UN Convention limits its scope of international cooperation to ‘serious crimes’ as defined by the treaty. These are offences punishable by a maximum of at least four years of imprisonment or more. However, as previously noted, articles such as 23(2)(b) and (c), and 35(1)(c) broaden the scope by referencing ‘any criminal offence’.

Secondly, the Budapest Convention in its Second Protocol includes a broader list of advanced tools (e.g. emergency mutual assistance in article 10 and video conferencing in article 11) for cross-border cooperation to obtain electronic evidence, and none of these tools has been included in the UN Convention. The Budapest Convention also emphasises timely preservation and sharing of data across borders, with an established network of 24/7 contact points to ensure rapid response in cybercrime investigations. The Second Protocol further strengthens data-sharing provisions, including direct cooperation with service providers and expedited disclosure of data in emergency situations.

The UN Convention provides mechanisms for data sharing but has been criticised for its provisions on confidentiality and transparency. Critics, including industry leaders, argue that the treaty has too many references to keeping requests confidential, which might limit transparency and oversight. This could lead to concerns about how certain countries use this data for surveillance or other purposes.

On the other hand, the UN Convention provides more areas for international cooperation since it includes the provisions from the UNTOC and UNCAC and includes provisions on crime prevention as well as freezing, seizure, confiscation and return of the proceeds (article 31), which are not included in the Budapest Convention.

The UN Convention, at the same time, lacks detailed safeguards, particularly regarding how surveillance and data sharing might impact privacy. One provision in article 22 grants states the authority to assert jurisdiction over crimes committed outside their borders if their nationals are affected, effectively allowing states to reach into each other’s domestic affairs. It also means that states wishing to use the convention to prosecute the conduct of individuals outside their territory can do so.

Further, article 27 allows states to access electronic data (defined very broadly in the treaty) from individuals located in their territory, no matter where that data is stored. The same power can be used to order service providers that offer services in a state’s territory to hand over subscriber information relating to those services, which may include phone numbers, email addresses, account details, and other personally identifiable information.

Conclusion

As both the UN Cybercrime Convention and the Budapest Convention continue to shape global cybercrime policy, the challenge of how these instruments will coexist becomes increasingly relevant. The Budapest Convention, as the first international treaty on cybercrime, has long served as a foundational framework, providing a robust structure for addressing cyber-related offences while emphasising human rights and alignment with other international treaties.

However, states already party to the Budapest Convention may find themselves caught between the narrower, more established approach of that treaty and the broader mandates of the UN Convention. The latter’s focus on ‘serious crimes’ and the ambiguity around the scope of data collection for any offence defined by domestic law could lead to inconsistencies in how cybercrime is addressed globally, especially when legal definitions of cyber offences differ between nations.

The ability of these two instruments to coexist may depend on diplomatic efforts to create a complementary relationship between the two. Ensuring that both conventions are implemented in a way that respects existing international norms and human rights will be key to avoiding legal fragmentation and ensuring that global cybercrime prevention efforts are effective and coordinated.

Revolutionising medicine with AI: From early detection to precision care

It has been more than four years since AI was first introduced into clinical trials involving humans. Even then, it was evident that advances in artificial intelligence, currently among the most popular buzzwords of 2024, would enhance every aspect of society, including medicine.

Thanks to AI-powered tools, diseases that once baffled humanity are now much better understood. Some conditions are also easier to detect, even in their earliest stages, significantly improving diagnosis outcomes. For these reasons, AI in medicine stands out as one of the most valuable technological advances, with the potential to improve individual health and, ultimately, the overall well-being of society.

Although ethical concerns and doubts about the accuracy of AI-assisted diagnostic tools persist, it is clear that the coming years and decades will bring developments and improvements that once seemed purely theoretical.

AI collaborates with radiologists to enhance diagnostic accuracy

AI has been a crucial aid in medical diagnostics for some time now. A Japanese study found that ChatGPT produced more accurate assessments than human experts in the field.

Across 150 diagnostic cases, neuroradiologists recorded an 80% accuracy rate for the AI. These promising results encouraged the research team to explore integrating such AI systems into apps and medical devices. They also highlighted the importance of incorporating AI education into medical curricula to better prepare future healthcare professionals.

Early detection of brain tumours and lung cancer

Early detection of diseases, particularly cancer, is critical to a patient’s chances of survival. Many companies are focusing on improving AI within medical equipment to diagnose brain tumours and lung cancer in their earliest stages.

AI-enhanced lung nodule detection aims to improve cancer outcomes.

The algorithm developed by Imidex, which has received FDA approval, is currently in clinical trials. Its purpose is to improve the screening of potential lung cancer patients.

Collaborating with Spesana, the company is expected to be among the first to market once the research is finalised.

Growing competition shows AI’s progress

An increasing number of companies entering the AI-in-medicine field suggests that these advancements will be more widely accessible than initially expected. While the companies mentioned above are set to dominate the North American market, a French startup, Bioptimus, is targeting Europe.

Their AI model, trained on millions of medical images, is capable of identifying cancerous cells and genetic anomalies within tumours, pushing the boundaries of precision medicine.

Public trust in AI medical diagnosis

New technologies often face public scepticism, and AI in medicine is no exception. A 2023 study found that many patients feel uneasy with doctors relying on AI during treatment.

The Pew Research Center report revealed that 60% of Americans are against AI-assisted diagnostics, while only 39% support it. Furthermore, 57% believe AI could worsen the doctor-patient relationship, compared to 13% who think it might improve it.


As for treatment outcomes, 38% anticipate improvements with AI, 33% expect negative results, and 27% believe no major changes will occur.

AI’s role in tackling dementia

Dementia, a progressive illness affecting cognitive functions, remains a major challenge for healthcare. However, AI has shown promising potential in this area. Through advanced pattern recognition, AI systems can analyse massive datasets, detect changes in brain structure, and identify early warning signs of dementia, long before symptoms manifest.

By processing various test results and brain scans, AI algorithms enable earlier interventions, which can greatly improve patients’ quality of life. In particular, researchers from Edinburgh and Dundee are hopeful that their AI tool, SCAN-DAN, will revolutionise the early detection of this neurodegenerative disease.

The project is part of the larger global NEURii collaboration, which aims to develop digital health tools that can address some of the most pressing challenges in dementia research.

Helping with early breast cancer detection

AI has shown great potential in improving the effectiveness of ultrasound, mammography, and MRI scans for breast cancer detection. Researchers in the USA have developed an AI system capable of refining disease staging by accurately distinguishing between benign and malignant tumours.

Moreover, the AI system can reduce false positives and negatives, a common problem in traditional breast cancer detection methods. The ability to improve diagnostic accuracy and provide a better understanding of disease stages is crucial in treating breast cancer from its earliest signs.


Investment in AI set to skyrocket

With early diagnosis playing a pivotal role in curing diseases, more companies are seeking partnerships and funding to keep pace with the leading investors in AI technology.

Recent projections indicate that AI could add nearly USD $20 trillion to the global economy by 2030. While it is still difficult to estimate healthcare’s share in this growth, some early predictions suggest that AI in medicine could account for more than 10% of that value.
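As a back-of-the-envelope check, the implied value of AI in medicine follows directly from the two figures above. A minimal sketch in Python; both inputs are the cited projections, not measured data:

```python
# Back-of-the-envelope check on the cited projections.
# Both inputs are the article's estimates, not measured data.
global_ai_value = 20e12      # projected AI contribution to the global economy by 2030 (USD)
healthcare_share = 0.10      # lower bound of the suggested AI-in-medicine share

healthcare_value = global_ai_value * healthcare_share
print(f"Implied AI-in-medicine value by 2030: USD {healthcare_value / 1e12:.1f} trillion")
```

At a 10% share, the projection implies roughly USD 2 trillion attributable to AI in medicine by 2030.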

What is clear, however, is that major global companies are not missing the opportunity to invest in businesses developing AI-driven medical equipment.

What can we expect in the future?

AI is making significant progress across various industries, and its impact on medicine could be transformational. If healthcare receives as much or more AI focus than fields like economics and ecology, the potential to revolutionise medicine as a science is immense.

Various AI systems that learn about diseases and treatment processes have the capacity to gather and analyse far more information than the human brain. As regulatory frameworks evolve worldwide, AI-driven diagnostic tools may lead to faster, more accurate disease detection than ever before, potentially marking a major turning point in the history of medical science.

El Salvador: Blueprint for the bitcoin economy

On 7 September 2021, El Salvador became the first country in the world to adopt bitcoin as legal tender, sparking global discussions about the role of cryptocurrencies in national economies. This groundbreaking decision transformed El Salvador into a beacon for financial innovation as other nations began to closely monitor its bold experiment. Initially seen as a monetary gamble, El Salvador’s decision has evolved into a strategy with far-reaching implications, both domestically and internationally. While the International Monetary Fund (IMF) and other financial institutions have raised concerns about potential risks, El Salvador’s commitment to cryptocurrency adoption has set a precedent by reshaping global economic systems.

From experiment to national strategy

When El Salvador made bitcoin legal tender, it was an ambitious experiment aimed at solving several economic challenges. The country, reliant on remittances and with a significant part of its population unbanked, saw cryptocurrency as a way to promote financial inclusion. Today, with 5,748.8 bitcoins held in national reserves, El Salvador’s leadership continues to buy bitcoin, signalling confidence in the long-term potential of the digital asset. In this way, the initial idea of bitcoin adoption has transformed from a simple test into a cornerstone of the nation’s financial strategy. El Salvador is now laying the foundation for broader economic development by positioning itself as a crypto-friendly environment.
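To put the reserve figure in perspective, a simple mark-to-market calculation can be sketched as follows. The bitcoin price used is a hypothetical placeholder, since the actual market rate fluctuates constantly:

```python
# Mark-to-market sketch for El Salvador's reported reserve of 5,748.8 BTC.
# The price per BTC below is a hypothetical input, not a quoted market rate.
btc_held = 5748.8                 # bitcoins in national reserves (figure cited above)
assumed_price_usd = 60_000        # hypothetical USD price per BTC, for illustration only

reserve_value = btc_held * assumed_price_usd
print(f"Reserve value at USD {assumed_price_usd:,}/BTC: USD {reserve_value:,.0f}")
```

The same volatility discussed later in this article means this figure can swing by tens of millions of dollars in a single day.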


Economic impact: benefits and challenges

El Salvador’s embrace of bitcoin has left a significant mark on its economy, though it has not been without challenges. One of the major benefits has been the ability to streamline remittances, allowing the country’s large diaspora of Salvadorians abroad to send money home using bitcoin, cutting out traditional intermediaries and lowering fees. This move has made remittances faster, more affordable, and more accessible.
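The fee argument can be illustrated with a small sketch. All fee percentages here are hypothetical placeholders, not measured remittance costs:

```python
# Illustration of the remittance-fee argument. The fee rates are
# hypothetical placeholders, not measured remittance costs.
def net_received(amount_usd: float, fee_rate: float) -> float:
    """Amount delivered after a proportional transfer fee."""
    return amount_usd * (1.0 - fee_rate)

amount = 200.0            # a typical remittance-sized transfer (USD)
traditional_fee = 0.06    # hypothetical traditional-intermediary fee (6%)
crypto_fee = 0.01         # hypothetical bitcoin-rail fee (1%)

saving = net_received(amount, crypto_fee) - net_received(amount, traditional_fee)
print(f"Extra delivered per USD {amount:.0f} transfer: USD {saving:.2f}")
```

Even small percentage-point differences compound quickly for households that rely on monthly transfers.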

The country has also witnessed a surge in foreign investment, as businesses interested in cryptocurrency see El Salvador as an attractive hub. Crypto enthusiasts and digital nomads have flocked to the country, boosting tourism and putting El Salvador on the global map as a bitcoin-friendly destination.

Moreover, El Salvador’s innovation goes beyond adopting bitcoin as legal tender; it has also ventured into the creation of bitcoin bonds and infrastructure projects like ‘Bitcoin City.’ President Nayib Bukele’s vision for Bitcoin City includes a tax-free, crypto-friendly zone designed to attract foreign investment. The city, with a projected USD $1.6 billion investment, will feature modern infrastructure and create an environment conducive to the growth of blockchain and cryptocurrency businesses. If successful, Bitcoin City could become a global hub for digital finance, further cementing El Salvador’s position at the forefront of this financial revolution.

However, bitcoin volatility remains a persistent issue. Critics argue that heavy reliance on such a fluctuating asset could jeopardise financial stability. Unpredictable price swings in the crypto market pose a risk, potentially leading to instability in the national economy. While El Salvador continues to bet on bitcoin’s long-term success, these challenges highlight the need to carefully navigate the balancing act between innovation and economic resilience.


Educating for a bitcoin future

One of the latest initiatives El Salvador has undertaken is its Bitcoin certification programme. Spearheaded by the National Bitcoin Office (ONBTC), the programme aims to educate 80,000 government employees on the intricacies of bitcoin and blockchain technology. This strategic move underscores the nation’s commitment to integrating bitcoin into its broader governance structure.

By equipping civil servants with essential knowledge, El Salvador ensures that bitcoin adoption is not just a top-down policy but becomes deeply embedded in the daily functioning of the state. Beyond outward-facing initiatives, El Salvador is working to seed crypto expertise into the core of its state organisations, ensuring that government employees genuinely understand the nature of cryptocurrency rather than merely going through the motions of using it. This educational initiative is also expected to create a ripple effect across other sectors, solidifying El Salvador’s place as a leader in the global crypto space.

Global influence and partnerships

El Salvador’s progressive approach to cryptocurrency is beginning to influence other nations. Argentina, for example, has recently started collaborating with El Salvador to learn from its experience. Argentina’s pro-crypto president, Javier Milei, has shown interest in using cryptocurrencies to stabilise the country’s economy. This collaboration is a testament to the growing recognition of El Salvador’s pioneering role in this space. As more countries begin to explore cryptocurrency adoption, El Salvador’s approach provides a practical case study, proving that integrating digital assets into a national economy can have tangible benefits.


Regulatory challenges and criticism

Despite the enthusiasm surrounding bitcoin adoption, El Salvador has faced significant criticism from international organisations. The IMF has been particularly vocal, warning that the adoption of cryptocurrency as legal tender poses risks to financial stability, consumer protection, and market integrity. These warnings highlight the regulatory challenges El Salvador faces, especially when dealing with global institutions that remain sceptical of digital currencies. However, the country has responded by reinforcing its regulatory frameworks and increasing transparency around its bitcoin activities. While the road is not without obstacles, El Salvador’s approach showcases a willingness to navigate these complexities and maintain its position as a leader in the crypto space.

El Salvador’s Chivo wallet project

One of the most significant elements of El Salvador’s bitcoin adoption is the introduction of the Chivo wallet, which plays a pivotal role in promoting financial inclusion. Chivo, the government-backed digital wallet, allows Salvadorians to easily access and use bitcoin, providing a crucial gateway to financial services for those previously excluded from the traditional banking system.

To help citizens become familiar with the cryptocurrency, the government offered USD $30 worth of bitcoin to each individual through the Chivo wallet. However, public reception was mixed, with an August 2021 poll indicating that 70% of respondents opposed the initiative and only 15% expressed confidence in bitcoin. Concerns about volatility also led to protests in San Salvador, as many feared drastic price fluctuations.

The Chivo wallet, available on mobile devices, empowers even the unbanked population to participate in the digital economy by enabling seamless transactions and easy access to remittances sent from abroad. By leveraging this digital wallet project, El Salvador has not only embraced crypto but has also laid the foundation for a more inclusive financial ecosystem. This approach serves as a model for other developing nations, showing how the integration of a government-supported crypto platform can help bypass traditional banking barriers, delivering financial tools to millions and boosting both individual economic prospects and national economies.


The broader global implications

El Salvador’s bold experiment is already making waves across the world. The Central African Republic has followed in its footsteps, adopting bitcoin as legal tender. As other nations watch closely, it is becoming clear that El Salvador’s approach could inspire a global movement towards cryptocurrency-driven economies. For countries struggling with inflation, financial exclusion, or dependence on foreign currencies, bitcoin adoption represents an alternative path. The world sees that cryptocurrency is not just a speculative asset—it can be a powerful tool for economic development and innovation.

A leader in the new digital financial order

El Salvador’s decision to adopt bitcoin as legal tender has positioned the country at the forefront of a financial revolution. What started as a daring experiment has blossomed into a comprehensive national strategy with global implications. Despite the challenges, including market volatility and regulatory pushback, El Salvador’s proactive approach sets a powerful example for other countries. By embracing cryptocurrency at every level of society, from education to infrastructure, El Salvador is showing the world that digital currencies can drive economic progress. As more nations observe its experiment, the small Central American nation may be paving the way for a global financial transformation.

AI and ethics in modern society

Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of science fiction into pressing real-world issues. AI technologies now permeate areas such as medicine, public governance, and the economy, making it critical to ensure their ethical use. Multiple actors, including governments, multinational corporations, international organisations, and individual citizens, share the responsibility to navigate these developments thoughtfully.

What is ethics?

Ethics refers to the moral principles that guide individual behaviour or the conduct of activities, determining what is considered right or wrong. In AI, ethics ensures that technologies are developed and used in ways that respect societal values, human dignity, and fairness. For example, one ethical principle is respect for others, which means ensuring that AI systems respect the rights and privacy of individuals.

What is AI?

Artificial Intelligence (AI) refers to systems that analyse their environment and make decisions autonomously to achieve specific goals. These systems can be software-based, like voice assistants and facial recognition software, or hardware-based, such as robots, drones, and autonomous cars. AI has the potential to reshape society profoundly. Without an ethical framework, AI could perpetuate inequalities, reduce accountability, and pose risks to privacy, security, and human autonomy. Embedding ethics in the design, regulation, and use of AI is essential to ensuring that this technology advances in a way that promotes fairness, responsibility, and respect for human rights.

AI ethics and its importance

AI ethics focuses on minimising risks related to poor design, inappropriate applications, and misuse of AI. Problems such as surveillance without consent and the weaponisation of AI have already emerged. This calls for ethical guidelines that protect individual rights and ensure that AI benefits society as a whole.


Global and regional efforts to regulate AI ethics

There are international initiatives to regulate AI ethically. For example, UNESCO‘s 2021 Recommendation on the Ethics of AI offers guidelines for countries to develop AI responsibly, focusing on human rights, inclusion, and transparency. The European Union’s AI Act is another pioneering legislative effort, which categorises AI systems by their risk level. The higher the risk, the stricter the regulatory requirements.
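The Act’s tiered logic can be sketched as a simple lookup. The tier names follow the Act’s four broad risk categories, while the obligation descriptions are informal paraphrases rather than legal text:

```python
# Simplified sketch of the AI Act's risk-tier logic: the higher the
# tier, the stricter the obligations. Obligation strings are informal
# paraphrases, not legal text.
RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "conformity assessment, documentation, human oversight",
    "limited": "transparency obligations (e.g. disclosure to users)",
    "minimal": "no specific obligations",
}

def obligations(tier: str) -> str:
    """Look up the regulatory burden for a given risk tier."""
    return RISK_TIERS[tier]

print(obligations("high"))
```

In practice, classifying a given system into a tier is itself the hard legal question; this lookup only captures the principle that obligations scale with risk.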

The Collingridge dilemma and AI

The Collingridge dilemma points to the challenge of regulating new technologies like AI. Early regulation is difficult due to limited knowledge of the technology’s long-term effects, but once the technology becomes entrenched, regulation faces opposition from stakeholders. AI is currently in a dual phase: while its long-term implications are uncertain, we already have enough examples of its immediate impact—such as algorithmic bias and privacy violations—to justify regulation in key areas.

Asimov’s Three Laws of Robotics: Ethical inspiration for AI

Isaac Asimov’s Three Laws of Robotics, while fictional, resonate with many of the ethical concerns that modern AI systems face today. These laws—designed to prevent harm to humans, ensure obedience to human commands, and prioritise the self-preservation of robots—provide a foundational, if simplistic, framework for responsible AI behaviour.


Modern ethical challenges in AI

However, real-world AI introduces a range of complex challenges that cannot be adequately managed by simple rules. Issues such as algorithmic bias, privacy violations, accountability in decision-making, and unintended consequences complicate the ethical landscape, necessitating more nuanced and adaptive strategies for effectively governing AI systems.

As AI continues to develop, it raises new ethical dilemmas, including the need for transparency in decision-making, accountability in cases of accidents, and the possibility of AI systems acting in ways that conflict with their initial programming. Additionally, there are deeper questions regarding whether AI systems should have the capacity for moral reasoning and how their autonomy might conflict with human values.

Categorising AI and ethics

Modern AI systems exhibit a spectrum of ethical complexities that reflect their varying capabilities and applications. Basic AI operates by executing tasks based purely on algorithms and pre-programmed instructions, devoid of any moral reasoning or ethical considerations. These systems may efficiently sort data, recognise patterns, or automate simple processes, yet they do not engage in any form of ethical deliberation.

In contrast, more advanced AI systems are designed to incorporate limited ethical decision-making. These systems are increasingly being deployed in critical areas such as healthcare, where they help diagnose diseases, recommend treatments, and manage patient care. Similarly, in autonomous vehicles, AI must navigate complex moral scenarios, such as how to prioritise the safety of passengers versus pedestrians in unavoidable accident situations. While these advanced systems can make decisions that involve some level of ethical consideration, their ability to fully grasp and navigate complex moral landscapes remains constrained.

The variety of ethical dilemmas


Legal impacts

The question of AI accountability is increasingly relevant in our technologically driven society, particularly in scenarios involving autonomous vehicles, where determining liability in the event of an accident is fraught with complications. For instance, if an autonomous car is involved in a collision, should the manufacturer, software developer, or vehicle owner be held responsible? As AI systems become more autonomous, existing legal frameworks may struggle to keep pace with these advancements, leading to legal grey areas that can result in injustices. Additionally, AI technologies are vulnerable to misuse for criminal activities, such as identity theft, fraud, or cyberattacks. This underscores the urgent need for comprehensive legal reforms that not only address accountability issues but also develop robust regulations to mitigate the potential for abuse.

Financial impacts

The integration of AI into financial markets introduces significant risks, including the potential for market manipulation and exacerbation of financial inequalities. For instance, algorithms designed to optimise trading strategies may inadvertently favour wealthy investors, perpetuating a cycle of inequality. Furthermore, biased decision-making algorithms can lead to unfair lending practices or discriminatory hiring processes, limiting opportunities for marginalised groups. As AI continues to shape financial systems, it is crucial to implement safeguards and oversight mechanisms that promote fairness and equitable access to financial resources.

Environmental impacts

The environmental implications of AI cannot be overlooked, particularly given the substantial energy consumption associated with training and deploying large AI models. The computational power required for these processes contributes significantly to carbon emissions, raising concerns about the sustainability of AI technologies. In addition, the rapid expansion of AI applications in various industries may lead to increased electronic waste, as outdated hardware is discarded in favour of more advanced systems. To address these challenges, stakeholders must prioritise the development of energy-efficient algorithms and sustainable practices that minimise the ecological footprint of AI technologies.

Social impacts

AI-driven automation poses a profound threat to traditional job markets, particularly in sectors that rely heavily on routine tasks, such as manufacturing and customer service. As machines become capable of performing these jobs more efficiently, human workers may face displacement, leading to economic instability and social unrest. Moreover, the deployment of biased algorithms can deepen existing social inequalities, especially when applied in sensitive areas like hiring, loan approvals, or criminal justice. The use of AI in surveillance systems also raises significant privacy concerns, as individuals may be monitored without their consent, leading to a chilling effect on free expression and civil liberties.

Psychological impacts

The interaction between humans and AI systems can have far-reaching implications for emotional well-being. For example, AI-driven customer service chatbots may struggle to provide the empathetic responses that human agents can offer, leading to frustration among users. Additionally, emotionally manipulative AI applications in marketing may exploit psychological vulnerabilities, promoting unhealthy consumer behaviours or contributing to feelings of inadequacy. As AI systems become more integrated into everyday life, understanding and mitigating their psychological effects will be essential for promoting healthy human-computer interactions.

Trust issues

Public mistrust of AI technologies is a significant barrier to their widespread adoption. This mistrust is largely rooted in the opacity of AI systems and the potential for algorithmic bias, which can lead to unjust outcomes. To foster trust, it is crucial to establish transparent practices and accountability measures that ensure AI systems operate fairly and ethically. This can include the development of explainable AI, which allows users to understand how decisions are made, as well as the implementation of regulatory frameworks that promote responsible AI development. By addressing these trust issues, stakeholders can work toward creating a more equitable and trustworthy AI landscape.

These complex ethical challenges require global coordination and thoughtful, adaptable regulation to ensure that AI serves humanity’s best interests, respects human dignity, and promotes fairness across all sectors of society. The ethical considerations around AI extend far beyond individual technologies or industries, impacting fundamental human rights, economic equality, environmental sustainability, and societal trust.

As AI continues to advance, the collective responsibility of governments, corporations, and individuals is to build robust, transparent systems that not only push the boundaries of innovation but also safeguard society. Only through an ethical framework can AI fulfil its potential as a transformative force for good rather than deepening existing divides or creating new dangers. The journey towards creating ethically aware AI systems necessitates ongoing research, interdisciplinary collaboration, and a commitment to prioritising human well-being in all technological advancements.

Digital Public Infrastructure: An innovative outcome of India’s G20 leadership

From latent concept to global consensus

Only a couple of years ago, DPI (Digital Public Infrastructure), the much-repeated acronym of the present moment, was merely a latent term. Today, it has gained an ‘internationally agreed vocabulary’ with wide-ranging global recognition. This is not to say that no earlier efforts had been made in this direction, but a tangible global consensus on the formal incorporation of the term remained unattainable.

The complex dynamics of this long-standing impasse over a consensus-based acknowledgement of DPI have been prominently highlighted in a recently published report of ‘India’s G20 Task Force on Digital Public Infrastructure’. The report clearly underlines that, 

While DPI was being designed and built independently by selected institutions around the world for over a decade, there was an absence of a global movement that identified the common design approach that drove success, as well as low political awareness at the highest levels of the impacts of DPI on accelerating development. 

It was only under India’s G20 Presidency, in September 2023, that the first-ever multilateral consensus was reached on recognising DPI as a ‘safe, secure, trusted, accountable, and inclusive’ driver of socioeconomic development across the globe. Notably, the ‘New Delhi Declaration’ endorses a DPI approach intended to foster a robust, resilient, innovative, and interoperable digital ecosystem, steered by a crucial interplay of technology, business, governance, and the community.

The DPI approach offers a persuasive middle way between a purely public and a purely private model, with an emphasis on addressing ‘diversity and choice’, encouraging ‘innovation and competition’, and ensuring ‘openness and sovereignty’. 

Ontologically, this marks a perceptible shift from an exclusively technocratic-functionalist idea towards the concepts of multistakeholderism and pluralistic universalism. These conceptualisations hold substance in India’s broader quest to democratise and diversify the power of innovation, based on delicate trade-offs and cross-sectional intersubjective understanding. It should also be noted that the all-pervasive digital transition increasingly embedded in the burgeoning international DPI approach draws substantially on India’s own successful experience with its domestic DPI framework, namely India Stack.

India Stack is primarily an agglomeration of open Application Programming Interfaces (APIs) and digital public goods, aiming to foster a vibrant social, financial, and technological ecosystem. It offers multiple benefits and ingenious services, such as faster digital payments through UPI, the Aadhaar Enabled Payment System (AePS), direct benefit transfers, digital lending, digital health measures, education and skilling, and secure data sharing. India’s remarkable digital progress and successful implementation of DPI over the last decade indisputably became the centre of attention during the G20 deliberations. 
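The interoperability idea behind such open APIs can be illustrated with a minimal sketch. This is not the real India Stack or UPI API; the function name, field names, and virtual payment addresses below are hypothetical, and the sketch only shows the principle that any licensed app can assemble a payment request against a shared, openly specified format.

```python
import json

# Hypothetical sketch of a DPI-style open-API payment request (NOT the actual
# UPI specification). The point: a common, open request format lets any
# participating app interoperate with any participating bank.

def build_upi_style_request(payer_vpa: str, payee_vpa: str, amount_inr: float) -> str:
    """Assemble an illustrative payment request in a shared open format."""
    if amount_inr <= 0:
        raise ValueError("amount must be positive")
    request = {
        "payer": payer_vpa,    # virtual payment address, e.g. "alice@bank1"
        "payee": payee_vpa,
        "amount": round(amount_inr, 2),
        "currency": "INR",
    }
    return json.dumps(request, sort_keys=True)

print(build_upi_style_request("alice@bank1", "bob@bank2", 250.0))
```

Because the format is openly specified rather than proprietary to one bank or app, competition happens at the service layer while the rail itself stays a public good.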

India’s role in advancing DPI through G20 engagement and strategic initiative

What seems quite exemplary is the procedural dynamism with which actions were undertaken to mobilise the vocabulary and effectiveness of DPI during the various G20 meetings and conferences held in India. Most importantly, the Digital Economy Working Group (DEWG) meetings and negotiations were organised in collaboration with all the G20 members, guest countries, and eminent knowledge partners such as the ITU, OECD, UNDP, UNESCO, and the World Bank. As a result, the Outcome Document of the Digital Economy Ministers’ Meeting was unanimously agreed by all G20 members and presented a comprehensive global digital agenda with appropriate technical nuances and risk-management strategies. 

Along with gaining traction in the DEWG, the DPI agenda also gained prominence in other G20 working groups under India’s Presidency, including the Global Partnership for Financial Inclusion Working Group; the Health Working Group; the Agriculture Working Group; the Trade and Investment Working Group; and the Education Working Group. 

In parallel with these working group meetings, the Indian leadership also conducted bilateral negotiations with its top G20 strategic and trading partners, namely the USA, the EU, France, Japan, and Australia. Interestingly, the official joint statements of all these bilateral meetings prominently featured the term ‘DPI’. One may debate whether the time was simply ripe or whether India’s well-laid-out strategy ultimately paid off; either way, it cannot be denied that a well-thought-out parallel negotiation process played an instrumental role in giving the DPI approach leverage. 

Further, in follow-up to the New Delhi Declaration of September 2023, the Prime Minister of India announced the launch of two landmark India-led initiatives during the G20 Virtual Leaders’ Summit in November 2023. The two initiatives, the Global Digital Public Infrastructure Repository (GDPIR) and the Social Impact Fund (SIF), are aimed primarily at advancing DPI in the Global South, particularly by offering upstream technical and financial assistance and knowledge-based expertise. This forward-looking, holistic approach reasonably fortifies the path towards a transformative global digital discourse. 


Building on momentum: Brazil’s role in advancing DPI

Ever since India passed the baton of the G20 Presidency to Brazil, expectations have been high that the latter will carry forward the momentum and ensure that emerging digital technologies effectively meet the requirements of the Global South. It is encouraging to see Brazil stepping forward vigorously to maintain the drive, with a greater emphasis on deepening the discussion of crucial DPI components such as digital identification, data governance, data-sharing infrastructure, and global data safeguards. Although Brazil has an impressive track record of using digital infrastructure to promote poverty alleviation and inclusive growth at home, a considerable measure of success at the forthcoming G20 summit will be its efficacy in stimulating political and financial commitments for a broader availability of such infrastructure. 

While concerted efforts are being made to boost the interoperability, scalability, and accessibility of DPIs, it is imperative to also ensure their confidentiality and integrity. This is all the more pressing in the wake of increased cybersecurity breaches, unwarranted intrusions into data privacy, and the potential risks attached to emerging technologies like AI. Hence, at this critical juncture, it is essential to foster more refined, coordinated, and scaled-up global efforts, or, to be more precise, effective global digital cooperation.

Pavel Durov: a transgressor or a fighter for free speech and privacy?

Not long ago, Elon Musk was harshly criticised by the British government for spreading extremist content and advocating for freedom of speech on his platform. This freedom of speech has arguably become a luxury few can afford, especially on platforms whose owners are less committed to those principles while trying to comply with the requirements of governments worldwide. The British riots, during which individuals were allegedly arrested for social media posts, further illustrate the complexity of regulating social media. While governments and like-minded people may argue that such actions are necessary to curb violent extremism and the escalation of critical situations, others see them as a dangerous encroachment on, and undermining of, free speech. 

The line between expressing controversial opinions and inciting violence or allowing crime on social media platforms is often blurred, and the consequences of crossing it can be severe. However, let us look at a situation where someone is arrested for allegedly turning a blind eye to organised crime activities on his platform, as in the case of Telegram’s CEO. 

Namely, Pavel Durov, Telegram’s founder and CEO, became another symbol of resistance against government control over digital communications alongside Elon Musk. His arrest in Paris on 25 August 2024 sparked a global debate on the fine line between freedom of speech and the responsibilities that come with running a platform that allows for uncensored, encrypted communication. French authorities allegedly detained Durov based on an arrest warrant related to his involvement in a preliminary investigation and his unwillingness to grant authorities access to his encrypted messaging app, which has over 1 billion users worldwide. The investigation concerns Telegram’s alleged role in enabling a wide range of crimes due to insufficient moderation and lack of cooperation with law enforcement. The charges against him—allegations of enabling criminal activities such as child exploitation, drug trafficking, terrorism, and fraud, as well as refusing to cooperate with authorities —are severe. However, they also raise critical questions about the extent to which a platform owner can or should be held accountable for the actions of its users.

Durov’s journey from Russia to France highlights the complex interplay between tech entrepreneurship and state control. He first made his mark in Russia, founding VKontakte, a platform that quickly became a refuge for political dissenters. His refusal to comply with Kremlin demands to hand over user data and sell the platform eventually forced him out of the country in 2014. He had meanwhile launched Telegram in 2013, a messaging app focused on privacy and encryption, which has since become a tool for those seeking to avoid government surveillance. However, his commitment to privacy has put him at odds with various governments, leading to a life of constant movement across borders to evade legal and political challenges.

In France, Durov’s initially promising relationship with the government soured over time. Invited by President Emmanuel Macron in 2018 to consider moving Telegram to Paris, Durov even accepted French citizenship in 2021. However, the French government’s growing concerns about Telegram’s role in facilitating illegal activities, from terrorism to drug trafficking, led to increased scrutiny. The tension, as we already know, culminated in Durov’s recent detention, which is part of a broader investigation into whether platforms like Telegram enable online criminality.

Durov’s relationship with the United Arab Emirates adds another layer of complexity. After leaving Russia, Durov based Telegram in the UAE, where he was granted citizenship and received significant financial backing. However, the UAE’s restrictive political environment and stringent digital controls have made this partnership a delicate one, with Durov carefully navigating the country’s security concerns while maintaining Telegram’s operations.

The USA, too, has exerted pressure on Durov. Despite repeated attempts by US authorities to enlist his cooperation in controlling Telegram, Durov has steadfastly resisted, reinforcing his reputation as a staunch defender of digital freedom. He recently told Tucker Carlson in an interview that the FBI had approached a Telegram engineer, attempting to secretly hire him to install a backdoor that would allow US intelligence agencies to spy on users. His refusal to collaborate with the FBI has only heightened his standing as a symbol of resistance against governmental overreach in the digital realm.

With such an intriguing biography of his controversial tech entrepreneurship, Durov’s arrest indeed gives us reasons for speculation. At the same time, it seems not just a simple legal dispute but a symbol of the growing diplomatic and legal tensions between governments and tech platforms over control of cyberspaces. His journey from Russia to his current predicament in France highlights a broader issue: the universal challenge of balancing free expression with national security. 

Telegram, based in Dubai and widely used across Russia and the former Soviet Union, has faced scrutiny for its role in disseminating unfiltered content, especially during the Russia-Ukraine conflict. Durov, who left Russia in 2014 after refusing to comply with government demands, has consistently maintained that Telegram is a neutral platform committed to user privacy and free speech. Additionally, his multiple citizenships (Russian since the dissolution of the Soviet Union in 1991, previously Soviet from birth; Saint Kitts and Nevis since 2013; French since 2021; and Emirati since 2021) only escalate tensions, with concerned governments pressing French President Emmanuel Macron for clarifications on the matter. Even Elon Musk confronted Macron by responding directly to his post on X, claiming that ‘It would be helpful to the global public to understand more details about why he was arrested’, and describing the arrest as an attack on free speech.

Despite the unclear circumstances and the vague official evidence justifying the arrest and court process, Durov will undoubtedly face the probe and have to answer the accusations under the laws applicable to the case. It is therefore worth looking at the relevant laws and clarifying which legal measures bear on the case. 

The legal backdrop to Durov’s arrest is complex, involving both US and EU laws that govern digital platforms. Section 230 of the US Communications Decency Act of 1996, often called the ‘twenty-six words that created the internet’, is a key reference point under which, among other laws, such a case would be assessed. In essence, the law protects online platforms from liability for user-generated content as long as they act in good faith to remove unlawful material. This legal shield has allowed platforms like Telegram to flourish, offering robust encryption and a promise of privacy that appeals to millions of users worldwide. However, this immunity is not absolute: Section 230 does not protect against federal criminal liability, which means that if Telegram were found to have knowingly allowed illegal activities to proliferate without taking adequate steps to curb them, Durov could indeed be held liable.

In the EU context, the recently implemented Digital Services Act (DSA) imposes stricter obligations on digital platforms, particularly those with significant user bases. Although Telegram, with its 41 million users in the EU, falls short of the 45-million-user threshold for the ‘very large online platform’ (VLOP) designation that would subject it to the most stringent DSA requirements, it is still obligated to act against illegal content. The DSA emphasises transparency, accountability, and cooperation with law enforcement, a framework that contrasts sharply with Telegram’s ethos of privacy and minimal interference.
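The VLOP designation hinges on a simple numeric test, which can be sketched as follows. The 45-million figure (roughly 10% of the EU population) is the threshold set in the DSA; the function name and the use of a single user count are simplifications for illustration, since actual designation is a formal decision by the European Commission based on reported average monthly active users.

```python
# Sketch of the DSA's VLOP threshold logic: platforms averaging 45 million or
# more monthly active users in the EU can be designated 'very large online
# platforms' and face the strictest obligations. Illustrative only; real
# designation is a formal Commission decision, not an automatic check.
DSA_VLOP_THRESHOLD = 45_000_000  # average monthly active EU users

def is_vlop_candidate(monthly_active_eu_users: int) -> bool:
    return monthly_active_eu_users >= DSA_VLOP_THRESHOLD

print(is_vlop_candidate(41_000_000))  # Telegram's reported EU user base → False
```

This is why Telegram's reported 41 million EU users place it just below the tier of platforms such as X or Facebook, which carry the heaviest DSA duties, while still leaving it bound by the Act's baseline obligations.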


The case also invites comparisons with other tech moguls who have faced similar dilemmas. Elon Musk’s acquisition of Twitter, now rebranded as X, has been marked by his advocacy for free speech. However, even Musk has had to navigate the treacherous waters of content moderation, facing pressure from governments to combat disinformation and extremist content on his platform. The latest example is the dispute with Brazil’s Supreme Court, where Musk’s platform X could easily be ordered to shut down in Brazil over alleged misinformation and extremist content spread on X. The conflict has deepened tensions between Musk and Supreme Court Judge Alexandre de Moraes, whom Musk has accused of engaging in censorship.

Similarly, Mark Zuckerberg’s Meta has been embroiled in controversies over its role in child exploitation and, especially, in spreading harmful content, from political misinformation to hate speech. Zuckerberg’s recent admission, in an official letter, that in 2021 the White House and other Biden administration officials exerted considerable pressure on Meta to suppress certain COVID-19-related content, including humour and satire, adds fuel to the fire concerning the abuse of legal measures to stifle freedom of speech and excessive government-driven content moderation. Nevertheless, both Musk and Zuckerberg have had to strike a balance between maintaining platforms that allow open dialogue and complying with legal requirements to prevent the spread of harmful content.

The story of Chris Pavlovski, CEO of Rumble, further complicates this narrative. His decision to leave the EU following Durov’s arrest underscores the growing unease among tech leaders about the EU’s increasing regulatory pressure. Pavlovski’s departure can be seen as a preemptive move to avoid the legal and financial risks of operating in a jurisdiction that is tightening its grip on digital platforms. It also reflects a broader trend of tech companies seeking more favourable regulatory environments, often at the expense of user rights and freedoms.

All these controversial examples bring us to the heart of the debate: where to draw the line between free speech and harm prevention. Encrypted platforms like Telegram offer unparalleled privacy but pose significant challenges for law enforcement. The potential for these platforms to be used by criminals and extremists cannot be ignored. Yet the solution is not straightforward: overzealous regulation risks stifling free expression and driving users to even more secretive and unregulated corners of the internet.

Pavel Durov’s case is a microcosm of the larger global struggle over digital rights. It forces us to confront uncomfortable questions: Do platforms like Telegram have a responsibility to monitor and control the content shared by their users, even at the cost of privacy? Should governments have the power to compel these platforms to act, or does this represent an unacceptable intrusion into the private sphere? Should social media companies that monetise content on their platforms be held responsible for the content they allow? And ultimately, how do we find the balance in the digital world we live in to optimally combine privacy and security in our society? 

These questions will only become more pressing as Durov’s and similar legal cases unfold. The outcome of his case could set a precedent that shapes the future of digital communication, influencing not just Telegram but all platforms that value user privacy and free speech. Either way, Durov’s case also highlights the inherent conflict between cyberspace and real space. There was once a notion that the online world, the domain of bits, bytes, and endless data streams, existed apart from the physical reality we live in. In the early days of the internet, this virtual space seemed like an expansive, unregulated frontier where the laws of the physical world did not necessarily apply. However, cyberspace was never a separate entity; it was an extension, a layer added to the world we already knew. The concept of punishment in the digital world has therefore always been rooted in the physical world: those who commit crimes online, or are held responsible for them, are not confined to a virtual jail; they are subject to real-world legal systems, courts, and prisons.