Trump vs Harris: The tech industry’s pivotal role in 2024

US Presidential elections

As the 5 November US presidential election approaches, all eyes are on the tight race between former President Donald Trump and current Vice President Kamala Harris. Polls show the candidates are neck and neck, making voter mobilisation critical for both sides. In this high-stakes environment, the backing of major business groups could be a game changer, with influential figures like Elon Musk stepping into the spotlight.

Musk, the owner of X and one of the world's wealthiest individuals, has recently rallied support for Trump's campaign, highlighting the significant role that Big Tech, particularly the so-called 'Magnificent Seven', could play in determining the election's outcome. As both candidates vie for the favour of corporate America, their strategies will likely reflect the growing influence of these business leaders in shaping public policy and voter sentiment.

The Magnificent Seven

The term 'Magnificent Seven' originated with the 1960 Western film The Magnificent Seven, directed by John Sturges. The film follows a group of seven gunslingers, led by characters played by Yul Brynner and Steve McQueen, who are hired to protect a Mexican village from bandits. Its legacy spans sequels, a 2016 remake, and lasting cultural resonance, especially for its themes of bravery and teamwork.

In finance, the 'Magnificent Seven' refers to a group of large American tech companies – Apple, Microsoft, Amazon, Nvidia, Meta Platforms, Tesla, and Alphabet. These companies are celebrated for their significant impact on consumer habits, influence over technological advancements, and dominance in the stock market. Holding immense weight in indices like the S&P 500 and NASDAQ, they are seen as critical drivers of market growth and key indicators of economic trends in areas like AI, e-commerce, and social media.

It is easy to see, then, why the backing of these tech giants could prove decisive in the closely contested race between Trump and Harris.

Trump and tech executives

Top executives from major tech companies are increasingly reaching out to Donald Trump as the presidential election approaches. With polls showing a tight race between Trump and Vice President Kamala Harris, figures like Apple CEO Tim Cook and Amazon CEO Andy Jassy have initiated conversations with the former president. Even Mark Zuckerberg has expressed admiration for Trump following an assassination attempt on him. This shift comes after a tumultuous relationship marked by Facebook’s ban on Trump following the 6 January Capitol riot, a ban that was lifted in 2023.

Trump noted on the Barstool Sports podcast that he appreciates Zuckerberg's current approach, emphasising that Zuckerberg is staying out of the election. Meta has taken steps to reduce political content on its platforms, including changes to Instagram that limit political recommendations unless users opt in. Zuckerberg has also stated that he will not endorse any candidates in the 2024 election and plans to avoid significant political engagement. Despite their past conflicts, including Trump's characterisation of Facebook as an 'enemy of the people,' Zuckerberg praised Trump's resilient response to a recent assassination attempt, calling it 'badass.'

This comment reflects a complicated dynamic between the two, as Trump claimed Zuckerberg had expressed difficulty in voting for a Democrat in the upcoming election. However, Meta denied this, reiterating that Zuckerberg has not indicated any intention to endorse or vote for either candidate.

Elon Musk’s relationship with Donald Trump has seen various phases, reflecting both support and criticism over the past years. Just two years ago, Musk voiced his disapproval of the former president, tweeting in 2022 that it was ‘time for Trump to hang up his hat & sail into the sunset.’ This tweet was in response to Trump publicly calling Musk a liar, accusing him of not being truthful about who he had voted for in past elections. Trump even doubted Musk’s then-pending purchase of Twitter, quipping to a rally crowd, ‘Elon is not going to buy Twitter.’ Of course, Musk did end up buying the platform, now called X, and has since made headlines for his shifting political alliances and increasingly public alignment with issues near Trump’s campaign.

Musk’s stance on US politics was historically more progressive, with nearly exclusive support for Democrats. However, his views on President Biden have notably soured, particularly over unionisation efforts and Biden’s perceived lack of recognition of Tesla’s achievements. Notably, Tesla was not invited to Biden’s 2021 White House electric vehicle summit, despite its status as a major EV manufacturer. Musk’s frustration only grew as his companies have faced federal investigations under the Biden administration, including scrutiny over Tesla’s autopilot feature and his controversial acquisition of Twitter. By 2023, Musk expressed his dissatisfaction with the Biden administration, stopping short of an endorsement for Trump but hinting at his disapproval.

Since taking over Twitter, Musk has shifted noticeably to the right, aligning with Trump on issues like government censorship and criticisms of 'woke' ideology. He has lifted Trump's ban on the platform and frequently shares opinions that echo Trump's base, from distrust of the media to concerns about unchecked immigration. Political analyst Ryan Broderick suggests that Musk's stance has transformed drastically since 2018, noting that his earlier 'neoliberal, happy-go-lucky' messaging has given way to tweets that often appeal to the far right, drawing criticism and sparking debates across the platform.

Trump has responded to this shift with a warmer stance toward Musk. Recently, he praised Musk at a news conference, lauding his patriotism and mutual concern for the country. Musk also seems to have cemented his support for Trump, especially after publicly endorsing him and calling for his recovery following an alleged assassination attempt.

Additionally, Musk has committed $100 million to support Trump, and now, in a move stirring debate, he’s offering $1 million a day to selected voters who sign a petition supporting the First and Second Amendments. This campaign, led by Musk’s America PAC, is focused on registering Trump supporters and has been actively promoting the initiative in Pennsylvania, a key battleground state.

Musk's financial support and giveaway campaign have raised concerns among election law experts. The PAC requires participants to be registered voters to be eligible for the million-dollar cheque, which some experts say may cross legal lines. UCLA Law professor Rick Hasen noted that while it is legal to pay people to sign petitions, tying eligibility to voter registration could potentially violate laws against incentivising voter registration.

Kamala Harris and Silicon Valley

On the other hand, Kamala Harris's presidential campaign has also garnered substantial support from Silicon Valley's elite, signalling a strong connection between her candidacy and tech industry leaders. Harris's relationship with Silicon Valley extends back over a decade, partly attributed to her tenure as California's attorney general and her subsequent role as a US senator. This long-standing connection has led many tech leaders to believe she might adopt a friendlier stance towards the industry than the Biden administration. Notable figures like former Facebook COO Sheryl Sandberg, LinkedIn co-founder Reid Hoffman, philanthropist Melinda French Gates, and IAC chair Barry Diller are among those supporting Harris, and billionaire Laurene Powell Jobs, Steve Jobs' widow, has been a close ally since 2013, hosting a fundraiser for Harris that year.

Beyond billionaires, Harris has also drawn support from a broad base of venture capitalists and tech workers. Employees at Alphabet, Amazon, and Microsoft have collectively contributed over $3 million to her campaign. Alphabet workers alone have donated $2.16 million, nearly 40 times their contribution to Trump. Amazon and Microsoft employees have also shown a strong preference for Harris, with their donations amounting to ten and twelve times that of Trump, respectively. While Meta and Apple have not reached the $1 million mark in contributions, their support for Harris also far exceeds what they have given to Trump.

Over 800 VCs have signed a ‘VCs For Kamala’ pledge, and a separate Tech4Kamala letter has gathered more than 1,200 signatures. Among her backers is Steve Spinner, a major Democratic fundraiser who has worked to consolidate Silicon Valley’s support behind Harris, arguing that the majority of the tech industry remains Democratic despite high-profile endorsements of Trump by figures like Elon Musk. Spinner emphasises that ‘for every one person who’s backing Trump, there’s 20 who are backing Kamala,’ dismissing pro-Trump tech figures as outliers in an overwhelmingly liberal industry.

However, this alignment is not without exceptions. David Marcus, former president of PayPal and CEO of the payment company Lightspark, has publicly shifted his allegiance from Democrats to Republicans, criticising what he sees as the Democratic leadership’s ‘hubris’ and its embrace of an ‘increasingly leftist ideology.’ His move underscores a divide within the tech sector, with some executives pulling away from a party they feel is distancing itself from the industry’s priorities.

Tech firms under scrutiny

A key point of focus is the regulatory scrutiny that Big Tech faces under President Joe Biden's administration, specifically targeting companies like Apple and Google. Biden's Department of Justice (DOJ) has pursued antitrust actions, arguing that Apple manipulates the smartphone market to limit competition and that Google's practices resemble those of the AT&T monopoly that was dismantled in the 1980s. This intense scrutiny has created uncertainty for the tech giants, as they face regulatory challenges both at home and abroad, including significant tax penalties imposed by the EU: $14.4 billion for Apple and $2.6 billion for Google.

In older statements, Trump expressed dissatisfaction with Google’s treatment of him, previously calling for maximum-level prosecution against the company for alleged bias. However, he recently noted a shift in Google’s stance, commenting that they appear ‘more inclined’ to support him.

He also mentioned discussing Apple's European tax rulings with CEO Tim Cook, implying that such regulatory issues would be addressed more favourably under his leadership, and has hinted that regulatory hurdles for Big Tech would ease if he is reelected.

Trump’s tech policy

Donald Trump’s vision for tech policy includes reducing regulatory barriers to foster innovation and growth. Trump has expressed concern over what he sees as ‘illegal censorship’ by Big Tech, particularly social media platforms, which he claims display bias against conservative viewpoints. The Trump administration previously pursued antitrust actions against tech giants like Google and Meta, and he remains critical of companies he believes unfairly limit free speech online.

Trump favours a hands-off approach to AI and cryptocurrencies, arguing that these industries should be allowed time to develop without heavy government oversight. His policies suggest he would scale back initiatives such as the push for electric vehicles and roll back consumer protections implemented under the Biden administration. Trump's tech policy largely reflects a belief that the market will regulate itself and that minimising government intervention will drive US competitiveness on the world stage. He is also promising favourable policies such as corporate tax cuts.

In general, Trump's rhetoric suggests a friendlier approach to tech giants, framing his administration as one that would 'set free' companies burdened by regulation. This would represent a significant departure from Biden's approach, under which oversight could become more extensive, adding another layer of importance to the election's outcome for these powerful tech companies.

Harris’s point of view

On the contrary, Biden appointed Kamala Harris as his AI czar, tasking her with enhancing regulations surrounding AI technology as outlined in his executive order. During her tenure in this role, Harris collaborated with leaders from major tech firms like OpenAI, Microsoft, Alphabet, and Anthropic, emphasising a commitment to prioritising safety over corporate profits. She voiced concerns at the Global Summit on AI Safety last year, asserting that without robust government oversight, tech companies often prioritise profit at the expense of public well-being and democratic stability.

Harris's approach has also involved data privacy and bias protection, advocating for legislation to mitigate potential harms associated with AI and emerging digital platforms.

A major achievement for the Biden-Harris administration is the CHIPS and Science Act of 2022, which invested in American semiconductor production and tech research and development. This legislation supports clean energy projects and green tech, aiming to secure the country's tech independence and strengthen national security by bringing more tech manufacturing stateside. Harris's policies have targeted consumer protection against data misuse and online misinformation, echoing the administration's interest in strengthening net neutrality and advocating for clearer data privacy laws.

In that sense, experts predict that Harris will largely continue Biden’s current regulatory framework on technology and AI, with only minor adjustments.

However, Harris’s policy positions, particularly on issues crucial to the tech industry such as tax reform, immigration, and antitrust enforcement, remain largely unarticulated, prompting Silicon Valley to tread carefully. Although Harris’s long history in California politics has earned her a base of goodwill, her campaign must address these policy uncertainties to secure substantial financial and strategic backing from an industry navigating the political flux. This balancing act is particularly challenging as she vies to retain traditional Democratic support without alienating a tech sector that remains cautious in light of growing regulatory pressures under the Biden administration.

The future of the tech sector

In conclusion, as technology continues to shape the economy, both candidates' policies reflect the broader economic visions they hope to achieve. Harris envisions an inclusive, equitable tech landscape where consumer protection and innovation go hand in hand, while Trump's policies prioritise a market-driven model that incentivises growth with minimal intervention. These differences underscore the fundamental contrast in their governance styles and philosophies regarding the role of government in technology.

Ultimately, the next president’s approach to technology will play a crucial role in determining how Americans interact with the digital world, work in an AI-driven economy, and navigate issues of privacy and digital citizenship. As the candidates refine their platforms, voters will face a choice between competing visions of how to guide the nation through a transformative era in technology and innovation.

Just-in-time reporting from the UN Security Council: Leveraging AI for diplomatic insight

On 21 and 24 October, DiploFoundation provided just-in-time reporting from the UN Security Council sessions on scientific development and on women, peace, and security. Supported by Switzerland, this initiative aims to enhance the work of the UN Security Council and the broader UN system.

At the core of this effort is DiploAI, an advanced platform shaped by years of training on UN materials, which played a crucial role in unlocking the knowledge generated by the Security Council’s deliberations. This knowledge, often trapped in video recordings and transcripts, is now more accessible, providing valuable insights for diplomacy and global peace.

Unlocking the power of AI for peace and security

AI-supported reporting from the UN Security Council (UNSC) demonstrates the potential of combining cutting-edge technology with deep expertise in peace and security. This effort is part of ongoing work by DiploAI, which has been providing detailed reports on Security Council sessions in 2023-2024 and has covered the UN General Assembly (UNGA) for eight consecutive years. DiploAI is actively contributing to expanding the UN’s knowledge ecosystem.

Seamless interplay between experts and AI

The success of this initiative lies in the seamless interplay between DiploAI and security experts well-versed in UNSC procedures. The collaboration began with tailoring the AI system to the unique needs of the Council, using input from experts and diplomats to build a relevant knowledge base. Experts supplied key documents and session materials, which enhanced the AI’s contextual understanding. Feedback loops on keywords, topics, and focus areas ensured the AI’s output remained both accurate and diplomatically relevant.

A pivotal moment in this collaboration was the analysis of the New Agenda for Peace, where Security Council experts helped DiploAI identify over 400 critical topics, laying the foundation for a comprehensive taxonomy on peace and security at the UN. This expertise, combined with DiploAI's technical capabilities, has resulted in an AI system attuned to the subtleties of diplomatic language and priorities. Furthermore, the project introduced a Knowledge Graph—a visual tool for displaying sentiment and relational analysis between statements and topics—which adds new depth to the analysis of Council sessions.
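
To make the Knowledge Graph idea concrete, here is a minimal sketch of how statements, topics, and sentiment-weighted relations could be represented in code. It uses the networkx graph library, and the node names, relations, and scores are invented for illustration; DiploAI's actual data model is not public.

```python
# Minimal sketch of a knowledge graph linking statements to topics with
# sentiment-weighted edges. All names and scores below are illustrative
# assumptions, not DiploAI's real data.
import networkx as nx

G = nx.DiGraph()

# Nodes: individual statements and the topics they touch on.
G.add_node("Statement: delegate X on conflict prevention", kind="statement")
G.add_node("Topic: early-warning systems", kind="topic")
G.add_node("Topic: autonomous weapons", kind="topic")

# Edges carry a relation label and a sentiment score in [-1, 1].
G.add_edge("Statement: delegate X on conflict prevention",
           "Topic: early-warning systems",
           relation="supports", sentiment=0.8)
G.add_edge("Statement: delegate X on conflict prevention",
           "Topic: autonomous weapons",
           relation="expresses_concern", sentiment=-0.6)

# Relational query: which topics does a statement address, and how?
for _, topic, data in G.out_edges("Statement: delegate X on conflict prevention",
                                  data=True):
    print(f"{topic}: {data['relation']} (sentiment {data['sentiment']:+.1f})")
```

A graph representation along these lines is what makes the relational and sentiment analysis described above queryable, rather than locked inside prose reports.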

Building on this foundation, DiploAI developed a custom chatbot capable of moving beyond standard Q&A interactions. By integrating data from all 2024 sessions and associated documents, the chatbot allows users to interact conversationally with the content, providing in-depth answers and real-time insights. This evolution marks a significant leap forward in accessing and understanding diplomatic data—shifting from static reports to interactive exploration of session materials.

AI and diplomatic sensitivities

The development of DiploAI's Q&A module, refined through approximately ten iterations with feedback from UNSC experts, underscores the value of human-AI collaboration. This module addresses essential diplomatic questions, with iterative refinements ensuring that responses meet the Council's standards for accuracy and relevance. The result is an AI system capable of addressing critical inquiries while respecting the sensitivity required in diplomatic settings.

What’s new?

DiploAI's suite of tools—including real-time meeting transcription and analysis—has transformed reporting and transparency at the UNSC. By integrating customised AI systems like retrieval-augmented generation (RAG) and knowledge graphs, DiploAI adds context, depth, and relevance to the extracted information. Trained on a vast corpus of diplomatic knowledge generated at Diplo over the last two decades, the AI system generates context-specific responses, providing comprehensive answers to questions about transcribed sessions.

Such an approach has enabled DiploAI to go beyond the simple transcription of panels’ dialogues, allowing diplomats and the public to access detailed transcripts, insightful reports, and an AI-powered chatbot, where they can obtain answers to questions related to the UNSC deliberations.
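
As a rough illustration of the RAG pattern mentioned above, the sketch below retrieves the transcript chunks most relevant to a question and assembles them into a grounded prompt. It is a toy version under stated assumptions: a real system would use neural embeddings and an LLM rather than the word-count similarity and prompt-printing used here, and the sample sentences are invented, not actual UNSC transcript text.

```python
# Toy RAG sketch: retrieve relevant transcript chunks, then build a grounded
# prompt for a language model. Similarity here is simple bag-of-words cosine,
# standing in for a real embedding model.
from collections import Counter
import math

transcript_chunks = [  # invented examples, not real UNSC transcript text
    "The delegate stressed early-warning systems for conflict prevention.",
    "Several members called for stronger protection of women peacebuilders.",
    "The briefing covered scientific developments relevant to security.",
]

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts -- a stand-in for embeddings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the question."""
    return sorted(chunks, key=lambda c: similarity(question, c), reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the context-grounded prompt an LLM would answer from."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

question = "What was said about early warning?"
print(build_prompt(question, retrieve(question, transcript_chunks)))
```

The essential point is the division of labour: retrieval narrows a large corpus down to the passages that matter, and generation is constrained to answer from those passages, which is what keeps a chatbot's answers anchored to the session record.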

Key numbers from UN Security Council reports

Here are some numbers from 10 UNSC meetings that took place between January 2023 and October 2024:

[Infographic: key numbers from DiploAI's UN Security Council reports]

In conclusion…

DiploAI’s reporting from the Security Council, supported by Switzerland, shows how AI can enhance diplomacy while staying grounded in human expertise and practical needs. This blend of technical capability and domain-specific knowledge demonstrates how AI, when developed collaboratively, can contribute to more inclusive, informed, and impactful diplomacy.  

Comparative analysis: the Budapest Convention vs the UN Convention Against Cybercrime

This summer, the UN Member States reached a milestone by agreeing on a draft for the organisation's first-ever international convention against cybercrime. While this marks a significant step, it has raised many questions among those closely following cybercrime issues. One of the key concerns is how this new UN convention will coexist with current frameworks, particularly the Budapest Convention of the Council of Europe, which has been ratified by 76 countries and is considered by the Council of Europe to be the first international framework to address cybercrime. What distinguishes the UN convention from the Budapest Convention, and how will the two interact moving forward?

In this analysis, we look closely at different chapters of both conventions to highlight the similarities and differences between the two documents.

Status and parties

The 'United Nations Convention Against Cybercrime; strengthening international cooperation for combating certain crimes committed by means of information and communications technology systems and for the sharing of evidence in electronic form of serious crimes', or simply the UN Convention, is not yet formally adopted: while the draft was adopted by the Ad Hoc Committee by consensus, the text will be further considered by the General Assembly. Once formally adopted, the convention will enter into force after ratification by 40 UN Member States.

The Convention on Cybercrime, or Budapest Convention, is a legally binding treaty established by a regional organisation, the Council of Europe. The Convention has been ratified by 76 States, including both members and non-members of the Council of Europe.

The Convention includes two protocols, developed and adopted over time. The first protocol, on xenophobia and racism via computer systems, was opened for signature in 2003. The second protocol, on enhanced cooperation and disclosure of electronic evidence, was finalised in 2022 and has so far been ratified only by Serbia and Japan. To come into force, the second protocol requires five ratifications.

The difference in the parties that negotiated the two treaties should also be noted: all UN Member States in the case of the UN Convention, versus the 46 Member States of the Council of Europe for the Budapest Convention.

Purposes & Scope 

While both the Budapest Convention and the UN Convention share the overarching goal (which is to address cybercrime), their scopes are not exactly the same. 

The Budapest Convention primarily focuses on the criminalisation of specific offences (e.g. illegal access, data/system interference, computer-related fraud, child sexual abuse material), procedural powers to address cybercrime, and fostering international cooperation by offering an advanced framework for cross-border access to electronic evidence (e-evidence).

The UN Convention's aim is broader and takes a more comprehensive approach: it emphasises the need to prevent and combat cybercrime by strengthening international cooperation and providing technical assistance and capacity building, particularly for developing countries.

In terms of scope, the UN Convention offers a broader institutional and global cooperation framework, while the Budapest Convention covers a wider and more specific range of criminal offences and procedural powers related to cybercrime.

Specifically, the Budapest Convention and its Second Protocol apply to e-evidence related to any criminal offence, while the UN Convention limits its scope to offences with a serious crime threshold, defined in the treaty as those punishable by a maximum deprivation of liberty of at least four years or a more serious penalty. 

At the same time, the UN Convention is broader by addressing a wider range of issues, including the protection of state sovereignty, preventive measures, and provisions for technical assistance and information exchange, thus extending beyond the criminalisation and procedural focus of the Budapest Convention.

Definitions 

To a large extent, the definitions in the Budapest Convention have been replicated in the UN Convention. However, there are some significant differences, particularly reflecting the broader scope of the UN Convention.

The UN Convention specifically uses the terms 'ICT' and 'ICT systems' instead of 'computer' or 'computer systems,' broadening its applicability to a wider range of devices and technologies. This language has been a key point of criticism. Notably, in articles like 23(2)(b) and (c), and 35(1)(c), the reference to 'any criminal offence' extends beyond cybercrime, potentially allowing the collection of data for any crime as defined by national laws, raising concerns about overreach and the scope of its application. It also uses 'electronic data' instead of 'computer data' (as the Budapest Convention does) to encompass all forms of electronic data.

Specifically, article 2 defines 'electronic data' as 'any representation of facts, information or concepts in a form suitable for processing in an information and communications technology system, including a program suitable to cause an information and communications technology system to perform a function', which was criticised by civil society for taking too broad an approach to the terminology. The UN Convention also explicitly introduces 'content data' and 'serious crime', terms the Budapest Convention mentions but does not define; this, too, drew criticism from civil society, since the definition of 'serious offences' is left to domestic law and will thus vary from country to country.

Criminalisation 

The UN Convention is broader in scope compared to the Budapest Convention, as it criminalises additional forms of conduct. While some offences, like illegal access, are defined similarly in both conventions, the UN treaty expands the range of criminalised activities, addressing areas beyond the cyber-dependent crimes covered by the Budapest Convention, for instance by criminalising money laundering. The UN Convention also gives a broader scope to similar offences – this can be seen, for instance, in the provisions related to child sexual abuse material (below).

While article 9 of the Budapest Convention criminalises actions related to child sexual abuse material, article 15 of the UN Convention extends beyond content and addresses solicitation, grooming, or making arrangements for the purpose of committing sexual offences against children, thus focusing more on preventing sexual offences from occurring by targeting preparatory actions (solicitation or grooming), not just the possession or distribution of illegal content. However, it is important to note that both instruments refer to content-based crimes, with criticism focusing on the risk that victims may face prosecution simply for possessing certain types of content – particularly when real-time data collection is involved. This raises concerns about how such provisions might be misused to target individuals rather than the perpetrators of the crimes.

Both the Budapest Convention and the UN Convention address the integration of child protection into domestic legislation. However, neither makes a reference to the Optional Protocol to the Convention on the Rights of the Child on the sale of children, child prostitution, and child pornography, which has been ratified by 176 countries and already contains this obligation. While both instruments touch on other treaties, they fail to incorporate or cite them directly in their text. The Budapest Convention is somewhat more comprehensive in this respect, as it explicitly references human rights treaties.

Offences related to child pornography (Art 9), the Budapest Convention

Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the following conduct: producing child sexual abuse material for the purpose of its distribution through a computer system; offering or making available child sexual abuse material through a computer system; distributing or transmitting child sexual abuse material through a computer system; procuring child sexual abuse material through a computer system for oneself or for another person; possessing child sexual abuse material in a computer system or on a computer-data storage medium.

Solicitation or grooming for the purpose of committing a sexual offence against a child (Art 15), the UN Convention

1. Each State Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law the act of intentionally communicating, soliciting, grooming, or making any arrangement through an information and communications technology system for the purpose of committing a sexual offence against a child, as defined in domestic law, including for the commission of any of the offences established in accordance with article 14 of this Convention.

2. A State Party may require an act in furtherance of the conduct described in paragraph 1 of this article.

3. A State Party may consider extending criminalization in accordance with paragraph 1 of this article in relation to a person believed to be a child.

4. States Parties may take steps to exclude the criminalization of conduct as described in paragraph 1 of this article when committed by children.

The Budapest Convention doesn’t contain specific provisions for critical infrastructure protection, while the UN Convention specifically addresses the need to protect critical information infrastructures in article 21. At the same time, the UN Convention omits offences related to copyright infringement, which are included in the Budapest Convention. 

It should also be noted that the Budapest Convention integrates its criminalisation provisions across different sections (compared to the UN Convention) and is more focused on core cybercrime offences such as illegal access, data interference, and system interference. This structure reflects a narrower focus on crimes directly involving computer systems and data, without expanding into broader cyber-enabled crimes. 

Procedural powers 

The UN Convention (articles 23-30) has a broader scope than the Budapest Convention (articles 14-21), as it incorporates additional measures from UNCAC and UNTOC, such as provisions for the confiscation of crime proceeds (article 31) and witness protection (articles 33 and 34), which are not covered in the Budapest Convention.

However, the core procedural powers in the two conventions are largely similar. Both outline comparable conditions and safeguards, though the UN Convention has faced significant criticism from civil society due to its reliance on domestic laws to establish how these safeguards would be applied, which can vary widely across countries. This variation can lead to inadequate protections in states where local laws do not meet high human rights standards. A similar concern has been raised in relation to the Budapest Convention and its protocols, for failing to provide specific procedural protections for privacy and freedom of expression.

Conditions and Safeguards (Art 15), the Budapest Convention

1. Each Party shall ensure that the establishment, implementation and application of the powers and procedures provided for in this Section are subject to conditions and safeguards provided for under its domestic law, which shall provide for the adequate protection of human rights and liberties, including rights arising pursuant to obligations it has undertaken under the 1950 Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms, the 1966 United Nations International Covenant on Civil and Political Rights, and other applicable international human rights instruments, and which shall incorporate the principle of proportionality.

2. Such conditions and safeguards shall, as appropriate in view of the nature of the procedure or power concerned, inter alia, include judicial or other independent supervision, grounds justifying application, and limitation of the scope and the duration of such power or procedure. 

3. To the extent that it is consistent with the public interest, in particular the sound administration of justice, each Party shall consider the impact of the powers and procedures in this section upon the rights, responsibilities, and legitimate interests of third parties.

Conditions and safeguards (Art 24), the UN Convention

1. Each State Party shall ensure that the establishment, implementation and application of the powers and procedures provided for in this chapter are subject to conditions and safeguards provided for under its domestic law, which shall provide for the protection of human rights, in accordance with its obligations under international human rights law, and which shall incorporate the principle of proportionality.

2. In accordance with and pursuant to the domestic law of each State Party, such conditions and safeguards shall, as appropriate in view of the nature of the procedure or power concerned, include, inter alia, judicial or other independent review, the right to an effective remedy, grounds justifying application, and limitation of the scope and the duration of such power or procedure.

3. To the extent that it is consistent with the public interest, in particular the proper administration of justice, each State Party shall consider the impact of the powers and procedures in this chapter upon the rights, responsibilities and legitimate interests of third parties. 

4. The conditions and safeguards established in accordance with this article shall apply at the domestic level to the powers and procedures set forth in this chapter, both for the purpose of domestic criminal investigations and proceedings and for the purpose of rendering international cooperation by the requested State Party. 

5. References to judicial or other independent review in paragraph 2 of this article are references to such review at the domestic level.

International cooperation 

Firstly, the Budapest Convention and its Second Protocol allow international cooperation for the collection of electronic evidence related to any criminal offence. This broad scope means that countries can assist each other in investigations involving crimes beyond cyber-related activities, as long as electronic evidence is involved. The Budapest Convention emphasises cross-border cooperation through established networks and mechanisms like 24/7 contact points.

The UN Convention limits the scope of international cooperation to 'serious crimes' as defined by the treaty, that is, offences punishable by a maximum deprivation of liberty of at least four years or a more serious penalty. However, as previously noted, articles such as 23(2)(b) and (c), and 35(1)(c) broaden the scope by referencing 'any criminal offence.'

Secondly, the Second Protocol to the Budapest Convention includes a broader list of advanced tools for cross-border cooperation to obtain electronic evidence (e.g. emergency mutual assistance in article 10 and video conferencing in article 11), none of which have been included in the UN Convention. The Budapest Convention also emphasises timely preservation and sharing of data across borders, with an established network of 24/7 contact points to ensure rapid response in cybercrime investigations. The Second Protocol further strengthens data-sharing provisions, including direct cooperation with service providers and expedited disclosure of data in emergency situations.

The UN Convention provides mechanisms for data sharing but has been criticised for its provisions on confidentiality and transparency. Critics, including industry leaders, argue that the treaty has too many references to keeping requests confidential, which might limit transparency and oversight. This could lead to concerns about how certain countries use this data for surveillance or other purposes.

On the other hand, the UN Convention provides more areas for international cooperation, since it incorporates provisions from UNTOC and UNCAC, covering crime prevention as well as the freezing, seizure, confiscation and return of the proceeds of crime (article 31), which are not included in the Budapest Convention.

The UN Convention, at the same time, lacks detailed safeguards, particularly regarding how surveillance and data sharing might impact privacy. One of the provisions in article 22 grants states the authority to assert jurisdiction over crimes committed outside their borders if their nationals are affected, which critics argue would effectively allow states to interfere in each other's domestic affairs. This also means that if states want to use the convention to prosecute the conduct of individuals outside their territory, they can do so.

Further, article 27 allows states to access electronic data (which is very broadly defined in the treaty) from individuals located in their country, no matter where that data is stored. The same power can be used to order service providers that offer services in the territory of a state to hand over subscriber information relating to those services, which may include phone numbers, email addresses, account details, and other personally identifiable information.

Conclusion

As both the UN Cybercrime Convention and the Budapest Convention continue to shape global cybercrime policy, the challenge of how these instruments will coexist becomes increasingly relevant. The Budapest Convention, as the first international treaty on cybercrime, has long served as a foundational framework, providing a robust structure for addressing cyber-related offences while emphasising human rights and alignment with other international treaties.

However, states already party to the Budapest Convention may find themselves caught between the narrower, more established approach of that treaty and the broader mandates of the UN Convention. The latter's focus on 'serious crimes' and the ambiguity around the scope of data collection for any offence defined by domestic law could lead to inconsistencies in how cybercrime is addressed globally, especially when legal definitions of cyber offences differ between nations.

The ability of these two instruments to coexist may depend on diplomatic efforts to create a complementary relationship between the two. Ensuring that both conventions are implemented in a way that respects existing international norms and human rights will be key to avoiding legal fragmentation and ensuring that global cybercrime prevention efforts are effective and coordinated.

Revolutionising medicine with AI: From early detection to precision care

It has been more than four years since AI was first introduced into clinical trials involving humans. Even back then, it was evident that the advancement of artificial intelligence, the most popular buzzword of 2024, would enhance every aspect of society, including medicine.

Thanks to AI-powered tools, diseases that once baffled humanity are now much better understood. Some conditions are also easier to detect, even in their earliest stages, significantly improving diagnosis outcomes. For these reasons, AI in medicine stands out as one of the most valuable technological advances, with the potential to improve individual health and, ultimately, the overall well-being of society.

Although ethical concerns and doubts about the accuracy of AI-assisted diagnostic tools persist, it is clear that the coming years and decades will bring developments and improvements that once seemed purely theoretical.

AI collaborates with radiologists to enhance diagnostic accuracy

AI has been a crucial aid in medical diagnostics for some time now. A Japanese study suggested that ChatGPT could deliver more accurate assessments than experts in the field.

Across 150 diagnostic cases, neuroradiologists recorded an 80% accuracy rate for the AI. These promising results encouraged the research team to explore integrating such AI systems into apps and medical devices. They also highlighted the importance of incorporating AI education into medical curricula to better prepare future healthcare professionals.

Early detection of brain tumours and lung cancer

Early detection of diseases, particularly cancer, is critical to a patient’s chances of survival. Many companies are focusing on improving AI within medical equipment to diagnose brain tumours and lung cancer in their earliest stages.

AI-enhanced lung nodule detection aims to improve cancer outcomes.

The algorithm developed by Imidex, which has received FDA approval, is currently in clinical trials. Its purpose is to improve the screening of potential lung cancer patients.

Collaborating with Spesana, the company is expected to be among the first to market once the research is finalised.

Growing competition shows AI’s progress

An increasing number of companies entering the AI-in-medicine field suggests that these advancements will be more widely accessible than initially expected. While the companies mentioned above are set to dominate the North American market, a French startup, Bioptimus, is targeting Europe.

Their AI model, trained on millions of medical images, is capable of identifying cancerous cells and genetic anomalies within tumours, pushing the boundaries of precision medicine.

Public trust in AI medical diagnosis

New technologies often face public scepticism, and AI in medicine is no exception. A 2023 study found that many patients feel uneasy about doctors relying on AI during treatment.

The Pew Research Center report revealed that 60% of Americans are against AI-assisted diagnostics, while only 39% support it. Furthermore, 57% believe AI could worsen the doctor-patient relationship, compared to 13% who think it might improve it.

As for treatment outcomes, 38% anticipate improvements with AI, 33% expect negative results, and 27% believe no major changes will occur.

AI’s role in tackling dementia

Dementia, a progressive illness affecting cognitive functions, remains a major challenge for healthcare. However, AI has shown promising potential in this area. Through advanced pattern recognition, AI systems can analyse massive datasets, detect changes in brain structure, and identify early warning signs of dementia, long before symptoms manifest.

By processing various test results and brain scans, AI algorithms enable earlier interventions, which can greatly improve patients’ quality of life. In particular, researchers from Edinburgh and Dundee are hopeful that their AI tool, SCAN-DAN, will revolutionise the early detection of this neurodegenerative disease.

The project is part of the larger global NEURii collaboration, which aims to develop digital health tools that can address some of the most pressing challenges in dementia research.

Helping with early breast cancer detection

AI has shown great potential in improving the effectiveness of ultrasound, mammography, and MRI scans for breast cancer detection. Researchers in the USA have developed an AI system capable of refining disease staging by accurately distinguishing between benign and malignant tumours.

Moreover, the AI system can reduce false positives and negatives, a common problem in traditional breast cancer detection methods. The ability to improve diagnostic accuracy and provide a better understanding of disease stages is crucial in treating breast cancer from its earliest signs.

Investment in AI set to skyrocket

With early diagnosis playing a pivotal role in curing diseases, more companies are seeking partnerships and funding to keep pace with the leading investors in AI technology.

Recent projections indicate that AI could add nearly USD 20 trillion to the global economy by 2030. While it is still difficult to estimate healthcare's share of this growth, some early predictions suggest that AI in medicine could account for more than 10% of that value, or upwards of USD 2 trillion.

What is clear, however, is that major global companies are not missing the opportunity to invest in businesses developing AI-driven medical equipment.

What can we expect in the future?

AI is making significant progress across various industries, and its impact on medicine could be transformational. If healthcare receives as much AI focus as, or more than, fields like economics and ecology, the potential to revolutionise medicine as a science is immense.

Various AI systems that learn about diseases and treatment processes have the capacity to gather and analyse far more information than the human brain. As regulatory frameworks evolve worldwide, AI-driven diagnostic tools may lead to faster, more accurate disease detection than ever before, potentially marking a major turning point in the history of medical science.

El Salvador: Blueprint for the bitcoin economy

On 7 September 2021, El Salvador became the first country in the world to adopt bitcoin as legal tender, sparking global discussions about the role of cryptocurrencies in national economies. This groundbreaking decision transformed El Salvador into a beacon for financial innovation as other nations began to closely monitor its bold experiment. Initially seen as a monetary gamble, El Salvador’s decision has evolved into a strategy with far-reaching implications, both domestically and internationally. While the International Monetary Fund (IMF) and other financial institutions have raised concerns about potential risks, El Salvador’s commitment to cryptocurrency adoption has set a precedent by reshaping global economic systems.

From experiment to national strategy

When El Salvador made bitcoin legal tender, it was an ambitious experiment aimed at solving several economic challenges. The country, reliant on remittances and with a significant part of its population unbanked, saw cryptocurrency as a way to promote financial inclusion. Today, with 5,748.8 bitcoins held in national reserves, El Salvador’s leadership continues to buy bitcoin, signalling confidence in the long-term potential of the digital asset. In this way, the initial idea of bitcoin adoption has transformed from a simple test into a cornerstone of the nation’s financial strategy. El Salvador is now laying the foundation for broader economic development by positioning itself as a crypto-friendly environment.

Economic impact: benefits and challenges

El Salvador's embrace of bitcoin has left a significant mark on its economy, though it has not been without its challenges. One of the major benefits has been the ability to streamline remittances, allowing the country's large diaspora of Salvadorians abroad to send money home using bitcoin, cutting out traditional intermediaries and lowering fees. This move has made remittances faster, more affordable, and more accessible.

The country has also witnessed a surge in foreign investment, as businesses interested in cryptocurrency see El Salvador as an attractive hub. Crypto enthusiasts and digital nomads have flocked to the country, boosting tourism and putting El Salvador on the global map as a bitcoin-friendly destination.

Moreover, El Salvador's innovation goes beyond adopting bitcoin as legal tender; it has also ventured into the creation of bitcoin bonds and infrastructure projects like 'Bitcoin City.' President Nayib Bukele's vision for Bitcoin City includes a tax-free, crypto-friendly zone designed to attract foreign investment. The city, with a projected USD 1.6 billion investment, will feature modern infrastructure and create an environment conducive to the growth of blockchain and cryptocurrency businesses. If successful, Bitcoin City could become a global hub for digital finance, further cementing El Salvador's position at the forefront of this financial revolution.

However, bitcoin volatility remains a persistent issue. Critics argue that heavy reliance on such a fluctuating asset could jeopardise financial stability. Unpredictable price swings in the crypto market pose a risk, potentially leading to instability in the national economy. While El Salvador continues to bet on bitcoin’s long-term success, these challenges highlight the need to carefully navigate the balancing act between innovation and economic resilience.

Educating for a bitcoin future

One of the latest initiatives El Salvador has undertaken is its Bitcoin certification programme. Spearheaded by the National Bitcoin Office (ONBTC), the programme aims to educate 80,000 government employees on the intricacies of bitcoin and blockchain technology. This strategic move underscores the nation’s commitment to integrating bitcoin into its broader governance structure.

By equipping civil servants with essential knowledge, El Salvador ensures that bitcoin adoption is not just a top-down policy but becomes deeply embedded in the daily functioning of the state. Beyond focusing on external performance, El Salvador is working to seed crypto into the core of its state organisations, ensuring that government employees genuinely understand the nature of cryptocurrency rather than merely reproducing its use. This educational initiative is also expected to create a ripple effect across other sectors, solidifying El Salvador's place as a leader in the global crypto space.

Global influence and partnerships

El Salvador’s progressive approach to cryptocurrency is beginning to influence other nations. Argentina, for example, has recently started collaborating with El Salvador to learn from its experience. Argentina’s pro-crypto president, Javier Milei, has shown interest in using cryptocurrencies to stabilise the country’s economy. This collaboration is a testament to the growing recognition of El Salvador’s pioneering role in this space. As more countries begin to explore cryptocurrency adoption, El Salvador’s approach provides a practical case study, proving that integrating digital assets into a national economy can have tangible benefits.

Regulatory challenges and criticism

Despite the enthusiasm surrounding bitcoin adoption, El Salvador has faced significant criticism from international organisations. The IMF has been particularly vocal, warning that the adoption of cryptocurrency as legal tender poses risks to financial stability, consumer protection, and market integrity. These warnings highlight the regulatory challenges El Salvador faces, especially when dealing with global institutions that remain sceptical of digital currencies. However, the country has responded by reinforcing its regulatory frameworks and increasing transparency around its bitcoin activities. While the road is not without obstacles, El Salvador's approach showcases a willingness to navigate these complexities and maintain its position as a leader in the crypto space.

El Salvador’s Chivo wallet project

One of the most significant elements of El Salvador’s bitcoin adoption is the introduction of the Chivo wallet, which plays a pivotal role in promoting financial inclusion. Chivo, the government-backed digital wallet, allows Salvadorians to easily access and use bitcoin, providing a crucial gateway to financial services for those previously excluded from the traditional banking system.

To help citizens become familiar with the cryptocurrency, the government offered USD 30 worth of bitcoin to each individual through the Chivo wallet. However, public reception was mixed, with an August 2021 poll indicating that 70% of respondents opposed the initiative and only 15% expressed confidence in bitcoin. Concerns about volatility also led to protests in San Salvador, as many feared drastic price fluctuations.

The Chivo wallet, available on mobile devices, empowers even the unbanked population to participate in the digital economy by enabling seamless transactions and easy access to remittances sent from abroad. By leveraging this digital wallet project, El Salvador has not only embraced crypto but has also laid the foundation for a more inclusive financial ecosystem. This approach serves as a model for other developing nations, showing how the integration of a government-supported crypto platform can help bypass traditional banking barriers, delivering financial tools to millions and boosting both individual economic prospects and national economies.

The broader global implications

El Salvador’s bold experiment is already making waves across the world. The Central African Republic has followed in its footsteps, adopting bitcoin as legal tender. As other nations watch closely, it is becoming clear that El Salvador’s approach could inspire a global movement towards cryptocurrency-driven economies. For countries struggling with inflation, financial exclusion, or dependence on foreign currencies, bitcoin adoption represents an alternative path. The world sees that cryptocurrency is not just a speculative asset—it can be a powerful tool for economic development and innovation.

A leader in the new digital financial order

El Salvador's decision to adopt bitcoin as legal tender has positioned the country at the forefront of a financial revolution. What started as a daring experiment has blossomed into a comprehensive national strategy with global implications. Despite the challenges, including market volatility and regulatory pushback, El Salvador's proactive approach sets a powerful and inspiring example for other countries. By embracing cryptocurrency at every level of society, from education to infrastructure, El Salvador is showing the world that digital currencies can drive economic progress. As more nations observe its experiment, the small Central American nation may be paving the way for a global financial transformation.

AI and ethics in modern society

Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of science fiction into pressing real-world issues. AI technologies now permeate areas such as medicine, public governance, and the economy, making it critical to ensure their ethical use. Multiple actors, including governments, multinational corporations, international organisations, and individual citizens, share the responsibility to navigate these developments thoughtfully.

What is ethics?

Ethics refers to the moral principles that guide individual behaviour or the conduct of activities, determining what is considered right or wrong. In AI, ethics ensures that technologies are developed and used in ways that respect societal values, human dignity, and fairness. For example, one ethical principle is respect for others, which means ensuring that AI systems respect the rights and privacy of individuals.

What is AI?

Artificial Intelligence (AI) refers to systems that analyse their environment and make decisions autonomously to achieve specific goals. These systems can be software-based, like voice assistants and facial recognition software, or hardware-based, such as robots, drones, and autonomous cars. AI has the potential to reshape society profoundly. Without an ethical framework, AI could perpetuate inequalities, reduce accountability, and pose risks to privacy, security, and human autonomy. Embedding ethics in the design, regulation, and use of AI is essential to ensuring that this technology advances in a way that promotes fairness, responsibility, and respect for human rights.

AI ethics and its importance

AI ethics focuses on minimising risks related to poor design, inappropriate applications, and misuse of AI. Problems such as surveillance without consent and the weaponisation of AI have already emerged. This calls for ethical guidelines that protect individual rights and ensure that AI benefits society as a whole.

Global and regional efforts to regulate AI ethics

There are international initiatives to regulate AI ethically. For example, UNESCO‘s 2021 Recommendation on the Ethics of AI offers guidelines for countries to develop AI responsibly, focusing on human rights, inclusion, and transparency. The European Union’s AI Act is another pioneering legislative effort, which categorises AI systems by their risk level. The higher the risk, the stricter the regulatory requirements.
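
To make the Act’s tiered logic concrete, here is a minimal Python sketch of its four risk categories (unacceptable, high, limited, minimal) and the kind of obligation attached to each. The mapping is a simplified illustration, not the Act’s actual legal text:

```python
# Simplified illustration of the EU AI Act's risk tiers; the real obligations
# are far more detailed than this mapping suggests.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring by public authorities)",
    "high": "strict requirements: risk management, data governance, human oversight",
    "limited": "transparency duties (e.g. telling users they are talking to an AI)",
    "minimal": "no specific obligations beyond existing law",
}

def obligations_for(tier: str) -> str:
    """Look up the illustrative obligation attached to a risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return RISK_TIERS[tier]

for tier in ("unacceptable", "high", "limited", "minimal"):
    print(f"{tier:>12}: {obligations_for(tier)}")
```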

The Collingridge dilemma and AI

The Collingridge dilemma points to the challenge of regulating new technologies like AI. Early regulation is difficult due to limited knowledge of the technology’s long-term effects, but once the technology becomes entrenched, regulation faces opposition from stakeholders. AI is currently in a dual phase: while its long-term implications are uncertain, we already have enough examples of its immediate impact—such as algorithmic bias and privacy violations—to justify regulation in key areas.

Asimov’s Three Laws of Robotics: Ethical inspiration for AI

Isaac Asimov’s Three Laws of Robotics, while fictional, resonate with many of the ethical concerns that modern AI systems face today. These laws—designed to prevent harm to humans, ensure obedience to human commands, and prioritise the self-preservation of robots—provide a foundational, if simplistic, framework for responsible AI behaviour.
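
As a thought experiment, the Three Laws can be written down as ordered filters over candidate actions. The Python sketch below is a deliberately naive toy (the `Action` fields and the sample actions are invented for illustration), which makes it easier to see why such simple rules break down in practice:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    # Invented attributes for illustration; real-world actions are rarely
    # reducible to three boolean flags.
    name: str
    harms_human: bool
    obeys_order: bool
    preserves_self: bool

def choose(actions: list[Action]) -> Optional[Action]:
    """Pick an action by applying the Three Laws as lexicographic filters."""
    safe = [a for a in actions if not a.harms_human]                   # First Law
    if not safe:
        return None                                                    # refuse to act
    obedient = [a for a in safe if a.obeys_order] or safe              # Second Law
    surviving = [a for a in obedient if a.preserves_self] or obedient  # Third Law
    return surviving[0]

options = [
    Action("shove bystander", harms_human=True, obeys_order=True, preserves_self=True),
    Action("brake hard", harms_human=False, obeys_order=True, preserves_self=False),
]
print(choose(options).name)  # -> 'brake hard': the First Law outranks the rest
```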

Modern ethical challenges in AI

However, real-world AI introduces a range of complex challenges that cannot be adequately managed by simple rules. Issues such as algorithmic bias, privacy violations, accountability in decision-making, and unintended consequences complicate the ethical landscape, necessitating more nuanced and adaptive strategies for effectively governing AI systems.

As AI continues to develop, it raises new ethical dilemmas, including the need for transparency in decision-making, accountability in cases of accidents, and the possibility of AI systems acting in ways that conflict with their initial programming. Additionally, there are deeper questions regarding whether AI systems should have the capacity for moral reasoning and how their autonomy might conflict with human values.

Categorising AI and ethics

Modern AI systems exhibit a spectrum of ethical complexities that reflect their varying capabilities and applications. Basic AI operates by executing tasks based purely on algorithms and pre-programmed instructions, devoid of any moral reasoning or ethical considerations. These systems may efficiently sort data, recognise patterns, or automate simple processes, yet they do not engage in any form of ethical deliberation.

In contrast, more advanced AI systems are designed to incorporate limited ethical decision-making. These systems are increasingly being deployed in critical areas such as healthcare, where they help diagnose diseases, recommend treatments, and manage patient care. Similarly, in autonomous vehicles, AI must navigate complex moral scenarios, such as how to prioritise the safety of passengers versus pedestrians in unavoidable accident situations. While these advanced systems can make decisions that involve some level of ethical consideration, their ability to fully grasp and navigate complex moral landscapes remains constrained.

The variety of ethical dilemmas

Legal impacts

The question of AI accountability is increasingly relevant in our technologically driven society, particularly in scenarios involving autonomous vehicles, where determining liability in the event of an accident is fraught with complications. For instance, if an autonomous car is involved in a collision, should the manufacturer, software developer, or vehicle owner be held responsible? As AI systems become more autonomous, existing legal frameworks may struggle to keep pace with these advancements, leading to legal grey areas that can result in injustices. Additionally, AI technologies are vulnerable to misuse for criminal activities, such as identity theft, fraud, or cyberattacks. This underscores the urgent need for comprehensive legal reforms that not only address accountability issues but also develop robust regulations to mitigate the potential for abuse.

Financial impacts

The integration of AI into financial markets introduces significant risks, including the potential for market manipulation and exacerbation of financial inequalities. For instance, algorithms designed to optimise trading strategies may inadvertently favour wealthy investors, perpetuating a cycle of inequality. Furthermore, biased decision-making algorithms can lead to unfair lending practices or discriminatory hiring processes, limiting opportunities for marginalised groups. As AI continues to shape financial systems, it is crucial to implement safeguards and oversight mechanisms that promote fairness and equitable access to financial resources.

Environmental impacts

The environmental implications of AI cannot be overlooked, particularly given the substantial energy consumption associated with training and deploying large AI models. The computational power required for these processes contributes significantly to carbon emissions, raising concerns about the sustainability of AI technologies. In addition, the rapid expansion of AI applications in various industries may lead to increased electronic waste, as outdated hardware is discarded in favour of more advanced systems. To address these challenges, stakeholders must prioritise the development of energy-efficient algorithms and sustainable practices that minimise the ecological footprint of AI technologies.

Social impacts

AI-driven automation poses a profound threat to traditional job markets, particularly in sectors that rely heavily on routine tasks, such as manufacturing and customer service. As machines become capable of performing these jobs more efficiently, human workers may face displacement, leading to economic instability and social unrest. Moreover, the deployment of biased algorithms can deepen existing social inequalities, especially when applied in sensitive areas like hiring, loan approvals, or criminal justice. The use of AI in surveillance systems also raises significant privacy concerns, as individuals may be monitored without their consent, leading to a chilling effect on free expression and civil liberties.

Psychological impacts

The interaction between humans and AI systems can have far-reaching implications for emotional well-being. For example, AI-driven customer service chatbots may struggle to provide the empathetic responses that human agents can offer, leading to frustration among users. Additionally, emotionally manipulative AI applications in marketing may exploit psychological vulnerabilities, promoting unhealthy consumer behaviours or contributing to feelings of inadequacy. As AI systems become more integrated into everyday life, understanding and mitigating their psychological effects will be essential for promoting healthy human-computer interactions.

Trust issues

Public mistrust of AI technologies is a significant barrier to their widespread adoption. This mistrust is largely rooted in the opacity of AI systems and the potential for algorithmic bias, which can lead to unjust outcomes. To foster trust, it is crucial to establish transparent practices and accountability measures that ensure AI systems operate fairly and ethically. This can include the development of explainable AI, which allows users to understand how decisions are made, as well as the implementation of regulatory frameworks that promote responsible AI development. By addressing these trust issues, stakeholders can work toward creating a more equitable and trustworthy AI landscape.
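
One concrete flavour of explainable AI is worth sketching: for a linear scoring model, each feature’s contribution to the output is simply its weight times its value, so a decision can be itemised for the person it affects. The feature names and weights below are hypothetical; methods such as SHAP generalise the same idea to non-linear models:

```python
# Hypothetical linear credit-scoring model: contribution(feature) = weight * value,
# so the score decomposes into pieces a user can inspect.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score = {score:+.2f}")
for feature, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    direction = "raises" if c > 0 else "lowers"
    print(f"  {feature} {direction} the score by {abs(c):.2f}")
```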

These complex ethical challenges require global coordination and thoughtful, adaptable regulation to ensure that AI serves humanity’s best interests, respects human dignity, and promotes fairness across all sectors of society. The ethical considerations around AI extend far beyond individual technologies or industries, impacting fundamental human rights, economic equality, environmental sustainability, and societal trust.

As AI continues to advance, the collective responsibility of governments, corporations, and individuals is to build robust, transparent systems that not only push the boundaries of innovation but also safeguard society. Only through an ethical framework can AI fulfil its potential as a transformative force for good rather than deepening existing divides or creating new dangers. The journey towards creating ethically aware AI systems necessitates ongoing research, interdisciplinary collaboration, and a commitment to prioritising human well-being in all technological advancements.

Digital Public Infrastructure: An innovative outcome of India’s G20 leadership

From latent concept to global consensus

Not long ago, DPI (Digital Public Infrastructure), now a ubiquitous acronym, was merely a latent term. Today it has gained an ‘internationally agreed vocabulary’ with wide-ranging global recognition. This is not to say that no earlier efforts had been made in this direction, but a tangible global consensus on the formal adoption of the term remained out of reach.

The complex dynamics of this long-standing impasse over a consensus-based acknowledgement of DPI have been prominently highlighted in a recently published report of ‘India’s G20 Task Force on Digital Public Infrastructure’. The report clearly underlines that:

While DPI was being designed and built independently by selected institutions around the world for over a decade, there was an absence of a global movement that identified the common design approach that drove success, as well as low political awareness at the highest levels of the impacts of DPI on accelerating development. 

It was only under India’s G20 Presidency, in September 2023, that the first-ever multilateral consensus was reached on recognising DPI as a ‘safe, secure, trusted, accountable, and inclusive’ driver of socioeconomic development across the globe. Notably, the ‘New Delhi Declaration’ has cultivated a DPI approach intended to foster a robust, resilient, innovative, and interoperable digital ecosystem steered by a crucial interplay of technology, business, governance, and the community.

The DPI approach persuasively offers a middle way between purely public and purely private models, with an emphasis on addressing ‘diversity and choice’, encouraging ‘innovation and competition’, and ensuring ‘openness and sovereignty’.

Ontologically, this marks a perceptible shift from an exclusively technocratic-functionalist idea towards the concepts of multistakeholderism and pluralistic universalism. These conceptualisations hold substance in India’s greater quest to democratise and diversify the power of innovation, based on delicate trade-offs and cross-sectional intersubjective understanding. It should also be noted, however, that the all-pervasive digital transition embedded in the burgeoning international DPI approach draws substantially on India’s own successful domestic DPI framework, India Stack.

India Stack is primarily an agglomeration of open Application Programming Interfaces (APIs) and digital public goods that aims to foster a vibrant social, financial, and technological ecosystem. It offers multiple benefits and ingenious services, such as faster digital payments through UPI, the Aadhaar Enabled Payment System (AEPS), direct benefit transfers, digital lending, digital health measures, education and skilling, and secure data sharing. India’s remarkable digital progress and consistently successful implementation of DPI over the last decade indisputably became the centre of attention during the G20 deliberations.
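
The pattern is easiest to see in code. The sketch below shows the general shape of a DPI-style payment call; the endpoint, field names, and flow are entirely hypothetical (real UPI transactions are routed through licensed banks and payment apps, not a bare HTTP call), but it illustrates how an open, documented API lets any provider build services on shared rails:

```python
import json
import urllib.request

def request_payment(payer_vpa: str, payee_vpa: str, amount_inr: float) -> dict:
    """Submit a payment request to an imaginary DPI gateway (illustrative only)."""
    payload = {
        "payer": payer_vpa,   # virtual payment addresses decouple transfers
        "payee": payee_vpa,   # from underlying bank account numbers
        "amount": amount_inr,
        "currency": "INR",
    }
    req = urllib.request.Request(
        "https://dpi-gateway.example/payments",  # placeholder URL, not a real endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (would fail against the placeholder URL; shown for shape only):
# receipt = request_payment("alice@examplebank", "bob@examplebank", 500.0)
```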

India’s role in advancing DPI through G20 engagement and strategic initiative

What seems quite exemplary is the procedural dynamism with which actions were undertaken to mobilise the vocabulary and effectiveness of DPI during the various G20 meetings and conferences held in India. Most importantly, the Digital Economy Working Group (DEWG) meetings and negotiations were organised in collaboration with all the G20 members, guest countries, and eminent knowledge partners, such as the ITU, OECD, UNDP, UNESCO, and the World Bank. As a result, the Outcome Document of the Digital Economy Ministers’ Meeting was unanimously agreed by all the G20 members and presented a comprehensive global digital agenda with appropriate technical nuances and risk-management strategies.

Along with gaining traction in DEWG, the DPI agenda also got prominence in other G20 working groups under India’s Presidency. These include the Global Partnership for Financial Inclusion Working Group; the Health Working Group; the Agriculture Working Group; the Trade and Investment Working Group; and the Education Working Group. 

In parallel with these working group meetings, the Indian leadership also conducted bilateral negotiations with its top G20 strategic and trading partners, namely the USA, the EU, France, Japan, and Australia. Interestingly, the official joint statements of all these bilateral meetings prominently featured the catchword ‘DPI’. One may debate whether the time was simply ripe or whether India’s well-laid-out strategy ultimately paid off; either way, it cannot be denied that a well-thought-out parallel negotiation process played an instrumental role in giving the DPI approach leverage.

Further, in follow-up to the New Delhi Declaration of September 2023, the Prime Minister of India announced the launch of two landmark India-led initiatives during the G20 Virtual Leaders’ Summit in November 2023. The two initiatives, named the Global Digital Public Infrastructure Repository (GDPIR) and the Social Impact Fund (SIF), are aimed primarily at advancing DPI in the Global South, particularly by offering upstream technical and financial assistance and knowledge-based expertise. This forward-looking, holistic approach reasonably fortifies the path towards a transformative global digital discourse.

Building on momentum: Brazil’s role in advancing DPI

Ever since India passed the G20 Presidency baton to Brazil, expectations have been high for the latter to carry forward the momentum and ensure that emerging digital technologies effectively meet the requirements of the Global South. It is encouraging to see Brazil stepping forward resolutely to maintain the drive, with a greater emphasis on deepening the discussion of crucial DPI components such as digital identification, data governance, data-sharing infrastructure, and global data safeguards. Although Brazil has built an impressive track record of using digital infrastructure to promote poverty alleviation and inclusive growth at home, a considerable measure of its success at the forthcoming G20 summit will be its efficacy in stimulating political and financial commitments for a broader availability of such infrastructure.

Even as concerted efforts are being made to boost the interoperability, scalability, and accessibility of DPIs, it is imperative to ensure their confidentiality and integrity. This is all the more pressing in the wake of increased cybersecurity breaches, unwarranted intrusions on data privacy, and the potential risks attached to emerging technologies like AI. At this critical juncture, it is therefore essential to foster more refined, coordinated, and scaled-up global efforts; to be more precise, effective global digital cooperation.

Pavel Durov, a transgressor or a fighter for free speech and privacy?

It has not been long since Elon Musk was harshly criticised by the British government for spreading extremist content and advocating for freedom of speech on his platform. Freedom of speech has arguably become a luxury few can afford, especially on platforms whose owners are less committed to those principles while trying to comply with the requirements of governments worldwide. The British riots, during which individuals were allegedly arrested for social media posts, further illustrate the complexity of regulating social media. While governments and like-minded observers may argue that such actions are necessary to curb violent extremism and stop critical situations from escalating, others see them as a dangerous encroachment on free speech.

The line between expressing controversial opinions and inciting violence or allowing crime on social media platforms is often blurred, and the consequences of crossing it can be severe. However, let us look at a situation where someone is arrested for allegedly turning a blind eye to organised crime activities on his platform, as in the case of Telegram’s CEO. 

Namely, Pavel Durov, Telegram’s founder and CEO, became another symbol of resistance against government control over digital communications, alongside Elon Musk. His arrest in Paris on 25 August 2024 sparked a global debate on the fine line between freedom of speech and the responsibilities that come with running a platform that allows for uncensored, encrypted communication. French authorities allegedly detained Durov based on an arrest warrant related to his involvement in a preliminary investigation and his unwillingness to grant authorities access to his encrypted messaging app, which has nearly 1 billion users worldwide. The investigation concerns Telegram’s alleged role in enabling a wide range of crimes through insufficient moderation and a lack of cooperation with law enforcement. The charges against him—allegations of enabling criminal activities such as child exploitation, drug trafficking, terrorism, and fraud, as well as refusing to cooperate with authorities—are severe. However, they also raise critical questions about the extent to which a platform owner can or should be held accountable for the actions of its users.

Durov’s journey from Russia to France highlights the complex interplay between tech entrepreneurship and state control. He first made his mark in Russia, founding VKontakte, a platform that quickly became a refuge for political dissenters. His refusal to comply with Kremlin demands to hand over user data and sell the platform eventually forced him out of the country in 2014. Meanwhile, Durov launched Telegram in 2013, a messaging app focused on privacy and encryption, which has since become a tool for those seeking to avoid government surveillance. However, his commitment to privacy has put him at odds with various governments, leading to a life of constant movement across borders to evade legal and political challenges.

In France, Durov’s initially promising relationship with the government soured over time. Invited by President Emmanuel Macron in 2018 to consider moving Telegram to Paris, Durov even accepted French citizenship in 2021. However, the French government’s growing concerns about Telegram’s role in facilitating illegal activities, from terrorism to drug trafficking, led to increased scrutiny. The tension, as we already know, culminated in Durov’s recent detention, which is part of a broader investigation into whether platforms like Telegram enable online criminality.

Durov’s relationship with the United Arab Emirates adds another layer of complexity. After leaving Russia, Durov based Telegram in the UAE, where he was granted citizenship and received significant financial backing. However, the UAE’s restrictive political environment and stringent digital controls have made this partnership a delicate one, with Durov carefully navigating the country’s security concerns while maintaining Telegram’s operations.

The USA, too, has exerted pressure on Durov. Despite repeated attempts by US authorities to enlist his cooperation in controlling Telegram, Durov has steadfastly resisted, reinforcing his reputation as a staunch defender of digital freedom. He recently told Tucker Carlson in an interview that the FBI had approached a Telegram engineer, attempting to secretly hire him to install a backdoor that would allow US intelligence agencies to spy on users. His refusal to collaborate with the FBI has only heightened his standing as a symbol of resistance against governmental overreach in the digital realm.

With such an intriguing biography of his controversial tech entrepreneurship, Durov’s arrest indeed gives us reasons for speculation. At the same time, it seems not just a simple legal dispute but a symbol of the growing diplomatic and legal tensions between governments and tech platforms over control of cyberspaces. His journey from Russia to his current predicament in France highlights a broader issue: the universal challenge of balancing free expression with national security. 

Accordingly, Telegram, based in Dubai and widely used across Russia and the former Soviet Union, has faced scrutiny for its role in disseminating unfiltered content, especially during the Russia-Ukraine conflict. Durov, who left Russia in 2014 after refusing to comply with government demands, has consistently maintained that Telegram is a neutral platform committed to user privacy and free speech. Additionally, his multiple citizenships, including Russian (held since the 1991 dissolution of the Soviet Union; previously Soviet from birth), Saint Kitts and Nevis (since 2013), French (since 2021), and Emirati (since 2021), only escalate tensions, with concerned governments pressing French President Emmanuel Macron for clarifications on the matter. Even Elon Musk confronted Macron by responding directly to his post on X, claiming that ‘It would be helpful to the global public to understand more details about why he was arrested’, and describing the arrest as an attack on free speech.

Despite the unclear circumstances and the vague official evidence justifying the arrest and court process, Durov will undoubtedly face the probe and confront the accusations under the laws applicable to the case. It is therefore worth looking at the relevant laws and clarifying which legal measures bear on the case.

The legal backdrop to Durov’s arrest is complex, involving both US and EU laws that govern digital platforms. In the US context, Section 230 of the Communications Decency Act of 1996, often called the ‘twenty-six words that created the internet’, is the key reference point. The law, in essence, protects online platforms from liability for user-generated content as long as they act in good faith to remove unlawful material. This legal shield has allowed platforms like Telegram to flourish, offering robust encryption and a promise of privacy that appeals to millions of users worldwide. However, this immunity is not absolute. Section 230 does not protect against federal criminal liability, which means that if Telegram were found to have knowingly allowed illegal activities to proliferate without taking adequate steps to curb them, Durov could indeed face liability in the USA.

In the EU context, the recently implemented Digital Services Act (DSA) imposes stricter obligations on digital platforms, particularly those with significant user bases. Although Telegram, with its 41 million users in the EU, falls short of the 45-million-user threshold for the ‘very large online platform’ (VLOP) category that would subject it to the most stringent DSA requirements, it would probably still be obligated to act against illegal content. The DSA emphasises transparency, accountability, and cooperation with law enforcement—a framework that contrasts sharply with Telegram’s ethos of privacy and minimal interference.

The case also invites comparisons with other tech moguls who have faced similar dilemmas. Elon Musk’s acquisition of Twitter, now rebranded as X, has been marked by his advocacy for free speech. However, even Musk has had to navigate the treacherous waters of content moderation, facing pressure from governments to combat disinformation and extremist content on his platform. The latest example is the dispute with Brazil’s Supreme Court, in which Musk’s social media platform X risked being ordered to shut down in Brazil over alleged misinformation and extremist content spread on the platform. The conflict has deepened tensions between Musk and Supreme Court Justice Alexandre de Moraes, whom Musk accused of engaging in censorship.

Similarly, Mark Zuckerberg’s Meta has been embroiled in controversies over its role in child exploitation and, especially, in spreading harmful content, from political misinformation to hate speech. On the other hand, Zuckerberg’s recent confession in an official letter that, in 2021, the White House and other Biden administration officials exerted considerable pressure on Meta to suppress certain COVID-19-related content, including humour and satire, adds fuel to the fire concerning the abuse of legal measures by government officials to stifle freedom of speech and force excessive content moderation. Nevertheless, both Musk and Zuckerberg have had to strike a balance between maintaining a platform that allows for open dialogue and complying with legal requirements to prevent the spread of harmful content.

The story of Chris Pavlovski, CEO of Rumble, further complicates this narrative. His decision to leave the EU following Durov’s arrest underscores the growing unease among tech leaders about the EU’s increasing regulatory pressure. Pavlovski’s departure can be seen as a preemptive move to avoid the legal and financial risks of operating in a jurisdiction that is tightening its grip on digital platforms. It also reflects a broader trend of tech companies seeking more favourable regulatory environments, often at the expense of user rights and freedoms.

All these controversial examples bring us to the heart of this debate: where to draw the line between free speech and harm prevention. Encrypted platforms like Telegram offer unparalleled privacy but pose significant challenges for law enforcement. The potential for these platforms to be used by criminals and extremists cannot be ignored. However, there is no simple solution: overzealous regulation risks stifling free expression and driving users to even more secretive and unregulated corners of the internet.

Pavel Durov’s case is a microcosm of the larger global struggle over digital rights. It forces us to confront uncomfortable questions: Do platforms like Telegram have a responsibility to monitor and control the content shared by their users, even at the cost of privacy? Should governments have the power to compel these platforms to act, or does this represent an unacceptable intrusion into the private sphere? Should social media companies that monetise content on their platforms be held responsible for the content they allow? And ultimately, how do we find the balance in the digital world we live in to optimally combine privacy and security in our society? 

These questions will only become more pressing as we watch Durov’s and similar legal cases unfold. The outcome of his case could set a precedent that shapes the future of digital communication, influencing not just Telegram but all platforms that value user privacy and free speech. Either way, Durov’s case also highlights the inherent conflict between cyberspace and real space. There was once a notion that the online world—the domain of bits, bytes, and endless data streams—existed apart from the physical reality we live in. In the early days of the internet, this virtual space seemed like an expansive, unregulated frontier where the laws of the physical world did not necessarily apply. However, cyberspace was never a separate entity; rather, it was an extension, a layer added to the world we already knew. Therefore, the concept of punishment in the digital world has always been, and still is, rooted in the physical world. Those held responsible for crimes committed online are not confined to a virtual jail; they are subject to real-world legal systems, courts, and prisons.

The history of computer viruses: Journey back to where it all began!

Once confined to the realms of theoretical science and speculative fiction, computer viruses have morphed into one of the main threats of the digital age. This transformation from intriguing concept to pervasive danger has not only reshaped the landscape of cybersecurity but has also posed significant challenges to national security and dangers to everyday users.

In this exploration, we trace the origins of computer viruses, charting their evolution through decades of innovation and malfeasance, to understand how they became a key concern for modern societies. 

Early concepts and theoretical foundations

The notion of a computer virus was not born out of malicious intent but from theoretical discussions about self-replicating programs. In 1949, during his lectures at the University of Illinois, Hungarian-American scientist John von Neumann introduced the idea of self-reproducing automata. His theories, later published in 1966, proposed that computer programs, much like biological entities, could self-replicate. Although not labelled as viruses at the time, these theoretical constructs laid the groundwork for what would later become a major field of study in computer science.
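
The core idea is easy to demonstrate today with a quine, a program whose only output is its own source code. The two-line Python example below is a modern illustration of self-replication, not anything from von Neumann’s own work:

```python
# The two lines below print themselves exactly when run (this comment aside):
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```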

The first practical implementation of von Neumann’s theories was seen in the 1960s at AT&T’s Bell Labs, where the game Darwin was developed by Victor Vyssotski, Robert Morris Sr., and Malcolm Douglas McIlroy on an IBM 7090 mainframe. The game involved programs, termed organisms, that competed by taking over each other’s memory space in a digital arena, essentially simulating a survival of the fittest scenario among software.

The sci-fi prophecy and early experiments

Much like other groundbreaking concepts, the idea of a malicious self-replicating program made its way into popular culture in 1970, thanks to Gregory Benford’s science fiction story ‘The Scarred Man’. This story vividly brought to life a self-replicating program akin to a computer virus, complete with a counteracting ‘vaccine’—a visionary notion that anticipated the advent of real-world antivirus software.

The first program to perform the self-replicating function of a modern virus was Creeper, created in 1971 by Bob Thomas at BBN Technologies. Designed as an experiment, Creeper moved through the ARPANET, displaying the message, ‘I’m the creeper, catch me if you can!’ This foundational work paved the way for the development of malicious software.

In 1975, computer programmer John Walker developed the first Trojan, called ANIMAL. It was a ‘20 questions’ program that tried to guess the user’s favourite animal, using a clever machine learning algorithm to improve its questions. Walker included a subroutine called PERVADE, which copied ANIMAL into any user-accessible directories it could find. 

Although there is some debate as to whether ANIMAL was a Trojan or simply another virus, it is generally considered the first Trojan because it disguised itself as something the user wanted while copying itself into directories without the user’s knowledge or consent. This fits the definition of a Trojan: a type of malware that hides inside another program and performs actions without the user’s permission.

The rise of malicious intent

The 1970s and early 1980s saw the first instances of viruses crafted with harmful intentions. In 1974, the Rabbit (or Wabbit) virus emerged, replicating itself rapidly to the point of crashing systems. The speed of replication gave the virus its name.

Technically, the Rabbit virus operated by exploiting vulnerabilities in the host system’s architecture. It was the earliest example of its namesake class of attack: a denial of service in which a process continually replicates itself to deplete system resources.
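
The arithmetic behind such an attack is stark. The sketch below merely simulates the doubling (no processes are actually spawned, and the process cap is a made-up figure): a population that doubles each generation exhausts even a generous limit in a handful of steps:

```python
# Safe simulation of 'rabbit'-style exponential replication.
PROCESS_LIMIT = 4096          # hypothetical per-user process cap
processes, generations = 1, 0
while processes < PROCESS_LIMIT:
    processes *= 2            # every existing process replicates once
    generations += 1
print(f"{PROCESS_LIMIT} processes reached after only {generations} generations")
# -> 4096 processes reached after only 12 generations
```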

While the Rabbit virus itself may not have caused widespread havoc compared to later viruses, its impact on the field of cybersecurity was profound. It helped catalyse the development of early antivirus measures and informed the strategies used to combat future threats. 

In 1982, high school student Richard Skrenta created Elk Cloner, one of the first viruses to spread via floppy disks among personal computer users. Elk Cloner infected the Apple DOS 3.3 operating system using a technique now known as a boot sector virus. It attached itself to a game; on the 50th start of the game, instead of the game loading, the virus displayed a poem about itself on an otherwise blank screen. If a computer booted from an infected floppy disk, a copy of the virus was placed in the computer’s memory. When an uninfected disk was then inserted, the entire DOS (including Elk Cloner) would be copied to that disk, allowing the virus to spread from disk to disk. To prevent DOS from being continually rewritten each time the disk was accessed, Elk Cloner also wrote a signature byte to the disk’s directory, indicating that the disk had already been infected.

Official recognition and the growth of malware

The term ‘computer virus’ was coined by Fred Cohen in 1983, while he was a graduate student. Cohen’s experiments provided concrete evidence of the potential threat posed by computer viruses: his work demonstrated that these programs could not only replicate but also conceal their presence, making them difficult to detect and eradicate. He presented his findings in a seminal paper titled ‘Computer Viruses – Theory and Experiments’.

Cohen introduced a small, self-replicating program into a UNIX system, referring to it as a ‘virus’. This program was able to spread from one file to another, replicating itself and modifying other programs to include a copy of itself. 

By the mid-1980s, the landscape of computer viruses had expanded significantly. The Brain virus, which appeared in 1986, targeted IBM PC platforms and employed stealth techniques to evade detection. The Brain virus was created by two Pakistani brothers, Basit and Amjad Farooq Alvi, who owned a computer store in Lahore. Interestingly, their initial intention was not to cause harm but to protect their medical software from being pirated. To achieve this, they embedded Brain into the boot sector of floppy disks, ensuring that any unauthorised copies of their software would be infected.

The release of the internet worm, also known as the Morris worm, in 1988 marked another important event in the history of cybersecurity. Created by Robert Tappan Morris, a graduate student at Cornell University, this self-replicating program exposed significant vulnerabilities in the early internet infrastructure, causing widespread disruption and prompting major advancements in computer security. Morris developed the worm as an experiment to gauge the size of the internet. His intention was not to cause harm but to explore the network’s capabilities. However, a critical flaw in the worm’s design led to it spreading uncontrollably, causing significant damage.

The wake-up call: Recognising the need for cybersecurity

The initial success of these early viruses can be attributed to two primary factors: the absence of antivirus software and a general lack of awareness about the importance of cyber hygiene among users.

The late 1980s and early 1990s marked a key period for the internet community. The proliferation of malware threats was a wake-up call, highlighting the urgent need for robust cybersecurity measures. In these years, the antivirus software industry saw rapid growth and diversification. Companies around the world began developing and releasing antivirus programs to address the escalating threat. In 1987, Bernd Robert Fix documented the first successful removal of a computer virus.

That same year, G Data Software AG released the first antivirus software designed for Atari ST computers, signalling the commercial viability and necessity of antivirus solutions. Concurrently, McAfee, Inc. was founded and launched VirusScan, one of the earliest antivirus programs for personal computers. These developments marked the beginning of a concerted effort to protect users from the growing menace of computer viruses.

Notable examples include Avira, which emerged as a significant player in Germany, and ThunderByte Antivirus from the Netherlands. Meanwhile, avast! was developed in Czechoslovakia, offering robust protection against emerging threats, and Dr Solomon’s Anti-Virus Toolkit became a trusted name in the United Kingdom.

These early antivirus programs were instrumental in establishing the commercial antivirus industry. They provided users with essential tools to detect, remove, and prevent computer viruses, significantly enhancing the security of personal and business computing environments. The proliferation of these tools represented a collective global effort to combat the burgeoning threat of malware, laying the groundwork for the sophisticated cybersecurity solutions we rely on today.

The modern era of cybersecurity

Today, the landscape of cyber threats has evolved to include ransomware, spyware, and sophisticated cyberespionage tools, costing the global economy billions annually. Cybersecurity has become a critical component of national security strategies worldwide, with significant investments from governments and corporations to protect their infrastructure and data.

The constant battle between malicious actors and cybersecurity experts is relentless, with millions of new viruses emerging daily and challenging experts to combat them effectively. The importance of robust security measures was starkly demonstrated by the CrowdStrike incident on 19 July 2024, which brought down the digital networks of airports, hospitals, and governments globally, disrupting daily life, businesses, and government operations. Numerous industries, including airlines, banks, hotels, and manufacturing, were severely affected, as were essential services such as emergency response and government websites. The financial damage from this worldwide outage is estimated to be at least USD 10 billion, underscoring the critical need for strong cybersecurity defences in our interconnected world.

Computer viruses have been around since the beginning of the tech era, so it is unrealistic to expect a solution that will eliminate them for good. But that does not mean they cannot be contained, and that is exactly where cybersecurity measures step in. The more tech experts enhance security, the less likely it is that viruses can cause significant damage on a global scale.

X, a lone warrior for freedom of speech?

Let’s start with a quote…

‘2024 will be marked by an interplay between change, which is the essence of technological development, and continuity, which characterises digital governance efforts,’ said Dr Jovan Kurbalija in an interview at the start of 2024, predicting the year ahead.

Judging by developments in the social media realm, 2024 does indeed appear to be a year of change, especially in the legal field, with disputes over, and implementations of, newly minted digital policies long stuck in the ‘ongoing’ phase. Dr Kurbalija’s prediction connects us to some of the main topics Diplo and its Digital Watch Observatory are following, such as content moderation and freedom of speech in the social media world.

This dichotomy could easily make us think of how, in the dimly lit corridors of power, where influence and control intertwine like the strands of a spider’s web, the role of social media has become a double-edged sword. On the one hand, platforms like 𝕏 stand as bastions of free speech, allowing voices to be heard that might otherwise be silenced. On the other hand, they are powerful instruments in the hands of those who control them, with the potential to shape public discourse, influence public opinion, and even ignite conflicts. That is why the scrutiny 𝕏 faces for hosting extremist content raises essential questions about whether it is merely a censorship-free network or a tool wielded by its enigmatic owner, Elon Musk, to further his agenda.

The story begins with the digital revolution, when the internet was hailed as the great equaliser, giving everyone a voice. Social media platforms emerged as the town squares of the 21st century, where ideas could be exchanged freely, unfiltered by traditional gatekeepers like governments or mainstream media. Under Musk’s ownership, 𝕏 has taken this principle to its extreme, often resisting calls for tighter content moderation to protect free speech. But as with all freedoms, this one also comes with a price.

The platform’s hands-off approach to content moderation has led to widespread concerns about its role in amplifying extremist content. The issue here is not just about spreading harmful material; it touches on the core of digital governance. Governments around the world are increasingly alarmed by the potential for social media platforms to become breeding grounds for radicalisation and violence. The recent scrutiny of 𝕏 is just the latest chapter in an ongoing struggle between the need for free expression and the imperative to maintain public safety.

The balance between these two forces is incredibly delicate in countries like Türkiye, for example, where the government has a history of cracking down on dissent. The Turkish government’s decision to block Instagram for nine days in August 2024, after the platform failed to comply with local laws and sensitivities, is a stark reminder of the power dynamics at play. In this context, 𝕏’s refusal to bow to similar pressures can be seen as both a defiant stand for free speech and a dangerous gamble that could have far-reaching consequences.

But the story does not end there. The influence of social media extends far beyond any one country’s borders. In the UK, the recent riots have highlighted the role of platforms like 𝕏 and Meta in both facilitating and exacerbating social unrest. While Meta has taken a more proactive approach to content moderation, removing inflammatory material and attempting to prevent the spread of misinformation, 𝕏’s more relaxed policies have allowed a wider range of content to circulate, accommodating not just legitimate protest organising but also harmful rhetoric that has fuelled violence and division.

The contrast between the two platforms is stark. Meta, with its more stringent content policies, has been criticised for stifling free speech and suppressing dissenting voices. Yet, in the context of the British riots, its approach may have helped prevent the situation from escalating further. On the other hand, 𝕏 has been lauded for its commitment to free expression, but this freedom comes at a price. The platform’s role in the riots has drawn sharp criticism, with some accusing it of enabling the very violence it claims to oppose. Government officials have vowed action against tech platforms, even though Britain’s Online Safety Act will not be fully in force until next year. Meanwhile, the EU’s Digital Services Act, which no longer applies to Britain, is already in effect and will allegedly serve as a backstop in similar disputes.

The British riots also serve as a cautionary tale about the power of social media to shape public discourse. In an age where information spreads at lightning speed, the ability of platforms like 𝕏 and Meta to influence events in real time is unprecedented. This kind of lever of power is not just a threat to governments but also a powerful tool that can be used to achieve political ends. For Musk, acquiring 𝕏 represents a business opportunity and a chance to shape the global discourse in ways that align with his future vision.

Musk did not even hesitate to accuse the European Commission of attempting to pull off what he describes as an ‘illegal secret deal’ with 𝕏. In one of his posts, he claimed the EU, with its stringent new regulations aimed at curbing online extremist content and misinformation, allegedly tried to coax 𝕏 into quietly censoring content to sidestep hefty fines. Other tech giants, according to Musk, nodded in agreement, but not 𝕏. The platform stood its ground, placing its unwavering belief in free speech above all else.

While the European Commission fired back, accusing 𝕏 of violating parts of the EU’s Digital Services Act, Musk’s bold stance has ignited a fiery debate. And here, it is not just about rules and fines anymore—it is a battle over the very soul of digital discourse. How far should governmental oversight go? And at what point does it start to choke the free exchange of ideas? Musk’s narrative paints 𝕏 as a lone warrior, holding the line against mounting pressure, and in doing so, forces us to confront the delicate dance between regulation and the freedom to speak openly in today’s digital world.

Furthermore, the cherry on top, in this case, is Musk’s close contact with and support for the potential new president of the USA, Donald Trump, generating additional doubts about the concentration of power in the hands of social media owners; that is, tech giants and their allies. Namely, in an interview with Donald Trump, Elon Musk openly endorsed the candidate for the US presidency, discussing, among other things, topics such as regulatory policies and the judicial system, thus fuelling speculation about his platform 𝕏 serving as a powerful oligarchic lever of power.

At this point, it is already crystal clear that governments are grappling with how to regulate these platforms and the difficult choices they are faced with. On the one hand, there is a clear need to implement optimal measures in order to achieve greater oversight in preventing the spread of extremist content and protecting public safety. On the other hand, too much regulation risks stifling the very freedoms that social media platforms were created to protect. This delicate dichotomy is at the heart of the ongoing debate about the role of tech giants in a modern, digital society.

The story of 𝕏 and its role in hosting extremist content is about more than the platform itself. It is about the power of technology to shape our world, for better or worse. As the digital landscape continues to evolve, the questions raised by 𝕏’s approach to content moderation will only become more urgent. And in the corridors of power, where decisions that shape our future are made, the answers to those questions will determine the fate of the internet itself.