Revolutionising medicine with AI: From early detection to precision care

It has been more than four years since AI was first introduced into clinical trials involving humans. Even back then, it was evident that the advancement of artificial intelligence, currently the most popular buzzword of 2024, would enhance many aspects of society, including medicine.

Thanks to AI-powered tools, diseases that once baffled humanity are now much better understood. Some conditions are also easier to detect, even in their earliest stages, significantly improving diagnosis outcomes. For these reasons, AI in medicine stands out as one of the most valuable technological advances, with the potential to improve individual health and, ultimately, the overall well-being of society.

Although ethical concerns and doubts about the accuracy of AI-assisted diagnostic tools persist, it is clear that the coming years and decades will bring developments and improvements that once seemed purely theoretical.

AI collaborates with radiologists to enhance diagnostic accuracy

AI has been a crucial aid in medical diagnostics for some time now. In one Japanese study, ChatGPT produced diagnostic assessments that were more accurate than those of specialists in the field.

Across 150 diagnostic cases, neuroradiologists recorded an accuracy rate of 80% for the AI. These promising results encouraged the research team to explore integrating such AI systems into apps and medical devices. They also highlighted the importance of incorporating AI education into medical curricula to better prepare future healthcare professionals.

Early detection of brain tumours and lung cancer

Early detection of diseases, particularly cancer, is critical to a patient’s chances of survival. Many companies are focusing on improving AI within medical equipment to diagnose brain tumours and lung cancer in their earliest stages.

AI-enhanced lung nodule detection aims to improve cancer outcomes.

The algorithm developed by Imidex, which has received FDA approval, is currently in clinical trials. Its purpose is to improve the screening of potential lung cancer patients.

Collaborating with Spesana, the company is expected to be among the first to market once the research is finalised.

Growing competition shows AI’s progress

An increasing number of companies entering the AI-in-medicine field suggests that these advancements will be more widely accessible than initially expected. While the companies mentioned above are set to dominate the North American market, a French startup, Bioptimus, is targeting Europe.

Their AI model, trained on millions of medical images, is capable of identifying cancerous cells and genetic anomalies within tumours, pushing the boundaries of precision medicine.

Public trust in AI medical diagnosis

New technologies often face public scepticism, and AI in medicine is no exception. A 2023 study found that many patients feel uneasy about doctors relying on AI during treatment.

The Pew Research Center report revealed that 60% of Americans would feel uncomfortable with AI-assisted diagnostics, while only 39% would feel comfortable with them. Furthermore, 57% believe AI could worsen the doctor-patient relationship, compared to 13% who think it might improve it.


As for treatment outcomes, 38% anticipate improvements with AI, 33% expect negative results, and 27% believe no major changes will occur.

AI’s role in tackling dementia

Dementia, a progressive illness affecting cognitive functions, remains a major challenge for healthcare. However, AI has shown promising potential in this area. Through advanced pattern recognition, AI systems can analyse massive datasets, detect changes in brain structure, and identify early warning signs of dementia, long before symptoms manifest.

By processing various test results and brain scans, AI algorithms enable earlier interventions, which can greatly improve patients’ quality of life. In particular, researchers from Edinburgh and Dundee are hopeful that their AI tool, SCAN-DAN, will revolutionise the early detection of this neurodegenerative disease.

The project is part of the larger global NEURii collaboration, which aims to develop digital health tools that can address some of the most pressing challenges in dementia research.

Helping with early breast cancer detection

AI has shown great potential in improving the effectiveness of ultrasound, mammography, and MRI scans for breast cancer detection. Researchers in the USA have developed an AI system capable of refining disease staging by accurately distinguishing between benign and malignant tumours.

Moreover, the AI system can reduce false positives and negatives, a common problem in traditional breast cancer detection methods. The ability to improve diagnostic accuracy and provide a better understanding of disease stages is crucial in treating breast cancer from its earliest signs.


Investment in AI set to skyrocket

With early diagnosis playing a pivotal role in curing diseases, more companies are seeking partnerships and funding to keep pace with the leading investors in AI technology.

Recent projections indicate that AI could add nearly USD $20 trillion to the global economy by 2030. While it is still difficult to estimate healthcare’s share in this growth, some early predictions suggest that AI in medicine could account for more than 10% of that value.

What is clear, however, is that major global companies are not missing the opportunity to invest in businesses developing AI-driven medical equipment.

What can we expect in the future?

AI is making significant progress across various industries, and its impact on medicine could be transformational. If healthcare receives as much AI attention as, or more than, fields such as economics and ecology, the potential to revolutionise medicine as a science is immense.

Various AI systems that learn about diseases and treatment processes have the capacity to gather and analyse far more information than the human brain. As regulatory frameworks evolve worldwide, AI-driven diagnostic tools may lead to faster, more accurate disease detection than ever before, potentially marking a major turning point in the history of medical science.

El Salvador: Blueprint for the bitcoin economy

On 7 September 2021, El Salvador became the first country in the world to adopt bitcoin as legal tender, sparking global discussions about the role of cryptocurrencies in national economies. This groundbreaking decision transformed El Salvador into a beacon for financial innovation as other nations began to closely monitor its bold experiment. Initially seen as a monetary gamble, El Salvador’s decision has evolved into a strategy with far-reaching implications, both domestically and internationally. While the International Monetary Fund (IMF) and other financial institutions have raised concerns about potential risks, El Salvador’s commitment to cryptocurrency adoption has set a precedent that could reshape global economic systems.

From experiment to national strategy

When El Salvador made bitcoin legal tender, it was an ambitious experiment aimed at solving several economic challenges. The country, reliant on remittances and with a significant part of its population unbanked, saw cryptocurrency as a way to promote financial inclusion. Today, with 5,748.8 bitcoins held in national reserves, El Salvador’s leadership continues to buy bitcoin, signalling confidence in the long-term potential of the digital asset. In this way, the initial idea of bitcoin adoption has transformed from a simple test into a cornerstone of the nation’s financial strategy. El Salvador is now laying the foundation for broader economic development by positioning itself as a crypto-friendly environment.


Economic impact: benefits and challenges

El Salvador’s embrace of bitcoin has left a significant mark on its economy, though it has not been without its challenges. One of the major benefits has been the ability to streamline remittances, allowing the country’s large diaspora to send money home using bitcoin, cutting out the traditional intermediaries and lowering fees. This move has made remittances faster, more affordable, and more accessible.
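The fee savings can be made concrete with rough arithmetic. The 6% figure below is broadly in line with World Bank estimates of average traditional remittance costs, while the 1% crypto figure is purely an illustrative assumption; actual costs vary widely by corridor and method.

```python
# Rough comparison of remittance costs; both fee rates are illustrative assumptions.
transfer = 200.00            # a typical monthly remittance amount in USD
traditional_fee_rate = 0.06  # ~6%: roughly the World Bank's global average cost
crypto_fee_rate = 0.01       # ~1%: assumed all-in cost of a crypto-based transfer

traditional_cost = transfer * traditional_fee_rate
crypto_cost = transfer * crypto_fee_rate
annual_saving = (traditional_cost - crypto_cost) * 12  # twelve monthly transfers

print(f"Saved per year on monthly transfers: ${annual_saving:.2f}")
```

Under these assumed rates, a household sending USD $200 a month would keep roughly USD $120 a year that would otherwise go to intermediaries.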

The country has also witnessed a surge in foreign investment, as businesses interested in cryptocurrency see El Salvador as an attractive hub. Crypto enthusiasts and digital nomads have flocked to the country, boosting tourism and putting El Salvador on the global map as a bitcoin-friendly destination.

Moreover, El Salvador’s innovation goes beyond adopting bitcoin as legal tender; it has also ventured into the creation of bitcoin bonds and infrastructure projects like ‘Bitcoin City.’ President Nayib Bukele’s vision for Bitcoin City includes a tax-free, crypto-friendly zone designed to attract foreign investment. The city, with a projected USD $1.6 billion investment, will feature modern infrastructure and create an environment conducive to the growth of blockchain and cryptocurrency businesses. If successful, Bitcoin City could become a global hub for digital finance, further cementing El Salvador’s position at the forefront of this financial revolution.

However, bitcoin volatility remains a persistent issue. Critics argue that heavy reliance on such a fluctuating asset could jeopardise financial stability. Unpredictable price swings in the crypto market pose a risk, potentially leading to instability in the national economy. While El Salvador continues to bet on bitcoin’s long-term success, these challenges highlight the need to carefully navigate the balancing act between innovation and economic resilience.


Educating for a bitcoin future

One of the latest initiatives El Salvador has undertaken is its Bitcoin certification programme. Spearheaded by the National Bitcoin Office (ONBTC), the programme aims to educate 80,000 government employees on the intricacies of bitcoin and blockchain technology. This strategic move underscores the nation’s commitment to integrating bitcoin into its broader governance structure.

By equipping civil servants with essential knowledge, El Salvador ensures that bitcoin adoption is not just a top-down policy but becomes deeply embedded in the daily functioning of the state. Rather than focusing only on outward results, El Salvador is working to embed crypto expertise at the core of its state institutions, ensuring that government employees genuinely understand cryptocurrency rather than merely going through the motions of using it. This educational initiative is also expected to create a ripple effect across other sectors, solidifying El Salvador’s place as a leader in the global crypto space.

Global influence and partnerships

El Salvador’s progressive approach to cryptocurrency is beginning to influence other nations. Argentina, for example, has recently started collaborating with El Salvador to learn from its experience. Argentina’s pro-crypto president, Javier Milei, has shown interest in using cryptocurrencies to stabilise the country’s economy. This collaboration is a testament to the growing recognition of El Salvador’s pioneering role in this space. As more countries begin to explore cryptocurrency adoption, El Salvador’s approach provides a practical case study, proving that integrating digital assets into a national economy can have tangible benefits.


Regulatory challenges and criticism

Despite the enthusiasm surrounding Bitcoin adoption, El Salvador has faced significant criticism from international organisations. The IMF has been particularly vocal, warning that the adoption of cryptocurrency as legal tender poses risks to financial stability, consumer protection, and market integrity. These warnings highlight the regulatory challenges El Salvador faces, especially when dealing with global institutions that remain sceptical of digital currencies. However, the country has responded by reinforcing its regulatory frameworks and increasing transparency around its bitcoin activities. While the road is not without obstacles, El Salvador’s approach showcases a willingness to navigate these complexities and maintain its position as a leader in the crypto space.

El Salvador’s Chivo wallet project

One of the most significant elements of El Salvador’s bitcoin adoption is the introduction of the Chivo wallet, which plays a pivotal role in promoting financial inclusion. Chivo, the government-backed digital wallet, allows Salvadorians to easily access and use bitcoin, providing a crucial gateway to financial services for those previously excluded from the traditional banking system.

To help citizens become familiar with the cryptocurrency, the government offered USD $30 worth of bitcoin to each individual through the Chivo wallet, the country’s digital currency platform. However, public reception was mixed, with an August 2021 poll indicating that 70% of respondents opposed the initiative, and only 15% expressed confidence in bitcoin. Concerns about volatility also led to protests in San Salvador, as many feared the potential for drastic price fluctuations.

The Chivo wallet, available on mobile devices, empowers even the unbanked population to participate in the digital economy by enabling seamless transactions and easy access to remittances sent from abroad. By leveraging this digital wallet project, El Salvador has not only embraced crypto but has also laid the foundation for a more inclusive financial ecosystem. This approach serves as a model for other developing nations, showing how the integration of a government-supported crypto platform can help bypass traditional banking barriers, delivering financial tools to millions and boosting both individual economic prospects and national economies.


The broader global implications

El Salvador’s bold experiment is already making waves across the world. The Central African Republic followed in its footsteps, adopting bitcoin as legal tender in 2022, although it repealed the measure the following year. As other nations watch closely, it is becoming clear that El Salvador’s approach could inspire a global movement towards cryptocurrency-driven economies. For countries struggling with inflation, financial exclusion, or dependence on foreign currencies, bitcoin adoption represents an alternative path. The world sees that cryptocurrency is not just a speculative asset: it can be a powerful tool for economic development and innovation.

A leader in the new digital financial order

El Salvador’s decision to adopt bitcoin as legal tender has positioned the country at the forefront of a financial revolution. What started as a daring experiment has blossomed into a comprehensive national strategy with global implications. Despite the challenges, including market volatility and regulatory pushback, El Salvador’s proactive approach sets a powerful and inspiring example for other countries. By embracing cryptocurrency at every level of society, from education to infrastructure, El Salvador is showing the world that digital currencies can drive economic progress. As more nations observe its progress, the small Central American nation may just be paving the way for a historic global financial transformation.

AI and ethics in modern society

Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of science fiction into pressing real-world issues. AI technologies now permeate areas such as medicine, public governance, and the economy, making it critical to ensure their ethical use. Multiple actors, including governments, multinational corporations, international organisations, and individual citizens, share the responsibility to navigate these developments thoughtfully.

What is ethics?

Ethics refers to the moral principles that guide individual behaviour or the conduct of activities, determining what is considered right or wrong. In AI, ethics ensures that technologies are developed and used in ways that respect societal values, human dignity, and fairness. For example, one ethical principle is respect for others, which means ensuring that AI systems respect the rights and privacy of individuals.

What is AI?

Artificial Intelligence (AI) refers to systems that analyse their environment and make decisions autonomously to achieve specific goals. These systems can be software-based, like voice assistants and facial recognition software, or hardware-based, such as robots, drones, and autonomous cars. AI has the potential to reshape society profoundly. Without an ethical framework, AI could perpetuate inequalities, reduce accountability, and pose risks to privacy, security, and human autonomy. Embedding ethics in the design, regulation, and use of AI is essential to ensuring that this technology advances in a way that promotes fairness, responsibility, and respect for human rights.

AI ethics and its importance

AI ethics focuses on minimising risks related to poor design, inappropriate applications, and misuse of AI. Problems such as surveillance without consent and the weaponisation of AI have already emerged. This calls for ethical guidelines that protect individual rights and ensure that AI benefits society as a whole.


Global and regional efforts to regulate AI ethics

There are international initiatives to regulate AI ethically. For example, UNESCO’s 2021 Recommendation on the Ethics of AI offers guidelines for countries to develop AI responsibly, focusing on human rights, inclusion, and transparency. The European Union’s AI Act is another pioneering legislative effort, which categorises AI systems by their risk level. The higher the risk, the stricter the regulatory requirements.
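The AI Act's tiered logic can be sketched as a simple lookup. The four tier names follow the Act's published risk categories, but the obligations listed here are heavily simplified illustrations, not the legal text.

```python
# Illustrative sketch of the EU AI Act's risk tiers (simplified, not legal text).
RISK_TIERS = {
    "unacceptable": "banned outright (e.g. social scoring by public authorities)",
    "high": "strict obligations: risk management, human oversight, conformity assessment",
    "limited": "transparency duties (e.g. disclosing that a user is talking to a chatbot)",
    "minimal": "no specific obligations (e.g. spam filters, AI in video games)",
}

def obligations_for(tier: str) -> str:
    """Return the (simplified) regulatory consequence for a given risk tier."""
    return RISK_TIERS[tier]

print(obligations_for("high"))
```

The design point is that obligations scale with risk: the same underlying technique (say, image classification) lands in different tiers depending on how and where it is deployed.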

The Collingridge dilemma and AI

The Collingridge dilemma points to the challenge of regulating new technologies like AI. Early regulation is difficult due to limited knowledge of the technology’s long-term effects, but once the technology becomes entrenched, regulation faces opposition from stakeholders. AI is currently in a dual phase: while its long-term implications are uncertain, we already have enough examples of its immediate impact—such as algorithmic bias and privacy violations—to justify regulation in key areas.

Asimov’s Three Laws of Robotics: Ethical inspiration for AI

Isaac Asimov’s Three Laws of Robotics, while fictional, resonate with many of the ethical concerns that modern AI systems face today. These laws, designed to prevent harm to humans, ensure obedience to human commands, and permit a robot’s self-preservation only where it conflicts with neither of the first two, provide a foundational, if simplistic, framework for responsible AI behaviour.
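The priority ordering of the three laws can be made concrete as a toy rule checker, where each higher law vetoes anything the lower laws would allow. The action model (the dictionary keys) is invented purely for this sketch.

```python
# Toy encoding of Asimov's Three Laws as priority-ordered vetoes.
# The keys of the 'action' dictionary are invented for illustration only.
def permitted(action: dict) -> bool:
    # First Law: a robot may not injure a human or, through inaction,
    # allow a human being to come to harm.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False
    # Second Law: a robot must obey human orders, except where such orders
    # would conflict with the First Law (already vetoed above).
    if action.get("disobeys_order"):
        return False
    # Third Law: a robot may protect its own existence, as long as doing so
    # does not conflict with the First or Second Law. Self-preservation
    # alone never vetoes an otherwise lawful action, so nothing to check.
    return True
```

The ordering is the whole point: a veto at a higher law short-circuits everything below it, which is exactly why such a flat hierarchy fails on real dilemmas where harms must be weighed rather than ranked.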


Modern ethical challenges in AI

However, real-world AI introduces a range of complex challenges that cannot be adequately managed by simple rules. Issues such as algorithmic bias, privacy violations, accountability in decision-making, and unintended consequences complicate the ethical landscape, necessitating more nuanced and adaptive strategies for effectively governing AI systems.

As AI continues to develop, it raises new ethical dilemmas, including the need for transparency in decision-making, accountability in cases of accidents, and the possibility of AI systems acting in ways that conflict with their initial programming. Additionally, there are deeper questions regarding whether AI systems should have the capacity for moral reasoning and how their autonomy might conflict with human values.

Categorising AI and ethics

Modern AI systems exhibit a spectrum of ethical complexities that reflect their varying capabilities and applications. Basic AI operates by executing tasks based purely on algorithms and pre-programmed instructions, devoid of any moral reasoning or ethical considerations. These systems may efficiently sort data, recognise patterns, or automate simple processes, yet they do not engage in any form of ethical deliberation.

In contrast, more advanced AI systems are designed to incorporate limited ethical decision-making. These systems are increasingly being deployed in critical areas such as healthcare, where they help diagnose diseases, recommend treatments, and manage patient care. Similarly, in autonomous vehicles, AI must navigate complex moral scenarios, such as how to prioritise the safety of passengers versus pedestrians in unavoidable accident situations. While these advanced systems can make decisions that involve some level of ethical consideration, their ability to fully grasp and navigate complex moral landscapes remains constrained.

The variety of ethical dilemmas


Legal impacts

The question of AI accountability is increasingly relevant in our technologically driven society, particularly in scenarios involving autonomous vehicles, where determining liability in the event of an accident is fraught with complications. For instance, if an autonomous car is involved in a collision, should the manufacturer, software developer, or vehicle owner be held responsible? As AI systems become more autonomous, existing legal frameworks may struggle to keep pace with these advancements, leading to legal grey areas that can result in injustices. Additionally, AI technologies are vulnerable to misuse for criminal activities, such as identity theft, fraud, or cyberattacks. This underscores the urgent need for comprehensive legal reforms that not only address accountability issues but also develop robust regulations to mitigate the potential for abuse.

Financial impacts

The integration of AI into financial markets introduces significant risks, including the potential for market manipulation and exacerbation of financial inequalities. For instance, algorithms designed to optimise trading strategies may inadvertently favour wealthy investors, perpetuating a cycle of inequality. Furthermore, biased decision-making algorithms can lead to unfair lending practices or discriminatory hiring processes, limiting opportunities for marginalised groups. As AI continues to shape financial systems, it is crucial to implement safeguards and oversight mechanisms that promote fairness and equitable access to financial resources.

Environmental impacts

The environmental implications of AI cannot be overlooked, particularly given the substantial energy consumption associated with training and deploying large AI models. The computational power required for these processes contributes significantly to carbon emissions, raising concerns about the sustainability of AI technologies. In addition, the rapid expansion of AI applications in various industries may lead to increased electronic waste, as outdated hardware is discarded in favour of more advanced systems. To address these challenges, stakeholders must prioritise the development of energy-efficient algorithms and sustainable practices that minimise the ecological footprint of AI technologies.

Social impacts

AI-driven automation poses a profound threat to traditional job markets, particularly in sectors that rely heavily on routine tasks, such as manufacturing and customer service. As machines become capable of performing these jobs more efficiently, human workers may face displacement, leading to economic instability and social unrest. Moreover, the deployment of biased algorithms can deepen existing social inequalities, especially when applied in sensitive areas like hiring, loan approvals, or criminal justice. The use of AI in surveillance systems also raises significant privacy concerns, as individuals may be monitored without their consent, leading to a chilling effect on free expression and civil liberties.

Psychological impacts

The interaction between humans and AI systems can have far-reaching implications for emotional well-being. For example, AI-driven customer service chatbots may struggle to provide the empathetic responses that human agents can offer, leading to frustration among users. Additionally, emotionally manipulative AI applications in marketing may exploit psychological vulnerabilities, promoting unhealthy consumer behaviours or contributing to feelings of inadequacy. As AI systems become more integrated into everyday life, understanding and mitigating their psychological effects will be essential for promoting healthy human-computer interactions.

Trust issues

Public mistrust of AI technologies is a significant barrier to their widespread adoption. This mistrust is largely rooted in the opacity of AI systems and the potential for algorithmic bias, which can lead to unjust outcomes. To foster trust, it is crucial to establish transparent practices and accountability measures that ensure AI systems operate fairly and ethically. This can include the development of explainable AI, which allows users to understand how decisions are made, as well as the implementation of regulatory frameworks that promote responsible AI development. By addressing these trust issues, stakeholders can work toward creating a more equitable and trustworthy AI landscape.

These complex ethical challenges require global coordination and thoughtful, adaptable regulation to ensure that AI serves humanity’s best interests, respects human dignity, and promotes fairness across all sectors of society. The ethical considerations around AI extend far beyond individual technologies or industries, impacting fundamental human rights, economic equality, environmental sustainability, and societal trust.

As AI continues to advance, the collective responsibility of governments, corporations, and individuals is to build robust, transparent systems that not only push the boundaries of innovation but also safeguard society. Only through an ethical framework can AI fulfil its potential as a transformative force for good rather than deepening existing divides or creating new dangers. The journey towards creating ethically aware AI systems necessitates ongoing research, interdisciplinary collaboration, and a commitment to prioritising human well-being in all technological advancements.

Digital Public Infrastructure: An innovative outcome of India’s G20 leadership

From latent concept to global consensus

Only a couple of years ago, DPI (Digital Public Infrastructure), now one of the most frequently invoked acronyms of the day, was a largely latent term. Today, it has gained an ‘internationally agreed vocabulary’ and wide-ranging global recognition. This is not to say that earlier efforts in this direction were absent; rather, a tangible global consensus on the formal adoption of the term had remained out of reach.

The complex dynamics behind this long-standing impasse over a consensus-based acknowledgement of DPI have been prominently highlighted in the recently published report of ‘India’s G20 Task Force on Digital Public Infrastructure’. The report clearly underlines that:

While DPI was being designed and built independently by selected institutions around the world for over a decade, there was an absence of a global movement that identified the common design approach that drove success, as well as low political awareness at the highest levels of the impacts of DPI on accelerating development. 

It was only under India’s G20 Presidency, in September 2023, that the first-ever multilateral consensus was reached on recognising DPI as a ‘safe, secure, trusted, accountable, and inclusive’ driver of socioeconomic development across the globe. Notably, the ‘New Delhi Declaration’ has cultivated a DPI approach intended to foster a robust, resilient, innovative, and interoperable digital ecosystem steered by a crucial interplay of technology, business, governance, and the community.

The DPI approach offers a middle way between purely public and purely private models, with an emphasis on addressing ‘diversity and choice’, encouraging ‘innovation and competition’, and ensuring ‘openness and sovereignty’.

Ontologically, this marks a perceptible shift from the exclusive idea of technocratic functionalism towards embracing the concepts of multistakeholderism and pluralistic universalism. These conceptualisations hold substance in the context of India’s broader quest to democratise and diversify the power of innovation, based on delicate trade-offs and cross-sectional intersubjective understanding. Nevertheless, it should also be understood that the all-pervasive digital transition increasingly embedded in the burgeoning international DPI approach draws substantially on India’s own successful experience with its domestic DPI framework, namely India Stack.

India Stack is primarily an agglomeration of open Application Programming Interfaces (APIs) and digital public goods, aiming to enhance a broadly vibrant social, financial, and technological ecosystem. It offers multiple benefits and ingenious services, like faster digital payments through UPI, Aadhaar Enabled Payments System (AEPS), direct benefit transfers, digital lending, digital health measures, education and skilling, and secure sharing of data. The remarkable journey of India’s digital progress and coherently successful implementation of DPI over the last decade indisputably turned out to be the centre of attention during the G20 deliberations. 
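Conceptually, an India Stack service such as UPI is consumed as an open API call: a client assembles a payment request and submits it for settlement. The payload shape, field names, and validation below are hypothetical stand-ins for illustration and do not follow the actual NPCI/UPI specification.

```python
# Hypothetical sketch of a UPI-style payment request. All field names here
# are invented for illustration; the real UPI API is defined by NPCI.
payment_request = {
    "payer_vpa": "alice@examplebank",  # virtual payment address (hypothetical)
    "payee_vpa": "bob@examplebank",
    "amount_inr": 500.00,
    "note": "lunch",
}

def validate(request: dict) -> bool:
    """Minimal client-side validation before submitting to the (assumed) API."""
    required = {"payer_vpa", "payee_vpa", "amount_inr"}
    return required <= request.keys() and request["amount_inr"] > 0
```

The open-API design is what lets banks, fintechs, and government services all plug into the same payments rail, which is the interoperability the DPI approach seeks to generalise.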

India’s role in advancing DPI through G20 engagement and strategic initiative

Quite exemplary is the procedural dynamism with which actions were undertaken to promote the vocabulary and effectiveness of DPI across the various G20 meetings and conferences held in India. Most importantly, the Digital Economy Working Group (DEWG) meetings and negotiations were organised in collaboration with all the G20 members, guest countries, and eminent knowledge partners, such as the ITU, OECD, UNDP, UNESCO, and the World Bank. As a result, the Outcome Document of the Digital Economy Ministers’ Meeting was unanimously agreed by all the G20 members and presented a comprehensive global digital agenda with appropriate technical nuances and risk-management strategies.

Along with gaining traction in DEWG, the DPI agenda also got prominence in other G20 working groups under India’s Presidency. These include the Global Partnership for Financial Inclusion Working Group; the Health Working Group; the Agriculture Working Group; the Trade and Investment Working Group; and the Education Working Group. 

Alongside these diverse working-group meetings, the Indian leadership also conducted bilateral negotiations with its top G20 strategic and trading partners, namely the USA, the EU, France, Japan, and Australia. Interestingly, the official joint statements from all these bilateral meetings prominently featured the term ‘DPI’. One may debate whether the time was simply ripe or whether India’s well-laid-out strategy ultimately paid off. Yet it cannot be denied that a well-thought-out parallel negotiation process played an instrumental role in giving the DPI approach leverage.

Further, as a follow-up to the New Delhi Declaration of September 2023, the Prime Minister of India announced the launch of two landmark India-led initiatives during the G20 Virtual Leaders’ Summit in November 2023. The two initiatives, the Global Digital Public Infrastructure Repository (GDPIR) and the Social Impact Fund (SIF), are aimed mainly at advancing DPI in the Global South, particularly by offering upstream technical and financial assistance and knowledge-based expertise. This kind of forward-looking, holistic approach reasonably fortifies the path towards a transformative global digital discourse.


Building on momentum: Brazil’s role in advancing DPI

Ever since India passed the baton of the G20 Presidency to Brazil, expectations have been high that the latter will carry forward the momentum and ensure that emerging digital technologies effectively meet the requirements of the Global South. It is encouraging to see Brazil stepping forward resolutely to maintain the drive, with a greater emphasis on deepening the discussion of crucial DPI components such as digital identification, data governance, data-sharing infrastructure, and global data safeguards. Although Brazil has an impressive track record of using digital infrastructure to promote poverty alleviation and inclusive growth at home, a considerable measure of success at the forthcoming G20 summit will be its efficacy in stimulating political and financial commitments for the broader availability of such infrastructure.

While concerted efforts are being made to boost the interoperability, scalability, and accessibility of DPIs, it is equally imperative to ensure their confidentiality and integrity. This becomes all the more pressing in the wake of increased cybersecurity breaches, unwarranted intrusions into data privacy, and the potential risks attached to emerging technologies like AI. Hence, at this critical juncture, it is essential to foster more refined, coordinated, and scaled-up global efforts, or, to be more precise, effective global digital cooperation.

Pavel Durov, a transgressor or a fighter for free speech and privacy?

It has not been long since Elon Musk was harshly criticised by the British government for spreading extremist content while advocating for freedom of speech on his platform. Freedom of speech has arguably become a luxury few can afford, especially on platforms whose owners are less committed to those principles as they try to comply with the requirements of governments worldwide. The British riots, during which individuals were allegedly arrested for social media posts, further illustrate the complexity of regulating social media. While governments and like-minded observers may argue that such actions are necessary to curb violent extremism and prevent critical situations from escalating, others see them as a dangerous encroachment on, and undermining of, free speech.

The line between expressing controversial opinions and inciting violence or allowing crime on social media platforms is often blurred, and the consequences of crossing it can be severe. However, let us look at a situation where someone is arrested for allegedly turning a blind eye to organised crime activities on his platform, as in the case of Telegram’s CEO. 

Namely, Pavel Durov, Telegram’s founder and CEO, became another symbol of resistance against government control over digital communications alongside Elon Musk. His arrest in Paris on 25 August 2024 sparked a global debate on the fine line between freedom of speech and the responsibilities that come with running a platform that allows for uncensored, encrypted communication. French authorities allegedly detained Durov based on an arrest warrant related to his involvement in a preliminary investigation and his unwillingness to grant authorities access to his encrypted messaging app, which has over 1 billion users worldwide. The investigation concerns Telegram’s alleged role in enabling a wide range of crimes due to insufficient moderation and lack of cooperation with law enforcement. The charges against him—allegations of enabling criminal activities such as child exploitation, drug trafficking, terrorism, and fraud, as well as refusing to cooperate with authorities —are severe. However, they also raise critical questions about the extent to which a platform owner can or should be held accountable for the actions of its users.

Durov’s journey from Russia to France highlights the complex interplay between tech entrepreneurship and state control. He first made his mark in Russia, founding VKontakte, a platform that quickly became a refuge for political dissenters. His refusal to comply with Kremlin demands to hand over user data and sell the platform eventually forced him out of the country in 2014. Meanwhile, Durov launched Telegram in 2013, a messaging app focused on privacy and encryption, which has since become a tool for those seeking to avoid government surveillance. However, his commitment to privacy has put him at odds with various governments, leading to a life of constant movement across borders to evade legal and political challenges.

In France, Durov’s initially promising relationship with the government soured over time. Invited by President Emmanuel Macron in 2018 to consider moving Telegram to Paris, Durov even accepted French citizenship in 2021. However, the French government’s growing concerns about Telegram’s role in facilitating illegal activities, from terrorism to drug trafficking, led to increased scrutiny. The tension, as we already know, culminated in Durov’s recent detention, which is part of a broader investigation into whether platforms like Telegram enable online criminality.

Durov’s relationship with the United Arab Emirates adds another layer of complexity. After leaving Russia, Durov based Telegram in the UAE, where he was granted citizenship and received significant financial backing. However, the UAE’s restrictive political environment and stringent digital controls have made this partnership a delicate one, with Durov carefully navigating the country’s security concerns while maintaining Telegram’s operations.

The USA, too, has exerted pressure on Durov. Despite repeated attempts by US authorities to enlist his cooperation in controlling Telegram, Durov has steadfastly resisted, reinforcing his reputation as a staunch defender of digital freedom. He recently told Tucker Carlson in an interview that the FBI had approached a Telegram engineer, attempting to secretly hire him to install a backdoor that would allow US intelligence agencies to spy on users. His refusal to collaborate with the FBI has only heightened his standing as a symbol of resistance against governmental overreach in the digital realm.

With such an intriguing biography of his controversial tech entrepreneurship, Durov’s arrest indeed gives us reasons for speculation. At the same time, it seems not just a simple legal dispute but a symbol of the growing diplomatic and legal tensions between governments and tech platforms over control of cyberspaces. His journey from Russia to his current predicament in France highlights a broader issue: the universal challenge of balancing free expression with national security. 

Notably, Telegram, based in Dubai and widely used across Russia and the former Soviet Union, has faced scrutiny for its role in disseminating unfiltered content, especially during the Russia-Ukraine conflict. Durov, who left Russia in 2014 after refusing to comply with government demands, has consistently maintained that Telegram is a neutral platform committed to user privacy and free speech. His multiple citizenships, Russian (since the dissolution of the Soviet Union in 1991; previously Soviet from birth), Saint Kitts and Nevis (since 2013), French (since 2021), and Emirati (since 2021), only escalate tensions, with the governments concerned pressing French President Emmanuel Macron for clarifications on the matter. Even Elon Musk confronted Macron by responding directly to his post on X, claiming that ‘It would be helpful to the global public to understand more details about why he was arrested’, and describing the arrest as an attack on free speech.

Despite the unclear circumstances and vague official evidence justifying the arrest and court process, Durov will undoubtedly face the probe and confront the accusations under the laws applicable to the case. It is therefore worth looking at the relevant laws and clarifying which legal measures bear on the case.

The legal backdrop to Durov’s arrest is complex, involving both US and EU laws that govern digital platforms. Section 230 of the US Communications Decency Act of 1996, often called the ‘twenty-six words that created the internet’, is the provision most often invoked in debates over platform liability. The law, in essence, protects online platforms from liability for user-generated content as long as they act in good faith to remove unlawful material. This legal shield has allowed platforms like Telegram to flourish, offering robust encryption and a promise of privacy that appeals to millions of users worldwide. However, this immunity is not absolute. Section 230 does not protect against federal criminal liability, which means that if a platform is found to have knowingly allowed illegal activities to proliferate without taking adequate steps to curb them, its operator could indeed be held liable.

In the EU context, the recently implemented Digital Services Act (DSA) imposes stricter obligations on digital platforms, particularly those with significant user bases. Although Telegram, with its 41 million users in the EU, falls short of the ‘very large online platform’ (VLOP) threshold that would subject it to the most stringent DSA requirements, it is still obliged to act against illegal content. The DSA emphasises transparency, accountability, and cooperation with law enforcement: a framework that contrasts sharply with Telegram’s ethos of privacy and minimal interference.

Image: Pavel Durov

The case also invites comparisons with other tech moguls who have faced similar dilemmas. Elon Musk’s acquisition of Twitter, now rebranded as X, has been marked by his advocacy of free speech. Yet even Musk has had to navigate the treacherous waters of content moderation, facing pressure from governments to combat disinformation and extremist content on his platform. The latest example is the dispute with Brazil’s Supreme Court, where Musk’s social media platform X could be ordered to shut down in Brazil over alleged misinformation and extremist content spread on it. The conflict has deepened tensions between Musk and Supreme Court Judge Alexandre de Moraes, whom Musk has accused of engaging in censorship.

Similarly, Mark Zuckerberg’s Meta has been embroiled in controversies over its role in child exploitation and, especially, in spreading harmful content, from political misinformation to hate speech. On the other hand, Zuckerberg’s recent admission, in an official letter, that in 2021 the White House and other Biden administration officials exerted considerable pressure on Meta to suppress certain COVID-19-related content, including humour and satire, adds fuel to the fire concerning the abuse of legal measures to stifle freedom of speech and excessive content moderation by government officials. Nevertheless, both Musk and Zuckerberg have had to strike a balance between maintaining a platform that allows open dialogue and complying with legal requirements to prevent the spread of harmful content.

The story of Chris Pavlovski, CEO of Rumble, further complicates this narrative. His decision to leave the EU following Durov’s arrest underscores the growing unease among tech leaders about the increasing regulatory pressures of the EU. Pavlovski’s departure can be seen as a preemptive move to avoid the legal and financial risks of operating in a jurisdiction that tightens its grip on digital platforms. It also reflects a broader trend of tech companies seeking more favourable regulatory environments, often at the expense of user rights and freedoms.

All these controversial examples bring us to the heart of this debate: where to draw the line between free speech and harm prevention. Encrypted platforms like Telegram offer unparalleled privacy but pose significant challenges for law enforcement. The potential for these platforms to be used by criminals and extremists cannot be ignored. However, the solution is more complex. Overzealous regulation risks stifling free expression and driving users to even more secretive and unregulated corners of the internet.

Pavel Durov’s case is a microcosm of the larger global struggle over digital rights. It forces us to confront uncomfortable questions: Do platforms like Telegram have a responsibility to monitor and control the content shared by their users, even at the cost of privacy? Should governments have the power to compel these platforms to act, or does this represent an unacceptable intrusion into the private sphere? Should social media companies that monetise content on their platforms be held responsible for the content they allow? And ultimately, how do we find the balance in the digital world we live in to optimally combine privacy and security in our society? 

These questions will only become more pressing as we watch Durov’s and similar legal cases unfold. The outcome of his case could set a precedent that shapes the future of digital communication, influencing not just Telegram but all platforms that value user privacy and free speech. Either way, Durov’s case also highlights the inherent conflict between cyberspace and real space. There was once a concept that the online world—the domain of bits, bytes, and endless data streams—existed apart from the physical reality we live in. In the early days of the internet, this virtual space seemed like an expansive, unregulated frontier where the laws of the physical world did not necessarily apply. However, cyberspace was never a separate entity; rather, it was an extension, a layer added to the world we already knew. Therefore, the concept of punishment in the digital world has always been, and still is, rooted in the physical world. Those held responsible for crimes or who commit crimes online are not confined to a virtual jail; they are subject to controversies in the real world and legal systems, courts, and prisons.

The history of computer viruses: Journey back to where it all began!

Once confined to the realms of theoretical science and speculative fiction, computer viruses have morphed into one of the main threats of the digital age. This transformation from an intriguing concept into a pervasive danger has not only reshaped the landscape of cybersecurity but also posed significant challenges to national security and dangers to everyday users.

In this exploration, we trace the origins of computer viruses, charting their evolution through decades of innovation and malfeasance, to understand how they became a key concern for modern societies. 

Early concepts and theoretical foundations

The notion of a computer virus was born not out of malicious intent but from theoretical discussions about self-replicating programs. In 1949, during his lectures at the University of Illinois, Hungarian-American scientist John von Neumann introduced the idea of self-reproducing automata. His theories, later published in 1966, proposed that computer programs, much like biological entities, could self-replicate. Although not labelled as viruses at the time, these theoretical constructs laid the groundwork for what would later become a major field of study in computer science.
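Von Neumann’s insight that a program can carry a complete description of itself is easy to demonstrate today. The snippet below is a minimal self-reproducing program (a ‘quine’) in Python, included here purely as an illustration of the concept, not as anything from the historical record:

```python
# A minimal self-reproducing program (a 'quine'): when run, it prints an
# exact copy of its own source code. The trick is a template string that
# is formatted with its own representation: %r inserts the quoted string,
# and %% becomes a literal % in the output.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the script produces text identical to the script itself; real viruses combine this self-description trick with code that writes the copy somewhere else, such as another file or disk.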

The first practical implementation of von Neumann’s theories was seen in the 1960s at AT&T’s Bell Labs, where the game Darwin was developed by Victor Vyssotski, Robert Morris Sr., and Malcolm Douglas McIlroy on an IBM 7090 mainframe. The game involved programs, termed organisms, that competed by taking over each other’s memory space in a digital arena, essentially simulating a survival of the fittest scenario among software.

The sci-fi prophecy and early experiments

Much like other groundbreaking concepts, the idea of a malicious self-replicating program made its way into popular culture in 1970, thanks to Gregory Benford’s science fiction story ‘The Scarred Man’. This story vividly brought to life a self-replicating program akin to a computer virus, complete with a counteracting ‘vaccine’—a visionary notion that anticipated the advent of real-world antivirus software.

The first program to perform the self-replicating function of a modern virus was Creeper, created in 1971 by Bob Thomas at BBN Technologies. Designed as an experiment, Creeper moved through the ARPANET, displaying the message, ‘I’m the creeper, catch me if you can!’ This foundational work paved the way for the development of malicious software.


Image Source: dscomputerstudies1112.weebly.com

In 1975, computer programmer John Walker developed the first Trojan, called ANIMAL. It was a ‘twenty questions’ program that tried to guess the user’s favourite animal, using a simple learning routine to improve its questions. Walker included a subroutine called PERVADE, which copied ANIMAL into any user-accessible directories it could find.

Although there is some debate as to whether ANIMAL was a Trojan or simply another virus, it is generally considered the first Trojan because it disguised itself as something the user wanted and then acted without the user’s permission, copying itself into directories without their knowledge or consent. This fits the definition of a Trojan: a type of malware that hides inside another program and performs actions without the user’s permission.

The rise of malicious intent

The 1970s and early 1980s saw the first instances of viruses crafted with harmful intentions. In 1974, the Rabbit (or Wabbit) virus emerged, replicating itself rapidly to the point of crashing systems. The speed of replication gave the virus its name.

Technically, the Rabbit virus operated by exploiting the host system’s lack of limits on process creation. It is regarded as the first example of a ‘fork bomb’: a type of denial-of-service attack in which a process continually replicates itself to deplete system resources.

While the Rabbit virus itself may not have caused widespread havoc compared to later viruses, its impact on the field of cybersecurity was profound. It helped catalyse the development of early antivirus measures and informed the strategies used to combat future threats. 
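The replication mechanism behind the Rabbit virus can be illustrated with a harmless simulation. The sketch below is a toy model, not the original code: it tracks a population of processes that doubles every tick until it exhausts a fixed pool of process slots, after which the ‘system’ can do no further useful work. The function name and parameters are illustrative.

```python
# Safe simulation of 'rabbit'-style replication: each tick, every live
# process spawns a copy, so the population doubles until it fills a
# fixed pool of process slots. No real processes are created.
def simulate_rabbit(slots: int, ticks: int) -> list[int]:
    population = [1]                          # start with one replicating process
    for _ in range(ticks):
        nxt = min(population[-1] * 2, slots)  # doubling, capped by the system
        population.append(nxt)
        if nxt == slots:                      # slot table full: denial of service
            break
    return population

print(simulate_rabbit(slots=1024, ticks=20))
# exponential growth: 1, 2, 4, ... until every slot is occupied
```

The exponential curve explains why early systems without per-user process limits crashed so quickly; modern systems counter exactly this pattern with quotas such as the Unix `ulimit` on process counts.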

In 1982, high school student Richard Skrenta created Elk Cloner, one of the first viruses to spread via floppy disks among personal computer users. Elk Cloner infected the Apple DOS 3.3 operating system using a technique now known as a boot sector virus. It was attached to a game; on the 50th start, instead of the game, a blank screen appeared displaying a poem about the virus. If a computer booted from an infected floppy disk, a copy of the virus was placed in the computer’s memory. When an uninfected disk was then inserted, the entire DOS (including Elk Cloner) was copied to it, allowing the virus to spread from disk to disk. To prevent DOS from being rewritten each time a disk was accessed, Elk Cloner also wrote a signature byte to the disk’s directory, indicating that the disk had already been infected.
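The boot-sector spread described above can be modelled in a few lines. The following is a toy simulation under stated assumptions: the `Disk` and `Computer` classes and the `SIGNATURE` value are illustrative inventions, not Apple DOS internals, but the flow (boot loads the virus into memory, insertion infects unsigned disks, the signature byte prevents re-infection) mirrors the mechanism in the text.

```python
# Toy model of Elk Cloner-style boot-sector spread (assumed simplification).
SIGNATURE = 0x45  # hypothetical marker byte written to a disk's directory

class Disk:
    def __init__(self):
        self.infected = False
        self.signature = None

class Computer:
    def __init__(self):
        self.memory_infected = False

    def boot(self, disk: Disk):
        # Booting from an infected disk places the virus in memory.
        if disk.infected:
            self.memory_infected = True

    def insert(self, disk: Disk):
        # A memory-resident virus copies itself to any disk that does not
        # already carry the signature byte.
        if self.memory_infected and disk.signature != SIGNATURE:
            disk.infected = True
            disk.signature = SIGNATURE

pc = Computer()
carrier, clean = Disk(), Disk()
carrier.infected, carrier.signature = True, SIGNATURE
pc.boot(carrier)       # virus now resident in memory
pc.insert(clean)       # clean disk gets infected and signed
print(clean.infected)  # True
```

The signature check is the key design detail: without it, the virus would rewrite the DOS on every disk access, slowing the machine and betraying its presence.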

Official recognition and the growth of malware

The term ‘computer virus’ was coined by Fred Cohen in 1983 while he was a graduate student. Fred Cohen’s experiments provided concrete evidence of the potential threat posed by computer viruses. His work demonstrated that these programs could not only replicate but also conceal their presence, making them difficult to detect and eradicate. He presented his findings in a seminal paper titled ‘Computer Viruses – Theory and Experiments’. 

Cohen introduced a small, self-replicating program into a UNIX system, referring to it as a ‘virus’. This program was able to spread from one file to another, replicating itself and modifying other programs to include a copy of itself. 

By the mid-1980s, the landscape of computer viruses had expanded significantly. The Brain virus, which appeared in 1986, targeted IBM PC platforms and employed stealth techniques to evade detection. The Brain virus was created by two Pakistani brothers, Basit and Amjad Farooq Alvi, who owned a computer store in Lahore. Interestingly, their initial intention was not to cause harm but to protect their medical software from being pirated. To achieve this, they embedded Brain into the boot sector of floppy disks, ensuring that any unauthorised copies of their software would be infected.


Image Source: By Avinash Meetoo – https://commons.wikimedia.org/w/index.php?curid=3919244

The release of the internet worm, also known as the Morris worm, in 1988 marked another important event in the history of cybersecurity. Created by Robert Tappan Morris, a graduate student at Cornell University, this self-replicating program exposed significant vulnerabilities in the early internet infrastructure, causing widespread disruption and prompting major advancements in computer security. Morris developed the worm as an experiment to gauge the size of the internet. His intention was not to cause harm but to explore the network’s capabilities. However, a critical flaw in the worm’s design led to it spreading uncontrollably, causing significant damage.

The wake-up call: Recognising the need for cybersecurity

The initial success of these early viruses can be attributed to two primary factors: the absence of antivirus software and a general lack of awareness about the importance of cyber hygiene among users.

The late 1980s and early 1990s marked a key period for the internet community. The proliferation of malware threats was a wake-up call, highlighting the urgent need for robust cybersecurity measures. In these years, the antivirus software industry saw rapid growth and diversification. Companies around the world began developing and releasing antivirus programs to address the escalating threat. In 1987, Bernd Robert Fix documented the first successful removal of a computer virus.

That same year, G Data Software AG released the first antivirus software designed for Atari ST computers, signalling the commercial viability and necessity of antivirus solutions. Concurrently, McAfee, Inc. was founded and launched VirusScan, one of the earliest antivirus programs for personal computers. These developments marked the beginning of a concerted effort to protect users from the growing menace of computer viruses.

Notable examples include Avira, which emerged as a significant player in Germany, and ThunderByte Antivirus from the Netherlands. Meanwhile, avast! was developed in Czechoslovakia, offering robust protection against emerging threats, and Dr Solomon’s Anti-Virus Toolkit became a trusted name in the United Kingdom.

These early antivirus programs were instrumental in establishing the commercial antivirus industry. They provided users with essential tools to detect, remove, and prevent computer viruses, significantly enhancing the security of personal and business computing environments. The proliferation of these tools represented a collective global effort to combat the burgeoning threat of malware, laying the groundwork for the sophisticated cybersecurity solutions we rely on today.

The modern era of cybersecurity

Today, the landscape of cyber threats has evolved to include ransomware, spyware, and sophisticated cyberespionage tools, costing the global economy billions annually. Cybersecurity has become a critical component of national security strategies worldwide, with significant investments from governments and corporations to protect their infrastructure and data.

The constant battle between malicious actors and cybersecurity experts is relentless, with hundreds of thousands of new malware samples emerging daily, challenging experts to combat them effectively. The importance of robust security measures was starkly demonstrated by the CrowdStrike incident of 19 July 2024, which brought down the digital networks of airports, hospitals, and governments globally, disrupting daily life, businesses, and government operations. Numerous industries, including airlines, banks, hotels, and manufacturing, were severely affected, as were essential services such as emergency response and government websites. The financial damage from this worldwide outage is estimated at no less than USD 10 billion, underscoring the critical need for strong cybersecurity defences in our interconnected world.

Computer viruses have been around since the dawn of the tech era, so expecting a solution that would eliminate them for good is unrealistic. But that does not mean they cannot be contained, and that is exactly where cybersecurity measures step in. The more tech experts enhance security, the less likely viruses are to cause significant damage on a global scale.

X, a lone warrior for freedom of speech?

Let’s start with a quote…

‘2024 will be marked by an interplay between change, which is the essence of technological development, and continuity, which characterises digital governance efforts,’ said Dr Jovan Kurbalija in one of his interviews, predicting the year 2024 at its start.

Judging by developments in the social media realm, 2024 does indeed appear to be a year of change, especially in the legal field, with long-running disputes over, and implementations of, newly adopted digital policies. Dr Kurbalija’s prediction connects to some of the main topics Diplo and its Digital Watch Observatory follow, such as content moderation and freedom of speech in the social media world.

This dichotomy invites reflection on how, in the dimly lit corridors of power, where influence and control intertwine like the strands of a spider’s web, social media has become a double-edged sword. On the one hand, platforms like 𝕏 stand as bastions of free speech, allowing voices to be heard that might otherwise be silenced. On the other, they are powerful instruments in the hands of those who control them, with the potential to shape public discourse, influence public opinion, and even ignite conflicts. That is why the scrutiny 𝕏 faces for hosting extremist content raises essential questions about whether it is merely a censorship-free network or a tool wielded by its enigmatic owner, Elon Musk, to further his own agenda.

The story begins with the digital revolution, when the internet was hailed as the great equaliser, giving everyone a voice. Social media platforms emerged as the town squares of the 21st century, where ideas could be exchanged freely, unfiltered by traditional gatekeepers like governments or mainstream media. Under Musk’s ownership, 𝕏 has taken this principle to its extreme, often resisting calls for tighter content moderation to protect free speech. But as with all freedoms, this one also comes with a price.

The platform’s hands-off approach to content moderation has led to widespread concerns about its role in amplifying extremist content. The issue here is not just about spreading harmful material; it touches on the core of digital governance. Governments around the world are increasingly alarmed by the potential for social media platforms to become breeding grounds for radicalisation and violence. The recent scrutiny of 𝕏 is just the latest chapter in an ongoing struggle between the need for free expression and the imperative to maintain public safety.

The balance between these two forces is incredibly delicate in countries like Türkiye, where the government has a history of cracking down on dissent. The Turkish government’s decision to block Instagram for nine days in August 2024, after the platform failed to comply with local laws and sensitivities, is a stark reminder of the power dynamics at play. In this context, 𝕏’s refusal to bow to similar pressures can be seen as both a defiant stand for free speech and a dangerous gamble that could have far-reaching consequences.

But the story does not end there. The influence of social media extends far beyond any one country’s borders. In the UK, the recent riots have highlighted the role of platforms like 𝕏 and Meta in both facilitating and exacerbating social unrest. While Meta has taken a more proactive approach to content moderation, removing inflammatory material and attempting to prevent the spread of misinformation, 𝕏’s more relaxed policies have allowed a wider range of content to circulate, encompassing not just legitimate protest organising but also harmful rhetoric that has fuelled violence and division.

The contrast between the two platforms is stark. Meta, with its more stringent content policies, has been criticised for stifling free speech and suppressing dissenting voices. Yet, in the context of the British riots, its approach may have helped prevent the situation from escalating further. 𝕏, on the other hand, has been lauded for its commitment to free expression, but this freedom comes at a price: the platform’s role in the riots has drawn sharp criticism, with some accusing it of enabling the very violence it claims to oppose. Government officials have vowed action against tech platforms, even though Britain’s Online Safety Act will not be fully in effect until next year. Meanwhile, the EU’s Digital Services Act, which no longer covers Britain, is already in force and will allegedly serve as a backstop in similar disputes.

The British riots also serve as a cautionary tale about the power of social media to shape public discourse. In an age where information spreads at lightning speed, the ability of platforms like 𝕏 and Meta to influence events in real time is unprecedented. This kind of lever of power is not just a threat to governments but also a powerful tool that can be used to achieve political ends. For Musk, acquiring 𝕏 represents a business opportunity and a chance to shape the global discourse in ways that align with his future vision.

Musk did not even hesitate to accuse the European Commission of attempting to pull off what he describes as an ‘illegal secret deal’ with 𝕏. In one of his posts, he claimed the EU, with its stringent new regulations aimed at curbing online extremist content and misinformation, allegedly tried to coax 𝕏 into quietly censoring content to sidestep hefty fines. Other tech giants, according to Musk, nodded in agreement, but not 𝕏. The platform stood its ground, placing its unwavering belief in free speech above all else.

While the European Commission fired back, accusing 𝕏 of violating parts of the EU’s Digital Services Act, Musk’s bold stance has ignited a fiery debate. And here, it is not just about rules and fines anymore—it is a battle over the very soul of digital discourse. How far should governmental oversight go? And at what point does it start to choke the free exchange of ideas? Musk’s narrative paints 𝕏 as a lone warrior, holding the line against mounting pressure, and in doing so, forces us to confront the delicate dance between regulation and the freedom to speak openly in today’s digital world.

Furthermore, the cherry on top in this case is Musk’s close contact with, and support for, the potential new US president, Donald Trump, generating additional doubts about the concentration of power in the hands of social media owners, that is, tech giants and their allies. In an interview with Donald Trump, Elon Musk openly endorsed his candidacy for the US presidency, discussing, among other things, regulatory policies and the judicial system, thus fuelling speculation about his platform 𝕏 as a powerful oligarchic lever of power.

At this point, it is already crystal clear that governments are grappling with how to regulate these platforms and the difficult choices they are faced with. On the one hand, there is a clear need to implement optimal measures in order to achieve greater oversight in preventing the spread of extremist content and protecting public safety. On the other hand, too much regulation risks stifling the very freedoms that social media platforms were created to protect. This delicate dichotomy is at the heart of the ongoing debate about the role of tech giants in a modern, digital society.

The story of 𝕏 and its role in hosting extremist content is more than just the platform itself. It is about the power of technology to shape our world, for better or worse. As the digital landscape continues to evolve, the questions raised by 𝕏’s approach to content moderation will only become more urgent. And in the corridors of power, where decisions that shape our future are made, answers to those questions will determine the fate of the internet itself.

UN approves its first comprehensive convention on cybercrime: What happened at the last round of the negotiations?

After three years of negotiations, the UN member states at the Ad Hoc Committee (AHC) adopted the draft of the first globally binding legal instrument on cybercrime. This convention will now be presented to the UN General Assembly for formal adoption later this year.

The chair emphasised that the convention is a criminal justice instrument: its aim is to combat cybercrime by prohibiting certain behaviours by natural persons, rather than to regulate the behaviour of member states. The treaty is set to take effect once ratified by 40 member countries, and it establishes a global criminal justice policy to protect society against cybercrime by ‘fostering international cooperation’.

Consult the drafting process at the Ad Hoc Committee on Cybercrime | the annotated text of the UN Convention Against Cybercrime

The adoption of the convention has proceeded despite significant opposition from human rights groups, civil society, and technology companies, who have raised concerns about the potential risks of increased surveillance. In July, Diplo hosted experts from various stakeholder groups to discuss their expectations before the final round of UN negotiations and to review the draft treaty.

Experts noted an unprecedented alignment between industry and civil society on concerns with the draft, emphasising the urgent need for a treaty focused on core cybercrime offences, strengthened by robust safeguards and clear intent requirements. Given how many issues states had disagreed on earlier, it was hard to imagine they would be able to reach a consensus.

How did it happen? Did states change their views suddenly? What was the last round of negotiations about? 

Cyber vs ICTs: Debates about the convention’s title, scope, and terminology

The debates surrounding the title of the convention highlighted ongoing challenges among states in agreeing on the scope and terminology for this legal instrument. During the final session, the majority of delegations advocated for a succinct title, suggesting ‘United Nations Convention Against Cybercrime’ for clarity’s sake.

However, not all states agreed to include the term ‘cybercrime’ in the convention’s ‘use of terms’ provisions. Russia, in particular, criticised the use of ‘cyber’ terminology, arguing that it does not align with the committee’s mandate, and instead supported ‘information and communications technology’ (ICT), a term states had already agreed upon and included in the use of terms (Article 2).

The United States argued that the title need not define ‘cybercrime’, as it is the globally accepted term for such issues, and that the title ‘Convention Against Cybercrime’ makes immediately clear what conduct the treaty covers, and why.

Switzerland requested further discussion on the title at the beginning of the session, and the Czech Republic argued that the terms ‘cybercrime’ and ‘ICT crimes’ should not be treated as synonymous, as they represent different concepts. South Africa supported the title reflecting the committee’s mandate and expressed its flexibility in working with other delegations to find a suitable title.

Negotiations resulted in the adoption of the title

These debates reflected the long-standing disagreements between states about the scope. At the beginning of the session, the Russian Federation said that the draft convention did not meet the objectives and mandate, which are to come up with a comprehensive convention.

Russia advocated for the inclusion of certain serious crimes that involve the use of ICTs, including extremist crimes, illegal trafficking in arms and drugs, and offences involving youth. Russia argued that such crimes should be explicitly covered in a dedicated article within the criminalisation chapter.

New Zealand, on the other hand, expressed the concern that the current scope (in Article 4), particularly the procedural measures and international cooperation chapter, extends beyond the primary purpose of the convention, namely combating cybercrime.

Negotiations resulted in the adoption of Article 4 on ‘Offences established in accordance with other United Nations conventions and protocols’ which says:

Consult the topic page on cybercrime

Same old, same old: What did states agree on concerning human rights protections and safeguards?

Human rights protections and safeguards were among the most contested areas in the draft treaty throughout the negotiation process. We provided a detailed analysis of these disagreements earlier, for example, here.

During the final session, states held differing views on the Chair’s proposal for Article 6.2, which suggested adding the phrase ‘and in a manner consistent with applicable international human rights law’ to address concerns about human rights safeguards.

India proposed replacing ‘suppression’ with ‘restriction’ in Article 6.2, while Iraq called for the deletion of Article 6.2 entirely. Egypt called for the deletion of the listed rights in Article 6.2 and proposed additional language to reflect duties and responsibilities associated with certain rights as per international human rights law. Other countries, including Malaysia and Pakistan, also suggested revisions to Article 6.2, advocating for a more general statement on human rights without a detailed list. Sudan argued that singling out specific human rights could imply a hierarchy, which undermines the holistic and inclusive approach in the convention. 

Overall, states were divided: one group (e.g. Cuba, Iran, Russia, etc.) repeatedly emphasised that this is not a human rights treaty and criticised the draft for having too many references to human rights but lacking specific references to crimes and criminal uses of ICTs.

This group of states (e.g. China and Iraq, in particular) argued that human rights should not become an obstacle to effective cross-border cooperation in combating cybercrime, while others (e.g. New Zealand, Canada, Australia, Liechtenstein, the USA, Switzerland, etc.) believed that the lack of explicit references to human rights is itself a barrier to such cooperation.

A significant portion of the session was dedicated to debating Articles 14 and 16 of the draft convention, which pertain to child sexual exploitation material and the dissemination of intimate images, respectively. Concerns were raised about the phrase ‘without right’ in these articles, which some member states felt could potentially legitimise access to such material.

A joint statement by the Syrian Arab Republic, on behalf of a group of countries, called for the removal of exceptions in these articles to ensure robust protection for children and adherence to international human rights standards. Many states, including Saudi Arabia, Iran, Rwanda, and Egypt, expressed strong opposition to this wording, fearing it could create legal loopholes for child exploitation material.

These countries called for its removal or revision to eliminate any ambiguity, with Rwanda suggesting alternative phrasing such as ‘committed intentionally and without permission by law’. On the other hand, Japan defended the current wording, emphasising the need to balance combating harm with protecting freedom of expression. 

Iran, along with the Democratic Republic of Congo, also voiced strong objections to Paragraph 3 of Article 14. They argued that the provision created exceptions in combating child sexual exploitation, which they found unacceptable. Both countries requested the deletion of this paragraph, citing its inconsistency with international laws, such as the Convention on the Rights of the Child.

Switzerland and other countries supported the text as presented in rev. 3, particularly paragraph 4 of Article 14, arguing that the phrase was necessary for national authorities to act against criminal online material. 

Article 16 also sparked significant debate. Countries like Yemen and Uganda questioned the inclusion of the term ‘non-consensual’, arguing that it did not align with their domestic legal interpretations. They called for more restrictive language to prevent potential exploitation of legal loopholes, such as those involving self-generated content or material used for legitimate purposes.

Negotiations resulted in the adoption of Article 6 on ‘Respect for human rights’, which says:

Political offences exceptions: Why did states disagree?

At the beginning of the session, Costa Rica proposed for Article 40.21 to include an additional reason for the refusal of mutual legal assistance (MLA) requests related to political offences. Several countries, including Liechtenstein, France, Canada, the USA, and others supported this proposal, while others (e.g. Russia, Nigeria, Pakistan) rejected new grounds for refusal, including political offences, due to their subjective nature and potential misuse. 

Negotiations resulted in the adoption of Article 40.21, which doesn’t explicitly exclude grounds for political offences and says that mutual legal assistance may be refused

Ratification: What divided states?

In discussing the ratification of the convention and the steps to follow, states were split on two issues: the number of ratifications required for the convention to come into effect, and the need for additional protocols.

On the first aspect, some delegations supported a higher threshold for ratification to ensure inclusivity and give states time for domestic legislative harmonisation. Others were satisfied with the Chair’s proposal of a lower threshold.

Mexico argued for setting the threshold at 60 ratifications, saying this would ensure the convention’s broad and representative international support, thus enhancing its effectiveness and universality. This proposal was supported by several delegations, including New Zealand and Liechtenstein, who underscored the need for inclusivity, especially for smaller states that may need additional time to ratify the convention.

Conversely, Russia called for a lower threshold of 30 ratifications, underlining the pressing necessity for a universal instrument to address the inherently transnational nature of ICT crimes. This position found support among several delegations, including Iran and Azerbaijan, who contended that a lower threshold would facilitate more immediate global action against cybercrime.

Negotiations resulted in the adoption of the threshold of 40 ratifications and adopted Article 65 on ‘Entry into force’, which says:

On the second aspect, the Chair’s proposal for an additional protocol and the process for its consideration sparked mixed views. Concerns were raised about the immediate timing, the predetermined outcome, and the inclusivity of the process.

Some delegates supported the Chair’s proposal as a balanced approach. For example, Pakistan argued that immediate negotiations on an additional protocol would be needed to review the list of crimes, while Nigeria also highlighted that the additional protocol would help address the specific serious crimes in line with the draft.

At the same time, other delegations (e.g. Mexico, Malaysia, the Netherlands, France, Germany, Chile) suggested that it is too early to discuss the possibility of additional protocols, given that there are differences between states concerning the scope. 

Negotiations resulted in the adoption of Article 61 on ‘Relation with protocols’, which says in paragraph 1:

as well as Article 62 on ‘Adoption of supplementary protocols’, which states in paragraph 1 that

So did states agree on everything?

While we have outlined some examples of issues that sparked debate at the session, it is worth noting that UN member states adopted the draft with reservations.

In particular, while Iran expressed gratitude for the Chair’s leadership before the voting and the adoption of the draft, the delegation raised concerns about the reinsertion of certain provisions in the draft text despite strong objections. Iran emphasised the lack of consensus on key aspects, particularly within the preamble and various articles, and called for further deliberation to address outstanding issues.

The Chair clarified that in the absence of consensus, decisions on substantive matters would be made by a two-thirds majority. Iran requested votes on specific contentious paragraphs, including Articles 6, 14, 16, and 24. These proposals to delete certain paragraphs were ultimately rejected through voting, despite Iran’s reservations.

Russia, while choosing not to oppose the consensus on the convention’s text, expressed dissatisfaction with the title, arguing that it did not accurately reflect the document’s scope under the ad hoc committee’s mandate. Russia stated that it dissociates itself from the consensus on the title of the convention and intends to make an interpretive statement to that effect when signing or ratifying the instrument.

Nigeria also dissociated itself from specific provisions, particularly those in Article 14, arguing that they were inconsistent with its domestic laws and cultural norms. Nigeria emphasised that the best interests of the child should be paramount in any provisions aimed at protecting children and requested that its objections be recorded in the official report.

Cuba raised broader concerns about the convention’s comprehensiveness and effectiveness. Cuba criticised the limited number of crimes covered in the text, arguing that the convention should address a wider range of serious crimes associated with the use of ICTs, such as terrorism, hate speech, and crimes against the environment.

Additionally, Cuba pointed out ambiguities in key terms like ‘dishonest intention’ in several Articles as well as argued that Articles 33 and 37, which deal with witness protection and extradition, should be excluded as they are typically governed by national legislation and bilateral agreements. Cuba indicated, in the end, that it did not feel bound by certain provisions.

The session concluded with the adoption of the draft convention and the associated General Assembly resolution, despite the objections and reservations of several member states.


Consult the drafting process at the Ad Hoc Committee on Cybercrime | the annotated text of the UN Convention Against Cybercrime

We will continue reporting on the convention and associated developments. Stay tuned for Diplo’s updates. 

‘Please, please president, we don’t want any more electricity’

Apart from cryptocurrency-focused media, few US news outlets have covered this newsworthy development. The United States is home to more than 100 top cryptocurrency companies, including the world’s second-largest cryptocurrency exchange, Coinbase, a publicly traded company. Coinbase reported a USD 273 million profit in the fourth quarter of 2023. For the full year 2023, it earned USD 95 million on USD 3.1 billion in revenue, after posting a loss of USD 2.6 billion in 2022. Beyond Coinbase, Marathon Mining, the largest US cryptocurrency mining operation, reported a staggering 229% revenue increase to a record USD 387.5 million in 2023, up from USD 117.8 million in 2022. Several policy proposals addressing the industry’s issues are currently before US policymakers.

In late July 2024, Nashville, Tennessee (USA), hosted the largest annual bitcoin gathering. The Bitcoin Conference 2024 had one speaker everyone awaited with anticipation: Republican Party candidate and former US President Donald J. Trump, the first high-profile US political figure to agree to address the bitcoin crowd. Trump’s appearance was announced a couple of weeks in advance, and from that moment the issue of cryptocurrency and the surrounding industry entered the main discussion among the candidates in the November US elections.

Back in the green

So, the industry is back in the green, regulation is being discussed in the US Senate and Congress, and the mining industry is growing. Why, then, is the industry not discussed more on the main political stage?

In Nashville, everything was ready for Trump’s appearance. The former president’s campaign advertised it as one of the highlights of his July schedule. The crowd gathered at the Bitcoin Conference, however, is not politically homogeneous: it includes people at the complete opposite end of the political spectrum from Trump. In the past, US cryptocurrency companies were among the top contributors to the US Democratic Party (most prominently, Sam Bankman-Fried, the now-convicted former CEO of the failed cryptocurrency exchange FTX).

Crypto vs bitcoin

Before I continue, let me give you a brief explanation. It is important, trust me.

In short, the cryptocurrency industry is divided into two strongly opinionated camps. Bitcoin adopters would almost never call themselves crypto enthusiasts. They consider other cryptocurrencies, cryptocurrency exchanges, and the entire idea of ‘blockchain technology changing the world’ to be false promises. For (true) bitcoiners, NFTs, meme coins, microtransactions, enormous overnight profits, and other ‘miraculous’ stories are nothing more than elaborate schemes for tricksters and scammers willing to sell innocent investors a story of world-changing technology. And to be fair, that narrative has largely proved true. For years, US financial regulators have been waging war on cryptocurrencies and online cryptocurrency exchanges as ‘unregulated securities’ businesses, and the US SEC has already won many cryptocurrency-related court cases on scam and fraud charges.

On the other hand, the same regulatory agency has drawn a clear distinction between bitcoin and the rest. The SEC officially stated (in a court case back in 2019) that bitcoin cannot be considered a security but rather a commodity, and that it will not pursue bitcoin holders or bitcoin-only companies. This is thought to be due to bitcoin’s decentralised nature. Unlike other cryptocurrencies, bitcoin has no CEO, no headquarters, and no one hired to work on its updates; it is simply an open-source protocol that handles digital value as unique information. It therefore cannot be defined as ‘a promise of profits’ to investors, which is the central criterion of the famous Howey Test, the metric the SEC has used to determine the scope of its remit since the 1946 Supreme Court case.

This is the first point of difference recognised by regulators and one of the main arguments for bitcoin as digital gold. Bitcoin can be used at a settlement level to create a future ‘digital gold standard’ mimicking the now abandoned ‘gold standard’ of the global economy. Bitcoiners argue that other cryptocurrencies, and the industry as a whole, have achieved a huge transfer of value but no value creation (thus far). Court decisions worldwide have reached quite similar conclusions.

Energy consumption in cryptocurrency

The second major point of disagreement between the two sides (crypto and bitcoin) is how the industry spends energy. Energy is the most frequently mentioned issue in media coverage of cryptocurrency developments. You have heard and read numerous reports on the massive amount of energy used to mine (create) cryptocurrency, and 99.5% of that energy is spent on bitcoin mining. The proof-of-work (PoW) algorithm, used in the bitcoin network for security reasons, requires miners to spend energy to create new bitcoins. Specialised mining equipment is often located near big power plants, and the pursuit of cheap electricity is the major driver of the industry. In contrast, the rest of the crypto industry has answered this energy demand with the far less energy-hungry proof-of-stake (PoS) algorithm for network security. As the green agenda becomes dominant worldwide, the crypto industry now points to bitcoin as the main reason regulators are scrutinising cryptocurrencies at all. Several US legislators from the Democratic Party have filed motions for statewide bans on bitcoin mining as an energy-demanding industry. As a counter-argument, bitcoiners say that the very energy spent creating bitcoin is what gives it its power: energy spent in creation gives bitcoin an intrinsic value similar to that of physical gold.
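To see why proof-of-work burns energy by design, here is a toy sketch in Python. It is not the real Bitcoin protocol (which hashes an 80-byte binary block header twice with SHA-256 against a 256-bit difficulty target); the `mine` function, its string input, and the hex-prefix difficulty are all simplifications for illustration. The point is that a miner must grind through nonces until a hash happens to meet the target, and every failed attempt is computation (and thus electricity) spent:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 digest of block_data + nonce
    starts with `difficulty` hex zeros -- the 'work' in proof-of-work."""
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce, digest  # found a valid proof
        nonce += 1  # every failed attempt is wasted computation (energy)

# Each extra zero of difficulty multiplies the expected work by 16,
# yet verifying the result takes a single hash.
nonce, digest = mine("example block", difficulty=4)
print(f"nonce={nonce}, hash={digest}")
```

The asymmetry is the whole trick: finding the nonce is expensive, but anyone can verify it with one hash. Proof-of-stake removes this brute-force search entirely, selecting block producers by staked coins instead, which is why its energy footprint is negligible by comparison.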

These distinctions need to be clear in order to understand the scope of Trump’s address. With that, back to Nashville.

Trump’s address at the Bitcoin Conference 2024. Video by ‘NewsMax’

News for crypto in the USA

One of my friends in the audience told me that even people who are normally uninterested in politics were ecstatic to hear the first address by a US president to the bitcoin crowd. President Trump took the stage at the Bitcoin Conference 2024 and gave the crowd everything it wanted to hear, and a bit more. He said that the AI and bitcoin industries are similar because they need the same thing: electricity. He promised that the United States would ramp up electricity production several-fold, clearly setting his agenda against the Democrats’ calls for mining bans. ‘We want all bitcoins in the world to be created in the USA,’ he said. ‘We will be creating so much electricity that you’ll be saying: Please, please president, we don’t want any more electricity…’

He immediately followed with a promise to remove Gary Gensler, the current chairman of the US SEC, on his first day in office. He promised that the bitcoin and crypto industry would stay in the USA. But the most dazzling promise for the bitcoiners in attendance was his announcement that the USA might start accumulating bitcoin for future global trade. The crowd was overwhelmed: the idea of bitcoin as digital gold had finally received approval from a top policymaker, no less than the former (and possibly future) president of the United States. Later during the conference, plans were elaborated on how such a thing could be done. If realised, this could indeed play a significant role in the worldwide adoption of bitcoin as a global store of digital value. Bearing in mind that the future global economy will certainly be digital, such a move is actually quite possible and logical; ultimately, it is a matter of political will to create a strong global currency independent of the reigning central banking system. ‘Bitcoin will probably overtake gold (market); there was never anything like it… it’s not only a marvel of technology but a miracle of (human) cooperation.’ To back that up, Trump reiterated that he would halt the development of a US Central Bank Digital Currency (CBDC).

Trump finished with best wishes for all: ‘We will make America and bitcoin bigger, better, stronger, richer, freer, and greater than ever before… Have a good time with your bitcoin and your crypto and everything else you’re playing with.’

The moment he said it, the crowd suddenly cooled. They realised that he was not aware of the distinction between bitcoin and crypto. Perhaps he was simply telling the crowd, populist-style, what it wanted to hear, and once off the teleprompter’s script, he could not tell the two apart.

This was undoubtedly the event that pushed bitcoin and cryptocurrency onto the agenda of the US election race, as more and more young voters head to the polling stations and the idea of an independent global currency no longer seems a utopian, high-tech niche issue. In any case, we will have to wait and see which of these promises become actual policy and which merely served to enchant the masses. Open-source software, energy consumption, and consumer protection will be discussed in detail in the future.