Snapshot: The developments that made waves
AI governance
The ‘Pact for the Future,’ adopted at the Summit of the Future on 22 September 2024, sets out an ambitious agenda to address climate change, digital transformation, and peace while fostering agile global governance.
On Day 3 of the UN General Assembly, discussions centred on the challenges of rapid technological advancements and their sociocultural implications. A significant focus was placed on governing AI, misinformation, and disinformation, with several countries addressing their detrimental impact on democratic stability.
The UN High-level Advisory Body on AI has released its final report, Governing AI for Humanity, proposing seven strategic recommendations for global AI governance.
Israel is proactively shaping its AI landscape by establishing a national expert forum on AI policy and regulation. Led by the Ministry of Innovation, Science, and Technology, this initiative demonstrates the government’s commitment to responsibly harnessing AI and unites experts to address its challenges and opportunities.
Technologies
The companies behind AI models such as ChatGPT and Cohere’s chatbots once depended on low-cost workers for basic fact-checking. Today, these models require human trainers with specialised knowledge in medicine, finance, and quantum physics.
The US House has recently passed a bill aimed at streamlining federal permitting for semiconductor manufacturing projects, a move anticipated to benefit companies like Intel and TSMC. The legislation seeks to address concerns that lengthy environmental reviews could hinder the construction of domestic chip plants, especially as chipmakers have pledged significant investments following the 2022 CHIPS and Science Act.
At its annual Connect conference, Meta Platforms unveiled its first working prototype of augmented-reality glasses called Orion.
While South Korean memory giants Samsung Electronics and SK hynix experienced a significant sales increase in China during the first half of this year, a report by the Korea Eximbank Overseas Economic Research Institute indicates that South Korea’s reliance on China for critical semiconductor raw materials is also growing.
Several police departments in the United States have begun using AI to write incident reports, aiming to reduce time spent on paperwork.
Infrastructure
The FCC has made a pivotal move to enhance broadband services across the USA by allocating additional spectrum in the 17.3–17.7 GHz band to non-geostationary orbit (NGSO) satellite operators, including notable providers like Starlink.
China and Africa cooperate to enhance digital infrastructure, a key aspect of their economic partnership. Chinese investments have built essential frameworks, including fibre optic cables and 5G networks, transforming local economies and expanding e-commerce.
Cybersecurity
US officials warn of foreign AI influence as the presidential election draws near, with Russia leading the charge. Moscow’s efforts have focused on supporting Donald Trump and undermining Kamala Harris.
China’s national security ministry has recently alleged that a Taiwan-backed hacking group, Anonymous 64, has been attacking targets in China, even releasing photos of individuals it claims are part of the group.
The US Federal Bureau of Investigation has disrupted another major Chinese hacking group, dubbed ‘Flax Typhoon,’ which had compromised thousands of devices globally.
After months of defiance, Elon Musk’s social media platform, X, told Brazil’s Supreme Court that it had complied with orders to curb the spread of misinformation and extremist content.
Digital rights
Russia is ramping up its efforts to control the internet by allocating nearly RUB 60 billion ($660 million) over the next five years to upgrade its web censorship system, known as TSPU.
Australia is preparing to introduce age limits for social media use to protect children’s mental and physical health.
Legal
Australia has introduced the Privacy and Other Legislation Amendment Bill 2024, marking a pivotal advancement in addressing privacy concerns within the digital landscape.
Meta, Facebook’s owner, has been fined €91 million ($101.5 million) by the EU’s privacy regulator for mishandling user passwords. Ireland’s Data Protection Commission (DPC), which oversees GDPR compliance for many US tech firms operating in the EU, launched an investigation after Meta reported the incident.
A political consultant has been fined $7.7 million by the Federal Communications Commission (FCC) for using AI to generate robocalls mimicking President Biden’s voice. The calls, aimed at New Hampshire voters, urged them not to vote in the Democratic primary, sparking significant controversy.
California Governor Gavin Newsom has signed two new bills into law aimed at protecting actors and performers from unauthorised use of their digital likenesses through AI. These measures were introduced in response to the increasing use of AI in the entertainment industry, which has raised concerns about the unauthorised replication of artists’ voices and images.
Internet economy
Gold has soared to a record high of $2,629 per ounce following the US Federal Reserve’s recent interest rate cut.
Uber Technologies and WeRide announced a partnership to integrate the Chinese self-driving technology firm’s vehicles into Uber’s rideshare platform, beginning in the UAE.
OpenAI’s board is considering compensating CEO Sam Altman with equity, though no decision has been made, according to board chair Bret Taylor.
Vietnam’s President To Lam met with leading US firms in New York, pledging to strengthen the domestic tech sector.
Development
The G20 Task Force 05 on Digital Transformation has unveiled a policy brief titled ‘Advocating an International Decade for Data under G20 Sponsorship’, highlighting the fundamental role of accessible and responsibly re-used data in driving social and economic development, particularly in the context of emerging technologies like AI.
The Eastern Africa Regional Digital Integration Project (EARDIP) is poised to transform the digital landscape across Eastern Africa by enhancing connectivity and accessibility.
Sociocultural
Telegram has apparently decided to relax its privacy policy and will provide users’ IP addresses and phone numbers to authorities in response to valid legal requests.
Telegram founder Pavel Durov has announced that the messaging platform will tighten its content moderation policies following criticism over its use for illegal activities. The decision comes after Durov was placed under formal investigation in France over alleged offences linked to fraud, money laundering, and the sharing of abusive content.
Meta’s Oversight Board has advised the Facebook parent company not to automatically remove the phrase ‘From the river to the sea’, which is interpreted by some as a show of solidarity with the Palestinians and by others as antisemitic.
A group of Democratic senators, led by Amy Klobuchar, has called on the US Federal Trade Commission (FTC) and the Department of Justice (DOJ) to investigate whether AI tools that summarise online content are anti-competitive.
Elon Musk’s social media platform, X, has moved to address legal requirements in Brazil by appointing a new legal representative, Rachel de Oliveira Conceicao.
UNGA79 and the ‘Pact for the Future’
The ‘Pact for the Future’, adopted at the Summit of the Future on 22 September 2024, emerges as a declaration of intent to leap from the past into an uncertain, but ambitious, tomorrow. The Pact, presented before an audience of world leaders and civil society representatives, encapsulates a roadmap and a lighthouse – navigating the challenges of climate, digital transformation, and peace while aiming to build structures agile enough for the unpredictable rhythms of modernity. It is a global handshake between generations: a promise that the wisdom of the past will not stagnate progress but rather infuse it with urgency. The words of the UN Secretary-General – ‘We cannot create a future fit for our grandchildren with a system built by our grandparents’ – capture a sentiment that underpins the thematic core of the Pact.
The ink on the Pact for the Future was barely dry when the first repercussions could be felt, especially within the UN General Assembly’s 79th session chambers. With climate change blazing on one side and the promise of digital revolution flickering on the other, world leaders convened during the high-level week to reassert their commitment to the Sustainable Development Goals (SDGs). What unfolded was a kaleidoscope of voices, discussions, and pledges that sought to breathe life into what had often been seen as lofty, distant goals. The pace was fast, yet the ambition seemed to echo slower truths – the earth’s fevered rise in temperature, persistent inequality, and the widening gaps in access to digital infrastructure.
While the Summit of the Future carved out new space for discussions on the use and governance of AI and digital inclusion, the UNGA79 focused on ensuring these discussions weren’t mere fleeting abstractions. Anchored in the Pact, the Global Digital Compact took centre stage, drawing sharp lines around data governance issues, internet access, and AI oversight. These initiatives were a nod to the ever-growing digital divide, where the future of democracy and human rights may just be shaped by the bits and bytes of cyberspace as much as by the ballots cast at polls. Global leaders, it seemed, were not just pledging to keep everyone connected – they were promising to keep everyone protected in an increasingly tricky online world. A bold promise indeed, in a time when the pace of technological change far outstrips the speed at which governance frameworks are formed.
Then came the delicate dance of peace and security, where old enemies and new technologies collided on the agenda. Discussions surrounding the reform of the UN Security Council – arguably among the most progressive since the mid-20th century – were matched with fresh commitments to nuclear disarmament and the governance of outer space. No longer the stuff of science fiction, space and AI were recognised as the new frontiers of conflict and cooperation. Yet redressing Africa’s under-representation on the global stage may prove to be the most seismic of shifts. If the Pact’s promise to correct this historical imbalance holds, it could alter the very architecture of global governance in ways not seen since the decolonisation waves of the mid-1900s.
Through it all, the resonance of future generations loomed large. For the first time, a formal Declaration on Future Generations was signed, reminding current leaders that their decisions – or indecisions – would shape the lives of those not yet born. A future envoy, empowered youth, and a re-energised civil society seem to echo a deeper undercurrent: that this Pact, this Summit, and UNGA79 may not be remembered for their words alone, but for the actions that will (or won’t) follow in their wake.
Digital Public Infrastructure: An innovative outcome of India’s G20 leadership
From latent concept to global consensus
Only a couple of years ago, the now ubiquitous acronym DPI (Digital Public Infrastructure) was a largely latent term. Today, however, it has gained an ‘internationally agreed vocabulary’ and wide-ranging global recognition. This is not to say that no earlier efforts had been made in this direction; rather, a tangible global consensus on the formal adoption of the term had remained out of reach.
The dynamics of this long-standing impasse over a consensus-based acknowledgement of DPI are prominently highlighted in the recently published report of India’s G20 Task Force on Digital Public Infrastructure. The report underlines that:
While DPI was being designed and built independently by selected institutions around the world for over a decade, there was an absence of a global movement that identified the common design approach that drove success, as well as low political awareness at the highest levels of the impacts of DPI on accelerating development.
It was only under India’s G20 Presidency, in September 2023, that the first-ever multilateral consensus was reached to recognise DPI as a ‘safe, secure, trusted, accountable, and inclusive’ driver of socioeconomic development across the globe. Notably, the ‘New Delhi Declaration’ has cultivated a DPI approach intended to foster a robust, resilient, innovative, and interoperable digital ecosystem, steered by the interplay of technology, business, governance, and community.
The DPI approach persuasively offers a middle way between purely public and purely private models, with an emphasis on addressing ‘diversity and choice’, encouraging ‘innovation and competition’, and ensuring ‘openness and sovereignty’.
Ontologically, this marks a perceptible shift from the exclusive idea of technocratic functionalism to embracing the concepts of multistakeholderism and pluralistic universalism. These conceptualisations hold substance in the realm of India’s greater quest to democratise and diversify the power of innovation, based on delicate trade-offs and cross-sectional intersubjective understanding. It should also be noted that the emerging international DPI approach draws heavily on India’s own successful experience with its domestic DPI framework, namely India Stack.
India Stack is primarily an agglomeration of open Application Programming Interfaces (APIs) and digital public goods, aiming to foster a vibrant social, financial, and technological ecosystem. It offers multiple benefits and ingenious services, such as faster digital payments through the Unified Payments Interface (UPI), the Aadhaar Enabled Payment System (AEPS), direct benefit transfers, digital lending, digital health measures, education and skilling, and secure data sharing. The remarkable journey of India’s digital progress and the coherently successful implementation of DPI over the last decade indisputably came into focus during the G20 deliberations.
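To make the idea of open, interoperable APIs more concrete, the short Python sketch below assembles a payment request in the general spirit of such an interface. All field names, identifiers, and values are invented for illustration only; they do not reflect the actual UPI or India Stack specifications.

```python
import json
import uuid
from datetime import datetime, timezone


def build_payment_request(payer_vpa: str, payee_vpa: str, amount_inr: float, note: str) -> dict:
    """Assemble an illustrative payment request for an open, interoperable payments API.
    The structure is hypothetical and does not follow the real UPI specification."""
    return {
        "transaction_id": str(uuid.uuid4()),                  # unique reference for reconciliation
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when the request was created
        "payer": {"vpa": payer_vpa},                          # virtual payment address of the sender
        "payee": {"vpa": payee_vpa},                          # virtual payment address of the recipient
        "amount": {"currency": "INR", "value": round(amount_inr, 2)},
        "note": note,
    }


if __name__ == "__main__":
    request = build_payment_request("alice@examplebank", "merchant@examplebank", 499.00, "Utility bill")
    # In a real deployment the payload would be signed and routed through a payment switch;
    # here it is simply printed to show the shape of an interoperable request.
    print(json.dumps(request, indent=2))
```

The point of the sketch is that a common, openly documented request format is what allows banks, fintech firms, and government services to plug into the same rails.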
India’s role in advancing DPI through G20 engagement and strategic initiative
What seems quite exemplary is the procedural dynamism with which actions were undertaken to mobilise the vocabulary and effectiveness of DPI during the various G20 meetings and conferences held in India. Most importantly, the Digital Economy Working Group (DEWG) meetings and negotiations were organised in collaboration with all the G20 members, guest countries, and eminent knowledge partners such as the ITU, the OECD, UNDP, UNESCO, and the World Bank. As a result, the Outcome Document of the Digital Economy Ministers Meeting was unanimously agreed to by all the G20 members and presented a comprehensive global digital agenda with appropriate technical nuances and risk-management strategies.
Along with gaining traction in the DEWG, the DPI agenda also gained prominence in other G20 working groups under India’s presidency. These include the Global Partnership for Financial Inclusion Working Group, the Health Working Group, the Agriculture Working Group, the Trade and Investment Working Group, and the Education Working Group.
In parallel with these working group meetings, the Indian leadership also held bilateral negotiations with its top G20 strategic and trading partners, namely the USA, the EU, France, Japan, and Australia. Interestingly, the official joint statements of all these bilateral meetings included the catchword ‘DPI’. One may ask whether the time was simply ripe or whether India’s well-laid-out strategy ultimately paid off; either way, it cannot be denied that a carefully designed parallel negotiation process played an instrumental role in building leverage for the DPI approach.
Further, in follow-up to the New Delhi Declaration of September 2023, the Prime Minister of India announced the launch of two landmark India-led initiatives during the G20 Virtual Leaders’ Summit in November 2023. The two initiatives, the Global Digital Public Infrastructure Repository (GDPIR) and the Social Impact Fund (SIF), are mainly aimed at advancing DPI in the Global South, particularly by offering upstream technical and financial assistance and knowledge-based expertise. This forward-looking, holistic approach strengthens the path towards a transformative global digital discourse.
Building on momentum: Brazil’s role in advancing DPI
Ever since India passed the baton of the G20 presidency to Brazil, expectations have been high that the latter will carry forward the momentum and ensure that emerging digital technologies effectively meet the requirements of the Global South. It is encouraging to witness that Brazil is taking determined steps to maintain the drive, with a greater emphasis on deepening the discussion of crucial DPI components such as digital identification, data governance, data sharing infrastructure, and global data safeguards. Although Brazil has built an impressive track record of using digital infrastructure to promote poverty alleviation and inclusive growth at home, a considerable measure of success at the forthcoming G20 summit will be its efficacy in stimulating political and financial commitments for a broader availability of such infrastructure.
Although concerted endeavours are being deployed to boost the interoperability, scalability, and accessibility of DPIs, it is equally imperative to ensure their confidentiality and integrity. This becomes all the more pressing in the wake of increased cybersecurity breaches, unwarranted data privacy intrusions, and the potential risks attached to emerging technologies like AI. Hence, at this critical juncture, it is essential to foster more refined, coordinated, and scaled-up global efforts, or more precisely, effective global digital cooperation.
Disinformation in the digital era
Communication is the cornerstone of societal interaction, holding together the fabric of the social system. By shaping the course of communication, agents may influence the development of society. In our increasingly digital world, the spread of misinformation and disinformation poses a significant threat to social cohesion, democracy, and human rights.
The deceptive use of information has a long history, with emblematic examples found in Ancient Egypt, during the Roman Empire, and after the invention of the printing press. During the Cold War, the United States and the Soviet Union used disinformation campaigns to help advance their respective strategic interests. The complexity and scale of information pollution in the digitally connected world, however, present an unprecedented challenge. In particular, social media has allowed information to be disseminated on a wider scale. While this new informational landscape has empowered individuals to express their opinions, it has also sometimes resulted in the spread of mis- and disinformation.
The speed of propagation is intimately related to the dynamics of social media. Individuals increasingly resort to social media for day-to-day information but still use these platforms with a recreational mindset, which lowers critical thinking and makes them more vulnerable to content that evokes an emotional response, has a powerful visual component or a strong narrative, or is shown repeatedly.
Globally, data from 2022 shows that over 70% of individuals in some developing countries use social media as a source of news. This figure was above 60% in some European countries, such as Greece, Bulgaria, and Hungary. In the United States, 50% of adults get their news from social media. In 19 developed countries, 84% of Pew Research respondents believe that access to the internet and social media has made it easier to manipulate people with false information and rumours. Moreover, 70% of those surveyed consider the spread of false information online to be a major threat, second only to climate change.
The role of technology
One of the key mechanisms behind the social media phenomenon is algorithmic content curation. Social media platforms use sophisticated algorithms, designed to keep users engaged by showing them the content most likely to capture their attention and prompt interaction. As a result, posts that evoke strong emotional responses—such as anger, fear, or outrage—tend to be favoured. Disinformation, with its often sensational and inflammatory nature, fits perfectly into this model, leading to its widespread dissemination.
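As a rough illustration of how engagement-driven curation favours inflammatory material, the toy Python sketch below ranks two fictional posts by a made-up engagement score. The weights, the ‘outrage’ proxy, and the example posts are assumptions for illustration, not any platform’s actual ranking formula.

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    outrage_score: float  # 0..1, a crude proxy for how emotionally charged the post is


def engagement_score(post: Post) -> float:
    """Toy ranking: interactions are weighted up, with an extra boost for emotionally charged content."""
    interactions = post.likes + 2 * post.shares + 1.5 * post.comments
    return interactions * (1 + post.outrage_score)


feed = [
    Post("Measured policy explainer", likes=120, shares=10, comments=15, outrage_score=0.1),
    Post("Sensational false claim", likes=90, shares=40, comments=60, outrage_score=0.9),
]

# Ranking purely by predicted engagement pushes the inflammatory post to the top,
# even though it attracted fewer likes overall.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```

Even with fewer likes, the sensational post ranks first because shares, comments, and emotional charge are weighted more heavily in this toy model.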
This amplification effect is compounded by the phenomenon of ‘echo chambers’ and ‘filter bubbles’. Social media algorithms tend to reinforce users’ existing beliefs by showing them content that aligns with their views while filtering out opposing perspectives. This creates an environment where users are primarily exposed to information that confirms their biases, making them more susceptible to disinformation that supports their pre-existing opinions. In these echo chambers, false narratives can quickly gain traction, as they are continually reinforced by like-minded individuals and groups.
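The filter-bubble effect can be sketched in the same toy fashion: if a feed keeps only items whose stance is close enough to the user’s, belief-confirming content is all that remains. The stance scores, threshold, and headlines below are invented for illustration.

```python
def alignment(user_stance: float, item_stance: float) -> float:
    """Similarity between a user's stance and an item's stance, both on a -1..1 scale."""
    return 1 - abs(user_stance - item_stance) / 2


def personalised_feed(user_stance, items, threshold=0.75):
    """Keep only items whose stance is close enough to the user's, mimicking a filter bubble."""
    return [title for title, stance in items if alignment(user_stance, stance) >= threshold]


items = [
    ("Story confirming the user's view", 0.8),
    ("Balanced fact-check", 0.0),
    ("Story opposing the user's view", -0.7),
]

# A user with a stance of 0.9 is left with only the belief-confirming story,
# which is the reinforcement loop described above.
print(personalised_feed(0.9, items))
```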
The viral nature of social media further exacerbates the problem. Disinformation can spread rapidly across networks, reaching large audiences. This speed of dissemination makes it difficult for fact-checkers and other countermeasures to keep up, allowing false information to gain a foothold before it can be debunked. Moreover, once disinformation has been shared widely, it can be challenging to correct the record, as retractions or corrections often do not receive the same level of attention as the original falsehoods.
In parallel, more research is necessary to understand the spread of disinformation and how social media algorithms interplay with individuals’ active search for content, especially in non-Western and non-English-speaking countries. Against this backdrop, policy and regulation requiring companies to share data and information on algorithms with researchers and other vetted actors could be an important step towards a deeper understanding of information disorder.
The emergence of AI-generated mis- and disinformation introduces additional complexity. The challenges relate not only to misinformation fuelled by factual errors or fabricated information produced by AI (often called AI ‘hallucinations’) but also to deliberate disinformation generated by malicious actors with the assistance of AI. The possibility of using generative AI models to produce ‘deepfakes’ – synthetic audio-visual media of human faces, bodies, or voices – enhances the quality and persuasiveness of disinformation, threatening core functions of democracy. Countries as diverse as Burkina Faso, India, Slovakia, Türkiye, and Venezuela have seen deepfakes used to sway voters and shape public opinion. Ultimately, deepfakes may undermine trust in elections and democratic institutions.
Policy and regulatory responses to disinformation
A considerable number of national and regional legal frameworks, as well as private-led initiatives, have been introduced to combat mis- and disinformation. On the one hand, they seek to empower individuals to participate in fighting the spread of mis- and disinformation through media literacy. On the other hand, some initiatives put in place content regulation targeting the information ecosystem itself, reducing society’s exposure to disinformation, with particular emphasis on protecting vulnerable groups.
In both cases, policies and frameworks to fight disinformation should seek to uphold human rights, such as the right to freedom of expression and the right to receive and impart information. The Human Rights Council has affirmed that responses to the spread of mis- and disinformation must be aligned with international human rights law, including the principles of lawfulness, legitimacy, necessity, and proportionality. Any limitation imposed on freedom of expression must be exceptional and narrowly construed. Disinformation laws that are vague or that confer excessive government discretion to fight disinformation are concerning, since they may lead to censorship.
In parallel, more should be done to curb the economic incentives behind disinformation. Companies are expected to conduct human rights risk assessments and due diligence, ensuring their business models and operations do not negatively impact human rights. This includes sharing data and information on algorithms, which could allow the correlation between the spread of disinformation and ‘ad tech’ business models to be assessed.
Striking the right balance between protection and participation in combating disinformation means resorting wisely to both regulation and engagement. The latter should be conceived in broad terms, encompassing not only the active involvement of individuals, but also the involvement of other segments such as educators, companies, and technical actors. This inclusive approach provides a pathway to curb disinformation while respecting human rights.
The report ‘Decoding Disinformation: Lessons from Case Studies’, published by Diplo, offers an in-depth analysis of disinformation and its interplay with digital policy and human rights. The research was supported by the project ‘Info Trust Alliance’, funded by the German Federal Foreign Office and implemented by GIZ Moldova.