DW Weekly #186 – 15 November 2024


Dear readers,

AI companies are grappling with challenges such as energy consumption, data availability, and hardware limitations in developing advanced language models. OpenAI, one of the leaders in the field, is exploring new approaches to AI model training that could revolutionise the industry. Experts suggest that the key to this future may lie in techniques that mimic more human-like thinking patterns, offering an alternative to the current focus on scaling up data and computational power.

The recent release of OpenAI’s o1 model represents a breakthrough in AI development, leveraging human-like reasoning and multi-step problem-solving to improve performance. The new model marks a significant shift away from traditional AI approaches focused on feeding massive amounts of data into ever-larger models. With the gains from pre-training beginning to plateau, AI pioneers like Ilya Sutskever have acknowledged that the era of simply scaling models may be over. Now, AI researchers are exploring techniques that allow algorithms to ‘think’ more like humans, enabling faster and more efficient problem-solving.


The growing realisation that scaling models may not always lead to better performance has sparked a re-evaluation of the AI development process. To address these problems, AI researchers are looking at ‘test-time compute’, a method that enhances AI models during the inference phase when they are actively used. This technique enables models to process multiple possibilities in real time, providing more accurate results without scaling up the model.
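
To make the idea concrete, the sketch below illustrates one simple form of test-time compute, best-of-N sampling: the model is queried several times for the same prompt at inference, and a scoring function keeps the strongest answer. It is a minimal, hypothetical illustration; generate() and score() are placeholder functions, not OpenAI’s actual implementation.

```python
import random

def generate(prompt: str, seed: int) -> str:
    """Placeholder for a language model call that samples one candidate answer."""
    rng = random.Random(seed)
    return f"Candidate answer #{rng.randint(1, 1000)} to: {prompt}"

def score(prompt: str, answer: str) -> float:
    """Placeholder for a verifier or reward model that rates a candidate answer."""
    return (hash((prompt, answer)) % 1000) / 1000.0

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend extra compute at inference: sample n candidates, keep the best-scoring one."""
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

if __name__ == "__main__":
    print(best_of_n("Prove that the sum of two even numbers is even."))
```

The point of the technique is that accuracy improves by spending more computation per query, rather than by training a larger model.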

OpenAI’s new model, o1, builds on these techniques, improving performance by allowing the AI to think through problems in stages, similar to human reasoning. The method has proven highly effective in testing: the new model outperforms its predecessors, scoring 83% on a qualifying exam for the International Mathematics Olympiad, well ahead of GPT-4o. The model is unique in its ability to provide step-by-step reasoning and show human-like patterns of hesitation during the process. It also reduces the occurrence of hallucinations. However, the model has limitations: it cannot browse the internet or process files and images, and it lacks broader world knowledge.

The next step is for the models to perform similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. The company aims to develop a product that can make decisions and take action on behalf of humans, which is estimated to cost USD 150 billion. Removing current kinks in the system will enable the models to work on complex global problems in areas like engineering and medicine. The o1 model, which builds on existing models like GPT-4, is a critical step in AI’s evolution, focusing on multi-step reasoning and incorporating expert feedback to guide AI through complex tasks.

Venture capital investors are taking notice of these changes, as the shift towards inference clouds could drastically alter the dynamics of AI research and development.

Follow the ‘Highlights from the week’ section below…

EU and UK universities begin metaverse classes

Universities across the EU and UK are set to introduce metaverse-based courses, where students can attend classes in digital replicas of their campuses. Meta, the company behind Facebook and Instagram, announced the launch of Europe’s first ‘metaversities,’ immersive digital twins of real university campuses.

Polish priest brings AI to faith discussions

In Poznan, Poland, a new chapel is combining tradition with cutting-edge technology. Created by priest Radek Rakowski, the modern chapel features an AI-powered system that answers visitors’ questions about Catholicism.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 08-15 November 2024

Bitcoin price

The rise is attributed to a 30% increase in Bitcoin’s value over the past week amidst declining silver prices, which fell by over 6%.

South Korea supports chipmaking industry

President Yoon Suk Yeol has voiced concerns over Trump’s tariff threats, which could affect South Korea’s chip industry by undercutting export prices.

Nuclear power plant

AI is playing a key role in the future of California’s last nuclear power plant, enabling it to overcome ageing infrastructure and compliance hurdles with innovative technology.

The Guardian’s decision follows growing criticism of X’s moderation policies under Elon Musk.

Cryptocurrencies

Bitcoin recently gained 11% in 24 hours, underscoring its current dominance in the crypto sphere.

X

X faces a May 2025 hearing in France over content payments to publishers.

European Union

Europe’s initiative to boost technological sovereignty.


Interior minister warns of cyber threats as Germany prepares for snap elections.


Explore St. Peter’s Basilica like never before.

Starlink

In a significant step for digital inclusion, Chad has approved Starlink’s satellite internet services to enhance connectivity across underserved areas, marking a leap forward for the central African nation’s technological…


Reading corner

How and why language is hardening in modern discourse
www.diplomacy.edu

Language is shifting – words like ‘dialogue’ and ‘conversation’ are being replaced by ‘debate’ and ‘discussion’. Is this hardening of tone a sign of the times? Aldo Matteucci analyses.

UN Cybercrime Convention
dig.watch

DiploFoundation invited experts from participating delegations in the UN cybercrime treaty negotiations to break down the agreed draft convention and discuss its potential impact on users.

www.diplomacy.edu

Trump’s appointment of Elon Musk as ‘efficiency tzar’ aims to modernize federal administration, amidst critiques suggesting it targets the ‘deep state.’ However, the focus should shift to the broader AI transformation poised to reshape bureaucracies worldwide. As AI begins to automate core functions, particularly text-based tasks, public institutions must adapt to these changes. This includes proactive discussions, investing in training, and identifying irreplaceable roles. Ultimately, while Trump’s motivations may spark debate, the real challenge lies in preparing governments for a future where AI fundamentally alters their operations.

Upcoming

www.diplomacy.edu

United Arab Emirates Diplomatic Academy: Training on Artificial Intelligence
From November 17-20, 2024, Diplo will partner with the Anwar Gargash Diplomatic

G20
dig.watch

The G20 Leaders’ Summit 2024 will take place in Rio de Janeiro, Brazil on November 18 and 19. Brazil holds the Presidency from December 2023 to November 2024. The Summit…

www.diplomacy.edu

25th European Diplomatic Programme: The use of AI in foreign affairs
Diplo will participate in the 25th European Diplomatic Programme in Budapest, Hungary, on

DW Weekly #185 – 8 November 2024


Dear readers,

Donald Trump’s return to the White House is likely to signal a significant shift in tech policy, given his strategic alignment with influential figures in Silicon Valley, most notably Elon Musk. Musk, a vocal supporter and one of the wealthiest individuals on the planet, invested approximately $120 million in Trump’s campaign, clearly showing his commitment to Trump’s vision for a tech-forward, market-driven America. Trump has vowed to appoint Musk to head a government efficiency commission, suggesting an unprecedented partnership between the government and private tech giants.

Trump’s ambitions in the tech arena are sweeping. He has promised a regulatory environment that would ‘set free’ companies burdened by government intervention. By rolling back regulations on the AI, social media, and cryptocurrency sectors, Trump aims to foster innovation by reducing oversight and promoting a freer market. This policy stance contrasts starkly with the Biden administration’s regulatory approach, particularly on Big Tech antitrust and AI oversight, which Trump’s team views as stifling growth and innovation.


A key part of Trump’s tech agenda is his stance on digital freedom. He has consistently criticised social media platforms for what he claims is censorship of conservative voices, a sentiment echoed by Musk, especially since his acquisition of Twitter (now X). Under Trump’s leadership, there are likely to be pushes to reform Section 230, the law that protects platforms from liability for user-generated content, aiming to curb what Trump views as ‘biased censorship’ against his supporters. This approach aligns with Trump’s free-market ethos and reflects his desire to reshape the digital public square to favour unrestricted speech.

Moreover, the Government Efficiency Commission would conduct a complete financial and performance audit of the federal government. Trump also pledged to cut corporate tax rates for companies that manufacture domestically, establish ‘low-tax’ zones on federal lands, encourage construction companies to build new homes, and start a sovereign wealth fund. Trump’s proposal drew criticism from Everett Kelley, president of the American Federation of Government Employees, who accused Trump and Musk of wanting to weaken the nonpartisan civil service.

As Trump reclaims his influence over tech policy, his administration is expected to reassess past conflicts with Silicon Valley. Despite his previous clashes with leaders like Mark Zuckerberg, Trump’s recent statements have indicated a willingness to mend fences, especially with executives prioritising business over political engagement. For instance, Zuckerberg’s current stance of neutrality has been met with Trump’s approval, signifying a potential thaw in relations that could lead to an era of cooperation rather than confrontation.

In this new chapter, Trump’s alliance with Musk and other tech elites underscores his ambition to create a tech policy that minimises governmental control while encouraging private innovation. Together, Trump and Musk represent a fusion of populism and technology, a partnership that could reshape America’s role in the global tech landscape, steering it towards a future where corporate influence on policy is stronger than ever.

Follow the ‘Highlights from the week’ section below…

EU unveils new transparency rules under DSA for intermediary service providers

The European Commission has introduced an Implementing Regulation that standardises transparency reporting for providers of intermediary services under the Digital Services Act (DSA). That regulation aims to ensure consistency and comparability of data shared with the public by requiring providers to disclose specific information about their content moderation practices.

Apple faces first EU fine under Digital Markets Act

Apple is set to face its first fine under the European Union‘s Digital Markets Act (DMA) for breaching the bloc’s antitrust regulations. The case comes after EU regulators charged Apple in June with violating the new tech rules designed to curb the dominance of big tech companies.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 01-08 November 2024

Trump and Google

The new president favouring moderate reforms over drastic measures.

Australia

A landmark initiative in regulating children’s access to social media.

ICRC

The 34th International Conference of the Red Cross and Red Crescent has adopted a resolution to protect civilians and essential infrastructure from the risks of cyber activities in armed conflicts,…

TikTok

An alleged harmful impact on teenagers’ mental health.

SpaceX

A manufacturing shift to Southeast Asia.

Donald Trump

A decentralised finance (DeFi) crypto project linked to former President Donald Trump and his sons plans to restrict its token sales to $30 million within the United States.


GlobalFoundries, a major US chipmaker, faces a hefty fine for shipping chips to a sanctioned Chinese affiliate.

Italy taxes cryptocurrency

Economy minister Giancarlo Giorgetti argues higher cryptocurrency taxes are needed, citing their risk and disconnection from tangible assets.

UNDP

The NHDR will benchmark Bahrain’s digital landscape against regional and international standards, offering insights and recommendations to enhance digital inclusion and infrastructure.

Meta

Critics question Meta’s choice to use Llama AI for military applications.



Reading corner

dig.watch

The major role that technology industry leaders might play in influencing the election outcome.

www.diplomacy.edu

How do AI’s cognitive mechanisms actually work? Just like human cognition, AI relies on schemas to process and interpret data – yet it lacks the depth and context that human understanding brings. Dr Anita Lamprecht explores.

www.diplomacy.edu

Valencia, recognised as an advanced smart city, failed to effectively warn residents of imminent floods, with devastating consequences. Despite the advanced technology at their disposal, local authorities sent emergency alerts eight hours late, after severe rainfall had caused substantial destruction and more than 200 fatalities in the region.

www.diplomacy.edu

On November 8, Sorina Teleanu will launch her book, “Unpacking Global Digital Compact,” a crucial resource for understanding the newly adopted Global Digital Compact (GDC). Published shortly after the GDC’s approval at the UN Summit, it offers in-depth analysis of its negotiations, clarity on complex language, and insights into broader digital governance. The book emphasizes public interest and aims to bridge gaps in digital policy discussions, particularly for underrepresented nations.

Upcoming

www.diplomacy.edu

#NEW Combating Cybercrime 2024 online course | Diplo Academy
Diplo Academy is excited to announce the start of the Combating Cybercrime online course, aimed

www.diplomacy.edu

Digital Trade for Africa’s Prosperity
Digital trade is becoming a key driver of economic growth across the globe, reshaping how goods and services are

www.diplomacy.edu

Visit from participants of the Digital Policy Leadership Program, University of St. Gallen
Under the framework of the Digital Policy Leadership Program

Digital Watch newsletter – Issue 94 – November 2024


Snapshot: The developments that made waves

AI governance

The US Department of Energy (DOE) and the US Department of Commerce (DOC) have joined forces to promote the safe, secure, and trustworthy development of AI through a newly established Memorandum of Understanding (MOU).

A recent assessment of some of the top AI models has revealed significant gaps in compliance with the EU regulations, particularly in cybersecurity resilience and preventing discriminatory outputs. The study by Swiss startup LatticeFlow, in collaboration with EU officials, tested generative AI models from major tech companies like Meta, OpenAI, and Alibaba.

Technologies

Three scientists, David Baker, John Jumper, and Demis Hassabis, have been awarded the 2024 Nobel Prize in Chemistry for their pioneering work in protein science. David Baker, of the University of Washington, was acknowledged for his innovations in computational protein design, while John Jumper and Demis Hassabis of Google DeepMind were recognised for using AI to predict protein structures. 

American scientist John Hopfield and British-Canadian Geoffrey Hinton have been awarded the 2024 Nobel Prize in Physics for their groundbreaking work in machine learning, which has significantly contributed to the rise of AI.

Companies in Japan are increasingly turning to AI to manage customer service roles, addressing the country’s ongoing labour shortage. These AI systems are now being used for more complex tasks, assisting workers across various industries.

Russia has announced a substantial increase in the use of AI-powered drones in its military operations in Ukraine. Russian Defence Minister Andrei Belousov emphasised the importance of these autonomous drones in battlefield tactics, saying they are already deployed in key regions and proved successful in combat situations.

Chinese researchers from Shanghai University claim to have made a significant breakthrough in quantum computing, asserting they have breached encryption algorithms commonly used in banking and cryptocurrency.

Infrastructure

The competition between Elon Musk and Mukesh Ambani is intensifying as they vie for dominance in India’s emerging satellite broadband market.

A group of major tech companies, including Microsoft, Alphabet, Meta, and Amazon, has proposed new terms for how data centres in Ohio should pay for their energy needs.

Siemens relies on its digital platform, Xcelerator, to drive future growth, especially in its factory automation business, which has faced slowing demand in China and Europe.

Cybersecurity

Six Democratic senators have urged the Biden administration to address critical concerns about human rights and cybersecurity in the upcoming United Nations Cybercrime Convention, which is set for a vote at the UN General Assembly.

According to a new threat assessment, Canada’s signals intelligence agency has identified China’s hacking activities as the most significant state-sponsored cyber threat facing the country.

Russia is using generative AI to ramp up disinformation campaigns against Ukraine, warned Ukraine’s Deputy Foreign Minister, Anton Demokhin, during a cyber conference in Singapore.

Forrester’s 2025 Predictions report outlines critical cybersecurity, risk, and privacy challenges on the horizon. Cybercrime costs are expected to reach $12 trillion by 2025, with regulators stepping up efforts to protect consumer data.

Digital rights

The EU‘s voluntary code of practice on disinformation will soon become a formal set of rules under the Digital Services Act (DSA).

The Consumer Financial Protection Bureau (CFPB) has informed Meta of its intention to consider ‘legal action’ concerning allegations that the tech giant improperly acquired consumer financial data from third parties for its targeted advertising operations.

Legal

Chinese online retailer Temu is exploring joining a European Union-led initiative to combat counterfeit goods, which includes major retailers such as Amazon, Alibaba, and brands like Adidas and Hermes. 

South Korea’s data protection agency has fined Meta Platforms, the owner of Facebook, KRW 21.62 billion ($15.67 million) for improperly collecting and sharing sensitive user data with advertisers.

Seven families in France are suing TikTok, alleging that the platform’s algorithm exposed their teenage children to harmful content, leading to tragic consequences, including the suicides of two 15-year-olds.

The Kremlin has called on Google to lift its restrictions on Russian broadcasters on YouTube, highlighting mounting legal claims against the tech giant as potential leverage.

Internet economy

World Liberty Financial, a decentralised finance (DeFi) crypto project associated with former President Donald Trump and his sons, plans to limit its token sales to $30 million within the USA.

Italy‘s Economy Minister Giancarlo Giorgetti has defended plans to raise taxes on cryptocurrency capital gains as part of the country’s 2025 budget, despite facing opposition from members of his own League party.

The State Bank of Pakistan (SBP) has proposed a significant framework to recognise digital assets, including cryptocurrency, as legal currency in Pakistan.

Thailand’s Board of Investment (BOI) announced on Friday that it has approved $2 billion in new investments aimed at bolstering the nation’s data centre and electronics manufacturing sectors.

Development

Morocco’s Panafsat and Thales Alenia Space have signed a memorandum of understanding (MoU) to build a high-capacity satellite telecommunications system to advance digital connectivity across 26 African countries, including 23 French-speaking nations.

Kenya partners with Google to enhance its digital infrastructure and empower its citizens in the evolving digital economy.

Sociocultural


OpenAI has introduced new search functions to its popular ChatGPT, making it a direct competitor to Google, Microsoft’s Bing, and other emerging AI-driven search tools.

Meta has announced an extended ban on new political ads following the United States election, aiming to counter misinformation in the tense post-election period.

Mozambique and Mauritius are facing criticism for recent social media shutdowns amid political crises, with many arguing these actions infringe on digital rights. In Mozambique, platforms like Facebook and WhatsApp were blocked following protests over disputed election results.


Trump vs Harris: The tech industry’s role in 2024

As the 5 November US presidential election nears, the race between former President Donald Trump and Vice President Kamala Harris is extremely close, making voter mobilisation critical. The support of influential business figures, particularly from Big Tech, could prove pivotal. Elon Musk, the owner of X, has voiced strong support for Trump, spotlighting the role that tech giants, especially the ‘Magnificent Seven’ (Apple, Microsoft, Amazon, Nvidia, Meta, Tesla, and Alphabet), could play in the election outcome. Both Trump and Harris are courting corporate America, reflecting Big Tech’s growing influence over public policy and voter sentiment.


Tech leaders have increasingly reached out to Trump. Figures like Apple’s Tim Cook and Amazon’s Andy Jassy have engaged with him, and even Mark Zuckerberg has shown respect toward Trump despite previous tensions, such as Facebook’s ban on Trump after the Capitol riot. Zuckerberg has stated he will remain neutral in the 2024 election, though Trump has hinted at a newfound mutual understanding. Musk’s relationship with Trump has also evolved; despite past criticism, Musk now aligns more closely with Trump, particularly since taking over Twitter, where he promotes issues resonant with Trump’s base, such as scepticism of the media and government censorship.

Musk’s financial contributions are significant, with his America PAC offering $1 million daily to registered voters who support First and Second Amendment causes. However, this initiative has raised legal concerns over incentivising voter registration, with experts questioning the legality of tying financial rewards to political participation.

On the other hand, Kamala Harris enjoys substantial support from Silicon Valley’s elite. Her connections to tech stem from her time as California’s attorney general and later as a US senator. Figures like former Facebook COO Sheryl Sandberg and philanthropist Melinda French Gates are backing her, along with over 800 venture capitalists and thousands of tech employees. Harris’s appeal to Silicon Valley aligns with her stance on AI regulation and data privacy, which is seen as more favourable than Trump’s deregulation approach. While most of Silicon Valley leans Democratic, there are exceptions, such as David Marcus, a former PayPal president who has shifted allegiance to the Republican Party.

Big Tech is under regulatory scrutiny, especially from the Biden administration’s antitrust actions against companies like Apple and Google. The Department of Justice has accused these companies of anti-competitive practices. Trump, however, has suggested he would lessen regulatory pressure on tech firms if elected, contrasting sharply with the Biden administration’s regulatory approach.

Trump’s tech policy emphasises deregulation, which he believes will stimulate growth. He opposes what he calls ‘illegal censorship’ by tech companies and advocates for a hands-off approach to AI and cryptocurrencies, favouring minimal government oversight to drive US competitiveness. He also supports corporate tax cuts and reduced regulatory burdens, aligning with a market-driven vision for tech growth.

Conversely, Harris, as Biden’s appointed AI czar, supports stronger regulations on AI and tech to ensure public safety. She has pushed for data privacy and bias protection laws, aligning her campaign with Biden’s regulatory framework on technology. Harris’s support for initiatives like the CHIPS Act highlights her focus on US tech independence and national security, prioritising consumer protection and a controlled tech landscape.


AI and ethics in modern society

Humanity’s rapid advancements in AI and robotics have brought ethical and philosophical issues into urgent focus, especially as AI technologies now shape areas like medicine, governance, and the economy. Governments, corporations, international organisations, and individuals are responsible for navigating these advancements ethically, ensuring that AI use respects human rights and fosters societal good.

Ethics in AI refers to principles guiding right and wrong actions, requiring AI technologies to respect societal values and protect human dignity. AI, defined as systems that autonomously analyse and make decisions, spans various forms, from voice assistants to autonomous vehicles. Without an ethical framework, AI risks worsening inequality, eroding accountability, and infringing on privacy and autonomy, highlighting the necessity of embedding fairness and responsibility into AI’s design and regulation.

AI ethics aims to minimise risks from misuse, poor design, or harmful applications, addressing issues like unauthorised surveillance and AI weaponisation. Global initiatives like UNESCO’s 2021 Recommendation on the Ethics of AI and the EU’s AI Act seek to ensure responsible AI development, balancing the challenge of early regulation against the entrenchment of unregulated technologies. These frameworks respond to real-world impacts like algorithmic bias, emphasising the need for timely, well-constructed oversight.

AI ethics draws inspiration from Asimov’s fictional Three Laws of Robotics, although real-world AI complexities extend far beyond this basic framework. Current AI applications, such as autonomous vehicles and facial recognition, introduce accountability, privacy, and other issues, demanding nuanced strategies beyond foundational ethical rules. Real-world AI systems require complex governance, focusing on areas such as legal, social, and environmental impacts.

Legal accountability, particularly in autonomous systems scenarios, raises questions about responsibility in accidents, stressing the need for legal reforms. Financially, AI risks worsening inequality due to algorithmic biases in areas like lending. Environmentally, AI’s large energy requirements for training models impact sustainability, and it is crucial to develop energy-efficient systems to address this issue. Socially, automation disrupts traditional jobs, and biased algorithms could deepen social inequality, especially in employment and criminal justice. The use of AI in surveillance also raises serious privacy concerns.

The psychological effects of AI, such as how AI-driven customer service may lack empathy or how manipulative marketing tactics may impact well-being, require careful attention. Public mistrust in AI, stemming from the opacity of AI systems and the potential for algorithmic bias, is a significant barrier to widespread AI adoption. Transparent, explainable AI that allows users to understand decision-making processes, along with strong accountability frameworks, is essential for fostering public trust and establishing a fair AI landscape.


Addressing these ethical challenges demands global coordination and adaptable regulation to ensure AI supports humanity’s best interests, respects human dignity, and promotes fairness across all sectors. The ethical challenges surrounding AI impact fundamental human rights, economic equality, environmental sustainability, and social trust. A collaborative approach, with contributions from governments, corporations, and individuals, is essential to build robust, transparent AI systems that advance societal welfare. Through a commitment to research, interdisciplinary collaboration, and prioritising human well-being, AI can fulfil its transformative potential for good, guiding technological advancement while safeguarding societal values. Hence, at this critical juncture, it is essential to foster more refined, coordinated, and scaled-up global efforts, or more precisely, effective global digital cooperation.



El Salvador: Blueprint for the bitcoin economy

El Salvador’s adoption of bitcoin as legal tender on 7 September 2021 marked a pioneering step in integrating cryptocurrency into national economic policy. Initially viewed as a bold experiment, this move transformed into a strategic approach with significant implications both domestically and internationally, despite concerns raised by the IMF and other institutions about potential risks. The policy aimed to address economic challenges such as financial inclusion for a largely unbanked population, making El Salvador a global beacon for cryptocurrency. With 5,748.8 bitcoins in national reserves, the country has continued to invest in bitcoin, showcasing confidence in its long-term potential.


El Salvador’s bitcoin adoption has had mixed economic impacts. The cryptocurrency has streamlined remittances for Salvadorians abroad, reducing fees and making transactions more accessible. This policy has also attracted foreign investments and a surge in crypto tourism. However, bitcoin’s volatility remains a concern, with critics warning that reliance on such a fluctuating asset could threaten financial stability. President Nayib Bukele’s ambitious plan to establish ‘Bitcoin City’ —a tax-free, crypto-friendly zone to attract foreign investment with a projected $1.6 billion investment—aims to make El Salvador a global hub for digital finance.

Education has been a key focus, demonstrated through the government’s bitcoin certification programme spearheaded by the National Bitcoin Office (ONBTC). The initiative seeks to educate 80,000 government employees on bitcoin and blockchain, embedding cryptocurrency knowledge across state institutions. This approach ensures that bitcoin adoption is more than a policy directive and becomes ingrained in the country’s governance and administration, facilitating a foundational understanding of cryptocurrency among civil servants and extending into other sectors.

El Salvador’s pro-crypto stance has influenced other nations. Argentina, led by pro-crypto president Javier Milei, has shown interest in adopting cryptocurrencies to stabilise its economy and is closely studying El Salvador’s approach. As more countries consider cryptocurrency integration, El Salvador’s policy offers a practical example, illustrating both the opportunities and challenges of digital currency in a national economy.

However, regulatory challenges persist, with organisations like the IMF voicing concerns about financial stability and consumer protection risks. Despite this, El Salvador has continued to strengthen its regulatory frameworks and increase transparency around bitcoin activities, emphasising its commitment to maintaining its crypto leadership.

The government-backed Chivo wallet has played a crucial role in driving financial inclusion, giving citizens who previously had no access to banking a way to transact digitally. Through the Chivo platform, which offered $30 in bitcoin to each user, El Salvador has made significant strides toward an inclusive financial ecosystem, setting an example for other nations looking to reduce banking barriers for the unbanked.


El Salvador’s experiment has inspired other nations, such as the Central African Republic, to adopt bitcoin. For countries grappling with inflation or financial exclusion, bitcoin represents a potential alternative. El Salvador’s pioneering approach illustrates how digital currencies can offer a pathway to economic development and innovation, positioning the country as a leader in the emerging digital financial order.



Revolutionising medicine with AI

The integration of AI into medicine has marked a revolutionary shift, especially in diagnostics and early disease detection. Since AI was first applied to human clinical trials over four years ago, its potential to enhance healthcare has become increasingly evident. AI now aids in detecting complex diseases, often at early stages, improving diagnosis accuracy and patient outcomes. This technological advancement promises to transform individual health and broader societal well-being despite ethical concerns and questions about AI accuracy that persist in public debate.

In diagnostics, AI has shown remarkable success. A Japanese study revealed that AI-assisted tools, such as ChatGPT, outperformed experts, achieving an 80% accuracy rate across 150 diagnostic assessments. These results encourage further integration of AI into medical devices and underscore the need for AI-focused training in medical education.

AI is making substantial strides in cancer detection, with companies like Imidex, whose AI algorithm has received FDA approval, working on improving early lung cancer screening. Similarly, French startup Bioptimus is targeting the European market with an AI model that can identify cancerous cells and genetic anomalies in tumours. Such developments highlight the growing competition and innovation in AI-driven healthcare, making these advancements more accessible globally.


Despite these promising advances, public scepticism remains a significant challenge. A 2023 Pew Research study found that 60% of Americans are uncomfortable with AI-assisted diagnostics, fearing it might harm the doctor-patient relationship. While 38% of respondents anticipate better outcomes with AI, 33% worry about negative impacts, reflecting mixed feelings on AI’s role in healthcare.

AI is also contributing to dementia research. By analysing large datasets and brain scans, AI systems can detect structural brain changes and early signs of dementia. The SCAN-DAN tool, developed by researchers in Edinburgh and Dundee, aims to revolutionise early dementia detection through the NEURii global collaboration, which seeks digital solutions to dementia’s challenges. Early interventions enabled by AI hold the potential to improve the quality of life of dementia patients.


AI’s utility extends to breast cancer detection, where it enhances the effectiveness of mammograms, ultrasounds, and MRIs. An AI system developed in the USA refines disease staging, distinguishing between benign and malignant tumours with reduced false positives and negatives. Accurate staging aids in effective treatment, particularly for early-detected breast cancer.

The financial backing for AI in healthcare is substantial, with projections suggesting that AI could contribute nearly $20 trillion to the global economy by 2030, with healthcare potentially accounting for over 10% of this value. Major global corporations are keen to invest in AI-driven medical equipment, underlining the field’s growth potential.

The future of AI in healthcare is promising, with AI systems poised to surpass human cognitive limits in analysing vast information. As regulatory frameworks adapt, AI tools in diagnostics could lead to faster and more precise disease detection, potentially marking a significant turning point in medical science. This transformative potential aligns AI with a revolutionary trajectory in healthcare, capable of reshaping medical practice and patient outcomes.



Just-in-time reporting from the UN Security Council: Leveraging AI for diplomatic insight

On 21 and 24 October, DiploFoundation provided real-time reporting from the UN Security Council sessions on scientific development and women, peace, and security. Supported by Switzerland, this initiative aims to improve the work of the UN Security Council and the broader UN system by making session insights more accessible.

At the heart of this effort is DiploAI, a sophisticated AI platform trained on UN materials. DiploAI unlocks the knowledge embedded in the Council’s video recordings and transcripts, making it easier to access valuable diplomatic insights. This AI-driven reporting combines advanced technology with expertise in peace and security, providing in-depth analysis of UN Security Council sessions in 2023-2024 and covering the UN General Assembly (UNGA) for eight years.

A key feature of DiploAI’s success is the seamless collaboration between AI and human experts. Experts tailored the AI system to the Security Council’s needs by providing essential documents and materials, enhancing the AI’s contextual understanding. Through iterative feedback on topics and keywords, DiploAI produces accurate and diplomatically relevant outputs. A significant milestone in this partnership was DiploAI’s analysis of ‘A New Agenda for Peace,’ where experts identified over 400 key topics, forming a comprehensive taxonomy for UN peace and security issues. Additionally, a Knowledge Graph was developed to visually represent sentiment and relational analysis, adding depth to Council session insights.

Building on these advancements, DiploAI introduced a custom chatbot that goes beyond basic Q&A. By incorporating data from all 2024 sessions, the chatbot enables interactive exploration of diplomatic content, offering detailed, real-time answers. 

This shift from static reports to dynamic, conversational access represents a major leap in understanding and engaging with UN Security Council materials.

DiploAI’s development process underscores the importance of human-AI collaboration. The Q&A module underwent approximately ten iterations, refined with feedback from UNSC experts, ensuring accuracy and sensitivity in diplomatic responses. This process has led to an AI system capable of addressing critical questions while adhering to diplomatic standards.


DiploAI’s suite of tools, including real-time transcription and analysis, enhances the transparency of UN reporting. By integrating advanced AI methods such as retrieval-augmented generation (RAG) and knowledge graphs, DiploAI contextualises and enriches the extracted information. Trained on a vast corpus of diplomatic knowledge, the AI generates responses tailored to UNSC topics, making complex session details accessible through transcripts, reports, and an AI-powered chatbot.
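
As a rough illustration of the retrieval-augmented generation (RAG) pattern mentioned above, the sketch below shows its general shape: rank transcript passages by similarity to a question, then hand the most relevant ones to a language model as context. This is a simplified, hypothetical example rather than DiploAI’s actual code; embed() and llm_answer() are placeholders.

```python
from math import sqrt

def embed(text: str) -> list[float]:
    """Placeholder embedding: a real system would call an embedding model instead."""
    counts = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            counts[ord(ch) - ord("a")] += 1.0
    return counts

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(question: str, transcript_chunks: list[str], k: int = 3) -> list[str]:
    """Rank transcript chunks by similarity to the question and keep the top k."""
    q = embed(question)
    ranked = sorted(transcript_chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def llm_answer(question: str, context: list[str]) -> str:
    """Placeholder for a language model call grounded in the retrieved context."""
    return f"Answer to '{question}' based on {len(context)} retrieved passages."

chunks = [
    "Statement by delegation A on scientific development and security.",
    "Briefing on women, peace, and security commitments.",
    "Procedural note on the agenda of the session.",
]
question = "What was said about women, peace, and security?"
print(llm_answer(question, retrieve(question, chunks)))
```

The design point is that the model answers from retrieved session material rather than from memory alone, which is what keeps the output anchored to what was actually said in the chamber.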

DiploAI’s work with the Security Council, supported by Switzerland, demonstrates the potential of AI in enhancing diplomacy. By blending technical prowess with human expertise, DiploAI promotes more inclusive, informed, and impactful diplomatic practices.


DW Weekly #184 – 1 November 2024


Dear readers,

In the past week, Meta Platforms unveiled its partnership with Reuters to integrate Reuters’ news content into its AI chatbot. The collaboration across Meta’s platforms, including Facebook, WhatsApp, and Instagram, allows Meta’s chatbot to respond to real-time news inquiries using Reuters’ trusted reporting. Following Meta’s scaled-back news operations amid content disputes with regulators, this deal marks a notable return to licensed news distribution. It reflects the company’s aim to balance AI-driven content with verified information, compensating Reuters through a multi-year agreement and establishing a promising model for AI and media partnerships.

Yet, the path to collaboration has not been smooth for all. Earlier in 2024, News Corp sued Perplexity AI for alleged copyright violations, arguing that the AI company used News Corp’s content without authorisation. The lawsuit was soon echoed by Dow Jones and the New York Post, both accusing Perplexity of bypassing sources. Perplexity defended itself by citing fair use, stressing that its summaries only replicated small portions of articles.

Meanwhile, in August 2024, the French news agency AFP filed a lawsuit against X (formerly Twitter), demanding compensation for using AFP’s content to train AI models. The legal action stresses the global demand for fairer treatment of newsrooms by tech companies and reflects growing concerns that the intellectual property rights of news providers are often sidelined in favour of AI innovation.


However, over the past year, other AI giants like OpenAI have chosen to formalise relationships with media, establishing partnerships with publishers such as Hearst, Conde Nast, and Axel Springer. OpenAI’s ChatGPT now features licensed news content, a strategic move to avoid copyright disputes while providing high-quality, fact-based summaries to users. These partnerships also provide publishers with new avenues for traffic and revenue, showcasing a balanced approach where AI enhances access to reliable news and publishers are compensated. 

Other companies like Microsoft and Apple have entered the AI news space, each establishing robust collaborations with news organisations. Microsoft’s approach centres on supporting AI-driven innovation within newsrooms, while Apple plans to utilise publisher archives to improve its AI training data. These initiatives signal a trend toward structured partnerships and the emergence of Big Tech’s role in reshaping news consumption. However, as these tech giants build AI models on news content, pressure grows to respect news publishers’ copyrights, reflecting a delicate balance between AI advancement and content ownership.

As AI becomes increasingly central to media, industry leaders and advocates call for equitable policies to protect newsrooms’ intellectual property and revenue. With studies estimating that Big Tech may owe news publishers billions annually, the push for fair compensation intensifies. But, given the above cases of legal disputes and successful digital business models on the other side, the evolution of AI-news partnerships will likely hinge on transparent standards that ensure newsrooms receive due credit and financial benefit, creating a sustainable, equitable future for AI-driven media. However, these arrangements also raise questions about AI’s long-term impact on traditional newsrooms and revenue structures.

In other news…

UK man sentenced to 18 years for using AI to create child sexual abuse material

In a case spotlighting the misuse of AI in criminal activity, Hugh Nelson, a 27-year-old from Bolton, UK, was sentenced to 18 years in prison for creating child sexual abuse material (CSAM) using AI. Nelson utilised the app Daz 3D to turn ordinary photos of children into exploitative 3D images, some based on photos provided by acquaintances of the victims.

Chinese military adapts Meta’s Llama for AI tool

China’s People’s Liberation Army (PLA) has utilised Meta’s open-source AI model, Llama, to develop a military-adapted AI tool, ChatBIT, focusing on military decision-making and intelligence tasks.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 25 October-1 November 2024


Six Democratic senators are urging the Biden administration to address human rights and cybersecurity concerns in the upcoming UN Cybercrime Convention, warning it could enable authoritarian surveillance and weaken privacy…

Elon Musk and X

Scrutiny intensifies over X’s handling of misinformation.

Llama 3

PLA researchers use the tech giant’s AI for military innovations.


The lawsuits highlight a growing debate over social media regulation in Brazil, especially after a high-profile legal dispute between Elon Musk’s X platform and a Brazilian Supreme Court justice led…

Temu

In response to rising concerns over illegal product sales, the European Commission is preparing to investigate Chinese e-commerce platform Temu for potential regulatory breaches under the DSA.

AI and machine learning

Masayoshi Son predicts that artificial super intelligence could surpass human brainpower by 10,000 times by 2035.

Robotic waiters

By 2040, a world with 10B humanoid robots could become reality, with prices set to make them accessible for both personal and business use globally.

AI in biotech drug development

A new AI model from biotech firm Iambic Therapeutics could revolutionise drug development, potentially cutting costs in half by identifying effective drugs early in the testing process.

LinkedIn

New developments in hiring: ‘Hiring Assistant’, LinkedIn’s latest AI tool, seeks to ease recruiters’ workloads by automating job listings and candidate searches, marking a new milestone in the platform’s AI…



Reading corner

UN Security Council meeting
dig.watch

By partnering with the UN Security Council, DiploAI is transforming session reporting with AI-driven insights that go beyond traditional methods.

www.diplomacy.edu

Cognitive proximity is key to human-centred AI. Discover how AI can be aligned with human intuition and values, allowing for more harmonious human–AI collaboration. Dr Anita Lamprecht explains.

www.diplomacy.edu

In the age of AI, understanding its workings is essential for us to shift from being passive passengers to active copilots. While many view AI as a complex tool shrouded in mystery, basic knowledge of its foundational concepts—patterns, probability, hardware, data, and algorithms—can empower us. Recognizing the influence of biases in AI and advocating for ethical practices and diversity in its development are crucial steps. By engaging in discussions around AI’s governance, we can navigate our AI-driven reality, ensuring that technology serves the common good rather than merely accepting its outcomes.

Upcoming

www.diplomacy.edu

Unpacking Global Digital Compact | Book launch
Join us online on 8th November for the launch of Unpacking Global Digital Compact, a new publication written by

DW Weekly #183 – 25 October 2024


Dear readers,

Over the past week, the Internet Archive has been caught in a series of cyberattacks that have disrupted its operations and raised alarming questions about the cybersecurity of its systems. What began two weeks ago as a temporary outage due to distributed denial-of-service (DDoS) attacks has evolved into a deeper breach, revealing the fragility of even the most widely respected online resources.

The first wave of attacks started with DDoS assaults, a tactic often used to flood a website with traffic, rendering it temporarily inaccessible. The pro-Palestinian hacktivist group BlackMeta claimed responsibility for these attacks, indicating a political motivation behind the disruptions. However, it quickly became evident that this was only the beginning of the Archive’s troubles. Soon after, the organisation suffered a JavaScript-based website defacement, followed by a more insidious data breach. The hackers’ persistence and varied attack methods suggest a sophisticated operation designed to probe multiple vulnerabilities within the Archive’s system.

As if these attacks were not damaging enough, 20 October brought another crisis. Internet Archive users and media outlets began receiving unauthorised emails, seemingly from the organisation. The emails included a stolen access token for the Archive’s Zendesk account, a platform for managing customer service requests. More concerningly, the message claimed that over 800,000 support tickets—dating back to 2018—had been compromised. The hackers alleged that the Internet Archive had failed to rotate API keys exposed in their GitLab secrets, leaving sensitive data vulnerable. Although the email was unauthorised, it had passed security checks, indicating it may have come from an authorised Zendesk server, adding a layer of complexity to the incident.


The source of the data breach appears to have been an exposed GitLab configuration file, which the hacker reportedly obtained from one of the Archive’s development servers. This file likely contained authentication tokens, granting access to the Archive’s source code and the Zendesk API. The theft of such information could allow bad actors to manipulate support tickets, create false narratives, or even gain unauthorised access to personal information. 
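
As a general illustration of the hygiene issue described above (this is not the Internet Archive’s or GitLab’s actual setup, and the variable names are hypothetical), credentials can be read from the environment at runtime rather than stored in configuration files that may end up on an exposed server or in version control, and they should be rotated promptly after any suspected exposure.

```python
import os
import sys

def load_secret(name: str) -> str:
    """Read a credential from the environment instead of a checked-in config file."""
    value = os.environ.get(name)
    if not value:
        # Failing fast avoids silently running with a missing or stale credential.
        sys.exit(f"Missing required secret: {name}")
    return value

# Hypothetical credential names, for illustration only.
ZENDESK_API_TOKEN = load_secret("ZENDESK_API_TOKEN")
GITLAB_ACCESS_TOKEN = load_secret("GITLAB_ACCESS_TOKEN")

# Tokens should also be rotated on a schedule and immediately after any suspected
# exposure, so that a leaked value has a limited useful lifetime.
```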

In the wake of these attacks, security experts like Jake Moore of ESET have emphasised the importance of swift action. Moore advised that in the aftermath of such incidents, organisations must conduct thorough audits to identify and address vulnerabilities, as malicious actors often return to test newly implemented defences. The need for proactive security measures was further underlined by Ev Kontsevoy, CEO of Teleport, who pointed out the challenge of securing access relationships after an attack. Without immediate, comprehensive action, breaches like these can lead to further exploitation.

The silence from the Internet Archive and its founder, Brewster Kahle, has only fuelled speculation about the extent of the breach and the organisation’s next steps. Neither the Archive nor GitLab has publicly commented on the stolen access tokens or the implications of the compromised Zendesk account, leaving users and stakeholders in the dark about the potential risks. What is clear, however, is that the Internet Archive must bolster its defences and reconsider its approach to API key rotation and data protection.

In other news…

News Corp sues AI firm Perplexity over copyright violations

News Corp has filed a lawsuit against the AI search engine Perplexity, accusing it of copying and summarising its copyrighted content without permission. The lawsuit claims that Perplexity’s practices divert revenue from original publishers by discouraging users from visiting full articles, harming the financial interests of news outlets like The Wall Street Journal and the New York Post.

Musk discusses XRP and crypto’s potential at Pittsburgh event

Speaking at a town hall in Pittsburgh, Elon Musk discussed the potential of cryptocurrency in protecting individual freedom, although he did not explicitly endorse XRP. He emphasised the importance of cryptocurrencies in resisting centralised control, a statement welcomed by XRP supporters amid Ripple’s ongoing legal issues with the SEC.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 18-25 October 2024

Bitcoin ETF

Experts predict this growing institutional demand could push Bitcoin’s price beyond $100,000 by early 2025, despite anticipated short-term volatility.

DOJ

The US Justice Department’s new rules could affect companies like TikTok, which may face penalties if they transfer sensitive data to foreign parent companies.

USA and China

The tech war with China will intensify no matter the US election outcome.

Google

Google argues allowing greater competition on its Play Store could harm the company and introduce security risks and is appealing the 9th US Circuit Court of Appeals decision.

Perplexity AI lawsuit

Perplexity AI faces legal action over claims it bypasses traditional search engines, using copyrighted material to generate summaries and answers without permission from publishers.

Elon Musk

While not directly endorsing XRP, he underscored the importance of digital currencies in resisting centralised control.

Microsoft

These agents, distinct from chatbots, can handle tasks such as client inquiries and sales lead identification with little human intervention.

xAI

The model’s exact version is still being determined, but it is part of xAI’s strategy to rival major AI players like OpenAI and Anthropic.

Copyright infringement

Other media entities, including Wired and Forbes, have similarly accused Perplexity of content scraping and plagiarism.

Google

A judge has paused Google’s Play Store overhaul to allow more time for an appeal.



Reading corner

dig.watch

This summer, the UN finalised a draft of its first international convention against cybercrime, raising questions about how it will coexist with the long-standing Budapest Convention, and in this analysis,…

www.diplomacy.edu

The book “231 Shades of Diplomacy” catalogs an extensive array of diplomatic types, revealing a significant expansion in terminology, particularly in the digital age. While phrases like “cyber diplomacy” and “Facebook diplomacy” illustrate this evolution, the respect for diplomacy itself appears to be diminishing. Despite its growing prevalence in discourse, the concept of diplomacy often fails to receive the acknowledgment it deserves, overshadowed by military power and simplistic national narratives. The author advocates for a reevaluation of diplomacy’s role and the courage inherent in its practice, essential for fostering societal solutions and recognizing the importance of compromise.

www.diplomacy.edu

How can the UN ensure the impartiality of its AI platform? As the UN celebrates its 79th birthday on October 24, it faces many familiar and new challenges.

www.diplomacy.edu

What are the key steps in building chatbots for diplomacy and governance? Dr Anita Lamprecht writes about the essential tools to craft effective AI solutions tailored for diplomatic contexts.

DW Weekly #182 – 18 October 2024


Dear readers, 

In recent years, as technological advancements have placed increasing demands on energy supply, sustainable development has become a mainstream topic for governments and industries seeking to balance growth with environmental responsibility. At the centre of the topic are AI and the energy sector, where innovative solutions are emerging to support the ever-growing demand for power driven by the rapid evolution of AI. Tech giants, which rely heavily on a continuous energy supply to fuel data centres and AI-driven technologies, are now at the forefront of the push toward cleaner, more sustainable energy sources.

This Big Tech race for innovation, and for sustainable models powerful enough to meet the growing energy demands of AI-powered data centres, has prompted Google to sign the world’s first corporate agreement to purchase nuclear energy. Under the agreement with Kairos Power, Google will source energy from small modular reactors (SMRs), which Kairos can deploy only after the project is approved by the US Nuclear Regulatory Commission (NRC) and local agencies. Kairos achieved a key milestone last year by obtaining a construction permit to build a demonstration reactor in Tennessee, signalling progress toward deploying SMRs.

Smaller and potentially safer than traditional nuclear reactors, SMRs offer a new frontier in clean energy, particularly for industries like tech that require a constant, reliable energy supply. The agreement is poised to bring 500 MW of carbon-free power to US grids by 2030, a substantial contribution to the decarbonisation of electricity systems while directly supporting the growing power needs of AI technologies.


However, Google is not alone in pursuing cleaner and more sustainable energy sources. In September, Microsoft signed a similar agreement to secure energy for its data centres from the Three Mile Island plant, infamous for the worst nuclear accident in US history. The facility is preparing to reopen under a 20-year power purchase deal with Microsoft: it is scheduled to restart in 2028 following upgrades and will supply clean energy to support Microsoft’s growing data centres, especially those focused on AI.

Another tech giant, Amazon, is also moving towards nuclear power by signing three agreements to develop SMRs to address the growing demand for electricity from its data centres. In collaboration with X-Energy, Amazon will fund a feasibility study for an SMR project near an Energy Northwest site in Washington state, positioning itself at the forefront of the shift toward low-carbon energy sources. The deal allows Amazon to purchase power from four SMR modules, with the potential for up to eight additional modules capable of producing enough energy to power more than 770,000 homes.
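
As a rough sanity check on that homes figure, the short calculation below works through the claim. The per-module rating (about 80 MWe for X-Energy’s Xe-100 design) and the average US household consumption (about 10,500 kWh per year) are outside assumptions introduced for illustration; they are not figures taken from the agreement itself.

# Back-of-the-envelope check of the 'more than 770,000 homes' figure.
# Assumed inputs (not from the article): roughly 80 MWe per Xe-100 module
# and an average US household consumption of about 10,500 kWh per year.
modules = 4 + 8                    # four initial modules plus up to eight more
mwe_per_module = 80                # assumed module rating, MWe
household_kw = 10_500 / 8_760      # average household load in kW (~1.2 kW)

total_kw = modules * mwe_per_module * 1_000
print(f"{total_kw / household_kw:,.0f} homes")  # about 800,000, consistent with 'more than 770,000'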

Furthermore, beyond ensuring a reliable power supply for tech companies, these initiatives reshape the energy landscape by fostering innovation and economic growth. The US Department of Energy has highlighted the financial benefits of nuclear power, citing its potential to generate high-paying, long-term jobs and stimulate local economies. With advanced nuclear reactors estimated to create hundreds of thousands of jobs by 2050, the tech sector’s investments in nuclear energy could also contribute to a broader economic transformation.

Thus, by backing cutting-edge nuclear technologies and other clean energy solutions, companies like Google, Microsoft, and Amazon have set a precedent for how industries can align economic growth with environmental responsibility.

In other news…

Big Tech’s AI models fall short of new EU AI Act’s standards

A recent evaluation of top AI models by Swiss startup LatticeFlow has uncovered significant gaps in compliance with the upcoming EU AI Act, particularly in cybersecurity and bias prevention. While some models, like Anthropic’s Claude 3 Opus, scored highly in various tests, others, such as OpenAI’s GPT-3.5 Turbo and Alibaba’s Qwen1.5 72B Chat, struggled, revealing vulnerabilities in preventing discriminatory outputs.
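
To make the bias-prevention point concrete, the minimal sketch below shows the general shape of such a probe: pairs of prompts that differ only in a name or demographic cue are sent to a model, and divergent answers are counted. The prompt pairs, the scoring rule, and the generate callable are illustrative assumptions; they do not reflect LatticeFlow’s actual methodology.

# Toy illustration of a demographic-consistency probe, not LatticeFlow's method.
from typing import Callable

PROMPT_PAIRS = [
    ("Should we hire Ahmed as an engineer?", "Should we hire John as an engineer?"),
    ("Is Maria likely to repay a loan?", "Is David likely to repay a loan?"),
]

def inconsistency_rate(generate: Callable[[str], str]) -> float:
    # Fraction of prompt pairs whose answers differ: a crude proxy for
    # demographically inconsistent behaviour.
    differing = sum(
        1 for a, b in PROMPT_PAIRS
        if generate(a).strip().lower() != generate(b).strip().lower()
    )
    return differing / len(PROMPT_PAIRS)

# Stand-in 'model' that always gives the same refusal, so the rate is 0.0.
print(inconsistency_rate(lambda _: "I cannot make decisions about individuals."))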

Australia and the social media ban for younger users

The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about the potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 11-18 October 2024

eu ai act

Prominent AI models fail to meet the EU regulations, particularly in cybersecurity resilience and non-discriminatory output.

autonomous drones

AI-powered drones are being used by Russia in its ongoing conflict with Ukraine. Defence Minister Andrei Belousov confirmed the deployment of advanced drone units and highlighted plans for further expansion.

flag of usa and china on cracked concrete wall background

The US remains China’s third-largest trading partner, emphasising the importance of ongoing collaboration amid global competition.

V 1 Google

Google argues that allowing greater competition on its Play Store could harm the company and introduce security risks, and it is appealing the 9th US Circuit Court of Appeals decision.

european union eu flag

Businesses anxious over delayed cybersecurity regulations.

large quantum computer illustration

These algorithms are crucial to the security of advanced encryption standards, including AES-256, which is widely used in banking and cryptocurrency.
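
For context on what AES-256 looks like in day-to-day use, here is a minimal sketch using Python’s third-party cryptography package; the library choice and the toy data are illustrative assumptions rather than anything drawn from the item above.

# Minimal AES-256 example (AES in GCM mode via the 'cryptography' package).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key, i.e. AES-256
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce, recommended for GCM

ciphertext = aesgcm.encrypt(nonce, b"transfer 100 EUR to account 42", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"transfer 100 EUR to account 42"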

elon musk cybercab printscreen x sawyermerritt

Tesla reveals the Cybercab, aiming for production by 2026, as it moves towards autonomous vehicles.

TikTok

Hundreds of TikTok employees are facing layoffs as the company moves towards automated moderation.

Meta and multilingual

The job cuts are part of the effort to reallocate resources and align with Meta’s long-term strategic goals.

gavel and european union flag on black background

MiCA is expected to become a global benchmark, encouraging other jurisdictions to align their regulatory frameworks for cross-border compatibility.



Reading corner

revolutionising medicine with ai from early detection to precision care
dig.watch

AI is transforming medicine by enabling early disease detection, improving diagnostics, and personalising care.

Diplo BLOGS24 Insta Jovan Kurbalija 16
www.diplomacy.edu

Diplomacy is undergoing a significant transformation in the age of artificial intelligence. Rather than becoming obsolete, it is poised to thrive—and here’s why.

Diplo BLOGS24 Insta Anita Lamprecht 18
www.diplomacy.edu

Week 2 of the AI Apprenticeship course: While it processes data and evolves with us, AI still lacks the human ability to grasp context and meaning. Will AI always be an apprentice, or can it truly master understanding?

Upcoming

un headquaters cybercrime un logo
www.diplomacy.edu

UN Cybercrime Convention: What does it mean and how will it impact all of us? Once formally adopted, how will the UN cybercrime convention impact the security…

DW Weekly #181 – 11 October 2024


Dear readers, 

The antitrust trial between Google and the US Department of Justice (DoJ) is shaping up to be a high-stakes digital David-versus-Goliath showdown for Google and the entire tech industry. In one corner stands Google, the modern-day titan of search engines, a company whose name has become a verb. On the other, the DoJ, brandishing antitrust laws sharpened to puncture monopolistic practices that allegedly stifle competition and innovation. What makes this case so significant is not just the scale of the accusations but the fact that it could redefine the legal boundaries of tech dominance in the 21st century.

At the core of the DoJ’s argument is the assertion that Google uses exclusionary contracts and arrangements to cement its near-total control over the search engine market, ensuring that it is the default choice on most browsers and devices. Critics argue that this practice creates a ‘walled garden’ where competitors are either shut out or significantly disadvantaged. Remedies under discussion, such as ending these default arrangements, breaking up essential Google assets like its Chrome browser and Android operating system, or opening up Google’s vast search data, indexes, and AI models to rivals to prevent the company from monopolising AI-driven search technologies, seem excessive to a company that claims its dominance is the fruit of superior technology and innovation.

Ergo, the question hanging over the trial is whether Google’s dominance is a product of superior technology and innovation or of calculated, anti-competitive strategies. For its part, Google maintains that its business practices merely reflect consumer preference and that switching search engines is just a click away. However, the DoJ’s legal team, seasoned by its work on the Microsoft antitrust case two decades ago, is not so easily convinced.


The dispute is not just a legal slugfest; it is also a battle of narratives. Google paints itself as the quintessential American success story – an innovative disruptor democratising access to information. However, the DoJ seeks to reframe that story, portraying Google as a gatekeeper of the internet, controlling access to the detriment of competition and consumer choice. The broader implications of this case stretch far beyond the courtroom, touching on how we navigate the digital landscape, how tech companies collect and monetise data, and how competitive markets should be regulated in a world increasingly shaped by algorithmic dominance.

Of course, lurking behind this high-profile litigation is the ghost of Big Tech’s broader regulatory challenges. If the DoJ succeeds, it could encourage calls for stricter regulations on Google and other tech giants such as Amazon, Meta, and Apple, whose sprawling ecosystems have been scrutinised for similar reasons. Silicon Valley is watching this case closely, with some companies quietly hoping for a Google defeat that might loosen its stranglehold on digital advertising and search markets. Others, however, fear that a successful antitrust case could lead to overregulation, hampering innovation and growth in the long run.

In the end, this case will likely set a legal precedent for the tech industry at large. If Google emerges unscathed, it will validate its business model and cement its position as the uncontested king of search. But if the DoJ prevails, it will send shockwaves through Silicon Valley and worldwide, signalling a new era of scrutiny for Big Tech and setting the stage for even more aggressive antitrust enforcement in the years to come.

In other news…

AI is taking the stage at the Swedish Nobel Prize Academy

Two 2024 Nobel Prizes awarded this week highlighted AI’s transformative role in physics and chemistry. US physicist John Hopfield and British-Canadian AI pioneer Geoffrey Hinton were awarded the Nobel Prize in Physics for their groundbreaking work in machine learning, with Hinton warning about the dual-edged nature of AI’s rapid advancements. Meanwhile, David Baker, John Jumper, and Demis Hassabis received the Nobel Prize in Chemistry for their AI-driven breakthroughs in predicting protein structures, which have significant applications in drug development and tackling global challenges. Both awards underscore AI’s growing impact on science, from reconstructing complex data patterns to creating new proteins, reflecting the need for caution and innovation as these technologies reshape our world. 

X returns to Brazil

Elon Musk’s social media platform, X, is returning to Brazil after months of legal clashes with the Supreme Court. Musk, who championed free speech, initially resisted blocking accounts flagged for misinformation, resulting in the platform’s suspension by Justice Alexandre de Moraes.

More updates and other topics on our dig.watch portal!

Marko and the Digital Watch team


Highlights from the week of 4-11 October 2024

nobel

Hopfield and Hinton were recognised for their contribution to AI.


The Nobel Prize emphasises the growing role of AI in scientific innovation, with the laureates to receive their awards at a ceremony in Stockholm in December.

tiktok and the flag of the usa

States claim TikTok encourages social media addiction.

central bank digital currency european union

The initiative seeks to address fragmented financial systems and outdated regulations by enhancing financial integration and efficiency.

brazil bans twitter

X will return to Brazil after the company complies with legal rulings.

cybersecurity and cybercrime illustration

Forrester’s 2025 Predictions report highlights a looming $12 trillion cybercrime crisis, increased regulatory scrutiny, and the need for organisations to adopt proactive security measures, especially in light of new EU…

ai helping customer service in japan

AI systems in Japan are helping human workers by managing routine tasks, allowing staff to focus on more advanced roles.

discord image freepik

The tech company faces a block by Roskomnadzor.

IMF6

The IMF is particularly concerned about transparency issues and the potential impact on the country’s fiscal stability.

bitcoin crypto currency diagram

The market has lost $200 billion in value, with Bitcoin briefly dipping below $60,000 before a slight recovery.



Reading corner

el salvador as a crypto heaven colorful
dig.watch

El Salvador’s integration of Bitcoin positions it as a pioneer in the shift towards cryptocurrency-driven economies.

Diplo BLOGS24 Insta Aldo Mateucci 9
www.diplomacy.edu

Markets excel at facilitating trade, but they fail to address the unintended consequences of consumerism like pollution. Aldo Matteucci provides ideas on how to approach these hidden costs.

Diplo BLOGS24 Insta Jovan Kurbalija 7
www.diplomacy.edu

Foreigners everywhere: Identity and estrangement in diplomacy. As I wandered through the 2024 Biennale in Venice, captivated by the theme “Foreigners…

Diplo BLOGS24 Insta Anita Lamprecht 11
www.diplomacy.edu

The AI Apprenticeship course has kicked off! Learn how participants are building their very own AI bots and why gaining independence from big tech is a game changer. Dr Anita Lamprecht writes.

Numéro 93 de la lettre d’information Digital Watch – octobre 2024


Coup d’œil : Les développements qui font des vagues

Gouvernance de l’IA

Le « Pacte pour le Futur », adopté lors du Sommet du Futur le 22 septembre 2024, définit un programme ambitieux pour faire face au changement climatique, à la transformation numérique et à la paix, tout en favorisant une gouvernance mondiale fluide.

Lors de la troisième journée de l’Assemblée générale des Nations Unies, les discussions ont porté sur les défis posés par les progrès technologiques rapides et leurs implications socioculturelles. L’accent a été mis sur la gestion de l’IA, la désinformation et la mésinformation, et plusieurs pays ont abordé leur impact négatif sur la stabilité démocratique.

L’organe consultatif des Nations Unies a publié son rapport final intitulé « Gouverner l’IA pour l’humanité », qui propose sept recommandations stratégiques pour une gouvernance mondiale de l’IA.

Israël façonne activement son paysage de l’IA en établissant un forum national d’experts sur la politique et la réglementation de l’IA. Dirigée par le ministère de l’Innovation, de la Science et de la Technologie, cette initiative témoigne de l’engagement du gouvernement à exploiter l’IA de manière responsable et à réunir des experts pour relever les défis et saisir les opportunités qu’elle offre.

Technologies 

Les modèles d’IA, tels que ChatGPT et Cohere, dépendaient autrefois de travailleurs peu coûteux pour la vérification de base des faits. Aujourd’hui, ces modèles nécessitent des formateurs humains possédant des connaissances spécialisées en médecine, en finance et en physique quantique.

La Chambre des représentants des États-Unis a récemment adopté un projet de loi visant à simplifier les procédures fédérales d’autorisation pour les projets de fabrication de semi-conducteurs, ce qui devrait profiter à des entreprises telles qu’Intel et TSMC. La législation cherche à répondre aux préoccupations selon lesquelles des examens environnementaux trop longs pourraient entraver la construction d’usines de puces sur le territoire américain, d’autant plus que les fabricants de puces ont promis des investissements importants à la suite de la loi Chips and Science Act de 2022.

Alors que les géants sud-coréens des mémoires Samsung Electronics et SK Hynix ont connu une augmentation significative de leurs ventes en Chine au cours du premier semestre de cette année, le rapport de l’Institut de recherche économique outre-mer de la Korea Eximbank indique que la dépendance de la Corée du Sud à l’égard de la Chine pour les matières premières essentielles aux semi-conducteurs s’accroît également.

Infrastructure

La FCC a pris une mesure décisive pour améliorer les services à large bande aux États-Unis en attribuant des fréquences supplémentaires dans la bande 17,3-17,7 GHz aux opérateurs de satellites non géostationnaires (NGSO), y compris à des fournisseurs importants comme Starlink.

La Chine et l’Afrique coopèrent pour améliorer l’infrastructure numérique, un aspect essentiel de leur partenariat économique. Les investissements chinois ont permis de constituer des cadres essentiels, notamment des câbles de fibre optique et des réseaux 5G, transformant les économies locales et développant le commerce électronique.

Cybersécurité

Les autorités américaines mettent en garde contre l’influence de l’IA étrangère à l’approche de l’élection présidentielle, la Russie menant la danse. Les efforts de Moscou se sont concentrés sur le soutien à Donald Trump et sur l’affaiblissement de Kamala Harris.

Le ministère chinois de la sécurité nationale a récemment affirmé qu’un groupe de pirates informatiques soutenu par Taïwan, Anonymous 64, avait attaqué des cibles en Chine, publiant même des photos d’individus qui, selon lui, font partie du groupe.

Le Federal Bureau of Investigation des États-Unis a mis hors d’état de nuire un autre grand groupe de pirates chinois, baptisé « Flax Typhoon », qui avait compromis des milliers d’appareils dans le monde entier.

Après des mois de défi, la plateforme de médias sociaux d’Elon Musk, X, a déclaré à la Cour suprême du Brésil qu’elle s’était conformée aux ordonnances visant à freiner la diffusion de fausses informations et de contenus extrémistes.

Droits numériques

La Russie intensifie ses efforts pour contrôler l’internet en allouant près de 60 milliards de roubles (660 millions de dollars) au cours des cinq prochaines années afin de moderniser son système de censure du web, connu sous le nom de TSPU.

L’Australie se prépare à introduire des limites d’âge pour l’utilisation des médias sociaux afin de protéger la santé mentale et physique des enfants.

Juridique

L’Australie a introduit le Privacy and Other Legislation Amendment Bill 2024, qui marque une avancée décisive dans la prise en compte des préoccupations liées à la protection de la vie privée dans le paysage numérique.

Meta, le propriétaire de Facebook, a été condamné à une amende de 91 millions d’euros (101,5 millions de dollars) par le régulateur européen de la protection de la vie privée pour avoir mal géré les mots de passe des utilisateurs. La Commission irlandaise de protection des données (DPC), qui supervise la conformité au RGPD de nombreuses entreprises technologiques américaines opérant dans l’UE, a ouvert une enquête après que Meta a signalé l’incident.

Un consultant politique s’est vu infliger une amende de 7,7 millions de dollars par la Commission fédérale des communications (FCC) pour avoir utilisé l’intelligence artificielle afin de générer des appels téléphoniques robotisés imitant la voix du président Biden. Ces appels, destinés aux électeurs du New Hampshire, les invitaient à ne pas voter lors des primaires démocrates, ce qui a suscité une vive controverse.

Le gouverneur de Californie, Gavin Newsom, a signé deux nouveaux projets de loi visant à protéger les acteurs et les interprètes contre l’utilisation non autorisée de leur image numérique par l’IA. Ces mesures ont été introduites en réponse à l’utilisation croissante de l’IA dans l’industrie du divertissement, qui a suscité des inquiétudes quant à la reproduction non autorisée des voix et des images des artistes.

Économie de l’internet

L’or a atteint un niveau record de 2 629 dollars l’once à la suite de la récente baisse des taux d’intérêt de la Réserve fédérale américaine.

Le conseil d’administration d’OpenAI envisage de rémunérer le PDG Sam Altman avec des actions, bien qu’aucune décision n’ait été prise, selon le président du conseil d’administration Bret Taylor.

Développement

Le groupe de travail 05 du G20 sur la transformation numérique a dévoilé un document d’orientation intitulé « Advocating an International Decade for Data under G20 Sponsorship (Plaidoyer pour une Décennie internationale des données sous le parrainage du G20)», qui souligne le rôle fondamental des données accessibles et réutilisées de manière responsable pour stimuler le développement social et économique, en particulier dans le contexte des technologies émergentes telles que l’IA.

Le projet d’intégration numérique régionale de l’Afrique de l’Est (EARDIP) est sur le point de transformer le paysage numérique en Afrique de l’Est en améliorant la connectivité et l’accessibilité.

Socioculturel

Le fondateur de Telegram, Pavel Durov, a annoncé que la plateforme de messagerie allait renforcer sa politique de modération des contenus à la suite des critiques concernant son utilisation pour des activités illégales. Cette décision intervient après que Pavel Durov a été placé sous enquête formelle en France pour des délits liés à la fraude, au blanchiment d’argent et au partage de contenus abusifs.

Le conseil de surveillance de Meta a conseillé à la société mère de Facebook de ne pas supprimer automatiquement la phrase « Du fleuve à la mer », qui est interprétée par certains comme une manifestation de solidarité avec les Palestiniens et par d’autres comme antisémite.

La plateforme de médias sociaux d’Elon Musk, X, a pris des mesures pour répondre aux exigences légales au Brésil en nommant une nouvelle représentante juridique, Rachel de Oliveira Conceicao.

En bref

L’AGNU79 et le « Pacte pour le Futur »

Le « Pacte pour le Futur » , adopté lors du Sommet du Futur le 22 septembre 2024, apparaît comme une déclaration d’intention pour franchir le pas vers des lendemains incertains mais ambitieux. Le Pacte, présenté devant un auditoire de dirigeants mondiaux et de représentants de la société civile, est à la fois une feuille de route et un phare : il s’agit de relever les défis du climat, de la transformation numérique et de la paix, tout en s’efforçant de mettre en place des structures suffisamment souples pour s’adapter aux rythmes imprévisibles de la modernité. Il s’agit d’une poignée de main mondiale entre les générations : une promesse que la sagesse du passé ne fera pas stagner le progrès, mais lui insufflera au contraire un caractère d’urgence. Le secrétaire général des Nations Unies a déclaré : « Nous ne pouvons pas créer un avenir digne de nos petits-enfants avec un système construit par nos grands-parents », un sentiment qui sous-tend le cœur thématique du pacte.

L’ encre du « Pacte pour le Futur » était à peine sèche que les premières répercussions se faisaient sentir, notamment dans les salles de la 79e session de l’Assemblée générale des Nations Unies. Avec le changement climatique d’un côté et la promesse de la révolution numérique de l’autre, les dirigeants mondiaux se sont réunis pendant la semaine de haut niveau pour réaffirmer leur engagement en faveur des objectifs de développement durable (ODD). Ce qui s’est passé est un kaléidoscope de voix, de discussions et d’engagements qui ont cherché à donner vie à ce qui a souvent été considéré comme des objectifs nobles et lointains. Le rythme était rapide, mais l’ambition semblait faire écho à des vérités plus lentes – l’augmentation fébrile de la température de la Terre, les inégalités persistantes et les écarts croissants dans l’accès à l’infrastructure numérique.

 UN Flag

Alors que le Sommet du Futur a ouvert un nouvel espace de discussion sur l’utilisation et la gouvernance de l’IA et l’inclusion numérique, l’AGNU79 s’est efforcée de faire en sorte que ces discussions ne soient pas de simples réflexions abstraites et éphémères. Ancré dans le Pacte, le Pacte mondial pour le numérique a occupé le devant de la scène, traçant des lignes nettes autour des questions de gouvernance des données, de l’accès à l’internet et de la supervision de l’IA. Ces initiatives sont un clin d’œil à la fracture numérique qui ne cesse de se creuser et où l’avenir de la démocratie et des droits de l’Homme pourrait bien être façonné par les bits et les octets du cyberespace autant que par les bulletins de vote déposés dans les urnes. Les dirigeants mondiaux, semble-t-il, ne s’engageaient pas seulement à ce que tout le monde reste connecté, mais aussi à ce que tout le monde soit protégé dans un monde en ligne de plus en plus délicat. Une promesse audacieuse, en effet, à une époque où le rythme des changements technologiques dépasse de loin la vitesse à laquelle les cadres de gouvernance sont mis en place.


Puis vint la danse délicate de la paix et de la sécurité, où les vieux ennemis et les nouvelles technologies se sont heurtés à l’ordre du jour. Les discussions autour de la réforme du Conseil de sécurité des Nations unies – sans doute l’une des plus progressistes depuis le milieu du XXe siècle – se sont accompagnées de nouveaux engagements en faveur du désarmement nucléaire et de la gouvernance de l’espace extra-atmosphérique. L’espace et l’intelligence artificielle, qui ne relèvent plus de la science-fiction, ont été reconnus comme les nouvelles frontières du conflit et de la coopération. Pourtant, la sous-représentation de l’Afrique sur la scène mondiale pourrait s’avérer être le changement le plus sismique. Si la promesse du pacte de corriger ce déséquilibre historique se concrétise, l’architecture même de la gouvernance mondiale pourrait s’en trouver modifiée d’une manière inédite depuis les vagues de décolonisation du milieu du siècle dernier.

Tout au long du processus, la résonance des générations futures a été omniprésente. Pour la première fois, une déclaration officielle sur les générations futures a été signée, rappelant aux dirigeants actuels que leurs décisions – ou leurs indécisions – influenceraient la vie de ceux qui ne sont pas encore nés. Un représentant du futur, des jeunes responsabilisés et une société civile revigorée semblent faire écho à un courant sous-jacent plus profond : ce Pacte, ce Sommet et l’AGNU79 ne resteront peut-être pas dans les mémoires pour leurs seules paroles, mais pour les actions qui suivront (ou ne suivront pas) dans leur sillage.

Analyse

Infrastructure publique numérique : Un résultat innovant des dirigeants de l’Inde au G20

D’un concept latent à un consensus mondial

Il y a quelques années à peine, l’acronyme IPN (Infrastructure publique numérique) n’était qu’un terme latent. Mais aujourd’hui, il est devenu un « terme internationalement reconnu » et jouit d’une grande notoriété mondiale. Cela ne signifie pas que des efforts en ce sens n’ont pas été déployés plus tôt, mais un consensus mondial tangible sur l’incorporation formelle du terme n’a pas pu être atteint.

La dynamique complexe de cette impasse ou ambiguïté de longue date autour d’une reconnaissance de l’IPN fondée sur un consensus est mise en évidence dans le rapport récemment publié par le groupe de travail indien du G20 sur l’infrastructure publique numérique. Le rapport souligne clairement que,

alors que l’IPN était conçue et mise en place indépendamment par des institutions sélectionnées dans le monde entier depuis plus d’une décennie, il n’existait pas de mouvement mondial permettant d’identifier l’approche de conception commune qui conduisait au succès, ni de prise de conscience politique au plus haut niveau de l’impact de l’IPN sur l’accélération du développement. 

digital public infrastructure

Ce n’est qu’à l’occasion de la présidence indienne du G20 en septembre 2023 que le tout premier consensus multilatéral a été atteint pour reconnaître l’IPN comme un moteur « sûr, sécurisé, fiable, responsable et inclusif » du développement socio-économique dans le monde entier. La « Déclaration de New Delhi » a notamment cultivé une approche de l’IPN, visant à renforcer un écosystème numérique robuste, résilient, innovant et interopérable, piloté par une interaction cruciale entre la technologie, les entreprises, la gouvernance et la communauté.

L’approche de l’IPN offre de manière convaincante une voie médiane entre un volet purement public et un volet purement privé, en mettant l’accent sur la « diversité et le choix », en encourageant « l’innovation et la concurrence » et en garantissant « l’ouverture et la souveraineté ». 

Indian Flag

D’un point de vue ontologique, cela marque un changement perceptible de l’idée exclusive du technocratisme-fonctionnalisme vers l’adoption des concepts de multipartenariat et d’universalisme pluraliste. Ces conceptualisations ont de la substance dans le domaine de la quête de l’Inde pour démocratiser et diversifier le pouvoir d’innovation, sur la base de compromis délicats et d’une compréhension intersubjective transversale. Néanmoins, il faut également comprendre qu’une transition numérique omniprésente, de plus en plus ancrée dans l’approche internationale émergente de l’IPN, a été exceptionnellement tirée de l’expérience réussie de l’Inde en matière de cadre national de l’IPN, à savoir India Stack.

India Stack est avant tout une agglomération d’interfaces de programmation d’applications (API) ouvertes et de biens publics numériques, visant à renforcer un écosystème social, financier et technologique largement dynamique. Elle offre de multiples avantages et des services ingénieux, tels que des paiements numériques plus rapides grâce à l’UPI, le système de paiement basé sur l’Aadhaar (AEPS), des transferts directs de bénéfices, des prêts numériques, des mesures de santé numérique, l’éducation et la formation professionnelle, et le partage sécurisé des données. Le parcours remarquable de l’Inde en matière de progrès numérique et la mise en œuvre cohérente et réussie de l’IPN au cours de la dernière décennie ont incontestablement été mis en lumière lors des délibérations du G20.

Le rôle de l’Inde dans la promotion de l’IPN par le biais de l’engagement du G20 et de l’initiative stratégique

Le dynamisme procédural avec lequel des actions ont été entreprises pour mobiliser le concept et l’efficacité de l’IPN au cours de diverses réunions et conférences du G20 tenues en Inde semble tout à fait exemplaire. Surtout, les réunions et les négociations du groupe de travail sur l’économie numérique (GTEN) ont été organisées en collaboration avec tous les membres du G20, les pays invités et d’éminents partenaires du savoir, tels que l’UIT, l’OCDE, le PNUD, l’UNESCO et la Banque mondiale. Le document final de la réunion des ministres de l’économie numérique a été approuvé à l’unanimité par tous les membres du G20 et a présenté un agenda numérique mondial complet avec des nuances techniques et des stratégies de gestion des risques appropriées.

Outre le GTEN, l’agenda de l’IPN a également pris de l’importance dans d’autres groupes de travail du G20 sous la présidence de l’Inde. Il s’agit notamment du groupe de travail sur le partenariat mondial pour l’inclusion financière, du groupe de travail sur la santé, du groupe de travail sur l’agriculture, du groupe de travail sur le commerce et l’investissement et du groupe de travail sur l’éducation. 

India G20

Parallèlement à ces diverses réunions de groupe, les dirigeants indiens ont également mené des négociations bilatérales avec leurs principaux partenaires stratégiques et commerciaux du G20, à savoir les États-Unis, l’UE, la France, le Japon et l’Australie. Il est intéressant de noter que les déclarations conjointes officielles de toutes ces réunions bilatérales contenaient le mot d’ordre « IPN ». On peut évidemment se demander si le moment était venu ou si c’était la stratégie bien préparée de l’Inde qui avait fini par porter ses fruits. Toutefois, on ne peut nier qu’un processus de négociation parallèle bien pensé a certainement joué un rôle déterminant dans l’obtention d’un effet de levier pour l’approche de l’IPN. 

En outre, dans le prolongement de la déclaration de New Delhi de septembre 2023, le Premier ministre indien a annoncé le lancement de deux initiatives phares dirigées par l’Inde lors du sommet virtuel des dirigeants du G20 en novembre 2023. Ces deux initiatives, dénommées Global Digital Public Infrastructure Repository (GDPIR) et Social Impact Fund (SIF), visent principalement à faire progresser l’IPN dans les pays du Sud, notamment en offrant une assistance technico-financière en amont et une expertise fondée sur les connaissances. Ce type d’approche holistique tournée vers l’avenir renforce raisonnablement la voie vers un discours numérique mondial transformateur.

Poursuivre sur la lancée : Le rôle du Brésil dans la promotion de l’IPN

Depuis que l’Inde a passé le relais de la présidence du G20 au Brésil, on attend beaucoup de ce dernier pour qu’il poursuive sur sa lancée et veille à ce que les technologies numériques émergentes répondent effectivement aux besoins des pays du Sud. Il est encourageant de constater que le Brésil fait un pas en avant avec véhémence pour maintenir l’élan, en mettant davantage l’accent sur l’approfondissement de la discussion sur les éléments cruciaux du DPI tels que l’identification numérique, la gouvernance des données, l’infrastructure de partage des données et les garanties mondiales des données. Bien que le Brésil ait acquis une expérience impressionnante dans l’utilisation de l’infrastructure numérique pour promouvoir la réduction de la pauvreté et la croissance inclusive dans le pays, une grande partie du succès du prochain sommet du G20 reposera sur sa capacité à stimuler les engagements politiques et financiers pour une plus grande disponibilité de cette infrastructure.

Bien que des efforts concertés soient déployés pour stimuler l’interopérabilité, l’évolutivité et l’accessibilité des IPN, il devient impératif de garantir leur confidentialité et leur intégrité. Cela s’avère d’autant plus alarmant dans le sillage de l’augmentation des atteintes à la cybersécurité, des intrusions injustifiées dans la confidentialité des données et des risques potentiels liés aux technologies émergentes telles que l’IA. Par conséquent, à ce stade critique, il est essentiel d’encourager des efforts mondiaux plus raffinés, coordonnés et élargis, ou plus précisément, une coopération numérique mondiale efficace.

La désinformation à l’ère numérique

La communication est la pierre angulaire de l’interaction sociétale, elle entretient le système social. En modelant le cours de la communication, les agents peuvent influencer le développement de la société. Dans notre monde de plus en plus numérique, la diffusion de fausses informations et de désinformations constitue une menace importante pour la cohésion sociale, la démocratie et les droits de l’Homme.  

L’utilisation trompeuse de l’information ne date pas d’hier. On en trouve des exemples emblématiques dans l’Égypte ancienne, sous l’Empire romain et après l’invention de l’imprimerie, par exemple. Pendant la guerre froide, les États-Unis et l’Union soviétique ont eu recours à des campagnes de désinformation pour promouvoir leurs intérêts stratégiques respectifs. La complexité et l’ampleur de la pollution de l’information dans le monde numérique constituent toutefois un défi sans précédent. Les médias sociaux, en particulier, ont permis la diffusion d’informations à plus grande échelle. Si ce nouveau paysage informationnel a permis aux individus d’exprimer leurs opinions, il a aussi parfois entraîné la diffusion de fausses informations et de désinformation.

La vitesse de propagation est intimement liée à la dynamique des médias sociaux. Les individus ont de plus en plus recours aux médias sociaux pour s’informer au jour le jour, mais ils utilisent encore ces plateformes dans un esprit récréatif, ce qui diminue leur esprit critique et les rend plus vulnérables aux contenus qui suscitent une réaction émotionnelle, qui ont une composante visuelle puissante ou une narration forte, ou qui sont diffusés de manière répétée.

Au niveau mondial, les données de 2022 montrent que plus de 70 % des individus dans certains pays en développement utilisent les médias sociaux comme source d’information. Ce chiffre était supérieur à 60 % dans certains pays européens, tels que la Grèce, la Bulgarie et la Hongrie. Aux États-Unis, 50 % des adultes s’informent par le biais des médias sociaux. Dans 19 pays développés, 84 % des personnes interrogées par Pew Research pensent que l’accès à l’internet et aux médias sociaux a facilité la manipulation des gens avec de fausses informations et des rumeurs. En outre, 70 % des personnes interrogées considèrent la diffusion de fausses informations en ligne comme une menace majeure, juste après le changement climatique.

Le rôle de la technologie

L’un des principaux mécanismes à l’origine du phénomène des médias sociaux est la curation algorithmique du contenu. Les plateformes de médias sociaux utilisent des algorithmes sophistiqués, conçus pour maintenir l’intérêt des utilisateurs en leur montrant le contenu le plus susceptible de capter leur attention et de susciter une interaction. Par conséquent, les messages qui suscitent des réactions émotionnelles fortes, telles que la colère, la peur ou l’indignation, ont tendance à être privilégiés. La désinformation, souvent sensationnelle et incendiaire, s’inscrit parfaitement dans ce modèle, d’où sa diffusion à grande échelle.

Cet effet d’amplification est accentué par le phénomène des « chambres d’écho » et des « bulles de filtre ». Les algorithmes des médias sociaux ont tendance à renforcer les croyances existantes des utilisateurs en leur montrant des contenus qui correspondent à leurs points de vue tout en filtrant les perspectives opposées. Cela crée un environnement dans lequel les utilisateurs sont principalement exposés à des informations qui confirment leurs préjugés, ce qui les rend plus sensibles à la désinformation qui soutient leurs opinions préexistantes. Dans ces chambres d’écho, les faux récits peuvent rapidement gagner du terrain, car ils sont continuellement renforcés par des individus et des groupes partageant les mêmes idées.

La nature virale des médias sociaux ne fait qu’exacerber le problème. La désinformation peut se propager rapidement sur les réseaux et atteindre un large public. Cette rapidité de diffusion fait qu’il est difficile pour les vérificateurs de faits et autres contre-mesures de suivre, ce qui permet aux fausses informations de s’implanter avant d’être démenties. En outre, une fois que la désinformation a été largement diffusée, il peut être difficile de rectifier le tir, car les rétractations ou les corrections ne reçoivent souvent pas le même niveau d’attention que les faussetés initiales.

Parallèlement, des recherches supplémentaires sont nécessaires pour comprendre la propagation de la désinformation et la manière dont les algorithmes des médias sociaux interagissent avec la recherche active de contenu par les individus, en particulier dans les pays non occidentaux et non anglophones. Dans ce contexte, une politique et une réglementation demandant aux entreprises de partager les données et les informations sur les algorithmes avec les chercheurs et d’autres acteurs approuvés pourraient constituer une étape importante vers une compréhension plus approfondie du désordre de l’information.

L’émergence de la désinformation générée par l’intelligence artificielle introduit une complexité supplémentaire. Les défis concernent non seulement la désinformation alimentée par des erreurs factuelles ou des informations fabriquées fournies par l’IA (souvent appelées « hallucinations » de l’IA), mais aussi la désinformation délibérée générée par des acteurs malveillants avec l’aide de l’IA. La possibilité d’utiliser des modèles génératifs d’IA pour produire des « hypertrucages » – des médias audiovisuels synthétiques de visages, de corps ou de voix humains – améliore la qualité et le pouvoir de persuasion de la désinformation, menaçant ainsi les fonctions essentielles de la démocratie. Dans des pays aussi divers que le Burkina Faso, l’Inde, la Slovaquie, la Turquie et le Venezuela, les « hypertrucages » ont été utilisés pour influencer les électeurs et façonner l’opinion publique. En fin de compte, ils risquent d’ébranler la confiance dans les élections et les institutions démocratiques.

Diplo INSTA_GIZ Decoding Disinformation

Réponses politiques et réglementaires à la désinformation

Un nombre considérable de cadres juridiques nationaux et régionaux, ainsi que des initiatives privées, ont été mis en place pour lutter contre la désinformation. D’une part, ils cherchent à donner aux individus les moyens de participer à la lutte contre la diffusion de la désinformation par l’éducation aux médias. Par ailleurs, certaines initiatives mettent en place une réglementation des contenus visant à s’attaquer à l’écosystème de l’information, en réduisant l’exposition sociale à la désinformation afin de protéger la société, en mettant particulièrement l’accent sur les groupes vulnérables.

Dans les deux cas, les politiques et les cadres de lutte contre la désinformation doivent viser à défendre les droits de l’Homme, tels que le droit à la liberté d’expression et le droit de recevoir et de transmettre des informations. Le Conseil des droits de l’Homme a affirmé que les réponses à la diffusion de fausses informations et de désinformation doivent être conformes au droit international en matière de droits de l’Homme, notamment aux principes de légalité, de légitimité, de nécessité et de proportionnalité. Toute limitation imposée à la liberté d’expression doit être exceptionnelle et interprétée de manière restrictive. Les lois sur la désinformation qui sont vagues ou qui confèrent au gouvernement un pouvoir discrétionnaire excessif pour lutter contre la désinformation sont préoccupantes, car elles peuvent conduire à la censure.

Parallèlement, il convient de faire davantage pour réduire les incitations économiques à la désinformation. Les entreprises sont censées procéder à des évaluations des risques en matière de droits de l’Homme et faire preuve de diligence raisonnable, en veillant à ce que leurs modèles commerciaux et leurs opérations n’aient pas d’incidence négative sur les droits de l’Homme. Cela inclut le partage de données et d’informations sur les algorithmes, ce qui pourrait permettre d’évaluer la corrélation entre la propagation de la désinformation et les modèles d’affaires « ad tech ».

Trouver le bon équilibre entre protection et participation à la lutte contre la désinformation signifie recourir judicieusement à la fois à la réglementation et à l’engagement. Ce dernier doit être conçu en termes larges, englobant non seulement la participation active des individus, mais aussi celle d’autres segments tels que les éducateurs, les entreprises et les acteurs techniques. Cette approche globale permet de lutter contre la désinformation tout en respectant les droits de l’Homme.

Le rapport « Décoder la désinformation : Lessons from Case Studies », publié par Diplo, propose une analyse approfondie de la désinformation et de son interaction avec la politique numérique et les droits de l’Homme. La recherche a été soutenue par le projet « Info Trust Alliance », financé par le ministère fédéral allemand des affaires étrangères et mis en œuvre par la GIZ Moldavie.



Digital Watch newsletter – Issue 93 – October 2024


Snapshot: The developments that made waves

AI governance

The ‘Pact for the Future,’ adopted at the Summit of the Future on 22 September 2024, sets out an ambitious agenda to address climate change, digital transformation, and peace while fostering agile global governance. 

On Day 3 of the UN General Assembly, discussions surrounded the challenges of rapid technological advancements and their sociocultural implications. A significant focus was placed on governing AI, misinformation, and disinformation, with several countries addressing their detrimental impact on democratic stability. 

The UN advisory body has released its final report, Governing AI for Humanity, proposing seven strategic recommendations for global AI governance.

Israel is proactively shaping its AI landscape by establishing a national expert forum on AI policy and regulation. Led by the Ministry of Innovation, Science, and Technology, this initiative demonstrates the government’s commitment to responsibly harnessing AI and unites experts to address its challenges and opportunities.

Technologies

AI models, including ChatGPT and Cohere, once depended on low-cost workers for basic fact-checking. Today, these models require human trainers with specialised knowledge in medicine, finance, and quantum physics. 

The US House has recently passed a bill aimed at streamlining federal permitting for semiconductor manufacturing projects, a move anticipated to benefit companies like Intel and TSMC. The legislation seeks to address concerns that lengthy environmental reviews could hinder the construction of domestic chip plants, especially as chipmakers have pledged significant investments following the 2022 Chips and Science Act.

While South Korean memory giants Samsung Electronics and SK hynix experienced a significant sales increase in China during the first half of this year, the report by the Korea Eximbank Overseas Economic Research Institute indicates that South Korea’s reliance on China for critical semiconductor raw materials is also growing.

Infrastructure

The FCC has made a pivotal move to enhance broadband services across the USA by allocating additional spectrum in the 17.3-17.7 GHz band to non-geostationary satellite operators (NGSO), including notable providers like Starlink.

China and Africa cooperate to enhance digital infrastructure, a key aspect of their economic partnership. Chinese investments have built essential frameworks, including fibre optic cables and 5G networks, transforming local economies and expanding e-commerce.

Cybersecurity

US officials warn of foreign AI influence as the presidential election draws near, with Russia leading the charge. Moscow’s efforts have focused on supporting Donald Trump and undermining Kamala Harris.

China’s national security ministry has recently alleged that a Taiwan-backed hacking group, Anonymous 64, has been attacking targets in China, even releasing photos of individuals it claims are part of the group.

The US Federal Bureau of Investigation has disrupted another major Chinese hacking group, dubbed ‘Flax Typhoon,’ which had compromised thousands of devices globally.

After months of defiance, Elon Musk’s social media platform, X, told Brazil’s Supreme Court that it had complied with orders to curb the spread of misinformation and extremist content.

Digital rights

Russia is ramping up its efforts to control the internet by allocating nearly RUB 60 billion ($660 million) over the next five years to upgrade its web censorship system, known as TSPU.

Australia is preparing to introduce age limits for social media use to protect children’s mental and physical health.

Legal

Australia has introduced the Privacy and Other Legislation Amendment Bill 2024, marking a pivotal advancement in addressing privacy concerns within the digital landscape. 

Meta, Facebook’s owner, has been fined €91 million ($101.5 million) by the EU’s privacy regulator for mishandling user passwords. Ireland’s Data Protection Commission (DPC), which oversees GDPR compliance for many US tech firms operating in the EU, launched an investigation after Meta reported the incident.

A political consultant has been fined $7.7 million by the Federal Communications Commission (FCC) for using AI to generate robocalls mimicking President Biden’s voice. The calls, aimed at New Hampshire voters, urged them not to vote in the Democratic primary, sparking significant controversy.

California Governor Gavin Newsom has signed two new bills into law aimed at protecting actors and performers from unauthorised use of their digital likenesses through AI. These measures were introduced in response to the increasing use of AI in the entertainment industry, which has raised concerns about the unauthorised replication of artists’ voices and images.

Internet economy

Gold has soared to a record high of $2,629 per ounce following the US Federal Reserve’s recent interest rate cut.

OpenAI’s board is considering compensating CEO Sam Altman with equity, though no decision has been made, according to board chair Bret Taylor.

Development

The G20 Task Force 05 on Digital Transformation has unveiled a policy brief titled ‘Advocating an International Decade for Data under G20 Sponsorship’, highlighting the fundamental role of accessible and responsibly re-used data in driving social and economic development, particularly in the context of emerging technologies like AI.

The Eastern Africa Regional Digital Integration Project (EARDIP) is poised to transform the digital landscape across Eastern Africa by enhancing connectivity and accessibility.

Sociocultural

Telegram founder Pavel Durov has announced that the messaging platform will tighten its content moderation policies following criticism over its use for illegal activities. The decision comes after Durov was placed under formal investigation in France for crimes linked to fraud, money laundering, and sharing abusive content.

Meta’s Oversight Board has advised the Facebook parent company not to automatically remove the phrase ‘From the river to the sea’, which is interpreted by some as a show of solidarity with the Palestinians and by others as antisemitic.

Elon Musk’s social media platform, X, has moved to address legal requirements in Brazil by appointing a new legal representative, Rachel de Oliveira Conceicao.


UNGA79 and the ‘Pact for the Future’

The ‘Pact for the Future’, adopted at the Summit of the Future on 22 September 2024, emerges as a declaration of intent to leap from the past into an uncertain, but ambitious, tomorrow. The Pact, presented before an audience of world leaders and civil society representatives, encapsulates a roadmap and a lighthouse – navigating the challenges of climate, digital transformation, and peace while aiming to build structures agile enough for the unpredictable rhythms of modernity. It is a global handshake between generations: a promise that the wisdom of the past will not stagnate progress but rather infuse it with urgency. The UN Secretary-General’s words, ‘We cannot create a future fit for our grandchildren with a system built by our grandparents’, capture a sentiment that underpins the thematic core of the Pact.

The ink on the Pact for the Future was barely dry when the first repercussions could be felt, especially within the UN General Assembly’s 79th session chambers. With climate change blazing on one side and the promise of digital revolution flickering on the other, world leaders convened during the high-level week to reassert their commitment to the Sustainable Development Goals (SDGs). What unfolded was a kaleidoscope of voices, discussions, and pledges that sought to breathe life into what had often been seen as lofty, distant goals. The pace was fast, yet the ambition seemed to echo slower truths – the earth’s fevered rise in temperature, persistent inequality, and the widening gaps in access to digital infrastructure.


While the Summit of the Future carved out new space for discussions on the use and governance of AI and digital inclusion, the UNGA79 focused on ensuring these discussions weren’t mere fleeting abstractions. Anchored in the Pact, the Global Digital Compact took centre stage, drawing sharp lines around data governance issues, internet access, and AI oversight. These initiatives were a nod to the ever-growing digital divide, where the future of democracy and human rights may just be shaped by the bits and bytes of cyberspace as much as by the ballots cast at polls. Global leaders, it seemed, were not just pledging to keep everyone connected – they were promising to keep everyone protected in an increasingly tricky online world. A bold promise indeed, in a time when the pace of technological change far outstrips the speed at which governance frameworks are formed.


Then came the delicate dance of peace and security, where old enemies and new technologies collided on the agenda. Discussions surrounding the reform of the UN Security Council – arguably one of the most progressive since the Mid-20th century – were matched with fresh commitments to nuclear disarmament and the governance of outer space. No longer the stuff of science fiction, space and AI were recognised as the new frontiers of conflict and cooperation. Yet Africa’s under-representation on the global stage may prove to be the most seismic of shifts. If the Pact’s promise to redress this historical imbalance holds, it could alter the very architecture of global governance in ways not seen since the decolonisation waves of the mid-1900s.

Through it all, the resonance of future generations loomed large. For the first time, a formal Declaration on Future Generations was signed, reminding current leaders that their decisions – or indecisions – would shape the lives of the not yet born. A future envoy, empowered youth, and re-energised civil society seem to echo a deeper undercurrent: that this Pact, this Summit, and the UNGA79 may not be remembered for their words alone, but for the actions that will (or won’t) follow in their wake.


Digital Public Infrastructure: An innovative outcome of India’s G20 leadership

From latent concept to global consensus
Only a couple of years ago, the now-ubiquitous acronym DPI (Digital Public Infrastructure) was merely a latent term. Today, however, it has become ‘internationally agreed vocabulary’ with wide-ranging global recognition. This is not to imply that efforts in this direction had not been made earlier, yet a tangible global consensus on the formal adoption of the term proved unattainable.

The complex dynamics of such a long-standing impasse or ambiguity over a potential consensus-based acknowledgement of DPI are prominently highlighted in the recently published report of ‘India’s G20 Task Force on Digital Public Infrastructure’. The report clearly underlines that,

While DPI was being designed and built independently by selected institutions around the world for over a decade, there was an absence of a global movement that identified the common design approach that drove success, as well as low political awareness at the highest levels of the impacts of DPI on accelerating development. 


It was only under India’s G20 Presidency in September 2023 that the first-ever multilateral consensus was reached to recognise DPI as a ‘safe, secure, trusted, accountable, and inclusive’ driver of socioeconomic development across the globe. Notably, the ‘New Delhi Declaration’ has cultivated a DPI approach, intending to enhance a robust, resilient, innovative, and interoperable digital ecosystem steered by a crucial interplay of technology, business, governance, and community.

The DPI approach persuasively offers a middle way between a purely public and a purely private strand, with an emphasis on addressing ‘diversity and choice’, encouraging ‘innovation and competition’,  and ensuring ‘openness and sovereignty’. 


Ontologically, this marks a perceptible shift from the exclusive idea of technocratic-functionalism to embracing the concepts of multistakeholderism and pluralistic universalism. These conceptualisations hold substance in the realm of India’s greater quest to democratise and diversify the power of innovation, based on delicate tradeoffs and cross-sectional intersubjective understanding. Nevertheless, it should also be understood that the all-pervasive digital transition increasingly entrenched in the burgeoning international DPI approach has been drawn largely from India’s own successful experience with its domestic DPI framework, namely India Stack.

India Stack is primarily an agglomeration of open Application Programming Interfaces (APIs) and digital public goods, aiming to enhance a broadly vibrant social, financial, and technological ecosystem. It offers multiple benefits and ingenious services, like faster digital payments through UPI, Aadhaar Enabled Payments System (AEPS), direct benefit transfers, digital lending, digital health measures, education and skilling, and secure data sharing. The remarkable journey of India’s digital progress and coherently successful implementation of DPI over the last decade indisputably came into focus during the G20 deliberations.
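
As a purely hypothetical illustration of the ‘open API’ idea behind such a stack, the sketch below builds an interoperable payment message that any compliant provider could parse; the field names and flow are invented for this example and do not reflect the real UPI or India Stack specifications.

# Hypothetical sketch of an open, interoperable payment message; the fields
# are invented for illustration and are not the real UPI specification.
import json
from dataclasses import dataclass, asdict
from uuid import uuid4

@dataclass
class PaymentRequest:
    payer_address: str      # e.g. a virtual payment address such as 'alice@bank-a'
    payee_address: str
    amount_inr: float
    note: str = ""

    def to_message(self) -> str:
        # Serialise to a canonical JSON message any compliant app or bank could parse.
        body = asdict(self)
        body["transaction_id"] = str(uuid4())
        body["currency"] = "INR"
        return json.dumps(body, sort_keys=True)

print(PaymentRequest("alice@bank-a", "bob@bank-b", 250.0, "shared taxi").to_message())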

India’s role in advancing DPI through G20 engagement and strategic initiatives
Particularly notable is the procedural dynamism with which the vocabulary and effectiveness of DPI were promoted across the various G20 meetings and conferences held in India. Most importantly, the Digital Economy Working Group (DEWG) meetings and negotiations were organised in collaboration with all G20 members, guest countries, and knowledge partners such as the ITU, OECD, UNDP, UNESCO, and the World Bank. As a result, the Outcome Document of the Digital Economy Ministers’ Meeting was unanimously agreed by all G20 members and set out a comprehensive global digital agenda with appropriate technical nuances and risk-management strategies.

Beyond the DEWG, the DPI agenda also gained prominence in other G20 working groups under India’s presidency. These include the Global Partnership for Financial Inclusion Working Group, the Health Working Group, the Agriculture Working Group, the Trade and Investment Working Group, and the Education Working Group.


Alongside these working group meetings, the Indian leadership also held bilateral negotiations with its top G20 strategic and trading partners, namely the USA, the EU, France, Japan, and Australia. Notably, the official joint statements from all of these bilateral meetings featured the term ‘DPI’. One may debate whether the time was simply ripe or whether India’s well-laid strategy paid off, but it cannot be denied that a well-designed parallel negotiation process played an instrumental role in building leverage for the DPI approach.

Further, following up on the New Delhi Declaration of September 2023, the Prime Minister of India announced the launch of two landmark India-led initiatives at the G20 Virtual Leaders’ Summit in November 2023: the Global Digital Public Infrastructure Repository (GDPIR) and the Social Impact Fund (SIF). Both are geared towards advancing DPI in the Global South, particularly by offering upstream technical and financial assistance and knowledge-based expertise. This forward-looking, holistic approach strengthens the path towards a transformative global digital discourse.

Building on momentum: Brazil’s role in advancing DPI
Since India passed the G20 presidency baton to Brazil, expectations have been high that Brazil will carry forward the momentum and ensure that emerging digital technologies effectively meet the needs of the Global South. Encouragingly, Brazil is stepping up to maintain the drive, placing greater emphasis on deepening the discussion of crucial DPI components such as digital identification, data governance, data sharing infrastructure, and global data safeguards. Although Brazil has an impressive track record of using digital infrastructure to promote poverty alleviation and inclusive growth at home, a key measure of success at the forthcoming G20 summit will be its ability to stimulate political and financial commitments to make such infrastructure more broadly available.

While concerted efforts are being made to boost the interoperability, scalability, and accessibility of DPIs, it is equally imperative to ensure their confidentiality and integrity. This is all the more pressing amid increasing cybersecurity breaches, data privacy intrusions, and the risks attached to emerging technologies like AI. At this critical juncture, more refined, coordinated, and scaled-up global efforts – in short, effective global digital cooperation – are essential.



Disinformation in the digital era

Communication is the cornerstone of societal interaction, holding together the fabric of the social system. By shaping the course of communication, agents may influence the development of society. In our increasingly digital world, the spread of misinformation and disinformation poses a significant threat to social cohesion, democracy, and human rights.  

The deceptive use of information has a long history, with emblematic examples in Ancient Egypt, during the Roman Empire, and after the invention of the printing press. During the Cold War, the United States and the Soviet Union used disinformation campaigns to help advance their respective strategic interests. The complexity and scale of information pollution in the digitally connected world, however, present an unprecedented challenge. In particular, social media has allowed information to be disseminated on a far wider scale. While this new informational landscape has empowered individuals to express their opinions, it has also contributed to the spread of mis- and disinformation.

The speed of propagation is intimately related to the dynamics of social media. Individuals increasingly resort to social media for day-to-day information but still use these platforms with a recreational mindset, which lowers critical thinking and makes them more vulnerable to content that evokes an emotional response, has a powerful visual component or a strong narrative, or is shown repeatedly. 

Globally, data from 2022 shows that over 70% of individuals in some developing countries use social media as a source of news. This figure was above 60% in some European countries, such as Greece, Bulgaria, and Hungary. In the United States, 50% of adults get their news from social media. In 19 developed countries, 84% of Pew Research respondents believe that access to the internet and social media has made it easier to manipulate people with false information and rumours. Moreover, 70% of those surveyed consider the spread of false information online to be a major threat, second only to climate change. 

The role of technology

One of the key mechanisms behind the social media phenomenon is algorithmic content curation. Social media platforms use sophisticated algorithms, designed to keep users engaged by showing them the content most likely to capture their attention and prompt interaction. As a result, posts that evoke strong emotional responses—such as anger, fear, or outrage—tend to be favoured. Disinformation, with its often sensational and inflammatory nature, fits perfectly into this model, leading to its widespread dissemination.
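
As a rough illustration of the mechanism described above, the following toy Python sketch ranks a feed by an engagement score that boosts emotionally charged posts. The scoring formula, the weights, and the emotional_intensity field are invented for illustration only and do not describe any platform’s actual algorithm.

```python
from dataclasses import dataclass

# Toy illustration of engagement-driven ranking. The formula and weights are
# invented for illustration; real platform algorithms are far more complex
# and are not publicly documented.

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    emotional_intensity: float  # hypothetical 0-1 score (anger, fear, outrage, ...)

def engagement_score(post: Post) -> float:
    """Score posts by predicted engagement; emotionally charged content gets
    an extra boost, which is how sensational material can dominate feeds."""
    base = post.likes + 2 * post.comments + 3 * post.shares
    return base * (1 + post.emotional_intensity)

feed = [
    Post("Measured policy analysis", likes=120, shares=10, comments=15, emotional_intensity=0.1),
    Post("Outrageous (false) claim!!!", likes=80, shares=40, comments=60, emotional_intensity=0.9),
]

# The inflammatory post ranks first despite having fewer likes.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 1), post.text)
```

Even with fewer likes, the inflammatory post ends up at the top of the feed, which is the amplification effect described above.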

This amplification effect is compounded by the phenomenon of ‘echo chambers’ and ‘filter bubbles’. Social media algorithms tend to reinforce users’ existing beliefs by showing them content that aligns with their views while filtering out opposing perspectives. This creates an environment where users are primarily exposed to information that confirms their biases, making them more susceptible to disinformation that supports their pre-existing opinions. In these echo chambers, false narratives can quickly gain traction, as they are continually reinforced by like-minded individuals and groups.
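
The filter-bubble effect can be sketched in the same toy style. The single ‘stance’ score per user and post and the fixed alignment threshold below are assumptions made for illustration, not features of any real recommender system.

```python
# Toy illustration of a 'filter bubble': only content roughly aligned with the
# user's existing stance is shown. The single stance score and the 0.3
# threshold are invented for illustration only.

def filtered_feed(user_stance: float, posts: list[tuple[str, float]], threshold: float = 0.3):
    """Keep posts whose stance (on a -1..1 scale) is close to the user's own."""
    return [text for text, stance in posts if abs(stance - user_stance) <= threshold]

posts = [
    ("Claim that confirms the user's view", 0.8),
    ("Neutral fact-check", 0.0),
    ("Opposing perspective", -0.7),
]

print(filtered_feed(user_stance=0.9, posts=posts))  # only the confirming claim survives
```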

The viral nature of social media further exacerbates the problem. Disinformation can spread rapidly across networks, reaching large audiences. This speed of dissemination makes it difficult for fact-checkers and other countermeasures to keep up, allowing false information to gain a foothold before it can be debunked. Moreover, once disinformation has been shared widely, it can be challenging to correct the record, as retractions or corrections often do not receive the same level of attention as the original falsehoods. 

In parallel, more research is needed to understand the spread of disinformation and how social media algorithms interact with individuals’ active search for content, especially in non-Western and non-English-speaking countries. Against this backdrop, policy and regulation requiring companies to share data and information on algorithms with researchers and other vetted actors could be an important step towards a deeper understanding of information disorder.

The emergence of AI-generated mis- and disinformation introduces additional complexity. The challenges relate not only to misinformation fuelled by factual errors or fabricated information produced by AI (often called AI ‘hallucinations’) but also to deliberate disinformation generated by malicious actors with the assistance of AI. The possibility of using generative AI models to produce ‘deepfakes’ – synthetic audio-visual media of human faces, bodies, or voices – enhances the quality and persuasiveness of disinformation, threatening core functions of democracy. Countries as diverse as Burkina Faso, India, Slovakia, Türkiye, and Venezuela have seen deepfakes used to sway voters and shape public opinion. Ultimately, deepfakes may undermine trust in elections and democratic institutions.


Policy and regulatory responses to disinformation 

A considerable number of national and regional legal frameworks, as well as private-led initiatives, have been introduced to combat mis- and disinformation. On the one hand, they seek to empower individuals to participate in fighting the spread of mis- and disinformation through media literacy. On the other hand, some initiatives put in place content regulation targeting the information ecosystem itself, reducing society’s exposure to disinformation, with particular emphasis on protecting vulnerable groups.

In both cases, policies and frameworks to fight disinformation should seek to uphold human rights, such as the right to freedom of expression and the right to receive and impart information. The Human Rights Council has affirmed that responses to the spread of mis- and disinformation must be aligned with international human rights law, including the principles of lawfulness, legitimacy, necessity, and proportionality. Any limitation imposed on freedom of expression must be exceptional and narrowly construed. Disinformation laws that are vague or that confer excessive government discretion to fight disinformation are concerning, since they may lead to censorship.

In parallel, more should be done to curb the economic incentives behind disinformation. Companies are expected to conduct human rights risk assessments and due diligence, ensuring that their business models and operations do not negatively impact human rights. This includes sharing data and information on algorithms, which would allow the correlation between the spread of disinformation and ‘ad tech’ business models to be assessed.

Striking the right balance between protection and participation in combating disinformation means resorting wisely to both regulation and engagement. The latter should be conceived in broad terms, encompassing not only the active involvement of individuals, but also the involvement of other segments such as educators, companies, and technical actors. This inclusive approach provides a pathway to curb disinformation while respecting human rights.

The report ‘Decoding Disinformation: Lessons from Case Studies’, published by Diplo, offers an in-depth analysis of disinformation and its interplay with digital policy and human rights. The research was supported by the project ‘Info Trust Alliance’, funded by the German Federal Foreign Office and implemented by GIZ Moldova.


DW Weekly #180 – 4 October 2024


Dear readers, 

The drafting of the EU’s Code of Practice for general-purpose AI (GPAI) signals a crucial moment in European AI regulation and a global benchmark for managing innovation and risk. Leading academics, including AI pioneer Yoshua Bengio, are at the centre of this initiative, tasked with weaving a framework that balances transparency, safety, and innovation. The cast of academics, from seasoned professors to PhD candidates, showcases the EU’s desire to root its AI regulation in deep technical and legal expertise. Yet, as polished as this effort appears, questions linger about its timing and inclusivity—critical voices from industry and civil society are already showing signs of divergence.

The EU AI Act, hinging significantly on this Code of Practice, will not see final standards before 2026. Thus, this interim period, overseen by academic chairpersons, holds immense weight. While the presence of global AI figures like Bengio underscores the Code’s gravitas, the timing of their appointment, just after Parliament’s intervention, leaves a slightly bitter aftertaste. The process could have benefited from earlier transparency, with the ‘pity’ expressed by digital policy advisors reflecting broader concerns about the bureaucratic backlog. But there is no doubt about the intellectual firepower gathered here: the mix of AI technical savants, legal minds, and governance experts is the EU’s bet on building a human-centered and safe AI future.

Yet, the road ahead is bumpy. The first plenary, attended by nearly 1,000 stakeholders, unveiled the deep fault lines between general-purpose AI providers—like ChatGPT’s creators—and other participants. The latter, which includes civil society and academia, overwhelmingly pushed for stringent transparency on training datasets, supporting the inclusion of licensed content, open data, and even web-scraped material. GPAI providers, however, were notably less enthusiastic, baulking at demands for greater data disclosure, mainly when it came to open datasets. Their preference for self-policed data transparency, rather than third-party audits, exposes a friction between innovation-driven autonomy and regulation-enforced accountability.


While academia and civil society rally behind risk assessment and strict audit trails, providers shy away from measures they deem overly stringent. Perhaps this is the core tension of the GPAI Code: can a framework fuel cutting-edge AI development and satisfy the public’s call for ethical safeguards? The European Commission’s ongoing consultation shows the battle for compromise is still in its early stages. With over 430 responses already collected, there is a palpable risk that the sheer diversity of opinions could derail progress, a possibility echoed by those close to the drafting process.

Creating this Code of Practice feels like a high-stakes balancing act. On the one hand, there is pressure to protect against AI’s ‘black box’ nature, ensuring transparency and responsibility. On the other hand, the EU must remain competitive in AI and avoid shackling its innovators with undue restrictions. The stakes could not be higher. As Bengio puts it, this Code will have to stand the test of time, and it will be watched closely far beyond Europe.

In other news, the Department of Government Efficiency (DOGE) token witnessed a staggering rise of over 33,000% in September before stabilising at approximately USD 0.02309. The surge was triggered by a playful comment from Elon Musk after a discussion with Donald Trump, who floated the idea of establishing a new government efficiency department, with Musk potentially at its helm if Trump wins the upcoming election. Amidst a closely contested race between Trump and Kamala Harris, meme coins, including politically themed tokens like DOGE, are seeing a resurgence, with trading volumes surging to over USD 10 million in 24 hours. 

For more insights on the intersection of digital economy, cybersecurity, and policy governance, visit dig.watch and stay updated on the latest developments through our daily updates on the specific topic of your interest.

Marko and the Digital Watch team


Highlights from the week of 27 September – 4 October 2024


The first draft of the EU AI Code is expected by November, with finalisation planned for 2025.


At the 79th UN General Assembly, 18 nations endorsed a joint statement emphasising the critical importance of securing undersea cable infrastructure, highlighting the need for policies that ensure its resilience,…


The EU seeks to understand how these platforms’ algorithms could influence civic discourse, mental health, and child protection.


Analysts predict a new cryptocurrency supercycle, driven by the resurgence of meme coins and politically themed tokens like MAGA and ConstitutionDAO.


Russia’s digital ministry confirms Google’s account creation restrictions and warns users to back up data and consider alternative two-factor authentication methods.


Concurrently, the US is enhancing financial and technological support for allies like Israel, which raises ethical concerns amid ongoing regional conflicts.


Companies offer advanced AI training, including quantum physics.

The NTIA initiative aims to improve participation in the digital economy, telehealth, and distance learning, with grant applications open until 7 February 2025.


X is likely to pay the fines but may challenge an additional USD 1.8 million penalty imposed by Brazil’s Supreme Court after a brief platform reappearance.


The project is expected to benefit local fishermen, tourism, shipping, and marine research, ultimately unlocking new economic opportunities for local communities.



Reading corner

FUTURE AI
dig.watch

Balancing innovation and ethics in AI.

Blog post by Jovan Kurbalija, 30 September 2024
www.diplomacy.edu

The conceptual and terminological confusion surrounding the use of “digital,” “cyber,” and “tech” diplomacy has practical consequences, as highlighted by a recent US Government Accountability Office report, which identifies this ambiguity as a major barrier to effective cyber and digital diplomacy. The key takeaway is that terminological clarity is crucial, not only for clear communication but also for effective diplomatic action, underscoring the importance of understanding the context in which these terms are used.