
Digital Watch newsletter – Issue 91 – July 2024


Snapshot: The developments that made waves

AI governance

The UN General Assembly has adopted a non-binding resolution on AI capacity building, led by China, to enhance developing countries’ AI capabilities through international cooperation. It also calls for support from international organisations and financial institutions. African ICT and communications ministers have endorsed the Continental AI Strategy and the African Digital Compact to boost the continent’s digital transformation. The G7 Leaders’ Communiqué emphasised a coordinated strategy for handling AI’s opportunities and challenges, introducing an action plan for workplace AI adoption and underlining initiatives such as advancing the Hiroshima Process International Code of Conduct, supporting SMEs, and promoting digital inclusion and lifelong learning.

The International Monetary Fund has recommended fiscal policies for governments grappling with the economic impacts of AI, including taxes on excess profits and a carbon levy.

China leads the world in generative AI patent requests, significantly outpacing the USA. At the same time, US tech companies dominate in producing cutting-edge AI systems, according to the World Intellectual Property Organization (WIPO). A European Commission report shows the EU lags behind its 2030 AI targets, with only 11% of enterprises using designated AI technologies, far short of the 75% target. The Japanese Defence Ministry has introduced its first AI policy to enhance defence operations. Brazil is partnering with OpenAI to modernise legal processes, reduce court costs, and improve efficiency in the solicitor general’s office.

Technologies

The USA has introduced draft rules to regulate investments in China, focusing on AI and advanced technology sectors that may pose national security threats. The USA plans to expand sanctions on semiconductor chips and other goods sold to Russia, targeting Chinese third-party sellers. Discussions are ongoing with the Netherlands and Japan to restrict 11 Chinese chipmaking factories and extend equipment export controls. The USA faces a projected shortage of 90,000 semiconductor technicians by 2030, prompting the Biden administration to launch a workforce development program.

The European Commission is seeking industry views on China’s increased production of older-generation computer chips.

China will develop standards for brain-computer interfaces (BCI) through a new technical committee, focusing on data encoding, communication, visualisation, electroencephalogram data collection, and applications in various fields.

Infrastructure

Telecommunications companies from Kazakhstan and Azerbaijan will invest over USD 50 million in laying 370 kilometres of fibre optic cables under the Caspian Sea. Meanwhile, Senegal’s new digital chief announced plans to enhance digital infrastructure, coordinate government programs, foster collaborations, and build on previous achievements to increase the digital economy’s GDP contribution.

Cybersecurity

The UN Security Council held an open debate on cybersecurity, focusing on evolving cyber threats and the need for positive digital advancements.

A recent cyberattack on the cloud storage company Snowflake is shaping up to be one of the largest data breaches ever, impacting hundreds of Snowflake business customers and millions of individual users. Indonesia’s national data centre was hit by a variant of LockBit 3.0 ransomware, disrupting immigration checks and public services. The hackers have since apologised and offered to release the keys to the stolen data. The University Hospital Centre in Zagreb, Croatia, also suffered a cyberattack by LockBit. Despite rising ransomware attacks, a Howden report indicates that global cyber insurance premiums are decreasing as businesses improve their loss mitigation capabilities. Additionally, nearly ten billion unique passwords were leaked in a collection named RockYou2024, heightening risks for users who reuse passwords.

Australia has mandated that internet companies create enforceable codes within six months to prevent children from accessing inappropriate content. New Zealand transitioned the Christchurch Call to Action against online terrorist content into an NGO, now funded by tech companies like Meta and Microsoft.

Digital rights

The EU’s proposed law mandating AI scans of messaging app content to detect child sexual abuse material (CSAM) faces criticism over privacy threats and potential false positives. EU regulators charged Meta with breaching tech rules via a ‘pay or consent’ ad model on Facebook and Instagram, alleging it forced users to consent to data tracking. The US Department of Justice (DOJ) plans a lawsuit against TikTok for alleged children’s privacy violations. Google is accused by European data protection advocacy group NOYB (none of your business) of tracking users without their informed consent through its Privacy Sandbox feature.

Legal

The International Criminal Court is investigating alleged Russian cyberattacks on Ukrainian infrastructure as potential war crimes. In Australia, legal action has been initiated against Medibank for a data breach affecting 9.7 million individuals. ByteDance and TikTok are challenging a US law aiming to ban the app, citing concerns about free speech. Global streaming companies are contesting new Canadian regulations mandating 5% of revenues be used for local news, questioning the legality of the government’s actions.

Internet economy

China’s Ministry of Commerce has introduced draft rules to bolster cross-border e-commerce by promoting the establishment of overseas warehouses and improving data management and export supervision. Nvidia is facing potential charges in France over allegations of anti-competitive behaviour. The first half of 2024 saw a significant surge in cryptocurrency theft, with over USD 1.38 billion stolen by 24 June.

Development

The first part of the Broadband Commission’s annual State of Broadband report ‘Leveraging AI for Universal Connectivity’ explores AI’s impact on e-government, education, healthcare, finance, and environmental management, and its potential to bridge or widen the digital divide. The second part will provide further insights into AI’s development and propose strategies for equitable digital advancement. India will require USB-C as the standard charging port for smartphones and tablets starting in June 2025, aligning with the EU’s efforts to enhance user convenience and reduce electronic waste.

Sociocultural

New York state lawmakers passed a law restricting social media platforms from displaying addictive algorithmic content to users under 18 without parental consent. The European Commission has asked Amazon for details on how it complies with Digital Services Act rules, focusing on transparency in its recommender systems. Google Translate is significantly expanding, adding 110 languages, driven by AI advancements.

THE TALK OF THE TOWN – GENEVA

From 4 to 14 June, the Council of the International Telecommunication Union (ITU) made key decisions on space development, green digital action, and global digital cooperation. The council reviewed the ITU Secretary-General’s report on the implementation of the Space 2030 Agenda, focusing on leveraging space technology for sustainability. Resolutions were drafted to highlight ITU’s role in using digital technologies for sustainability, with a report on current green digital initiatives. ITU will continue engaging with the Global Digital Compact (GDC) to enhance global digital cooperation.

On 14 June, the first UN Virtual Worlds Day showcased technologies like virtual and augmented reality, the metaverse, and spatial computing to advance SDGs. The event included a high-level segment, real-world applications, discussions on policy, and the launch of the Global Initiative on Virtual Worlds – Discovering the CitiVerse, a platform to develop frameworks, raise awareness, share best practices, and test metaverse solutions in cities.


AI@UN: Navigating the tightrope between innovation and impartiality

The UN is not short on risks, but AI adds novel ones for the organisation. Off-the-shelf proprietary AI systems carry the biases of the data and algorithms on which they are developed and come with limitations and challenges for transparency. Reliance on proprietary AI will therefore inevitably raise questions about the impartiality of such systems.

Why is impartiality important for the UN? The principle of impartiality is the linchpin of the UN’s credibility, ensuring that policy advice remains objective, grounded in evidence, and sensitive to diverse perspectives. This impartiality will be tested as the UN reacts to the inevitable need to automate reporting, drafting, and other core activities central to its operation. 

Ensuring impartiality would require transparency and explainability of the full AI cycle, from the data on which foundational models are based to assigning weights to different segments of AI systems.

An inclusive approach to AI development is key to upholding the principle of impartiality. ‘We, the peoples’, the first three words of the UN Charter, should guide the development of AI at the UN. Contributions from countries, companies, and communities worldwide to AI@UN could bolster the high potential of AI to support the UN’s missions of upholding global peace, advancing development, and protecting human rights.

AI@UN has two main goals:

  • support policy discussions on the sustainable AI transformation of the UN ecosystem
  • inspire the contributions of AI models and agents by member states and other actors
Emblem of the UN in white on a blue disk superimposed on an AI circuit with connectors radiating outwards from the centred disc.

As a starting point, the following guiding principles are proposed for the development and deployment of AI models, modules, and agents at the UN: 

1. Open source: Abiding by the open-source community’s principles, traditions, and practices. Openness and transparency should apply to all phases and aspects of the AI life-cycle, including curating data and knowledge for AI systems, selecting parameters and assigning weights to develop foundational models, vector databases, knowledge graphs, and other segments of AI systems. 

2. Modularity: Developing self-contained modules according to shared standards and parameters. AI@UN should start with AI agents and modules for core UN activities and operations.

3. Public good: Walking the talk of public good by using AI to codify UN knowledge as a public good to be used by countries, communities, and citizens worldwide. By doing so, the UN would inspire the AI-enabled codification of various knowledge sources, including ancient texts and oral culture, as the common heritage of humankind.   

4. Inclusivity: Enabling member states, companies, and academia to contribute, according to their capacities and resources, to the technical, knowledge, and usability aspects of AI@UN.

5. Multilingualism: Representing a wide range of linguistic and cultural traditions. A special focus should be on harvesting the knowledge and wisdom available in oral traditions that are not available in the written corpus of books and publications.

6. Diversity: Ensuring inputs from a wide range of professional, generational, cultural, and religious perspectives. While AI@UN should aim to identify convergences between different views and approaches, diversity should not be suppressed by a lowest-common-denominator approach in AI inference. Diversity should be built in through the transparent traceability of sources behind AI-generated outputs.

7. Accessibility: Adhering to the highest standards for accessibility, in particular for people with disabilities. AI@UN must increase the participation of people with disabilities in UN activities, from meetings to practical projects. Simple solutions and low-bandwidth demand should make the system affordable for all. 

8. Interoperability: Addressing the problem of organisational silos in managing knowledge and data within the UN system. Interoperability should be facilitated by knowledge ontologies and taxonomies, data curation, and shared technical standards.

9. Professionalism: Following the highest industry and ethical standards of planning, coding, and deploying software applications. This will be achieved by testing, evaluating, and submitting AI solutions to a peer-review process. The main focus will be maximising the reliable development of AI solutions to directly impact human lives and well-being. 

10. Explainability: Tracing every AI-generated artefact, such as a report or analysis, to the sources used in AI inference, including texts, images, and sound recordings. Explainability and traceability would ensure the transparency and impartiality of AI@UN systems.

11. Protection of data and knowledge: Achieving the highest level of protection for data, knowledge, and other inputs into AI systems.

12. Security: Guaranteeing the highest possible level of security and reliability for AI@UN. Open-source development, red-teaming, and other approaches will ensure that the systems are protected by having as many critical eyes as possible test and evaluate AI code and algorithms. AI communities will be encouraged to contribute to red-teaming and other tests of the AI@UN system.

13. Sustainability: Realising the SDGs and Agenda 2030 through three main approaches: first, ensuring that the SDGs receive higher weights in the development of AI models and tools; second, making the AI systems themselves sustainable through, for example, shared code, built-up resources, and proper documentation and development trails; third, developing and deploying AI solutions with environmental sustainability in mind.

14. Capacity: By developing an AI system, the UN should develop its own and wider AI capacities. Capacity development should be: (a) holistic, involving the UN Secretariat, representatives of member states, and other communities involved in UN activities; and (b) comprehensive, covering a wide range of AI capacities from a basic understanding of AI to high-end technical skills. 

15. Future-proofing: Planning and deploying systems dealing with future technological trends. Experience and expertise gathered around AI@UN should be used to deal with other emerging technologies, such as augmented/virtual reality and quantum computing. 

Opportunities in crises. AI transformation will inevitably trigger tensions due to its impact on deeper layers of how the UN functions. Likely opposition based on human fear and attachments to the status quo should be openly addressed and reframed around opportunities that AI transformation will open on individual and institutional levels. 

For instance, AI can help small and developing countries participate in more informed and impactful ways in the work of the UN. AI can help compensate for the smaller size of their diplomatic missions and services, which must follow the same diplomatic dynamics as larger systems. An emphasis on AI capacity could thus reduce current AI asymmetries.

AI can also help the UN Secretariat to refocus time and resources and spend less time on traditional paperwork, like preparing reports, to allow more work on the ground in member states where their help is critical.

Next steps. Embarking on this journey towards integrating AI into the UN’s operations is not merely a step but a leap into the future – one that demands boldness, a cooperative spirit, and an unwavering dedication to the ideals that have anchored the UN since its inception. The potential for AI to bolster the UN’s mission to uphold global peace, advance development, and champion human rights is immense. Indeed, the case for adopting an open-source AI framework goes beyond technological innovation: by taking an open approach to AI, the UN can evolve, lead, and remain relevant in a rapidly changing global landscape.

By leveraging the transformative power of AI, the UN can turn a looming challenge into a watershed moment, ensuring the organisation’s relevance and leadership in charting the course of human progress for all.

This text was adapted from AI@UN: Navigating the tightrope between innovation and impartiality, first published on Diplo’s blogroll.


The UN faces the challenge of integrating AI in a way that maintains its impartiality and credibility, advocating for an open-source AI platform contributed to by countries, companies, and citizens to ensure transparency, inclusivity, and adherence to its core principles.


How AI chatbots master language: Insights from Saussure’s linguistics

Linguistics, intertwined with modern technology, prompts questions about how chatbots function and respond articulately to diverse inputs. Chatbots, powered by large language models (LLMs) like ChatGPT, acquire digital cognition and articulate responses using principles rooted in the linguistic theories of Ferdinand de Saussure.

Saussure’s early 20th-century work laid the groundwork for understanding language through syntax and semantics. Syntax refers to the rules governing the arrangement of words to form meaningful sentences. Saussure saw syntax as a system of conventions within a language community, interlinked with other linguistic elements like semantics. Semantics involves the study of meaning in language. Saussure introduced the concept of the sign, consisting of the signifier (sound/image) and the signified (concept), which is crucial for understanding how LLMs process and interpret word meanings.

Two humanoid robots drawn in cubism style talk as though in conversation.

How LLMs process language. LLMs like ChatGPT process and understand language through several core mechanisms:

  1. Training on vast amounts of textual data from the internet to predict the next word in a sequence
  2. Tokenisation to divide the text into smaller units
  3. Learning relationships between words and phrases for semantic understanding
  4. Using vector representations to recognise similarities and generate contextually relevant responses
  5. Leveraging transformer architecture to efficiently process long contexts and complex linguistic structures
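In miniature, the first three steps can be sketched with a toy corpus and a simple bigram model. Everything below (the corpus, the whitespace tokeniser, and the frequency-based predictor) is a deliberately simplified illustration, not how production LLMs are built:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for web-scale training data (illustrative assumption).
corpus = "the sign links the signifier to the signified . the signifier is a sound image ."

# Step 2: tokenisation. Here, naive whitespace splitting; real LLMs use subword tokenisers.
tokens = corpus.split()

# Step 1: 'training' reduced to counting bigrams, so we can predict the next token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` seen in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # 'signifier' is the most common word after 'the' here
```

Real models replace the bigram counts with a neural network holding billions of parameters, but the learning signal is the same: predict the next token from what came before.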

LLMs transform text into tokenised units (signifiers) and map these to embeddings that capture their meanings (signified). The model learns these embeddings by processing vast amounts of text, identifying patterns and relationships analogous to Saussure’s linguistic structures.
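The signifier-to-signified mapping can be made concrete with embeddings. The three-dimensional vectors below are invented purely for illustration; real models learn embeddings with hundreds or thousands of dimensions from data:

```python
import math

# Hand-crafted toy embeddings (an assumption for illustration; real models learn these).
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: close to 1 for words used in similar contexts."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# Related meanings sit close together in embedding space; unrelated ones do not.
print(round(cosine(embeddings["king"], embeddings["queen"]), 2))  # high similarity
print(round(cosine(embeddings["king"], embeddings["apple"]), 2))  # much lower
```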

Semantics and syntax in LLMs. Understanding and generating text in LLMs involves both semantic and syntactic processing. 

LLMs process semantics through (a) contextual word embeddings that capture word meanings in different contexts based on usage, (b) an attention mechanism to prioritise important words, and (c) layered contextual understanding that handles words that have multiple related meanings (polysemy) and different words with the same meaning (synonymy). The model is pre-trained on general language patterns and fine-tuned on specific datasets for enhanced semantic comprehension. 
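The attention mechanism in (b) can be sketched as scaled dot-product attention over toy two-dimensional vectors; all the numbers are invented for illustration:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: compare the query to every key, convert the
    scores to weights, and blend the values according to those weights."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    blended = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]
    return blended, weights

# A query resembling the second key should put most of its weight on the second value.
out, weights = attention([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0]], [[10.0, 0.0], [0.0, 10.0]])
print([round(w, 3) for w in weights])
```

This weighting is how the model 'prioritises important words': tokens whose keys match the current query contribute more to the output representation.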

For syntax, LLMs use (a) positional encoding to understand word order, (b) attention mechanisms to maintain sentence structure, (c) layered processing to build complex sentences, and (d) probabilistic grammar learned from large amounts of text. Tokenisation and sequence modelling help track relationships between words, and the transformer model integrates both sentence structure and meaning at each stage, ensuring responses are both meaningful and grammatically correct. Training on diverse datasets further enhances its ability to generalise across varied uses of language, making the chatbot a powerful natural language processing tool.
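Of the syntactic mechanisms above, positional encoding is the easiest to show concretely. The sketch below follows the sinusoidal scheme used in the original transformer architecture; the small dimension size is an arbitrary choice for illustration:

```python
import math

def positional_encoding(position, d_model=8):
    """Sinusoidal positional encoding: even dimensions use sine, odd use cosine,
    at wavelengths that let a model infer word order and relative distance."""
    pe = []
    for i in range(d_model):
        angle = position / (10000 ** (2 * (i // 2) / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

# Each position gets a distinct vector that is added to the token's embedding,
# so an otherwise order-agnostic attention mechanism can still see word order.
for pos in range(3):
    print([round(x, 3) for x in positional_encoding(pos)])
```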

Integrating Saussure’s linguistic theories with the cognitive mechanisms of large language models illuminates the inner workings of contemporary AI systems and also reinforces the enduring relevance of classical linguistic theories in the age of AI.

This text was adapted from In the beginning was the word, and the word was with the chatbot, and the word was the chatbot, first published on the Digital Watch Observatory.


Given the profound importance of language and its disciplines to technological development, it is worth considering how chatbots function as products of advanced technology. Saussure’s framework, in particular, helps explain how chatbots learn through algorithmic cognition and how they respond effectively and accurately to diverse user queries.



Social media giants win in free speech showdown at US Supreme Court

Social media platforms play an integral role in people’s lives, not only in communication but also in receiving and disseminating information. At the same time, social media content carries risks, such as hate speech, the spread of mis- and disinformation, and harassment. This has raised questions about the liability of social media platforms in regulating such content, as well as the role of governments in taking action. Do social media platforms have free speech rights? Can governments implement policies against social media platforms and their content policies? The US Supreme Court tackled those questions in its decision in Moody v. NetChoice and NetChoice, LLC v. Paxton.

NetChoice and the Computer and Communications Industry Association (CCIA), a coalition of social media companies and internet platforms, challenged the laws of two US states, Florida and Texas. These laws were enacted in 2021 amid growing Republican party criticism of social media companies’ enforcement of their own policies.

NetChoice and CCIA claimed that the Florida and Texas laws violate private companies’ First Amendment rights and that governments should not be allowed to intervene in private companies’ speech policies. A group of political scientists filed an amicus brief arguing that the two laws fail to define what counts as hate speech or as dangerous and violent election-related speech, and could prevent social media platforms from moderating threats against election officials. Officials from Texas and Florida, on the other hand, argued that the laws aim to regulate the liability of social media platforms rather than restrict speech online, stressing that the First Amendment does not apply to private businesses. One US federal appeals court invalidated Florida’s statute, while another upheld the Texas law. Both laws were suspended pending the US Supreme Court’s final decision.

The hand of a black-robed figure holds a gavel striking its wooden base, on a desk, with the scales of justice in the background.

The Supreme Court decided that the lower courts had not adequately analysed the laws’ implications for free speech rights under the First Amendment and sent the cases back for further review. In its decision, the Supreme Court found that social media platforms are protected by the First Amendment when they curate content, ruling that presenting a curated collection of others’ speech counts as expressive activity.

Essentially, this sets a precedent establishing First Amendment free speech rights for social media platforms and private businesses in the USA: US states cannot implement policies restricting platforms’ ability to regulate the content disseminated on their services. This could prevent governments from enacting laws that strip social media platforms of their independence in moderating their content.


Governments steam forward with digital antitrust oversight 

In 1996, John Perry Barlow penned ‘A Declaration of the Independence of Cyberspace’. This landmark document, which reflected the libertarian internet culture of the time, was a push-back against governmental intervention and regulation of the blooming technology sector. Accordingly, governments around the world adopted a hands-off approach, under the assumption that regulation could stifle innovation.

Almost three decades later, this understanding has radically changed. In recent years, reports published by several organisations, such as the World Bank, the Internet Society, and UNCTAD, have shown a growing concentration of wealth and power in the digital economy. Data divides are particularly relevant in this context, as they lead to concentration upstream, in data-intensive technology sectors such as AI. Against this backdrop, investigations into potentially anti-competitive behaviour by tech companies are proliferating.

A human hand holds a magnifying glass over four blocks. The first, third, and fourth blocks have green checkmarks on them. The second has a red triangle with an exclamation point inside of it.

In the EU, recent investigations have led to the first charge brought by the European Commission against a tech company under the Digital Markets Act (DMA), a law designed to curb Big Tech’s dominance and foster fair competition. According to the preliminary findings of an investigation launched in March, Apple is in breach of the DMA: its App Store allegedly squeezes out rival marketplaces by making it harder for users to download apps from alternative stores and by preventing app developers from communicating freely and concluding contracts with their end users. Apple has been given the opportunity to respond to the preliminary findings, and it can still avoid a fine if it presents a satisfactory proposal to address the problem.

Other countries are also hardening their laws on competition. The ‘Brussels effect’ and the influence of the DMA can be seen in the Digital Competition Bill, proposed by the government of India to complement existing antitrust laws. Similarly to the DMA, the law would target large companies and could introduce similarly heavy fines. In particular, tech giants would be prohibited from exploiting non-public user data and from favouring their own products or services on their platforms. They would also be barred from restricting users’ ability to download, install, or use third-party apps.

The bill is raising concern among tech companies. A US lobbying group has opposed the move, fearing its impact on business. Echoing the belief that dominated the tech sector in the 1990s, technology companies claim that India’s bill could stifle innovation. However, the claim seems unlikely to prevail.

Concerns about tech sector competition are rising in the USA, traditionally an advocate for minimal regulation. The USA is tightening AI industry controls, with the DOJ and the Federal Trade Commission (FTC) dividing oversight: the FTC will regulate OpenAI and Microsoft, while the DOJ oversees Nvidia. Although less active than the EU in antitrust regulation, the US closely monitors mergers and acquisitions. This recent agreement between the two governmental bodies paved the way for antitrust investigations to be launched.

Competition is increasingly becoming a playing field with significant governmental activity and oversight. As countries reassert their jurisdiction, claims of cyberspace independence seem a distant echo from the past.