UN Secretary-General issues policy brief for Global Digital Compact

As part of the process towards developing a Global Digital Compact (GDC), the UN Secretary-General has issued a policy brief outlining areas in which ‘the need for multistakeholder digital cooperation is urgent’: closing the digital divide and advancing sustainable development goals (SDGs), making the online space open and safe for everyone, and governing artificial intelligence (AI) for humanity. 

The policy brief also suggests objectives and actions to advance such cooperation and ‘safeguard and advance our digital future’. These are structured around the following topics:

  • Digital connectivity and capacity building. The overarching objectives here are to close the digital divide and empower people to participate fully in the digital economy. Proposed actions range from common targets for universal and meaningful connectivity to putting in place or strengthening public education for digital literacy. 
  • Digital cooperation to accelerate progress on the SDGs. Objectives include making targeted investments in digital public infrastructure and services, making data representative, interoperable, and accessible, and developing globally harmonised digital sustainability standards. Among the proposed actions are developing definitions of safe, inclusive, and sustainable digital public infrastructures; fostering open and accessible data ecosystems; and having the UN develop a common blueprint on digital transformation. 
  • Upholding human rights. Putting human rights at the centre of the digital future, ending the gender digital divide, and protecting workers are the outlined objectives in this area. One key proposed action is the establishment of a digital human rights advisory mechanism, facilitated by the Office of the UN High Commissioner for Human Rights, to provide guidance on human rights and technology issues. 
  • An inclusive, open, secure, and shared internet. There are two objectives: safeguarding the free and shared nature of the internet, and reinforcing accountable multistakeholder governance. Some of the proposed actions include commitments from governments to avoid blanket internet shutdowns and refrain from actions disrupting critical infrastructures.
  • Digital trust and security. Objectives range from strengthening multistakeholder cooperation to elaborate norms, guidelines, and principles on the responsible use of digital technologies, to building capacity and expanding the global cybersecurity workforce. The proposed overarching action is for stakeholders to commit to developing common standards and industry codes of conduct to address harmful content on digital platforms. 
  • Data protection and empowerment. Ensuring that data are governed for the benefit of all, empowering people to control their personal data, and developing interoperable standards for data quality are envisioned as key objectives. Among the proposed actions are an invitation for countries to consider adopting a declaration on data rights and seeking convergence on principles for data governance through a potential Global Data Compact. 
  • Agile governance of AI and other emerging technologies. The proposed objectives relate to ensuring transparency, reliability, safety, and human control in the design and use of AI; putting transparency, fairness, and accountability at the core of AI governance; and combining existing norms, regulations, and standards into a framework for agile governance of AI. Actions envisioned range from establishing a high-level advisory body for AI to building regulatory capacity in the public sector. 
  • Global digital commons. Objectives include ensuring inclusive digital cooperation, enabling regular and sustained exchanges across states, regions, and industry sectors, and developing and governing technologies in ways that enable sustainable development, empower people, and address harms. 

The document further notes that ‘the success of a GDC will rest on its implementation’. This implementation would be done by different stakeholders at the national, regional, and sectoral level, and be supported by spaces such as the Internet Governance Forum and the World Summit on the Information Society Forum. One suggested way to support multistakeholder participation is through a trust fund that could sponsor a Digital Cooperation Fellowship Programme. 

As a mechanism to follow up on the implementation of the GDC, the policy brief suggests that the Secretary-General could be tasked to convene an annual Digital Cooperation Forum (DCF). The mandate of the forum would also include, among other things, facilitating collaboration across digital multistakeholder frameworks and reducing duplication; promoting cross-border learning in digital governance; and identifying and promoting policy solutions to emerging digital challenges and governance gaps.

AI revolutionises journalism and media

According to the Economist’s coverage, AI is revolutionising journalism, including newsgathering, proofreading, and creating data-driven stories. AI is becoming increasingly sophisticated and is being used to help busy newsrooms.

Examples of AI usage include: Reuters using AI to look for patterns in large datasets; AP using AI for ‘event detection’, scanning social media for ripples of news; ChatGPT being used to assess the newsworthiness of research papers; Semafor using AI to proofread stories; Radar AI creating data-driven pieces for local papers; and Schibsted launching an AI tool that turns long articles into short packages for Snapchat, a social network.

https://www.economist.com/business/2023/05/04/artificial-intelligence-is-remixing-journalism-into-a-soup-of-language

G7 digital and tech ministers discuss AI, data flows, digital infrastructure, standards, and more

On 29-30 April 2023, G7 digital and tech ministers met in Takasaki, Japan, to discuss a wide range of digital policy topics, from data governance and artificial intelligence (AI), to digital infrastructure and competition. The outcomes of the meeting – which was also attended by representatives of India, Indonesia, Ukraine, the Economic Research Institute for ASEAN and East Asia, the International Telecommunication Union, the Organisation for Economic Co-operation and Development, UN, and the World Bank Group – include a ministerial declaration and several action plans and commitments to be endorsed at the upcoming G7 Hiroshima Summit.

During the meeting, G7 digital and tech ministers committed to strengthening cooperation on cross-border data flows, and operationalising Data Free Flow with Trust (DFFT) through an Institutional Arrangement for Partnership (IAP). IAP, expected to be launched in the coming months, is dedicated to ‘bringing governments and stakeholders together to operationalise DFFT through principles-based, solutions-oriented, evidence-based, multistakeholder, and cross-sectoral cooperation’. According to the ministers, focus areas for IAP should include data location, regulatory cooperation, trusted government access to data, and data sharing.

The ministers further noted the importance of enhancing the security and resilience of digital infrastructures. In this regard, they committed to strengthening cooperation – within the G7 and with like-minded partners – to support and enhance network resilience through measures such as securing and extending resilient routes of submarine cables. Moreover, the group endorsed the G7 Vision of the future network in the Beyond 5G/6G era, and committed to enhancing cooperation on research, development, and international standards setting towards building digital infrastructure for the 2030s and beyond. These commitments are also reflected in a G7 Action Plan for building a secure and resilient digital infrastructure.

In addition to expressing a commitment to promoting an open, free, global, interoperable, reliable, and secure internet, G7 ministers condemned government-imposed internet shutdowns and network restrictions. When it comes to global digital governance processes, the ministers expressed support for the UN Internet Governance Forum (IGF) as the ‘leading multistakeholder forum for Internet policy discussions’ and proposed that the upcoming Global Digital Compact reinforce, build on, and contribute to the success of the IGF and the World Summit on the Information Society (WSIS) process. Also included in the internet governance section is a commitment to protecting democratic institutions and values from foreign threats, including foreign information manipulation and interference, disinformation, and other forms of foreign malign activity. These issues are further detailed in an accompanying G7 Action Plan for an open, free, global, interoperable, reliable, and secure internet.

On matters related to emerging and disruptive technologies, the ministers acknowledged the need for ‘agile, more distributed, and multistakeholder governance and legal frameworks, designed for operationalising the principles of the rule of law, due process, democracy, and respect for human rights, while harnessing the opportunities for innovation’. They also called for the development of sustainable supply chains and agreed to continue discussions on developing collective approaches to immersive technologies such as the metaverse.

With AI high on the meeting agenda, the ministers stressed the importance of international discussions on AI governance and interoperability between AI governance frameworks, and expressed support for the development of tools for trustworthy AI (e.g. (non)regulatory frameworks, technical standards, assurance techniques) through multistakeholder international organisations. The role of technical standards in building trustworthy AI and in fostering interoperability across AI governance frameworks was highlighted both in the ministerial declaration and in the G7 Action Plan for promoting global interoperability between tools for trustworthy AI.

When it comes to AI policies and regulations, the ministers noted that these should be human-centric, based on democratic values, risk-based, and forward-looking. The opportunities and challenges of generative AI technologies were also tackled, as ministers announced plans to convene future discussions on issues such as governance, safeguarding intellectual property rights, promoting transparency, and addressing disinformation. 

On matters of digital competition, the declaration highlights the importance of both using existing competition enforcement tools and developing and implementing new or updated competition policy or regulatory frameworks ‘to address issues caused by entrenched market power, promote competition, and stimulate innovation’. A summit related to digital competition for competition authorities and policymakers is planned for the fall of 2023.

Telegram to appeal Brazilian judge’s order to block the platform

Telegram’s CEO, Pavel Durov, has announced that the company would appeal a Brazilian court’s order to suspend its services temporarily. The court order follows the platform’s non-compliance with a prior court order to provide data on two neo-Nazi groups accused of inciting violence in schools. Durov claimed that compliance with such a request was ‘technologically impossible’.

The judge had also set a daily fine of nearly US$200,000 for noncompliance. Telegram’s CEO did not state whether the company intends to pay the fine.

India introduces stricter regulation for social media platforms

The Indian government has recently introduced the 2023 amendments to the 2021 Intermediary Guidelines and Digital Media Ethics Code that regulate social media intermediaries (SMIs). The revised norms have three well-defined objectives: to address emergent challenges caused by technological innovations, to ensure an open, safe and trusted internet, and to set up a fact-checking unit to screen digital content.

The new rules stipulate that SMIs must make reasonable efforts not to ‘host, display, upload, publish, transmit, store, update or share’ any information relating to the business of the Indian government that is identified as fake, false, or misleading by the government’s fact-checking unit. They are also tasked with making reasonable efforts to prevent prohibited content from being hosted on their platforms by users. All SMIs must appoint a grievance officer, who is required to acknowledge receipt of a complaint within 24 hours and dispose of it within 15 days. The ministry said that the new rules are not aimed at early-stage start-ups but at large players such as Google, Meta, and Twitter.

The revised norms have elicited angry responses from news organisations, which fear the move is yet another bid to ‘muzzle the media’. Critics of the new rules are unconvinced by the government’s argument that compliance with the amended regulations is necessary to ‘ensure an open, safe and trusted internet’. However, the administration of Narendra Modi appears determined to push forward with the regulation of social media intermediaries.

Google and access to news content

Google has recently blocked access to news content in Canada in response to a bill that would force big internet companies to pay publishers for displaying links to their stories. This is part of a larger effort by governments to squeeze money out of Silicon Valley and into local media companies.

Google’s block is a five-week trial affecting about 4% of users in Canada. Separately, Google’s News Showcase programme is set to spend about $1bn in 2020-23 on licensing content from more than 2,000 news organisations in more than 20 countries. Facebook has also scaled back its News Tab, but the company estimates that it sends 1.9bn clicks a year to Canadian media, publicity it values at C$230m.

Publishers argue that Google and Facebook profit from content that is not theirs; the platforms counter that they drive traffic to publishers and send clicks to Canadian media. Meanwhile, search engines are getting better at displaying information without referring visitors to external sources. This could also mean that AI-search companies should be made to license the content they regurgitate.

News organisations have seen most of their advertising revenue shift to online platforms, and Google and Facebook have set up mechanisms for funnelling ‘support’ to media companies. Australia’s bargaining law has reportedly been worth about A$200m to publishers in the scheme’s first year.

Regulating digital games

Regulating digital games is a topic that has come to the forefront recently due to advances in gaming technology. In the 1990s, the US Congress pushed the games industry to set up the Entertainment Software Rating Board to determine age ratings; Europe followed with the Pan European Game Information rating in 2003. These rating systems are similar to those used for films.

As online games have become more impactful, the question of content regulation is becoming more important. This issue came into focus after the Christchurch shootings in 2019, when users of Roblox, an online gaming platform, began re-enacting the attack. Since that incident, Roblox has employed ‘thousands’ of human moderators, alongside artificial intelligence, to check user-submitted games and police chat among its 60m daily users, whose average age is about 13.

This has prompted debates on how to regulate social media-like conversations within games. Some politicians have argued that the constitution should protect in-game chat, as it resembles one-to-one conversation.

Game makers are doing their best to design out bad behaviour before it occurs. For example, when users of Horizon Worlds complained of being virtually groped, a minimum distance between avatars was introduced.

Similar to the content moderation of social media, online gaming will trigger new policy and governance challenges.

Ireland establishes working group to develop National Counter Disinformation Strategy

The Minister for Tourism, Culture, Arts, Gaeltacht, Sport and Media in Ireland, Catherine Martin TD, announced the establishment of a working group to develop a National Counter Disinformation Strategy. The creation of this multistakeholder working group was among the recommendations outlined in the 2022 report of the Future of Media Commission. The strategy, to be finalised by the end of 2023, is intended to ‘coordinate national efforts to counter organised coordinated campaigns of manipulation of Irish internet users and ensure transparency about content moderation policies that impact Irish citizens’.

Twitch fined in Russia over failure to remove fake information

The magistrates’ court in Moscow’s Tagansky district fined the video livestreaming service Twitch 4 million rubles (US$57,000) for refusing to remove fake information about Russia’s military campaign in Ukraine, the Interfax news agency reported.

According to the ruling, the platform refused to remove content the court deemed intended to discredit the Russian Armed Forces. In particular, the court cited Twitch’s failure to remove an interview featuring former lawyer Mark Feigin (designated as a foreign agent in Russia) and Alexei Arestovich, former advisor to the Ukrainian Presidential Office.