
IGF 2023 – Daily 3


IGF Daily Summary for

Tuesday, 10 October 2023

Dear reader, 

On Day 2, the IGF got into full swing with intense debates in conference rooms and invigorating buzz in the corridors. The inspiring local flavours permeated the debate on the impact of digitalisation on the treasured Japanese manga culture and Jovan Kurbalija’s parallels between the Kyoto School of Philosophy and AI governance.


After the formalities and protocol of the first day, the usual diplomatic ‘language’ receded, and new insights and controversies emerged. AI remains a prominent topic, and it was refreshing to see clearer thinking about AI, away from the prevailing hype and fear-mongering in the media space.

The quality of the debate improved as AI was viewed from various perspectives, from technology and standardisation to human rights and cybersecurity. While acknowledging the reality of different interests and powers, the debate on AI brought the often-missing insight that all countries face similar challenges in governing AI. For this reason, focusing on human-centred AI may help reduce geopolitical tensions in this field.

The Global Digital Compact (GDC) triggered intense and constructive debate. While there is overwhelming support for the GDC as the next step in developing inclusive digital governance, the focus on details is increasing, including the role of the IGF in implementing the GDC and preserving the delicate balance between the multilateral negotiations of the GDC and multistakeholder participation. This year, the IGF also intensified an academic debate with policy implications: the difference between the ‘internet’ and the ‘digital’.

Further down in this summary, you can find more on, among other things, internet fragmentation, cybersecurity, content moderation, and digital development. 

You can also read more on an exciting initiative using AI to preserve the rich knowledge that the IGF has generated since its first edition in Athens in 2006.

We wish you inspiring discussions and interesting exchanges on the third day of the IGF in Kyoto!

The Digital Watch Team

A rapporteur writes a report on a laptop while observing a dynamic panel discussion

Do you like what you’re reading? Bookmark us at https://dig.watch/igf2023 and tweet us @DigWatchWorld

Have you heard something new during the discussions, but we’ve missed it? Send us your suggestions at digitalwatch@diplomacy.edu

The highlights of the discussions

The day’s top picks

  • AI: Increasing clarity in debates
  • Japan: Manga, Kyoto philosophers and digital governance
  • The GDC: Overall support and discovering the ‘devils in the details’
  • The IGF itself: Using AI to preserve the rich knowledge treasure of the IGF

Artificial Intelligence

Refreshing clarity in AI debates


We want AI, but what does that mean? Today’s main session brought refreshing clarity to the discussion on the impact of AI on society. It moved from the cliché of AI ‘providing opportunities while triggering threats’ to providing more substantial insights. The spirit of this debate contrasted starkly with the rather alarmist hype about existential threats to humanity that AI has triggered.

AI was compared to electricity, with the suggestion that AI is becoming similarly pervasive and requires global standards and regulations to ensure its responsible implementation. The discussion recognised AI as a critical infrastructure. 


A trendy analogy comparing AI governance to the International Atomic Energy Agency (IAEA) was criticised on the grounds that there are more differences than similarities between AI and nuclear energy.

While we wait for new international regulations to be developed, a wide range of actors could adopt voluntary standards for AI. For instance, UNICEF uses a value-based design approach developed by the Institute of Electrical and Electronics Engineers (IEEE).

The private sector’s involvement in AI governance is both inevitable and indispensable. It must therefore be transparent, open, and trustworthy. Currently, this is not the case. However, the representative of OpenAI noted the recent launch of the Red Teaming Network as an industry attempt to be more open and inclusive. Other examples are the LEGO Group’s implementation of measures to protect children in their online and virtual environments and the UK Children’s Act.

Calls were made for national and regional efforts to advance AI governance adapted to local contexts, as in Mauritius, Kenya, and Egypt, which are taking steps towards national policies. In Latin America, challenges also arise from unique regional contexts, global power dynamics, and the intangible nature of AI.

AI and human rights 

Children’s safety and rights were the focus of a workshop organised by UNICEF. AI has already entered classrooms, but without clearly defined criteria for responsible integration. It is already clear that there are benefits: The innovative use of AI for bridging cultural gaps heralds a new era of global connectedness, and it can support fair assessments. Going further, Honda noted that its HARU robot can provide emotional support to vulnerable children, while AI can fill gaps in severely under-resourced mental healthcare systems; in Nigeria, for example, an ‘Autism VR’ project is raising awareness, promoting inclusion, and supporting neurodiverse children.

However, a note of caution was also sounded: the future of education lies in harnessing technology’s potential while championing inclusivity, fairness, and global collaboration. Some solutions are: integrating educators in the research process, adopting a participatory action approach, involving children from various cultural and economic backgrounds, and recognising global disparities given that AI datasets are insufficiently capturing the experiences of children from the Global South.

Human rights approaches carried weight today, echoing in the workshop on a Global human rights approach to responsible AI governance. The discussion highlighted the ‘Brussels effect’, whereby EU regulations become influential worldwide: countries with more robust regulatory frameworks tend to shape AI governance practices globally, underscoring the implications of rules beyond national borders. In contrast, as some observers noted, Latin America’s regional history of weak democratic systems has generated a general mistrust towards participation in global processes, hindering the region’s engagement in AI governance. Yet, Latin America provides raw materials, resources, data, and labour for AI development, while the tech industry aggressively pushes for regional AI deployment in spite of human rights violations. The same can be said for Africa.

To address these challenges, it is necessary to strengthen democratic institutions and reduce regional asymmetries, keeping in mind that human rights should represent the voice of the majority. To ensure an inclusive and fair AI governance process, reducing regional disparities, strengthening democratic institutions, and promoting transparency and capacity development are essential. 

AI and standardisation

It appears that regional disparities plague standardisation efforts as well. Standardisation is indispensable for linkages between technology developers, policy professionals, and users. Yet, as the workshop Searching for standards: The global competition to govern AI noted, silos remain problematic, isolating developers and policymakers or providers and users of the technology. The dominance of advanced economies as providers of AI tech, heavily guarded intellectual property rights, and early standard-setting have led to situations where harms are predominantly framed through a Global North lens, at the cost of impacts on users, usually in the Global South.

As a potential way of opening the early standard-setting game, open-source AI models support developing countries by offering immediate opportunities for local development and involvement in the evolving ecosystem. There is, however, a need for technical standards for AI content, with watermarking proposed as a potential standard. 
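
To make the watermarking idea concrete, here is a deliberately simplified sketch. Production AI-content watermarking schemes work very differently (e.g. by statistically biasing token choices during generation); this toy merely illustrates the underlying concept of embedding an invisible, machine-readable provenance mark in text, using zero-width Unicode characters.

```python
# Toy text watermark: hide a bit string in zero-width Unicode characters.
# NOT a real AI-content watermarking standard -- an illustration only.

ZW0 = "\u200b"  # zero-width space      -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def embed(text: str, mark: str) -> str:
    """Append the mark's bits as invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in mark)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract(text: str) -> str:
    """Collect any zero-width bits in the text and decode them back."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

stamped = embed("This paragraph was machine-generated.", "AI")
print(extract(stamped))  # "AI" -- the mark survives, invisibly to readers
```

A real standard would, of course, need to survive copy-paste, editing, and deliberate removal, which is precisely why watermarking remains a proposal rather than settled practice.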

AI and cybersecurity

The use of AI in cybersecurity provides numerous opportunities, as noted during the workshop on AI-driven cyber defence: Empowering developing nations. The discussion centred on the positive role of AI in cybersecurity, emphasising its potential to enhance protection rather than pose threats. One example is AI’s effectiveness in identifying fake accounts and inauthentic behaviour.  

Collaboration and open innovation were emphasised as critical factors for AI cybersecurity. Keeping AI accessible to experts in other areas helps prevent misuse, and policymakers should incentivise open innovation. 

A person’s finger touches a digital fingerprint icon on an interlocking network of digital functions represented by icons connected to AI.

Unlocking the IGF’s knowledge using AI

Yesterday, Diplo and the GIP supported the IGF Secretariat in organising a side session on how to unlock the IGF’s knowledge to gain AI-driven insights for our digital future. The immense amount of data accumulated through the IGF over 18 years – a public good that belongs to all stakeholders – presents an opportunity for valuable insights when mined and analysed effectively, with AI applications serving as useful tools in this process. This way, this wealth of knowledge can be more effectively utilised to contribute to the SDGs.

Jovan Kurbalija smiles and reaches for the microphone on a panel at the IGF.

AI can enhance human capabilities to assist the IGF mission. It can generate interactive reports from sessions (as it does at IGF2023), with detailed breakdowns by speaker and topic, narrative summaries, and discussion points. Such a tool can codify and translate the arguments presented during sessions, identify and label key topics, and develop a comprehensive knowledge graph. It can connect and compare discussions across different IGF sessions, identify commonalities, link related topics, and facilitate a more comprehensive understanding of the subject matter, as well as associate relevant SDGs with the discussions.

AI can mitigate the challenge of the crowded schedule of IGF sessions by establishing links to similar discussions and sessions from past years, which enables better coordination and consolidation of related themes over the course of years and meetings. Ultimately, AI can help us visualise hours of discussions and thousands of pages of text in the format of a knowledge graph, as done in Diplo’s experiment with daily debates at this year’s IGF (see below).
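
The core of such a knowledge graph is simple: sessions become nodes, and edges link sessions that discuss a common topic. A minimal, standard-library-only sketch (the session names and topic labels below are hypothetical examples, not actual IGF data; in practice, an AI model would extract the topic sets from transcripts):

```python
# Minimal knowledge-graph sketch: link sessions by shared topics.
from collections import defaultdict
from itertools import combinations

# Hypothetical sessions mapped to topics an AI model might have extracted.
sessions = {
    "Main session: AI": {"AI governance", "standards", "human rights"},
    "Workshop: AI-driven cyber defence": {"AI governance", "cybersecurity"},
    "Workshop: Searching for standards": {"standards", "AI governance"},
}

# An edge exists between two sessions for every topic they share.
edges = defaultdict(set)
for (name_a, topics_a), (name_b, topics_b) in combinations(sessions.items(), 2):
    for topic in topics_a & topics_b:
        edges[frozenset((name_a, name_b))].add(topic)

for pair, shared in edges.items():
    print(" <-> ".join(sorted(pair)), "| shared topics:", sorted(shared))
```

Even this toy version surfaces the kind of insight the session described: every pair of sessions here turns out to share the ‘AI governance’ topic, making the theme’s centrality visible at a glance.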

An intricate multicoloured lace network of lines and nexuses representing a knowledge graph of Day 0 of IGF2023

AI can increase the effectiveness of disseminating and utilising the knowledge generated by the IGF. It can also help identify underrepresented and marginalised groups and disciplines in the IGF processes, allowing the IGF to increase its focus on involving them. 

Importantly, preserving the IGF’s knowledge and modus operandi can show the relevance and power of respectful engagement with different opinions and views. Since this approach is not a ‘given’ in our time, the IGF’s contribution could be much broader, far beyond the focus on internet governance per se.

Digital governance processes

GDC in the spotlight

While it may look like AI is the single most popular topic at this year’s IGF, there is at least one more thing on many participants’ minds: the much anticipated Global Digital Compact (GDC). It’s no surprise, then, that a main session was dedicated to it. If this is the first time you are reading about the GDC (we strongly doubt it), we invite you to familiarise yourself with it on this process page before moving on.

If you know what the GDC is, then you most likely also know that one sour point in discussions so far has concerned the process itself: While the GDC is expected to be an outcome of the multilateral 2024 Summit of the Future, many state and non-state actors argue that there should be multistakeholder engagement throughout the GDC development process. But, as highlighted during yesterday’s main session, balancing multilateral processes and multistakeholder engagement is indeed a challenge. How to address this challenge seems to remain unclear, but for the time being, stakeholders are encouraged to engage with their member states to generate greater involvement. 

And speaking of multistakeholderism, one expectation (or rather a wish) that some actors have for the GDC is that it will recognise, reinforce, and support the multistakeholder model of internet governance. Another expectation is that the GDC will establish clear linkages with existing processes while avoiding duplication of efforts and competition for resources. For instance, it was said during the session that the IGF itself should have a role in implementing the GDC principles and commitments and in the overall GDC follow-up process. 

Beyond issues of process and focus, one particularly interesting debate has picked up momentum within the GDC process: whether and to what extent internet governance and digital governance/digital cooperation are distinct issues. Right now, there are arguments on both sides of the debate. Please contribute your views to the survey on the internet vs. digital debate.


Multistakeholder voices in cyber diplomacy                                               

The IGF is, by nature, a multistakeholder space, but many other digital governance spaces struggle with how to define stakeholder engagement. This was highlighted in the session Stronger together: Multistakeholder voices in cyber diplomacy, where many participants called for enhanced stakeholder participation in policy-making and decision-making processes related, in particular, to cybersecurity, cybercrime, and international commerce negotiations.

The non-governmental stakeholders’ perspective is essential for impactful outcomes, transparency, and credibility. The absence of their input not only results in the loss of valuable perspectives and expertise, but also undermines the legitimacy and effectiveness of the policies and decisions made. Moreover, collaboration between state and non-state stakeholders can also be seen as mutually beneficial. Multistakeholder involvement could aid governments in the gathering of diverse ideas during negotiations and decision-making processes related to digital issues. 

However, as the session on enhancing the participation and cooperation of CSOs in/with multistakeholder IG forums noted, civil society organisations, especially from the Global South, face barriers to entry into global multistakeholder internet governance spaces. They need increased capacity building, transparency in policy processes, and spaces that allow for network building and coordination if they are to engage impactfully in these global processes.

One approach to solving the conundrum of multistakeholder engagement in intergovernmental processes was proposed: implementing a policy on stakeholder participation. Such a policy, it was said, would transform stakeholder involvement into an administrative process, ensuring that all perspectives are consistently considered and incorporated into policy-making.

People in business dress and holding laptop computers converse in a hallway


Turning back the tide on internet fragmentation

The concerned words of Secretary-General António Guterres at the start of the 78th UN General Debate still echo in the minds of many of us. ‘We are inching ever closer to a great fracture in economic and financial systems and trade relations,’ he told world leaders, ‘one that threatens a single, open internet with diverging strategies on technology and AI, and potentially clashing security frameworks.’

Those same concerns were raised within the halls of the IGF yesterday. In one of the workshops, experts tried to foresee the internet in 20 years’ time: The path we’re on today, mired in risks, does not bode well for the internet’s future. In a second workshop, experts looked at the different dimensions of fragmentation – fragmentation of the user experience, that of the internet’s technical layer, and fragmentation of internet governance and coordination (explained in detail in this background paper) – and the consequences they all carry. In a third workshop, experts looked at the technical community’s key role in the evolution of the internet and how it can best help shape the internet’s future.

The way we imagine the future of the internet might vary in detail. Still, the core issue is the same: If we don’t act fast, the currently unified internet will come precariously closer to fragmenting into blocs. 

It could be the beginning of the end of the founding vision of the free, open, and decentralised internet, which shaped its development for decades. We need to get back to the values and principles that shaped the internet in its early days if we are to recover those possibilities. These values and principles underpin the technical functioning of the internet, and ensure that the different parts of the internet are interconnected and interoperable. Protecting the internet’s critical properties is crucial for global connectivity.

As risks increase, we shouldn’t lose sight of the lasting positive aspects either. The internet has been transformational; it has opened the doors for instantaneous communication; its global nature has enabled the free flow of ideas and information and created major opportunities for trade. 

A swift change to the current state of affairs (undoubtedly affecting the online space) is forthcoming, The Economist argued recently. But if we want to be more proactive, there are plenty of spaces that can help us understand and mitigate the risks (one of which is the Global Digital Compact). Perhaps this will also give us the space to renew our optimism in technology and its future.

Debate on ‘fair share’ heats up

Internet traffic has increased exponentially, prompting telecom operators to request that tech companies contribute their fair share to maintaining the infrastructure. In Europe, this issue is at the centre of a heated debate. Just a few days ago, the CEOs of 20 of Europe’s largest telecom companies called on lawmakers to introduce new rules.

Yesterday, one lively discussion during an IGF workshop tackled this very question: whether over-the-top (OTT) service providers (e.g. Netflix, Disney Plus) should contribute to the costs associated with managing and improving the infrastructure. While the debate isn’t new, there were at least two points raised by experts that are worth highlighting:

  • Instead of charging network fees, ISPs could partner with OTT providers in profit-sharing agreements. 
  • It might be better if governments are left out of this debate. Instead of imposing new regulations, governments could encourage cooperation between companies. This seems to be an approach actively embraced by the Republic of Korea.


Digital and the environment

The hottest summer ever recorded on Earth is behind us: June, July, and August 2023 were the hottest three months ever documented, World Meteorological Organisation (WMO) data shows. The discussion of the overall impact of digital technologies on the environment at the IGF, therefore, came as no surprise.

Internet use comes with a hefty energy bill, even for seemingly small things like sending texts – it gobbles up data and power. In fact, the internet’s carbon footprint amounts to 3.7% of global emissions. The staggering number of devices in use globally (over 6.2 billion) needs frequent charging, contributing to significant energy consumption. Some of these devices also perform demanding computational tasks that require substantial power, further compounding the issue. Moreover, the rapid pace of electronic device advancement and devices’ increasingly shorter lifespans have exacerbated the problem of electronic waste (e-waste).

There are, however, a few things we can do. For instance, we can use satellites and high-altitude connectivity devices to make the internet more sustainable. We can take the internet to far-off places using renewable energy sources, like solar power. And crucially, if we craft and implement policies right from the inception of a technology, we can create awareness among start-up stakeholders about its carbon footprint. We can also leverage AI to optimise electrical supply and demand and reduce energy waste and greenhouse gas emissions, which, together, might even generate more reliable and optimistic projections of climate change.

A modern, white, three-bladed windmill stands in a field of green plants, against a blue sky.

Broadband from space

The latest data from ITU shows that approximately 5.4 billion people are using the internet. That leaves 2.6 billion people offline and still in need of access. One of the ways to bring more people online is by using Low Earth Orbit (LEO) satellites – think Starlink – to provide high-speed, low-latency internet connectivity. Another important element here is libraries, which often incorporate robotics, 3D printing, and Starlink connections, enabling individuals to engage with cutting-edge innovations.

There are, however, areas of concern regarding LEO satellites. Launching them on a large scale is technically challenging. Their environmental impact, both during their launch and eventual disposal in the upper atmosphere, is unclear. For some communities, the cost of using such services might be too high. Additionally, satellites are said to cause issues for astronomical and scientific observations.

To fully harness the potential of these technologies, countries must re-evaluate and update their domestic regulations related to licensing and authorising satellite broadband services. Additionally, countries must be aware of international space law and its implications to make informed decisions. Active participation in international decision-making bodies, such as ITU and the UN Committee on Peaceful Uses of Outer Space (COPUOS), is crucial for shaping policies and regulations that support the effective deployment of these technologies. 

By doing so, countries can unlock the benefits of space-based technologies and promote the uninterrupted provision of wireless services on a global scale.

Starlink satellite dish on the roof of residential building

Accessible e-learning for persons with disabilities (PWD)

The accessibility challenges in e-learning platforms pose substantial hardships for people with disabilities, both physical and cognitive. Unfortunately, schools frequently fail to acknowledge or address the difficulties associated with online resource access with the immediacy they need and deserve. Those uninformed about and inexperienced with the obstacles of cognitive impairments often regard these issues as insignificant. This lack of awareness compounds the problem, leaving students with disabilities, especially those with cognitive impairments, to silently wrestle with these issues, a workshop on accessible e-learning experience noted.

Some solutions identified are: 

  • Involving users with disabilities in the development process of e-learning platforms
  • Integrating inclusion into everyday practice in educational institutions 
  • Implementing proactive measures and proper benchmarking and assessment tools to effectively address digital inclusion
  • Collaborating globally to make e-learning more accessible

Human rights

Digital threats in conflict zones

With all that’s going on in the Middle East, we can’t help but wonder how digital threats and misinformation are negatively impacting the lives of civilians in conflict zones. This issue was tackled in three workshops yesterday – Encryption’s role in safeguarding human rights, Safeguarding the free flow of information amidst conflict, and Current developments in DNS privacy.

In modern conflicts, digital attacks are not limited to traditional military targets. Civilians and civilian infrastructures, such as hospitals, power grids, and communications networks, are also at risk. In addition, with the growing reliance on a shared digital infrastructure, civilian entities are more likely to be inadvertently targeted. The interconnectedness of digital systems means that an attack on one part of the infrastructure can have far-reaching consequences, potentially affecting civilians not directly involved in the conflict.

The blurred lines between civilian and military targets in the digital realm have other far-reaching implications for trust and safety. They affect the credibility of humanitarian organisations, the provision of life-saving services, the psychological well-being of civilians, and their access to essential information.

Experts advocated a multi-faceted approach to address digital threats and misinformation in conflict zones. This included building community resilience, collaborating with stakeholders, enforcing policies, considering legal and ethical implications, and conducting thorough due diligence.

Connected paper cutout dolls in red, yellow, green, and blue hold hands, filling a white surface.


Multilingualism, cultural diversity, and local content

As in previous years, the discussion on digital inclusion touched on the need to foster multilingualism and access to digital content and tech in native languages. This is particularly challenging in the case of less spoken languages such as Furlan, Sardo, and Arberesh, and these challenges need to be addressed if we want to truly empower individuals and communities to meaningfully engage in and take advantage of the digital world. The Tribal Broadband Connectivity Programme was highlighted as an example of an initiative that works to preserve indigenous languages, thereby adding tribal languages and cultural resources to the internet ecosystem. 

Universal acceptance (UA) was brought up as a way to enable a more inclusive digital space. UA is not only a technical issue (i.e. making sure that domain names and email addresses can be used by, and are compatible with, all internet applications, devices, and systems irrespective of script and language), but also one of digital inclusion: It fosters inclusivity and accessibility in the digital realm. And while core technical issues have mostly been resolved, more needs to be done to drive substantive progress on UA. Approaches include raising awareness within the technical community about UA readiness, economic incentives (e.g. government preference in public procurement for vendors who demonstrate UA), government support for and involvement in the uptake of UA, and policy coordination among different stakeholders.
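
A quick illustration of the technical side of UA: DNS itself only carries ASCII, so internationalised domain names (IDNs) are encoded into an ASCII-compatible ‘punycode’ form. Python’s built-in ‘idna’ codec (which implements the older IDNA 2003 standard) is enough for a sketch:

```python
# Internationalised domain names: Unicode form vs. the ASCII 'punycode'
# form that DNS actually resolves. Uses Python's built-in 'idna' codec.
unicode_domain = "münchen.example"

ascii_form = unicode_domain.encode("idna")  # what goes on the wire
print(ascii_form)                           # b'xn--mnchen-3ya.example'
print(ascii_form.decode("idna"))            # round-trips to 'münchen.example'
```

UA readiness means every application in the chain – browsers, mail servers, web forms – handles both forms (and newer IDNA 2008 scripts) correctly, rather than rejecting an address simply because it is not plain ASCII.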

Multilingualism is not only about accessing content in local languages and in native scripts but also about developing such content. It was noted in a dedicated session that local content creation in minority languages contributes significantly to cultural and linguistic diversity. But challenges remain here as well. 

In order to create content, however, users need to be able to access the internet. Yet, digital divides remain a reality, as do the lack of robust infrastructure, affordability issues (e.g. some households can only afford one device, while in many, even this one device is seen as a luxury), and gender inequalities, which prevent many from creating content. In addition, the mismatch between broadband pricing and the spending power of individuals hinders digital inclusion. Continued efforts are required to deploy reliable infrastructure with affordable pricing options.

Nonetheless, there is hope for universal access to the internet in the future. Advancements in technology are gradually making access less expensive with more options, potentially enabling broader internet access. And initiatives such as Starlink and Project Kuiper, which aim to provide connectivity to remote areas via satellites, are helping to bridge the digital divide.

One interesting point in the discussion was that the internet has not evolved into the egalitarian platform initially envisioned for content creation and access to information. Despite the TikTok phenomenon, instead of empowering individuals to become content publishers – it was said – the internet has given rise to powerful intermediaries who aggregate, license, and distribute content. These intermediaries dominate the industry by delivering uniform content to a global market. And so, challenges remain regarding content distribution and ensuring equal access for all. In considering local content contributions, platform and content neutrality should also be considered to ensure a fair and diverse content ecosystem.

Cybersecurity and Digital Safety

The development of offensive cyber capabilities by states, impactful ransomware attacks, and the high risks of sexual abuse and exploitation of minors online have all raised the profile of cybersecurity and the importance of protecting against new and existing threats, the Main Session on Cybersecurity, Trust & Safety Online noted.

Offensive cyber capabilities and the legitimacy of using force in response to cyberattacks were outlined as important challenges, along with fighting the use of social networks as tools for interventionism, the promotion of hate speech, incitement to violence, destabilisation, and the dissemination of false information and fake news. 

Given the long list and complexity of issues, some feel a legally binding international instrument is needed to complement existing international law and encompass cyberspace adequately. Others underline the need to involve different stakeholders – the technical community, civil society, and companies, including law firms – in shaping any such instrument. The fast pace of tech development is another challenge in this endeavour. The limitations of a comprehensive solution to which we aspire should be acknowledged, and we should prioritise actions that could have the greatest near-future impact for mitigating risks.

Cybercrime negotiations

The debate on a UN treaty to combat cybercrime identified the following challenges: 

  • The current draft of the cybercrime treaty aims to extend the power of law enforcement but offers weak safeguards for privacy and human rights; treaty-specific safeguards may be necessary. 
  • Geopolitics dominates negotiations, and expert input is often needed (but not available) to understand the reality and shape of current cybercrime policies. 
  • Companies must play a crucial role in international cooperation against cybercrime.

Some concrete suggestions to foster increased cooperation and efficiency to combat cybercrime beyond international treaty provisions include the creation of a database of cybersecurity and cybercrime experts for knowledge and information sharing (the efforts of the UN OEWG and the OAS were outlined), developing a pool of existing knowledge to support capacity development for combating cybercrime (not least because policymakers often feel intimidated by technical topics), and focusing on expanding the role of existing organisations such as Interpol. Importantly, states and businesses should become more aware of the economic benefits and potential increase in GDP due to investments in cybersecurity.

International negotiations should also focus more on strengthening systems and networks at a technical level. This includes measures to ensure the development of more secure software, devices, and networks, through security-by-design and security-by-default; providing legal protections for security researchers when identifying vulnerabilities; enhancing education and information sharing; using AI in cybersecurity for identifying vulnerabilities in real-time and other tasks. The risks of emerging technologies have come to the forefront of cybersecurity; however, international discussions should not lose sight of the broader cybersecurity landscape.

Cybersecurity and development

In emerging cybersecurity governance frameworks, the specificities of developing countries should be considered. Taking West Africa as an example, challenges include the lack of national and regional coordination to effectively combat cyber threats; limited technical, financial, and human resources; insufficient allocation of resources to the cybersecurity sector; a shortage of qualified personnel in the region; and weak critical infrastructure that is particularly susceptible to cyberattacks, in a region where frequent power outages and telecommunication disruptions are already commonplace. 

Cybersecurity frameworks developed in such an environment should be based on peer-to-peer cooperation between the states of the region, cooperation and information sharing with the private sector, and local adaptation of global best practices that takes the local context and challenges into account. Notable initiatives are the Joint Platform for Advancing Cybersecurity in West Africa, launched under the G7 German presidency, which aims to establish an Information Sharing and Analysis Center (ISAC), and the work of the Global Forum on Cyber Expertise (GFCE) with the Economic Community of West African States (ECOWAS) to enhance capacities through partnerships.


The potential of regulatory sandboxes

What happens when you toss traditional regulation into a sandbox and hand innovators a shovel? You get a regulatory playground where creativity flourishes, rules adapt, and the future takes shape one daring experiment at a time. 

The workshop Sandboxes for data governance highlighted the growing interest in this tool for developing new regulatory solutions. Regions like Africa and the Middle East are in the early stages of adopting fintech-related sandboxes, while Singapore has gained more experience and has fostered collaboration between industry and regulators. GovTech sandboxes, as seen in Lithuania, have become integral to the regulatory process: controlled environments facilitate the testing and implementation of mature technologies in the government sector.

A common challenge is the significant resources and time required to implement sandboxes – a burden that falls particularly hard on developing countries. It helps to learn from established sandboxes and tailor them to specific contexts. But more than that, collaborative efforts between government authorities, industry players, civil society organisations, and regulatory bodies are needed to make the process work.

The content creation revolution

The tectonic shift in content creation over the past decade has been internet-shaking. Content creation is no longer limited to an elite group of professionals, thanks to the widespread availability of user-friendly and inexpensive tools. Users are now generating vast amounts of unique and dynamic new content and sharing it on social media platforms and wherever the latest trend thrives.

Yesterday’s workshops on intellectual property discussed this shift, and the efforts to support the accessibility and availability of content through digital platforms. One workshop that looked at content creation and access to open information recognised that the industry is adapting to new technological advancements, while the workshop that looked at the manga culture (a cultural treasure of Japan, the host country) examined how the global manga market enjoyed rapid growth during the COVID-19 pandemic.

Both discussions explained how this transformation has its own challenges. The surge in user-generated content has raised important questions about intellectual property rights (IPR) and the ethical consumption of creative output, including the complexities of identifying and thwarting pirate operators, whose elusive tactics threaten creators’ livelihoods. The need for multistakeholder cooperation involving government bodies, internet communities, and industry players to effectively combat this threat goes without saying.

As the discussions unfolded, a common thread emerged: the need for innovation to meet the evolving demands of the digital age. But the discussions also demonstrated that the digital age demands not only legal frameworks and technological fortification but a nuanced understanding of the evolving dynamics between creators and consumers, and the content they develop and consume. 

Diplo/GIP at the IGF

Don’t miss our sessions today! 

Sorina Teleanu will speak at the open forum From IGF to GDC: A new era of global digital governance: A SIDS perspective. The session will examine the challenges developing countries face when engaging in global digital governance processes and explore ways to address such challenges. It will also discuss expectations from the ongoing GDC process and the relationship between the GDC process and the IGF. When and where? Wednesday, 11 October, at 09:45–11:15 local time (00:45–02:15 UTC), in Room I.

Anastasiya Kazakova will speak at a workshop on ethical principles for using AI in cybersecurity. The session will discuss what concrete measures stakeholders must take to implement ethical principles in practice and make them verifiable. It will also gather input and positions on how a permanent multistakeholder dialogue and exchange could be stimulated on this topic. When and where? Wednesday, 11 October, starting at 15:15 local time (06:15 UTC), in Annex Hall 1.