Decolonise Digital Rights: For a Globally Inclusive Future | IGF 2023 WS #64

11 Oct 2023 06:45h - 08:15h UTC

Event report

Speakers and Moderators

Speakers:
  • Ananya Singh, Government, Asia-Pacific Group
  • Shalini Joshi, Civil Society, Asia-Pacific Group
  • Pedro de Perdigão Lana, Technical Community, Latin American and Caribbean Group (GRULAC)
  • Tevin Gitongo, Civil Society, African Group
  • Mark Graham, Private Sector, Western European and Others Group (WEOG)
Moderators:
  • Man Hei Connie Siu, Civil Society, Asia-Pacific Group

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Ananya Singh

The analysis features speakers discussing the exploitation of personal data without consent and drawing parallels to colonialism. They argue that personal data is often used for profit without individuals' knowledge or permission, and highlight the need for greater transparency and accountability in how personal data is handled. The speakers believe that the terms of service on online platforms are often unclear and full of jargon, leading to misunderstandings and uninformed consent.

One of the main concerns raised is the concept of data colonialism, which is compared to historical colonial practices. The speakers argue that data colonialism aims to capture and control human life through the appropriation of data for profit. They urge individuals to question data-intensive corporate ideologies that incentivise the collection of personal data. They argue that the collection and analysis of personal data can perpetuate existing inequalities, lead to biases in algorithms, and result in unfair targeting, exclusion, and discrimination.

In response, the speakers suggest that individuals should take steps to minimise the amount of personal data they share online or with technology platforms. They emphasise the importance of thinking twice before agreeing to terms and conditions that may require sharing personal data. They also propose the idea of digital minimalism, which involves limiting one’s social media presence as a way to minimise data.

The analysis also highlights the need for digital literacy programmes to aid in decolonising the internet. Such programmes can help individuals navigate the internet more effectively and critically, enabling them to understand the implications of sharing personal data and make informed choices.

Overall, the speakers advocate for the concept of ownership by design, which includes minimisation and anonymisation of personal data. They believe that data colonialism provides an opportunity to create systems rooted in ethics. However, they caution against an entitled attitude towards data use, arguing that data use and reuse should be based on permissions rather than entitlements or rights.

A noteworthy observation from the analysis is the strongly negative sentiment towards the unregulated collection and use of personal data. The speakers highlight the potential harm caused by data exploitation and advocate for stronger regulation and protection of personal data, as well as a more informed and critical approach to online platforms and the terms of service they offer.

In conclusion, the analysis underscores the importance of addressing the exploitation of personal data without consent and the potential harms of data colonialism. It calls for more transparency, accountability, and individual action in minimising data sharing. It also emphasises the need for critical digital literacy programmes and promotes the concept of ownership by design to create ethical systems.

Audience

The discussions revolved around several interconnected issues, including legal diversity, accessibility, privacy, and economic patterns. These rights and principles were seen as not always respected globally, owing to economic interests and the perpetuation of stereotypes. This highlights the need for increased awareness and efforts to address these issues on a global scale.

One of the arguments put forth was that privacy should be considered as a global right or human right. This suggests the importance of acknowledging privacy as a fundamental aspect of individual rights, regardless of geographical location or cultural context.

Another point of discussion was the need for a taxonomy that identifies specific local needs and how they relate to cultural, historical, or political characteristics. The argument advocates for better understanding and consideration of these factors to address the unique requirements of different communities and regions. This approach aims to reduce inequalities and promote inclusive development.

The distinction between local and global needs was also highlighted as crucial for effective planning and for reducing migration to the Global North. By focusing on empowering individuals to thrive in their countries of origin, the discussion emphasized the importance of creating conditions that allow people to stay and contribute to their local communities.

The importance of reimagining digital literacy and skills training was emphasized as essential for empowering marginalized communities. This involves providing equitable access to digital tools and promoting inclusivity in digital participation. Bridging the digital divide was seen as necessary to ensure that everyone has the necessary tools and skills to fully participate in the digital world.

The discussions also delved into the decolonization of the Internet and the digital landscape. It was recognized that this is an ongoing journey that requires continuous reflections, open dialogue, and actionable steps. The complexities surrounding decolonization were explored in relation to factors such as economic gains and the question of who benefits from the current digital landscape.

Lastly, the need to strive for a digital space that is inclusive and empowers all individuals, regardless of their background or geographical location, was highlighted. This vision of a future in which the internet becomes a force of equality, justice, and liberation motivates efforts towards digital inclusivity and empowerment.

In conclusion, the discussions explored critical aspects of legal diversity, accessibility, privacy, and economic patterns. They underscored the importance of addressing these issues globally: recognizing privacy as a universal right, understanding local needs, bridging the digital divide, and advocating for a decolonized digital space. The overall emphasis was on promoting inclusivity, reducing inequalities, and fostering empowerment in the digital age.

Jonas Valente

The analysis highlights several important points from the speakers’ discussions. Firstly, it is noted that the development and deployment of artificial intelligence (AI) heavily rely on human labor, particularly from countries in the global South. Activities such as data collection, curation, annotation, and validation are essential for AI work. This dependence on human labor underscores the important role that workers from the global South play in the advancement of AI technologies.

However, the analysis also reveals that working conditions for AI labor are generally precarious. Workers in this industry often face low pay, excessive overwork, short-term contracts, unfair management practices, and a lack of collective power. The strenuous work schedules in the sector have also been found to contribute to sleep issues and mental health problems among these workers. These challenges highlight the need for improved working conditions and better protections for AI labor.

One positive development in this regard is the Fair Work Project, which aims to address labor conditions in the AI industry. The project evaluates digital labor platforms based on a set of fair work principles. Currently operational in almost 40 countries, the Fair Work Project rates platforms based on their adherence to these principles, including factors such as pay conditions, contract management, and representation. This initiative seeks to improve conditions and drive positive change within the AI labor market.

Another concern raised in the analysis is the exploitation of cheap labor within the development of AI. Companies benefit from the use of digital labor platforms that bypass labor rights and protections, such as minimum wage and freedom of association. This trend, which is becoming more common in data services and AI industries, highlights the need for a greater emphasis on upholding labor rights and ensuring fair treatment of workers, particularly in the global South.

Furthermore, the analysis underscores the importance of considering diversity and local context in digital technology production. Incorporating different cultural expressions and understanding the needs of different populations are key factors in creating inclusive and fair digital labor platforms and global platforms. By doing so, the aim is to address bias, discrimination, and national regulations to create a more equitable digital landscape.

The analysis also acknowledges the concept of decolonizing digital technologies. This process involves not only the use of digital technologies but also examining and transforming the production process itself. By incorporating the labor dimension and ensuring basic fair work standards, the goal is to create a structurally different work arrangement that avoids exploitation and supports the liberation of oppressed populations.

In conclusion, the analysis highlights the challenges and opportunities surrounding AI labor and digital technology production. While the global South plays a crucial role in AI development, working conditions for AI labor are often precarious. The Fair Work Project and initiatives aimed at improving labor conditions are prominent in the discussion, emphasizing the need for fair treatment and better protections for workers. Additionally, considerations of diversity, local context, and the decolonization of digital technologies are crucial in creating a more inclusive and equitable digital landscape.

Tevin Gitongo

During the discussion, the speakers emphasised the importance of decolonising the digital future in order to ensure that technology benefits people and promotes a rights-based democratic digital society. They highlighted the need for creating locally relevant tech solutions and standards that address the specific needs and contexts of different communities. This involves taking into consideration factors such as cultural diversity, linguistic preferences, and social inclusion.

The importance of stakeholder collaboration in the decolonisation of digital rights was also emphasised. The speakers stressed the need to involve a wide range of stakeholders, including government, tech companies, fintech companies, academia, and civil society, to ensure that all perspectives and voices are represented in the decision-making process. By including all stakeholders, the development of digital rights frameworks can be more inclusive and reflective of the diverse needs and concerns of the population.

Cultural context was identified as a crucial factor to consider in digital training programmes. The speakers argued that training programmes must be tailored to the cultural context of the learners to be effective. They highlighted the importance of working with stakeholders who have a deep understanding of the ground realities and cultural nuances to ensure that the training programmes are relevant and impactful.

The speakers also discussed the importance of accessibility and affordability in digital training. They emphasised the need to bridge the digital divide and ensure that training programmes are accessible to all, regardless of their economic background or physical abilities. Inclusion of people with disabilities was specifically noted, with the speakers advocating for the development of digital systems that cater to the needs of this population. They pointed out the assistance being provided in Kenya to develop ICT standards for people with disabilities, highlighting the importance of inclusive design and accessibility in digital training initiatives.

Privacy concerns related to personal data were identified as a universal issue affecting people in both the global north and south. The speakers highlighted the increasing awareness and concern among Kenyans about the protection of their data, similar to concerns raised in European countries. They mentioned the active work of Kenya's Office of the Data Protection Commissioner in addressing these issues, emphasising the importance of safeguarding individual privacy in the digital age.

The speakers also emphasised the need for AI products and services to be mindful of both global and local contexts. They argued that AI systems should take into account the specific linguistic needs and cultural nuances of the communities in which they are used. The speakers raised concerns about the existing bias in AI systems that are designed with a focus on the global north, neglecting the unique aspects of local languages and cultures. They stressed the importance of addressing this issue to bridge the digital divide and ensure that AI is fair and effective for all.

Digital literacy was highlighted as a tool for decolonising the internet. The speakers provided examples of how digital literacy has empowered individuals, particularly women in Kenya, to use digital tools for their businesses. They highlighted the importance of meeting people where they are and building on their existing skills to enable them to participate more fully in the digital world.

One noteworthy observation from the discussion was the need to break down complex information, such as terms and conditions, so that individuals fully understand what they are agreeing to. The speakers noted that people often click "agree" without fully understanding the terms, and emphasised the importance of presenting such information in a way that is easily understandable for everyone.

Overall, the discussion emphasised the need to decolonise the digital future by placing people at the centre of technological advancements and promoting a rights-based democratic digital society. This involves creating inclusive tech solutions, collaborating with stakeholders, considering cultural context in training programmes, ensuring accessibility and affordability, addressing privacy concerns, and bridging the digital divide through digital literacy initiatives. By adopting these approaches, it is hoped that technology can be harnessed for the benefit of all and contribute to more equitable and inclusive societies.

Shalini Joshi

The analysis highlights several important points related to artificial intelligence (AI) and technology. Firstly, it reveals that AI models have inherent biases and promote stereotypes. This can result in inequalities and gender biases in various sectors. Experiments with generative AI have shown biases towards certain countries and cultures. In one instance, high-paying jobs were represented by lighter-skinned, male figures in AI visualisations. This not only perpetuates gender and racial stereotypes but also reinforces existing inequalities in society.

Secondly, the analysis emphasises the need for transparency in AI systems and companies. Currently, companies are often secretive about the data they use to train AI systems. Lack of transparency can lead to ethical concerns, as it becomes difficult to assess whether the AI system is fair, unbiased, and accountable. Transparency is crucial to ensure that AI systems are developed and used in an ethical and responsible manner. It allows for scrutiny, accountability, and public trust in AI technologies.

Furthermore, the analysis points out that AI-based translation services often overlook hundreds of lesser-known languages. These services are usually trained with data that uses mainstream languages, which results in a neglect of languages that are not widely spoken. This oversight undermines the preservation of unique cultures, traditions, and identities associated with these lesser-known languages. It highlights the importance of ensuring that AI technologies are inclusive and consider the diverse linguistic needs of different communities.

Additionally, the analysis reveals that women, trans people, and non-binary individuals in South Asia face online disinformation that aims to marginalise them further. This disinformation uses lies and hate speech to silence or intimidate these groups. It targets both public figures and everyday individuals, perpetuating gender and social inequalities. In response to this growing issue, the organisation NIDAN is implementing a collaborative approach to identify, document, and counter instances of gender disinformation. This approach involves a diverse set of stakeholder groups in South Asia and uses machine learning techniques to efficiently locate and document instances of disinformation.

The analysis also highlights the importance of involving local and marginalised communities in the development of data sets and technology creation. It emphasises that hyperlocal communities should be involved in creating data sets, as marginalised people understand the context, language, and issues better than technologists and coders do. Inclusive processes that include people from different backgrounds in technology creation are necessary to ensure that technology addresses the needs and concerns of all individuals.

In conclusion, the analysis underscores the pressing need to address biases, promote transparency, preserve lesser-known languages, counter online disinformation, and include local and marginalised communities in the development of technology. These steps are crucial for creating a more equitable and inclusive digital world. By acknowledging the limitations and biases in AI systems and technology, we can work towards mitigating these issues and ensuring that technology is a force for positive change.

Pedro de Perdigão Lana

The analysis highlights several concerns about Internet regulation and its potential to fragment the Internet. It argues that governmental regulation, prompted by concerns over digital colonialism, poses a significant threat to the Internet, since such regulation often responds to practices rooted in historical power imbalances and the imposition of laws by dominant countries.

One example of this is seen in the actions of larger multinational companies, which subtly impose their home country's laws on a global scale, disregarding national laws. The Digital Millennium Copyright Act (DMCA) is cited as a mechanism through which American copyright law is extended globally. This kind of imposition by multinational companies can undermine the sovereignty of individual nations and lead to a disregard for their own legal systems.

However, the analysis also recognizes the importance of intellectual property in the discussions surrounding Internet regulations. In Brazil, for instance, a provisional measure was introduced to create barriers for content moderation using copyright mechanisms. This indicates that intellectual property is a crucial topic that needs to be addressed in the context of Internet regulations and underscores the need for balance in protecting and respecting intellectual property rights.

Another important aspect highlighted is platform diversification, which refers to the adaptation of platforms to individual national legislation and cultural contexts. It is suggested that platform diversification, particularly in terms of user experience and language accessibility, may act as a tool to counter regulations that could lead to fragmentation of the Internet. By ensuring that platforms can adapt to different national legislations, tensions can be alleviated, and negative effects can be minimized.

Pedro is portrayed as an advocate for the diversification of internet content and platforms. He presents a case in which content-based internet platforms extended US copyright law globally, enforcing an alien legal system. Diversification is thus seen as a means to counter this threat of fragmentation and over-regulation.

The analysis also explores the concern of multinational platforms and their attitude towards the legal and cultural specificities of the countries they operate in. While it is acknowledged that these platforms do care about such specifics, the difficulty of measuring the indirect and long-term costs associated with this adaptation is raised.

Furthermore, the discrepancy in the interpretation of human rights across cultures is highlighted. Human rights, including freedom of expression, are not universally understood in the same way, leading to different perspectives on issues related to Internet regulation and governance.

The importance of privacy and its differing interpretations by country are also acknowledged. It is suggested that privacy interpretations should be considered in managing the Internet to strike a balance between ensuring privacy rights and maintaining a safe and secure digital environment.

The analysis concludes by emphasizing the need for active power sharing and decolonization of the digital space. It underscores that preserving the Internet as a global network and a force for good is crucial. The failure of platforms to diversify and respect national legislation and cultural contexts is seen as a factor that may lead to regional favoritism and even the potential fragmentation of the Internet.

In summary, the analysis highlights the concerns about Internet regulation, including the threats posed by governmental regulation and the subtle imposition of home country laws by multinational companies. It emphasizes the importance of intellectual property in the discussions surrounding Internet regulations, as well as the potential benefits of platform diversification. The analysis also highlights the need for active power sharing, the differing interpretations of human rights, and considerations for privacy. Overall, preserving the Internet as a global network and ensuring its diverse and inclusive nature are key priorities.

Moderator

The analysis delves into the various aspects of the impact that AI development has on human labour. It highlights the heavy reliance of AI development on human labour, with thousands of workers involved in activities such as collection, curation, annotation, and validation. However, the analysis points out that human labour in AI development often faces precarious conditions, with insufficient arrangements regarding pay, management, and collectivisation. Workers frequently encounter issues like low pay, excessive overwork, job strain, health problems, short-term contracts, precarity, unfair management, and discrimination based on gender, race, ethnicity, and geography. This paints a negative picture of the working conditions in AI production networks, emphasising the need for improvements.

The distribution of work for AI development is another area of concern, as it primarily takes place in the Global South. This not only exacerbates existing inequalities but also reflects the legacies of colonialism. Large companies in the Global North hire and develop AI technologies using a workforce predominantly from the Global South. This unbalanced distribution further contributes to disparities in economic opportunities and development.

The analysis also highlights the influence of digital sovereignty and intellectual property on internet regulation. It argues that governments often regulate the internet under the pretext of digital sovereignty, while multinational companies subtly impose alien legislation that does not adhere to national standards, extending the legal systems of larger nations to every corner of the globe; this practice is described as digital colonialism. Intellectual property law, such as the DMCA, is cited as an example of this behaviour. To counter this, the analysis suggests that diversification of internet content and platforms can be an essential tool, safeguarding against regulations that may result in fragmentation.

Furthermore, the analysis emphasises the need for documentation and policy action against gender disinformation in South Asia. Women, trans individuals, and non-binary people are regularly targeted in the region, with disinformation campaigns aimed at silencing marginalised voices. Gender disinformation often focuses on women in politics and the public domain, taking the form of hate speech, misleading information, or character attacks. The mention of NIDAN’s development of a dataset focused on gender disinformation indicates a concrete step towards understanding and addressing this issue.

Digital literacy and skills training are highlighted as important factors in bridging the digital divide and empowering marginalised communities. The analysis emphasises the importance of democratising access to digital education and ensuring that training is relevant and contextualised. This includes providing practical knowledge and involving the user community in the development process. Additionally, the analysis calls for inclusive digital training that takes into consideration the needs of persons with disabilities and respects economic differences.

The analysis also explores the broader topic of decolonising the internet and the role of technology in societal development. It suggests that the decolonisation of digital technologies should involve not only the use of these technologies but also the production process. There is an emphasis on the inclusion of diverse perspectives in technology creation and data analysis to avoid biases and discrimination. The analysis also advocates for the adaptation of platform policies to respect cultural differences and acknowledge other human rights, rather than solely adhering to external legislation.

In conclusion, the analysis provides a comprehensive assessment of the impact of AI development on human labour, highlighting the precarious conditions faced by workers and the unequal distribution of work. It calls for improvements in labour conditions and respect for workers’ rights. The analysis also raises awareness of the need to document and tackle gender disinformation, emphasises the importance of digital literacy and skills training for marginalised communities, and supports the decolonisation of the internet and technology development. These insights shed light on the challenges and opportunities in ensuring a more equitable and inclusive digital landscape.
