Decolonise Digital Rights: For a Globally Inclusive Future | IGF 2023 WS #64
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Session report
Full session report
Ananya Singh
The analysis features speakers discussing the exploitation of personal data without consent and drawing parallels to colonialism. They argue that personal data is often used for profit without knowledge or permission, highlighting the need for more transparency and accountability in handling personal data. The speakers believe that the terms of service on online platforms are often unclear and full of jargon, leading to misunderstandings and uninformed consent.
One of the main concerns raised is the concept of data colonialism, which is compared to historical colonial practices. The speakers argue that data colonialism aims to capture and control human life through the appropriation of data for profit. They urge individuals to question data-intensive corporate ideologies that incentivise the collection of personal data. They argue that the collection and analysis of personal data can perpetuate existing inequalities, lead to biases in algorithms, and result in unfair targeting, exclusion, and discrimination.
In response, the speakers suggest that individuals should take steps to minimise the amount of personal data they share online or with technology platforms. They emphasise the importance of thinking twice before agreeing to terms and conditions that may require sharing personal data. They also propose the idea of digital minimalism, which involves limiting one’s social media presence as a way to minimise data.
The analysis also highlights the need for digital literacy programmes to aid in decolonising the internet. Such programmes can help individuals navigate the internet more effectively and critically, enabling them to understand the implications of sharing personal data and make informed choices.
Overall, the speakers advocate for the concept of ownership by design, which includes minimisation and anonymisation of personal data. They believe that data colonialism provides an opportunity to create systems rooted in ethics. However, they caution against an entitled attitude towards data use, arguing that data use and reuse should be based on permissions rather than entitlements or rights.
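To make "ownership by design" more concrete, the sketch below shows, in Python, one way data minimisation and pseudonymisation can be applied to a user record before storage. The record fields, the allow-list, and the salting scheme are illustrative assumptions rather than anything prescribed in the session, and a salted hash of an enumerable identifier such as a phone number is only pseudonymous, not truly anonymous.

```python
import hashlib

# Hypothetical user record; all field names are illustrative assumptions.
raw_record = {
    "name": "A. Example",
    "phone": "+00-0000000000",
    "purchase_category": "books",
    "timestamp": "2023-10-09T10:00:00Z",
}

# Minimisation: keep only the fields the declared purpose actually needs.
ALLOWED_FIELDS = {"purchase_category", "timestamp"}

def minimise(record):
    """Drop every field not required for the declared purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def pseudonymise(identifier, salt):
    """Replace a direct identifier with a salted one-way hash.

    Phone numbers are enumerable, so this is pseudonymisation at best;
    true anonymisation requires stronger techniques.
    """
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:16]

stored = minimise(raw_record)
# A pseudonymous key still lets a service link a user's own records
# together without retaining the phone number itself.
stored["user_key"] = pseudonymise(raw_record["phone"], "per-deployment-secret")
print(stored)
```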
Some noteworthy observations from the analysis include the focus on the negative sentiment towards the unregulated collection and use of personal data. The speakers highlight the potential harm caused by data exploitation and advocate for stronger regulation and protection of personal data. They also highlight the need for a more informed and critical approach to online platforms and the terms of service they offer.
In conclusion, the analysis underscores the importance of addressing the exploitation of personal data without consent and the potential harms of data colonialism. It calls for more transparency, accountability, and individual action in minimising data sharing. It also emphasises the need for critical digital literacy programmes and promotes the concept of ownership by design to create ethical systems.
Audience
The discussions revolved around several interconnected issues, including legal diversities, accessibility, privacy, and economic patterns. These rights and principles were seen as not always respected globally, owing to economic interests and the perpetuation of stereotypes. This highlights the need for increased awareness and effort to address these issues on a global scale.
One of the arguments put forth was that privacy should be considered as a global right or human right. This suggests the importance of acknowledging privacy as a fundamental aspect of individual rights, regardless of geographical location or cultural context.
Another point of discussion was the need for a taxonomy that identifies specific local needs and how they relate to cultural, historical, or political characteristics. The argument advocates for better understanding and consideration of these factors to address the unique requirements of different communities and regions. This approach aims to reduce inequalities and promote inclusive development.
The distinction between local and global needs was also highlighted as crucial for effective population planning and reducing migration to the Global North. By focusing on empowering individuals to thrive in their country of origin, the discussion emphasized the importance of creating conditions that allow people to stay and contribute to their local communities.
The importance of reimagining digital literacy and skills training was emphasized as essential for empowering marginalized communities. This involves providing equitable access to digital tools and promoting inclusivity in digital participation. Bridging the digital divide was seen as necessary to ensure that everyone has the necessary tools and skills to fully participate in the digital world.
The discussions also delved into the decolonization of the Internet and the digital landscape. It was recognized that this is an ongoing journey that requires continuous reflections, open dialogue, and actionable steps. The complexities surrounding decolonization were explored in relation to factors such as economic gains and the question of who benefits from the current digital landscape.
Lastly, the need to strive for a digital space that is inclusive and empowers all individuals, regardless of their background or geographical location, was highlighted. This vision of a future in which the internet becomes a force of equality, justice, and liberation motivates efforts towards digital inclusivity and empowerment.
In conclusion, the discussions explored various critical aspects related to legal diversities, accessibility, privacy, and economic patterns. They underscored the importance of addressing these issues globally, recognizing privacy as a universal right, understanding local needs, bridging the digital divide, and advocating for a decolonized digital space. The overall emphasis was on promoting inclusivity, reducing inequalities, and fostering empowerment in the digital age.
Jonas Valente
The analysis highlights several important points from the speakers’ discussions. Firstly, it is noted that the development and deployment of artificial intelligence (AI) heavily rely on human labor, particularly from countries in the global South. Activities such as data collection, curation, annotation, and validation are essential for AI work. This dependence on human labor underscores the important role that workers from the global South play in the advancement of AI technologies.
However, the analysis also reveals that working conditions for AI labor are generally precarious. Workers in this industry often face low pay, excessive overwork, short-term contracts, unfair management practices, and a lack of collective power. The strenuous work schedules in the sector have also been found to contribute to sleep issues and mental health problems among these workers. These challenges highlight the need for improved working conditions and better protections for AI labor.
One positive development in this regard is the Fair Work Project, which aims to address labor conditions in the AI industry. The project evaluates digital labor platforms against a set of fair work principles. Currently operational in almost 40 countries, the Fair Work Project rates platforms on their adherence to five principles: fair pay, conditions, contracts, management, and representation. This initiative seeks to improve conditions and drive positive change within the AI labor market.
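As a rough illustration of how a principle-based rating could be aggregated, the toy Python sketch below scores a platform out of ten by awarding up to two points per principle. The point scale and the evidence values are assumptions made for illustration; this is not the Fair Work Project's published methodology.

```python
# The five principles named above; the 0-2 point scale is an assumption
# for illustration, not the Fair Work Project's actual scoring rules.
PRINCIPLES = ["pay", "conditions", "contracts", "management", "representation"]

def rate_platform(evidence):
    """Sum clamped per-principle points into an overall score out of 10."""
    return sum(max(0, min(evidence.get(p, 0), 2)) for p in PRINCIPLES)

# Hypothetical assessment of one platform.
example = {"pay": 1, "conditions": 0, "contracts": 2,
           "management": 1, "representation": 0}
print(rate_platform(example), "out of", 2 * len(PRINCIPLES))  # -> 4 out of 10
```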
Another concern raised in the analysis is the exploitation of cheap labor within the development of AI. Companies benefit from the use of digital labor platforms that bypass labor rights and protections, such as minimum wage and freedom of association. This trend, which is becoming more common in data services and AI industries, highlights the need for a greater emphasis on upholding labor rights and ensuring fair treatment of workers, particularly in the global South.
Furthermore, the analysis underscores the importance of considering diversity and local context in digital technology production. Incorporating different cultural expressions and understanding the needs of different populations are key factors in creating inclusive and fair digital labor platforms and global platforms. By doing so, the aim is to address bias, discrimination, and national regulations to create a more equitable digital landscape.
The analysis also acknowledges the concept of decolonizing digital technologies. This process involves not only the use of digital technologies but also examining and transforming the production process itself. By incorporating the labor dimension and ensuring basic fair work standards, the goal is to create a structurally different work arrangement that avoids exploitation and supports the liberation of oppressed populations.
In conclusion, the analysis highlights the challenges and opportunities surrounding AI labor and digital technology production. While the global South plays a crucial role in AI development, working conditions for AI labor are often precarious. The Fair Work Project and initiatives aimed at improving labor conditions are prominent in the discussion, emphasizing the need for fair treatment and better protections for workers. Additionally, considerations of diversity, local context, and the decolonization of digital technologies are crucial in creating a more inclusive and equitable digital landscape.
Tevin Gitongo
During the discussion, the speakers emphasised the importance of decolonising the digital future in order to ensure that technology benefits people and promotes a rights-based democratic digital society. They highlighted the need for creating locally relevant tech solutions and standards that address the specific needs and contexts of different communities. This involves taking into consideration factors such as cultural diversity, linguistic preferences, and social inclusion.
The importance of stakeholder collaboration in the decolonisation of digital rights was also emphasised. The speakers stressed the need to involve a wide range of stakeholders, including government, tech companies, fintech companies, academia, and civil society, to ensure that all perspectives and voices are represented in the decision-making process. By including all stakeholders, the development of digital rights frameworks can be more inclusive and reflective of the diverse needs and concerns of the population.
Cultural context was identified as a crucial factor to consider in digital training programmes. The speakers argued that training programmes must be tailored to the cultural context of the learners to be effective. They highlighted the importance of working with stakeholders who have a deep understanding of the ground realities and cultural nuances to ensure that the training programmes are relevant and impactful.
The speakers also discussed the importance of accessibility and affordability in digital training. They emphasised the need to bridge the digital divide and ensure that training programmes are accessible to all, regardless of their economic background or physical abilities. Inclusion of people with disabilities was specifically noted, with the speakers advocating for the development of digital systems that cater to the needs of this population. They pointed out the assistance being provided in Kenya to develop ICT standards for people with disabilities, highlighting the importance of inclusive design and accessibility in digital training initiatives.
Privacy concerns related to personal data were identified as a universal issue affecting people from both the global north and south. The speakers highlighted the increasing awareness and concern among Kenyans about the protection of their data, similar to concerns raised in European countries. They mentioned the active work of the Office of the Data Protection Commissioner in Kenya in addressing these issues, emphasising the importance of safeguarding individual privacy in the digital age.
The speakers also emphasised the need for AI products and services to be mindful of both global and local contexts. They argued that AI systems should take into account the specific linguistic needs and cultural nuances of the communities in which they are used. The speakers raised concerns about the existing bias in AI systems that are designed with a focus on the global north, neglecting the unique aspects of local languages and cultures. They stressed the importance of addressing this issue to bridge the digital divide and ensure that AI is fair and effective for all.
Digital literacy was highlighted as a tool for decolonising the internet. The speakers provided examples of how digital literacy has empowered individuals, particularly women in Kenya, to use digital tools for their businesses. They highlighted the importance of finding people where they are and building on their existing skills to enable them to participate more fully in the digital world.
One of the noteworthy observations from the discussion was the need to break down complex information, such as terms and conditions, to ensure that individuals fully understand what they are agreeing to. The speakers noted that people often click on “agree” without fully understanding the terms and emphasised the importance of breaking down the information in a way that is easily understandable for everyone.
Overall, the discussion emphasised the need to decolonise the digital future by placing people at the centre of technological advancements and promoting a rights-based democratic digital society. This involves creating inclusive tech solutions, collaborating with stakeholders, considering cultural context in training programmes, ensuring accessibility and affordability, addressing privacy concerns, and bridging the digital divide through digital literacy initiatives. By adopting these approaches, it is hoped that technology can be harnessed for the benefit of all and contribute to more equitable and inclusive societies.
Shalini Joshi
The analysis highlights several important points related to artificial intelligence (AI) and technology. Firstly, it reveals that AI models have inherent biases and promote stereotypes. This can result in inequalities and gender biases in various sectors. Experiments with generative AI have shown biases towards certain countries and cultures. In one instance, high-paying jobs were represented by lighter-skinned, male figures in AI visualisations. This not only perpetuates gender and racial stereotypes but also reinforces existing inequalities in society.
Secondly, the analysis emphasises the need for transparency in AI systems and companies. Currently, companies are often secretive about the data they use to train AI systems. Lack of transparency can lead to ethical concerns, as it becomes difficult to assess whether the AI system is fair, unbiased, and accountable. Transparency is crucial to ensure that AI systems are developed and used in an ethical and responsible manner. It allows for scrutiny, accountability, and public trust in AI technologies.
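Transparency about training data is also what makes even simple bias audits possible. The Python sketch below computes how demographic labels are distributed among samples for one occupation in a hypothetical labelled dataset; the metadata schema and values are invented for illustration only.

```python
from collections import Counter

# Invented metadata for a labelled image dataset (illustrative schema).
samples = [
    {"occupation": "engineer", "perceived_gender": "male"},
    {"occupation": "engineer", "perceived_gender": "male"},
    {"occupation": "engineer", "perceived_gender": "female"},
    {"occupation": "nurse",    "perceived_gender": "female"},
]

def representation_by_group(data, occupation):
    """Share of each demographic label among samples for one occupation."""
    counts = Counter(s["perceived_gender"]
                     for s in data if s["occupation"] == occupation)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()} if total else {}

# A heavily skewed distribution is one concrete, checkable signal of the
# representational bias described above.
print(representation_by_group(samples, "engineer"))  # e.g. {'male': 0.66..., 'female': 0.33...}
```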
Furthermore, the analysis points out that AI-based translation services often overlook hundreds of lesser-known languages. These services are usually trained with data that uses mainstream languages, which results in a neglect of languages that are not widely spoken. This oversight undermines the preservation of unique cultures, traditions, and identities associated with these lesser-known languages. It highlights the importance of ensuring that AI technologies are inclusive and consider the diverse linguistic needs of different communities.
Additionally, the analysis reveals that women, trans people, and non-binary individuals in South Asia face online disinformation that aims to marginalise them further. This disinformation uses lies and hate speech to silence or intimidate these groups. It targets both public figures and everyday individuals, perpetuating gender and social inequalities. In response to this growing issue, Meedan, the technology nonprofit discussed in the session, is implementing a collaborative approach to identify, document, and counter instances of gender disinformation. This approach involves a diverse set of stakeholder groups in South Asia and utilises machine learning techniques to efficiently locate and document instances of disinformation.
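As a hedged sketch of the kind of machine-learning triage described here, the toy classifier below ranks incoming posts by their similarity to previously documented disinformation so that human reviewers see the likeliest matches first. This is not Meedan's actual system: the corpus is an invented placeholder, and in practice such a model only triages content for review rather than deciding by itself what counts as disinformation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: 1 = previously documented disinformation, 0 = benign.
posts = [
    "she is unfit for office and is lying about her past",
    "candidates debated the new water policy today",
    "do not trust her, women like her always deceive voters",
    "the council published its budget report this morning",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score incoming posts so the likeliest matches reach human reviewers first.
incoming = ["her kind should never be allowed to speak in public"]
print(model.predict_proba(incoming)[:, 1])  # probability of the '1' class
```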
The analysis also highlights the importance of involving local and marginalised communities in the development of data sets and technology creation. It emphasises that hyperlocal communities should be involved in creating data sets, as marginalised people understand the context, language, and issues more than technologists and coders. Inclusive processes that include people from different backgrounds in technology creation are necessary to ensure that technology addresses the needs and concerns of all individuals.
In conclusion, the analysis underscores the pressing need to address biases, promote transparency, preserve lesser-known languages, counter online disinformation, and include local and marginalised communities in the development of technology. These steps are crucial for creating a more equitable and inclusive digital world. By acknowledging the limitations and biases in AI systems and technology, we can work towards mitigating these issues and ensuring that technology is a force for positive change.
Pedro de Perdigão Lana
The analysis highlights several concerns about Internet regulation and its potential impact on fragmentation. It argues that governmental regulation, driven by the concept of digital colonialism, poses a significant threat to the Internet. This is because such regulations are often stimulated by distinctions that are rooted in historical power imbalances and the imposition of laws by dominant countries.
One example of this is seen in the actions of larger multinational companies, which subtly impose their home country's laws on a global scale, disregarding national laws. For instance, the Digital Millennium Copyright Act (DMCA) is cited as a means by which American copyright rules have been extended globally. This kind of imposition by multinational companies can undermine the sovereignty of individual nations and lead to a disregard for their own legal systems.
However, the analysis also recognizes the importance of intellectual property in the discussions surrounding Internet regulations. In Brazil, for instance, a provisional measure was introduced to create barriers for content moderation using copyright mechanisms. This indicates that intellectual property is a crucial topic that needs to be addressed in the context of Internet regulations and underscores the need for balance in protecting and respecting intellectual property rights.
Another important aspect highlighted is platform diversification, which refers to the adaptation of platforms to individual national legislation and cultural contexts. It is suggested that platform diversification, particularly in terms of user experience and language accessibility, may act as a tool to counter regulations that could lead to fragmentation of the Internet. By ensuring that platforms can adapt to different national legislations, tensions can be alleviated, and negative effects can be minimized.
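One way to picture this diversification is configuration that selects jurisdiction-appropriate legal framing instead of exporting a single home-country regime everywhere, as in the Python sketch below. The mapping is a simplified assumption for illustration, not a real platform configuration or legal advice.

```python
# Illustrative mapping from jurisdiction to the copyright doctrine under
# which counter-notifications would be assessed; a simplified assumption.
COPYRIGHT_FRAMEWORK = {
    "US": "fair use (17 U.S.C. § 107)",
    "BR": "limitations and exceptions (Lei 9.610/98)",
}

def takedown_notice_text(country_code):
    framework = COPYRIGHT_FRAMEWORK.get(country_code)
    if framework is None:
        # Flag for local review rather than silently defaulting to US law.
        return "Framework for this jurisdiction not yet localized."
    return f"Counter-notification rights are assessed under: {framework}."

print(takedown_notice_text("BR"))
```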
Pedro, one of the individuals mentioned in the analysis, is portrayed as an advocate for the diversification of internet content and platforms. Pedro presents a case in which internet content-based platforms extended US copyright laws globally, enforcing an alien legal system. Thus, diversification is seen as a means to counter this threat of fragmentation and over-regulation.
The analysis also explores the concern of multinational platforms and their attitude towards the legal and cultural specificities of the countries they operate in. While it is acknowledged that these platforms do care about such specifics, the difficulty of measuring the indirect and long-term costs associated with this adaptation is raised.
Furthermore, the discrepancy in the interpretation of human rights across cultures is highlighted. Human rights, including freedom of expression, are not universally understood in the same way, leading to different perspectives on issues related to Internet regulation and governance.
The importance of privacy and its differing interpretations by country are also acknowledged. It is suggested that privacy interpretations should be considered in managing the Internet to strike a balance between ensuring privacy rights and maintaining a safe and secure digital environment.
The analysis concludes by emphasizing the need for active power sharing and decolonization of the digital space. It underscores that preserving the Internet as a global network and a force for good is crucial. The failure of platforms to diversify and respect national legislation and cultural contexts is seen as a factor that may lead to regional favoritism and even the potential fragmentation of the Internet.
In summary, the analysis highlights the concerns about Internet regulation, including the threats posed by governmental regulation and the subtle imposition of home country laws by multinational companies. It emphasizes the importance of intellectual property in the discussions surrounding Internet regulations, as well as the potential benefits of platform diversification. The analysis also highlights the need for active power sharing, the differing interpretations of human rights, and considerations for privacy. Overall, preserving the Internet as a global network and ensuring its diverse and inclusive nature are key priorities.
Moderator
The analysis delves into the various aspects of the impact that AI development has on human labour. It highlights the heavy reliance of AI development on human labour, with thousands of workers involved in activities such as collection, curation, annotation, and validation. However, the analysis points out that human labour in AI development often faces precarious conditions, with insufficient arrangements regarding pay, management, and collectivisation. Workers frequently encounter issues like low pay, excessive overwork, job strain, health problems, short-term contracts, precarity, unfair management, and discrimination based on gender, race, ethnicity, and geography. This paints a negative picture of the working conditions in AI production networks, emphasising the need for improvements.
The distribution of work for AI development is another area of concern, as it primarily takes place in the Global South. This not only exacerbates existing inequalities but also reflects the legacies of colonialism. Large companies in the Global North hire and develop AI technologies using a workforce predominantly from the Global South. This unbalanced distribution further contributes to disparities in economic opportunities and development.
The analysis also highlights the influence of digital sovereignty and intellectual property on internet regulation. It argues that governments often regulate the internet under the pretext of digital sovereignty, reacting to the way larger nations' legal systems are extended to every corner of the globe. This practice is described as digital colonialism, whereby multinational companies subtly impose alien legislation that does not adhere to national standards. Intellectual property law, such as the DMCA, is cited as an example of this behaviour. To counter this, the analysis suggests that diversification of internet content and platforms can be an essential tool, safeguarding against regulations that may result in fragmentation.
Furthermore, the analysis emphasises the need for documentation and policy action against gender disinformation in South Asia. Women, trans individuals, and non-binary people are regularly targeted in the region, with disinformation campaigns aimed at silencing marginalised voices. Gender disinformation often focuses on women in politics and the public domain, taking the form of hate speech, misleading information, or character attacks. The mention of Meedan's development of a dataset focused on gender disinformation indicates a concrete step towards understanding and addressing this issue.
Digital literacy and skills training are highlighted as important factors in bridging the digital divide and empowering marginalised communities. The analysis emphasises the importance of democratising access to digital education and ensuring that training is relevant and contextualised. This includes providing practical knowledge and involving the user community in the development process. Additionally, the analysis calls for inclusive digital training that takes into consideration the needs of persons with disabilities and respects economic differences.
The analysis also explores the broader topic of decolonising the internet and the role of technology in societal development. It suggests that the decolonisation of digital technologies should involve not only the use of these technologies but also the production process. There is an emphasis on the inclusion of diverse perspectives in technology creation and data analysis to avoid biases and discrimination. The analysis also advocates for the adaptation of platform policies to respect cultural differences and acknowledge other human rights, rather than solely adhering to external legislation.
In conclusion, the analysis provides a comprehensive assessment of the impact of AI development on human labour, highlighting the precarious conditions faced by workers and the unequal distribution of work. It calls for improvements in labour conditions and respect for workers’ rights. The analysis also raises awareness of the need to document and tackle gender disinformation, emphasises the importance of digital literacy and skills training for marginalised communities, and supports the decolonisation of the internet and technology development. These insights shed light on the challenges and opportunities in ensuring a more equitable and inclusive digital landscape.
Session transcript
Moderator:
Hi, good morning, good afternoon, and good evening to all those who are joining us on site or online. Welcome to our workshop. This workshop is called Decolonize Digital Rights for a Globally Inclusive Future. Before we begin, I would like to encourage both on-site and remote participants to scan the QR code that is on the screen here; the link is also being published in the Zoom chat right now, so that you can express your expectations for the session. As a reminder, I would like to request that all speakers, and audience members who ask questions during the question-and-answer round, please speak clearly and at a reasonable pace. I would also like to request that everyone participating maintain a respectful and inclusive environment in the room and in the chat. For those who wish to ask questions during the question-and-answer sessions, please raise your hands. Once I call upon you, if on site, please take the microphone at the left or the right side, clearly state your name and the country you come from, and then you can go ahead and ask the question. Additionally, please make sure that all mics and other audio devices are muted to avoid disruptions. If you would like the moderator to read out your questions or comments online, please type them in the Zoom chat, and when posting, please start and end your sentence with a question mark to indicate whether it is a question or a comment. Thank you. We may now begin our session. So, thank you for joining the session, whether you are online or on site; it is going to be a very thought-provoking session that delves into the decolonisation of the Internet. I am Mariam Job, and I will be your on-site moderator for today's session. Online, we have Nelly, who is going to be the online moderator, and we have Keolu Bojil, my sincere apologies if I mispronounce the name, who is going to be the reporter for today's session. Today, we are gathered here to confront the very uncomfortable truth that the Internet is very far from being a place where everyone is equal. There is a strong idea that all groups are equal online, but because of historical bias and power imbalances, marginalised groups continue to face barriers in the creation and design of technology. This often results in digital colonialism: the dominance of privileged groups in shaping technology design leads to discrimination in the global south, perpetuating linguistic bias and slower removal of harmful non-English content, regardless of the magnitude of hate or harm. The unequal rollout of platform responses further highlights the disparity: features such as safety check and one-click reporting options, and fact-checking measures for major elections, have been introduced in the West, whereas misinformation and disinformation continue to plague the global south. The under-representation of authors of colour on online knowledge platforms paints an equally stark picture of the inequalities that persist.
Even voice assistants designed to assist and interact with users have been found to reinforce gender biases, normalize sexual harassment, and perpetuate conversational behavior patterns imposed on women and girls. This not only limits their autonomy, but also puts them at the forefront of errors and biases. Hate speech targeting marginalized communities continues to rage online, creating a very unsafe environment for those from the global south and from marginalized communities. Users in the global south have the right to feel safe and to enjoy the same autonomy as users in the global north. In this workshop today, we are going to delve into the concept of decolonization in relation to the internet, technology, and human rights and freedoms online. Our esteemed panelists, two on site and others online, will unpack the evidence that exists of gender stereotypes, linguistic bias, and racial injustice coded into technology. They will shed light on how apps are often built on creators' opinions of what the average user should or should not prefer. Furthermore, they will offer recommendations on how online knowledge can be decentralized and how ideological influences can be delinked from the digital arena, and they will propose practices and processes that can help decolonize the internet and transform it into a truly global, interoperable space. Throughout the session, we are going to address three policy questions. One: what are the colonial manifestations of technologies, such as language, gender, media, and artificial intelligence, in the digital domain, and how do we address internet discrimination against people of color? Two: how do we address the legacies that shape the internet and determine its future? Three: how do we address the intersectionality that exists in our technology and the digital arena as a whole, and how can we better include marginalized communities in these discussions? We hope that by attending the session, participants will gain a deeper understanding of the context of decolonization in relation to the internet, learn to recognize the ways in which bias is built into our technology, and see how beliefs and systems that perpetuate stereotypes and historical prejudice are drawn into code. During the session, we aim to have a conversation about how we can decolonize technology and the digital space and pave the way for more inclusive processes. Today we are going to be hearing from a number of speakers; let me introduce them. Joining us online is Shalini Joshi, who is involved in addressing these issues; she is also the co-founder of Khabar Lahariya, India's only independent digital rural news network. And our next speaker is Ananya, who is here with us in person. She is the youth advisor of USAID's Digital Youth Council.
Over recent years, Ananya has been very active in the Global Digital Development Forum and has also been a Next Generation ICANN ambassador at ICANN 64, ICANN 68, and ICANN 76. Ananya holds a master's degree in development and labor studies from Jawaharlal Nehru University, New Delhi. This is Ananya. We have Pedro, who is joining us online as well. He is an innovation lawyer at Sistema Indústria and a PhD student at UFPR, with an LL.M. from the University of Coimbra. He is a board member of IODA, ISOC Brazil, and Creative Commons Brazil, and an organizer of the Youth LACIGF. And we have Tevin, who is here with us in person. He is from GIZ and is a tech and policy lawyer based in Nairobi, Kenya. He heads the data governance division at the DTC, GIZ Kenya, and previously worked as a data protection advisor at GIZ. He also serves as the secretary of the Kenya Privacy Professional Association. And with that, we begin the session today. We will start with Jonas, who is joining us online, for a brief presentation. Yes, Jonas, are you with us?
Jonas Valente:
Yes, yes, good afternoon. Can I get the possibility of sharing my screen?
Moderator:
Yes, please, let me see that. Can you see it?
Jonas Valente:
Yes, yes, we can see your screen now. Okay, thank you so much. Good afternoon for you, good morning for me in London, and an even earlier morning for Pedro in Brazil. It's an honor for us from the Fair Work Project to join this panel. I'm going to talk about labor conditions in AI global production networks. This is super important because, in the digital rights community, we normally look at the effects of technologies like AI, but we also need to look at the workers who are producing them. The first assumption is that AI development and deployment is heavily dependent on human labor. Unfortunately, this human labor is characterized by a set of features that make it very precarious, with very insufficient arrangements regarding conditions like pay, management, and collectivization. When we talk about data work, we talk about activities like collection, curation, annotation, and validation, and throughout all of this chain you have human labor. So when we talk about artificial intelligence, it's important to know that it's not so artificial: we need thousands of workers, and those thousands of workers are distributed all around the world. But this distribution is not random or neutral; it expresses the legacies of colonialism, with big companies in the global North hiring and developing these technologies using a workforce mainly in the global South. We can see here how the main countries are India, Bangladesh, and Pakistan. We also have a workforce in the United States or the United Kingdom, but it is mainly global South countries taking part in this, through business process outsourcers or digital labor platforms. The Fair Work Project assesses digital labor platforms against a set of principles, and we try to address the risks of platform work and the platform economy. Which risks are those? First of all, the risk of low pay. Our Cloudwork Report launched this year showed how microworkers earned around two dollars an hour, and other reports and studies show the same. Of course, in some countries, considering the currency, this may not seem so bad, but what the studies show is that those payment structures and amounts are deeply insufficient to ensure adequate and meaningful livelihoods. Another problem is excessive overwork and job strain, which leads to health issues. We have workers working 15 or 16 hours. Workers often need to switch day for night, because they need to be awake during global North time instead of their own country's time, and this leads to exhaustion, sleep problems, and various other mental health issues that we have been finding in our studies. Workers also suffer from short-term contracts and precarity. Normally, with a business process outsourcer, you have a one-month or two-month contract, and on cloudwork platforms you don't have a contract in the traditional sense, so these workers need to search for tasks all the time. Our 2022 report showed that those workers worked eight hours on unpaid tasks, and once again, this is a legacy that we see of colonial and capitalist regimes and work arrangements. Those workers suffer from unfair management and especially from discrimination: discrimination based on gender, on race and ethnicity, and on geography, where we can again see the legacies of colonialism.
Now there’s also data data work workers they face this personalized as bully, and they are subject to extreme surveillance. And finally, another risk is the lack of collective power. And of course, that this turns into more asymmetries between workers and platforms. The Fair Work Project, it’s working across all over the world, almost 40 countries. It’s coordinated by the Oxford Internet Institute and the Vis-a-vis Social Center in Berlin, and funded mainly by JSF, connected to the German government. We are assessing location-based platforms, cloud work platforms, and AI. And we have this five principles, pay, sorry about that. We have this five principles, pay conditions, contract managements, and representations. We collect data from different sources, and we rank platforms, and to finish, our AI project is looking at-
Moderator:
Jonas, please help us round up.
Jonas Valente:
Yeah, I’m rounding up, but this is the last slide. We are assessing specific AI companies, and when we try to do that, we try to show that the platform economy can be different, and to be different is part of the decolonizing process of AI technologies. Thank you so much.
Moderator:
Thank you so much, Jonas. That was quite insightful, with data to back up the fact that these are concerning issues when it comes to the decolonization of the internet. We're going to take another five-minute presentation from one of our online speakers, Shalini. Please go ahead and share your presentation.
Shalini Joshi:
Thank you. I don’t have a presentation, but I made some points for the discussion today. Thank you very much to IGF. Thank you to the organizers of this workshop. It’s a real honor to be here. I’m going to talk about the problems with AI in terms of gender, in terms of language, and I’m also going to talk about the work that Mee Dan, the organization that I work with. work with has been doing in order to address some of these issues. So as we all know that there have been experiments that have been carried out with generative AI on how different image generators visualize people from different countries and cultures. And when we look at these images, they almost always promote biases and stereotypes related to those countries and cultures. When text-to-image models were prompted to create representations of workers for high-paying jobs and low-paying jobs, high-paying jobs were dominated by subjects with lighter skin tones and were mostly male-dominated. Images that we see don’t represent the complexity and the heterogeneity and diversity of many cultures and people. We also know that AI models have inherent biases that are representative of the data sets that they are trained on. Image generators are being used for several applications and many industries and even in tools that have been designed to make forensic sketches of crime suspects. And this can cause real harm. A lot of the models that are used tend to assume a western context and the AI systems look for patterns in data on which they are trained, often looking at trends that are more dominant. And they are also designed to mimic what has come before, not create diversity. So we’re talking about inclusivity in technology. How do we ensure that AI technology is fair and representative, especially as more and more of us start using AI for the work that we are doing? Any technical solutions to solve for such bias would likely have to start with the training data that is being used. And to seek transparency from AI systems and from the companies that are involved is also really important. Because very often, these companies are very secretive about the data that they use to train their systems. There’s also the issue of language. Often, AI models are trained with data that uses mainstream languages. Often, these are languages of the colonizers. Many AI-based translation services use only major languages, overlooking hundreds of lesser-known languages. And some of these are not even lesser-known languages. So languages such as Hindi and Bengali and Swahili, which are spoken a lot by people and by many people, they also need more resources to develop AI solutions. And from a sociocultural standpoint, preserving these languages is vital, since they hold unique traditions, knowledge, and an entire culture’s identity, while protecting their richness and language diversity. So in this context, what is it that we are doing at Midan, the organization that I work with? We are a technology nonprofit. Over the last 10 years, as the internet has evolved and changed, Midan has maintained a unique position as a trusted partner and collaborator, working both with civil society organizations and with technology companies that harness the affordances of digital technology to communicate. Our approach has been consistent. We build collaborations, we build networks, and we build digital tools that make it easier for hyperlocal community perspectives to be integrated into how global information challenges are met. 
We understand that our ability to work across community, technology, and policy stakeholders is a privilege, and this is our unique contribution. We see ourselves as facilitators and enablers of change. We do this by developing open-source software that incorporates state-of-the-art ML and AI technologies, and by building coalitions. A lot of these coalitions are built around large events, such as elections, that enable skills sharing and capacity building. This multi-pronged approach strengthens collaboration and the ability of hyperlocal communities to participate in addressing global information challenges.
Moderator:
Thank you so much for that, Shalini. That was quite insightful, learning about the work that you do and about the stereotypes and biases that have been coded into our technologies for as long as we have been using the internet. If we don't tackle them, talk about them, or even realize that these stereotypes and gender biases are coded into the internet and the digital technologies we use, we have a long way to go when it comes to decolonizing the internet. We're going to take another five-minute presentation from another of our speakers, this one on site. But before we do that, I would like to share some of the comments that were made about expectations for the session. We see that people are expecting reflections, candid direction, articulation, and radical, honest manifestations. The link is still in the Zoom chat, so if you would like to add your expectations, you may still go ahead and comment. Ananya, you may go ahead, please.
Ananya Singh:
Thank you so much. First of all, let me begin by saying that I'm very happy to be here in Japan. And no, it's not just because Japan is such a beautiful country and the people here are so nice. I mean, of course they are. But also because I can finally live a day where I do not get spammed by calls from a range of companies trying to sell me their products, a bunch of coaching centers trying to enrol me in their engineering institutions with the aid of their tutors. By the way, I have a master's degree in development studies, so engineering was clearly never my choice. Or random call center agents forcing me to invest in certain deals, or just another automated customer support call trying to divert my attention from my work. The one question that always comes to my mind when my phone rings and the Truecaller app detects it as a spam call is: how did they get my number? Who gave them my number? And why did they give it to them? Why was I not asked? It is my number, and my number is connected to a ton of different data related to me, and since I own both the number and any data related to that number, I should have been asked. But I wasn't. And I'm sure we are all very familiar with those lottery emails. Come on, we have a dedicated spam folder where all those great deals, gone-in-a-day bumper offers, and their like keep lurking. So how did they choose you or me? I mean, I have never been that lucky in my entire life, by the way. So who gave them our email address? And if they found our email addresses, are they going to be very far from our residential addresses or our bank account numbers? The way we live our lives has become excessively dependent on virtual and online activities, even more so after the pandemic. For instance, social media, inbuilt GPS, health apps, taxi apps, Google searches: all of them require access to our personal data. Our details, set to public or private, are available for usage by online companies. The principal actors here capture our everyday social acts and translate them into quantifiable data, which is analyzed and used for the generation of profit. In the book The Costs of Connection, the authors Nick Couldry and Ulises Mejias also reiterate this view by emphasizing that instead of natural resources and labor, what is now being appropriated is human life, through its conversion into data. Our online identities have become a commodity which can be exploited and used for capital gains, controlling our time and usage and influencing important decisions or processes in our lives. Hence the term data colonialism. I know some people contest the usage of the term data colonialism, because historically, colonialism is unthinkable without violence, the takeover of lands and populations by sheer physical force. That's true. But let's take the example of the Spanish empire's requerimiento, or demand document. It was made to inform the natives of the colonists' right to conquest. Conquistadors read this document out, demanding the natives' acceptance of their new conditions, in Spanish, which no local understood. Now think of the terms of service we sign up to every time we join a platform. They are often unclear, long, and full of jargon, which we rarely have the time to read, and so automatically, almost like a reflex, we click "I agree". But do we really agree? Unknowingly, we are giving consent to being tracked online, to being called at odd hours to be sold insurance policies for our children.
By the way, I don’t have it. And hence our ignorance, our implied or uninformed consent for these kinds of data collection provides a very valuable yet free raw material. data. Once a senior official from a very famous company stated that data is more like the sunlight than oil, meaning a resource that can be harvested sustainably for the benefit of humanity. But this very idea makes my personal data a non-excludable natural resource available for public use. But does it not contradict the very word personal in personal data? Okay, I’ll leave you with that..
Moderator:
Thank you. She’s the only person who’s been on time since this session started. Thank you, thank you very much for that Ananya. We’re gonna take a five-minute as well, a five-minute presentation from Pedro who’s joining us online. Pedro, are you online? Yes, yes, we can hear you.
Pedro de Perdigão Lana:
That’s great. So good afternoon everyone. I hope you’re all well. I’m greeting you from a 4 a.m. pre-holiday morning here in Brazil. But to get to the presentation, what I want to comment on with you today during the session, yes just let me put a time here, there we go, is the results of a research project funded by a Latin program focused on youth named Lideres 2.0. It is an amazing program with many interesting and diverse phases and I recommend you all to seek more information about it, maybe as a way to repeat the idea in your regions. And for the sake of time, back to the real content of the presentation. The idea here is simple, linking sovereignty, fragmentation, regulation as a reaction and the theme that I try to force into everything that I research, intellectual property. So governmental regulation is probably one of the most important threats we have to the internet when we are talking specifically about the dangers of fragmentation, but it’s important to see what is behind this regulatory proposal, or to be more precise, what serves as justification for these movements. The argument that I will try to put forward here is that even when this is not the real reason that motivates public authorities, especially when I’m talking about authoritarian ones, hard regulation based on digital sovereignty arguments is frequently stimulated by distinctions that are originated in what we call digital colonialism, be it from multinational tech companies or countries who have much more steering power on modeling the internet than others, even if that’s not implemented in such a direct and explicit manner. We can see this when those larger multinational companies end up extending the legal systems of their home countries to every corner of the globe, subtly imposing alien legislation even when it doesn’t follow the standards of the national laws that actually apply. This is where intellectual property comes in. The Digital Millennium Copyright Act, or DMCA, a result of the copyright reform for the Information Society in the USA, establishes systems of notification and counter-notification and other mechanisms that are severely favorable to the rights holder, the copyright rights holder, and the largest content-based platform seems to have repeated those systems all over the planet, sometimes, of course, with great support from the international lobby of the American entertainment industry. Similarly, when I go to a Brazilian page, for example, that responds to allegations of copyright infringements on these content-based platforms, I will almost always see explanations on how Fair Use works, which is an institute that simply doesn’t exist in the Brazilian legal system, since this is a country that adopts a system of limitation exceptions for permitted users of copyrighted works. Of course, this example maybe seems strange to some. So, how many people actually care about intellectual property when compared to discussions such as disinformation or freedom of expression? But apart from the effects that all these areas are umbilically linked. In Brazil, for example, we even have a provisional measure which is something like an executive bill that intended to create obstacle for content moderation through copyright mechanisms. The most important point here is just to simplify a much broader behavior that attracts a lot of negativization and may be instrumentalized by ill-intended actors. 
If a multinational platform doesn't even care about conveying an image that it follows something as central to the idea of sovereignty as national legislation, you can only imagine what a springboard this is for movements that want to showcase the transnational interactions made possible through the internet as something dangerous or something that needs to be controlled. Summing this up: internet content and platform diversification, in terms of user experience, language accessibility, et cetera, is not the same as fragmentation. Not only is it not the same, but this diversification, if platforms actually adapt to certain cultural contexts, may be an important tool against pushes for regulation that could result in fragmentation. So, back to you.
Moderator:
Thank you, thank you so much for that, Pedro. That was quite insightful. And now we'll take our last opening remarks from Tevin, from GIZ Kenya. Yes, it's working now.
Tevin Gitongo:
Okay, good afternoon, everyone. My name is Tevin Mwenda Gitongo, and I think we've had quite a number of presentations; mine is going to take a different tangent. Mine is going to show you how we are trying to decolonize the digital future. We've heard all the things that are happening, and sometimes it sounds scary, so ours is more of: let's try and actually solve it, let's put our money where our mouth is. I'm going to give a short presentation of the project we're working on at GIZ. As you've heard, I work for GIZ Kenya under the Digital Transformation Center, which is a project supported by the German government and Team Europe, working together with the Kenyan government, specifically the Ministry of ICT. In our own little way, I can't say we are perfect, but we are trying to see how we can do this across different aspects. One thing we must recognize is that, although we've had a lot of presentations on AI, decolonizing the digital rights future is not just about AI; it has to be every other facet that builds up to the AI as well. And that's what we are trying to do in our own small way. The objective of the project, as you can see, is to support Kenya's digital transition towards a sustainable and human-centered digital economy. There are three visions and missions, but I'm going to look at the two major ones that concern this panel. The first is that we recognize that we must make technology work for people. Across the presentations you've heard, that's maybe where we are really going wrong, particularly in developing countries: the technology being made, at some point, is not working as it was ideally intended. The other one is to enable a rights-based and democratic digital society. So what approach did we decide to take with this, I can say, interesting experiment? On one hand, to leapfrog Kenya's digital economy, we work with the local digital innovation ecosystem to build capacities on data protection and IT security, to foster a data-driven economy, and to work towards decent job creation in the gig economy; all of this builds up together to enable that. The other thing that we've done is to build Kenya's digital society, and this means exploring emerging tech like AI, digitalizing public services in a user-centric way so we don't leave anyone behind, and building capacities on data protection. We also focus on bridging the digital divides by ensuring no one is left behind: the youth, women, rural and urban populations, and also persons with disabilities. As for the approach we took, what you see on the side are all our stakeholders. We try to apply the multistakeholder spirit of the IGF in the everyday work that we do, because at any one moment, like in my work, I deal with all of those stakeholders. One of the best ways to actually achieve a future where you can decolonize digital rights is to leave no one behind.
So, we have governments in our teams, we have private sector, we have civil society, and we have academia. We have the big tech companies, and we have the local tech companies. We have a team of about sixty to seventy people, and we’ve done quite a number of studies; I’ll mention the two major ones that are relevant to this. The first one was a study on a human-centered cybersecurity approach. If you know Kenya, we are known as a fintech powerhouse in terms of the work that we do there, and out of that, we’ve got a couple of other things that we’ve done. The other thing that we’ve done is data protection and privacy from a gender perspective, and I think that’s important, because we always forget that the most vulnerable groups, particularly when it comes to data protection, in most cases are women. So we decided to look at data protection and privacy from a gender perspective and how to enable participation online. The next thing that we did, and I’m going to jump to our other one, is strengthening gig workers. So every year we publish a report where we rank digital labor platforms under the ILO Fair Work principles and how they are performing. And the other one, when it comes to AI and leaving no one behind, maybe the one that I’m always excited about, is building local solutions. And one of the things that we did, for example, working with Kenyan entrepreneurs and Kenyan coders, is we are now creating chatbots, like the versions you see from OpenAI, but these ones are locally created. They’re able to speak English, Swahili, and a mixture of English and Swahili. And in that way, some of these products that are created are geared towards the people they serve, and they’re able to help. And also, in relation to PWDs, we developed the first-ever continent-wide ICT accessibility standards. So those are just some of the ways that we are trying to, I can say, decolonize digital rights. And I was just showing an overview of it all. Thank you very much.
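Tevin’s locally built chatbots hinge on one concrete engineering step: deciding whether an incoming message is English, Swahili, or a mixture of the two before choosing a response path. The following is a minimal, hypothetical Python sketch of that routing step; the stopword lists, greetings, and function name are invented for illustration, and a production system would use a trained language-identification model rather than stopword counting.

```python
import re

# Tiny, illustrative stopword lists; invented for this sketch.
ENGLISH_STOPWORDS = {"the", "is", "are", "how", "what", "my", "i", "you", "about"}
SWAHILI_STOPWORDS = {"ni", "na", "je", "yangu", "wewe", "habari", "tafadhali", "nini"}

def detect_language(text: str) -> str:
    """Guess 'en', 'sw', or 'mixed' from stopword overlap."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    en_hits = len(words & ENGLISH_STOPWORDS)
    sw_hits = len(words & SWAHILI_STOPWORDS)
    if en_hits and sw_hits:
        return "mixed"  # code-switched messages match both lists
    return "sw" if sw_hits > en_hits else "en"

GREETINGS = {
    "en": "Hello! How can I help you with data protection today?",
    "sw": "Habari! Nikusaidie vipi kuhusu ulinzi wa data leo?",
    "mixed": "Karibu! Feel free to ask in English, Swahili, or both.",
}

if __name__ == "__main__":
    for msg in ["How is my data used?",
                "Je, data yangu inatumikaje?",
                "Tafadhali explain how consent works"]:
        lang = detect_language(msg)
        print(f"{lang}: {GREETINGS[lang]}")
```

The point of the sketch is the failure mode it exposes: messages that mix Swahili and English match both lists at once, which is exactly why locally built models, trained by developers with indigenous knowledge of how the languages are actually mixed, matter here.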
Moderator:
Thank you. Thank you very much for that, Tevin. I think, you know, our collective efforts are always very much needed on these kinds of issues. Our panelists have shed light on the concept of decolonization in relation to the Internet, technology, and human rights and online freedoms. I think it’s time that we engage in discussions that go deeper into these concepts and explore the synergies and trade-offs that are involved. Our objective really is to understand how we can harness these innovations and, you know, responsibly create something more sustainable and equitable for a globally inclusive digital future. We will start with Jonas, who is online. Jonas, what are some of the ways in which cheap labor from the global south powers contemporary digital products and services?
Jonas Valente:
Cheap labor is key for all AI development, and this is why lots of companies are using digital labor platforms: these platforms circumvent social protections and basic labor rights, and sometimes we’re talking about 19th-century rights like minimum wage or freedom of association. Using that, those companies can benefit from cheap labor, and those workers unfortunately are not being properly compensated, do not have health and safety protection measures, and don’t have the rights that were won from the 19th century to the 20th century. Unfortunately, this is becoming the rule in the data-services global value chains, including AI, and that’s why we need to address this issue and talk about how to ensure those labor rights for workers all around the world, focusing specifically on what’s happening in the global south.
Moderator:
Thank you, Jonas. I have a question for Shalini, but before I go on to that, I have a follow-up question for you, Jonas. Why are these conditions so bad, and how is the Fair Work project working to improve them? Jonas, you have the floor.
Jonas Valente:
Currently, the regulatory efforts are only addressing on-location platforms.
Moderator:
Okay, we’re talking about the Internet Governance Forum and, you know, we’re having internet issues online. Okay, so we’re going to go ahead and move to Shalini, since there’s internet blockage over there. Shalini, you mentioned some of the work you do at Meedan during your opening remarks; what forms do online hate and falsehoods take in the APAC region?
Shalini Joshi:
Thanks. I’m going to focus on the issue of gender in the Asia-Pacific region, and specifically on South Asia. So women, trans people, and non-binary people in South Asia are regularly targeted with online disinformation, and this disinformation is propagated in an attempt to silence already marginalized individuals and make it difficult for them to safely participate in public discourse. Much of the work on gender disinformation covers women in politics and those in the public domain. Research also shows that the narrow definitions of gender disinformation and the current focus on women public figures are sometimes sidelining affected girls, women, and gender minorities who do not have a public presence. Gender disinformation, as we know, can take many forms. That includes hate speech, intentionally misleading information and rumors, attacks on the character and affiliations of people, and attacks on the private and public lives of people, which impacts people in such a way that they are either self-censoring, removing their social media content, or living in hiding. There are direct and indirect threats to their lives, and also a general enforcing of stereotypes of vulnerability. So what we’re trying to do at Meedan is develop a data set on instances of gender disinformation to build more evidence for supporting research and policy action. And we have brought together a diverse set of stakeholder groups in South Asia to work collaboratively to define gender disinformation from a South Asian perspective, and to identify, document, and annotate a high-quality data set of gender disinformation and hate in online spaces for better understanding and countering the issue. We’re going to use machine learning techniques in the process. And as we document more instances of gender disinformation online, we feel that the technology that we use will also become better at locating additional content, thereby creating a virtuous cycle.
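To make the “virtuous cycle” Shalini describes concrete, here is a minimal sketch, in Python with scikit-learn, of how a human-annotated seed set could train a classifier that then surfaces likely new instances for human review, with confirmed instances flowing back into the training data. This is not Meedan’s actual pipeline; the example posts, labels, and threshold are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented seed set: 1 = gendered disinformation/hate, 0 = benign.
seed_texts = [
    "she only got elected because of who she knows",
    "women like her should not be allowed online",
    "the ministry announced a new budget today",
    "heavy rain is expected across the region this week",
]
seed_labels = [1, 1, 0, 0]

# Train a simple text classifier on the annotated examples.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_texts), seed_labels)

# Score unlabeled posts; the likeliest hits go to human annotators, and
# confirmed instances are added to the seed set for the next training round.
unlabeled = [
    "women like the candidate should stay out of politics",
    "parliament sits again on monday",
]
scores = model.predict_proba(vectorizer.transform(unlabeled))[:, 1]
for text, score in sorted(zip(unlabeled, scores), key=lambda pair: -pair[1]):
    label = "REVIEW" if score > 0.5 else "skip"
    print(f"{label} ({score:.2f}): {text}")
```

Each pass through this loop grows the annotated data set, which is what lets the model become better at locating additional content over time.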
Moderator:
Thank you, Shalini. Thank you for that. When you started answering the question, I was going to ask a follow-up question about some of the best practices and measures that you have put in place to counter online hate that targets marginalized communities, and, in your context, women. But you answered that when you were talking about the data set that you are developing. So, thank you for that. Ananya, in your opening remarks you talked a lot about data and how it really affects everything; it’s the key, it’s oil. So, what are some of the implications of data colonialism and surveillance capitalism on digital rights? And how can individuals and communities reclaim control over their personal data, which they sometimes are not even aware they are giving out? And how do they protect their privacy in the digital realm?
Ananya Singh:
Yes, apparently it’s no longer oil, but sunlight. Well, historically, the era of colonialism was ushered in by boats that came to the new world to expand empires through infrastructure building and precious-metals extraction. Now, like every other thing, colonialism is also going digital. It establishes extensive communication networks, like social media, and harvests the data of millions to influence things as simple as advertising and as critical as elections. Data colonialism justifies what it does as an advance in scientific knowledge, personalized marketing, or rational management, just as historical colonialism claimed itself to be a civilizing mission. But yes, some things have changed. If historical colonialism annexed territories, their resources, and the bodies that worked on them, data colonialism’s power grab clusters around the capture and control of human life itself, through appropriating the data that can be extracted from it for profit. Data colonialism is global, playing out in both the global north and the global south, dominated by powerful forces in both the east and the west. Unfortunately, regardless of who directs these practices or where they take place, they often lead to the erosion of privacy rights, as individuals’ personal data is collected, analyzed, and used without their knowledge or explicit, informed consent. And like you saw in the example that I gave you about the spam calls I get, there is little to no redressal mechanism. I mean, yes, I can block and report, but can I live happily ever after? No. Because there will be yet another company which has employed another spammer waiting to call me again to sell their policies. My data, your data, are now in the hands of so many people that it is going to be extremely difficult for us to individually trace and then erase our data. Hence this will ultimately result in a loss of autonomy and control over our own personal information. While our data may be widely dispersed, the power to capture and control our data continues to remain concentrated in the hands of a few. This can lead to a lack of transparency, accountability, and democratic control over data practices, potentially undermining individuals’ rights and freedoms. The collection and analysis of personal data can perpetuate existing inequalities, as some of my able panelists have already mentioned. Training emerging technology on biased data can lead to biases in algorithms, unfair targeting, exclusion, discrimination, and the list goes on. These practices can also be used to manipulate and influence individuals’ behaviors, opinions, and choices, threatening individuals and democracies. We have seen that happening already. Undeniably, ideologies such as connection, building communities, and personalization will keep incentivizing corporations to collect more of our personal data. Hence the only way to prevent data colonialism from further expanding is to question these very ideologies. Individually, we must prioritize data minimization: be mindful of the information we share online, and limit the amount of personal data we share with technology platforms. I personally do this by limiting my social media presence, which, by the way, is very good for your mental health as well. I like to call this digital minimalism. Further, think twice before you agree to the terms and conditions.
While it is easy to be fatigued by the almost incomprehensibly long document written in complicated language, take time to think before giving in to the impulse of clicking on I agree. So I’ll stop with that, because I don’t want to take more time than I have been allocated. Thank you.
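As one concrete illustration of the data minimization Ananya advocates, here is a hypothetical Python sketch of a service that declares the only field it genuinely needs, discards everything else at intake, and hashes the identifier it keeps. The field names and helper are invented for illustration, not drawn from any real platform’s code.

```python
import hashlib
import secrets

REQUIRED_FIELDS = {"email"}  # the only field this hypothetical service truly needs

def minimize(submission: dict) -> dict:
    """Keep only declared fields; hash the identifier before storage."""
    kept = {k: v for k, v in submission.items() if k in REQUIRED_FIELDS}
    # A random salt that is never stored makes the hash unlinkable
    # (anonymization); a service needing lawful pseudonymous linkage
    # would instead keep the salt under separate protection.
    salt = secrets.token_bytes(16)
    kept["email"] = hashlib.sha256(salt + kept["email"].encode()).hexdigest()
    return kept

signup = {
    "email": "user@example.com",
    "phone": "+254700000000",   # never requested, never stored
    "birthday": "1990-01-01",   # never requested, never stored
}
print(minimize(signup))  # only the hashed email survives intake
```

The design choice mirrors her argument: what is never collected can never be breached, traded, or used to call you back.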
Moderator:
Thank you for that, Ananya. That’s quite insightful. However, I do have a comment to make. Honestly, we want people to be at ease, comfortable, and safe on the internet, not to have to restrict themselves from using the internet or social media. So I think this is something we should come back to, maybe in another session or towards the end of this one: how we make sure that data is utilized properly, with purpose, not just for spam calls like you experienced. I will move to Pedro, who’s online. Pedro, my question to you is: do multinational platforms care about the legal and cultural particularities of the countries in which they operate?
Pedro de Perdigão Lana:
Yeah, I will try to shorten my presentation as well, so we can give the floor back to Jonas at the end of this section and he can finish his. But I do think they care, especially because generating conflicts with the geocultural and legal particularities of the markets in which you are trying to sell your services usually means lower profits, or at least higher costs. But these concerns only go as far as the immediate costs of adaptation can be considered not too high, and this is a problem when you consider the difficulty of measuring the indirect and long-term costs that platforms would certainly suffer in a fragmentation scenario. For example, the platforms investigated in the research project translated their main pages about intellectual property policies, but when you browse for more details, you notice that not even something as simple as translating some pages was done consistently, and interlinks often led us to English versions. One of them, which was not content-based, had only the most basic page translated.
Moderator:
And how is this reflected in the global human rights system, which, as a rule, still relies on the sovereignty of national legal systems to determine the outcome of jurisdictional conflicts?
Pedro de Perdigão Lana:
Well, I think that this reflects directly on human rights. Intellectual property is itself globally considered a human right, but what I mean here is that, although we have some international frameworks, human rights are not interpreted the same way worldwide. Freedom of expression is a good example: some cultures see it as a much broader right than others. Copyright itself may be stronger or weaker when confronted with other fundamental rights, such as education or access to culture. So if platforms need to re-inform their policies around such concepts, they should at least do it in a way that is not so clearly unbalanced towards a single perspective. Simply telling the user to look to external legislation for guidance is, quite frankly, a bit offensive, since it really wouldn’t cost that much to have someone do a quick review of the legal policies and deliver some adaptation, even a superficial one. The problem here is the image that those platforms simply do not care about some basic elements of the societies they have as markets for their services and products, especially when we see that they can evidently adapt, as one can observe with the changes made because of the German legislation called NetzDG, especially on social media.
Moderator:
All right, thank you. Thank you for that. And I will move on to Tevin here. Tevin, during his opening, talked a lot about what GIZ is doing with regard to working with communities, especially marginalized communities. And I want to ask you: how can digital literacy and digital skills training be reimagined in such a way that it empowers marginalized communities, bridges the digital divide, and ensures that everyone has the necessary tools to fully participate in the digital realm?
Tevin Gitongo:
Thank you for the question. I’ll pick up from the question you asked earlier on: do large entities care about the legal and cultural considerations? I think the lesson I’ve learned is that you have to care about the cultural considerations for you to have any impactful trainings or digital skills. You have to bring yourself to the level of the partner you want to deliver the training to and be there with them. And thinking of it practically: how do you do that? How do you actually demonstrate that you are aware of the person’s context and how you can help bring them up to where you want them to be in terms of lessening the digital divide? I normally look at it as a four-step process. The first one is the stakeholders that you work with. Because more often than not, you’re guilty of working with stakeholders who have no clue what’s happening on the ground. You know, you go there, you tell them you’re going to do this, then they tell you, yes, we’ve done a training. Then you go on the ground and you realize, oh, this was the wrong stakeholder; clearly, I did not understand what was happening in this context. And that way, the training doesn’t have impact. The next thing I look to is accessibility. And I look at that in relation to democratizing the knowledge. By this I mean: when you do a training, it should be one where you are actually transferring knowledge, not just ticking a box. I’ve seen there’s a huge difference there, because in most cases we are ticking boxes, but not actually transferring knowledge, knowledge that actually helps them grow. And one of the things we’ve done with that, and I see my colleague is also here: when we were developing the AI chatbots, because it is a skill we were trying to transfer, we brought Kenyan developers into the room, and we brought other developers, I think from Europe, who have expertise in developing such models. And we said, we want you to teach each other, not just one side teaching the other, how to develop this, because the Kenyans come with indigenous knowledge of how Swahili works, to develop an NLP model in Swahili or in a mixture of Swahili and English, while the others come with the knowledge of how to build these systems. And what happened is, after they built the first system, for the next system that we’re developing, because we’re developing another one for Kenya’s Data Protection Commissioner, it’s the Kenyans who are running the show now. It’s them who are developing everything. So you start seeing that you’re slowly reducing that gap. The next thing is affordability, of course. If you really want to create any impact, you have to create training that people can afford; it also goes back to accessibility. And lastly, inclusion of everyone. This can also be done practically, and one of the things I mentioned we assisted in developing is the ICT accessibility standards for persons with disabilities for Kenya. So whenever you’re designing a system, how do you design it for persons with disabilities so you don’t leave them behind, given that Kenya is digitizing a lot but we are forgetting that whole area? I think that’s it. Thank you.
Moderator:
Thank you so much for that, Tevin. I think that, you know, with everything that all the panelists have said, it always goes back to bridging the digital divide: digital skills, making sure that people are aware of these things, that they know how to protect themselves, that they know how to use the technology, and that they know what the issues are and how to tackle them. In any matter of Internet governance, that awareness is very important. We’re going to go back to Jonas, who had issues online, but I think we have some time that we can spare. He’s back now, and we were talking about the ways in which cheap labour from the global south powers contemporary digital products and services. Jonas, can you please tell us why these conditions are so bad, and how the Fair Work project is working on improving them?
Jonas Valente:
These conditions are bad because platforms found, I think my connection will, I think I will freeze again, I hope I don’t, because platforms found a way of circumventing labour and social protections, and by doing that, companies can hire cheap labour. That’s why we’re seeing low pay, health and safety issues, and management problems on platforms all around the world. A study has estimated 163 million online workers, so this is a very significant number of people. The Fair Work project has assessed platforms all around the world, in 38 countries, and we analysed and scored those platforms according to five principles: pay, conditions, contracts, management, and representation. So, I invite all of you to visit our website, fair.work, and you can see platforms from your country and check what they are doing, or what they are not doing, to meet basic standards of fair work.
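For illustration, here is a toy Python tabulation of how platforms might be scored against the five principles Jonas lists. The platform names and point values are invented, and the assumption of up to two points per principle for a total out of ten is a simplification; the actual methodology and scores are published at fair.work.

```python
PRINCIPLES = ["pay", "conditions", "contracts", "management", "representation"]

# Invented evidence: points awarded per principle (assumed 0-2 each).
platforms = {
    "PlatformA": {"pay": 2, "conditions": 1, "contracts": 2, "management": 0, "representation": 0},
    "PlatformB": {"pay": 0, "conditions": 0, "contracts": 1, "management": 1, "representation": 0},
}

# Rank platforms by total score and show which principles had any evidence.
for name, points in sorted(platforms.items(), key=lambda kv: -sum(kv[1].values())):
    total = sum(points[p] for p in PRINCIPLES)
    met = [p for p in PRINCIPLES if points[p] > 0]
    print(f"{name}: {total}/10 (evidence found for: {', '.join(met) or 'none'})")
```

Publishing a comparable score per platform, rather than a narrative report alone, is what lets workers, regulators, and consumers see at a glance who meets basic standards and who does not.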
Moderator:
Okay, thank you for that. I would like to thank all our speakers, both on site and online, for sharing their insights, their experiences, and the efforts that they’re working on, and I would like to open the floor for questions, both on site and online. Online moderator, do we have any questions online? If you’re on site and you have a question, you may go to one of the standing mics. State your name and the country you’re from, and go ahead with your question. We have one question on site.
Audience:
Hi, my name is Daniele Turra. I am one of the youth ambassadors from the Internet Society. I’m from Italy. So, as the panelists anticipated, I understand there are a lot of stereotypes, specific legal diversities that are not always respected, a lack of accessibility, and the need to respect privacy; all these different problems and needs are not always really respected, and all of that is because of economic patterns and interests worldwide. But some of them, for example privacy, I would argue are also global rights; we can discuss whether they are human rights as well. I would be very interested to see, let’s say, a taxonomy of specific local needs that are not respected by specific technologies of the global north when it comes to culture, history, or political characteristics. I would also like to understand which of these needs are shared with the global north and which are not. And by “not”, I mean not regarding people originally born in the Global South who later came to live in the Global North, but specifically populations who plan to thrive in their own country of origin. So the idea is to understand which needs are local and which are global.
Moderator:
Thank you. Okay, do you want Jonas to answer that, or is it open to any of the speakers? Sir? Okay, so Jonas, do you wanna take up this question?
Jonas Valente:
No, I think what I would like to say is that when we talk about this national and cultural context, what the Fair Work project is bringing is that we have one very serious problem, which has been addressed here by other speakers: the biases and discrimination faced by users. But we also need to consider what is behind the production of digital technology. And that’s why we highlight this discrimination and the consideration of the local context. For instance, when Pedro brings up the discussion on national regulations, we also need to consider the national regulations about work, and how the national and local contexts, the different populations, and the diversity of cultural expressions can be considered in their own characteristics on the internet as a whole, but especially on digital labor platforms and global platforms. That’s why I believe that this discussion that Pedro brought, and the conversation we are having now, needs to look at those diverse contexts and groups, and at the same time think about how to incorporate them not only in digital and data practices, but in the regulatory efforts as well.
Moderator:
Okay, thank you. I’m also going to invite Tevin to address the question in a very brief remark. Yes.
Tevin Gitongo:
So, maybe you asked what is local and what is international. International, I’ll say privacy. I think you’ve alluded to it. It affects everyone; it doesn’t matter if it’s the global north or the global south. We see it in Kenya every day. We have a data commissioner’s office, and we actively work with them, and the same issues that are raised in European countries are the same issues that are raised in Kenya, even when it comes to AI products. It’s: why am I getting these marketing messages, how did you take my data, issues of consent, where are you using it. Kenyans have become very interesting. They’ve been asking where you are transferring the data to. They’re asking questions that you will find anywhere, and this is not just, I can say, the urban Kenyan; it’s even the rural Kenyan. You’ll go and talk to them and they’re like, okay, I saw this application, however they told me to do this, and I’m wondering why they told me to do this. So, it’s something that everyone is aware of. In relation to local, I’ll say languages, because when you’re developing, for example, natural language processing systems, sadly most of them are geared towards the global north. The English, the pronunciation of the words, the way the language is used, is very different. It’s time we start looking at local aspects of it, especially languages, because that’s the only way you start bridging the digital divide: not everyone will speak fluent English or fluent Swahili, and you need to develop products that cater to their needs.
Moderator:
Yes, and that is going to bring me to a question for Pedro. Pedro, regarding the search for a balance of power relations between countries, is there a risk here, and how does this affect the Internet as a global network?
Pedro de Perdigão Lana:
Yeah, I would like to build upon what was just said by the previous speaker; I would use the same example, but inverted. I think language is an international issue, because even though we adapt to each country, it’s the same issue that we have around the world. Privacy, on the other side, can have different interpretations: what’s most important, what’s not. And that’s exactly what is especially dangerous when we are talking about platforms not diversifying what they are doing, and not doing that at an international level. They prefer some regions to others. So in a period when international relations are becoming increasingly tense and discourses against external threats are on the rise, it seems very easy not just to expose the true facts about how these relations work, such as how these platforms may be an instrument for expanding the influence of a certain country or even act directly on its behalf, as we learned with Snowden, but also to extrapolate this context to get support for action that presents the international nature of the Internet as a problem in itself. So doing those small things, such as translating content correctly and adapting to national legislation, may be exactly what we need to avoid having a splinternet, having the Internet as we know it severely affected in an active way.
Moderator:
Thank you. Thank you for that. We have some questions from the online participants, and I would like to call on Nelly, our online moderator, to read the questions out loud for the audience. Nelly, you may take the floor, please. Are you with us, Nelly? Okay, it seems Nelly is not with us. Any other question from the audience here, from on-site participants? Nelly, we think you’re muted. Please unmute your mic and take the floor. Technical team, can you please help us give the floor to Nelly? Unmute her mic. Okay, if there are no other questions, it seems we’re rounding up the session today. I’m actually very thrilled to invite our speakers to share their invaluable recommendations on the following. What should... Was that Nelly? Nelly, is that you? Okay, we’re gonna go ahead. If Nelly happens to unmute her mic, we’ll take questions from her. But until then, I’m going to ask our panelists here, who have shared their insights and experiences, for their recommendations regarding the following question: what should decolonizing digital rights look like? But before I give you the floor, I would also like to strongly encourage the audience to seize this opportunity to share your recommendations by scanning the QR code that’s going to be displayed on the screen shortly. And now I would like to welcome Ananya. Please go ahead, tell us: what should decolonizing the internet look like?
Ananya Singh:
I think I’m getting to go first because I finished ahead of time. All right. Well, let’s say this: my blood group is B positive. There you go. You have another one of my personal data points. Anyway, being the positive person that I apparently am, I believe that every cloud has a silver lining. So this cloud of data colonialism presents an opportunity for us, an opportunity to create ethical systems which run on the following principles. A, ownership by design, where users are provided with clear and understandable information about how their data will be collected, used, processed, shared, stored, or erased. It involves obtaining informed consent that is granular and specific, allowing individuals to make informed choices about their data. B, minimization and anonymization. Only the necessary and most relevant data is collected and processed. And wherever possible, such data is kept anonymous and encrypted. This reduces the risks associated with data breaches and unauthorized access while respecting individuals’ privacy. C, there should be an option to be forgotten, or to easily revoke consent when desired. I know there are options to be forgotten, but the option to revoke consent has been a complicated process so far. D, mechanisms for accountability and redressal in case of data breaches or privacy violations, which are hard to find today. This involves providing individuals with avenues to exercise their rights, report violations, and seek remedies for any harms. And this should and must go beyond blocking and reporting accounts. And E, I just want to finally take note of this: the whole entitled attitude that makes data colonialism possible must be done away with. Put simply, for example, I was born with a name. My name is a data point. Just because I provided my name to my school on the day of enrollment does not automatically translate into an unfettered right over unchecked use of my name for the rest of their existence. Data use is not a right, but a permission. Data reuse is not an entitlement, but once again, a permission slip. Thank you.
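Ananya’s closing point, that data use is a revocable permission rather than an entitlement, maps naturally onto a default-deny consent ledger. The Python sketch below, with invented class and method names, shows the idea: every use of a data point must cite a live, purpose-specific grant, and revocation takes effect immediately.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    # (subject, data_point, purpose) -> currently granted?
    grants: dict = field(default_factory=dict)

    def grant(self, subject: str, data_point: str, purpose: str) -> None:
        self.grants[(subject, data_point, purpose)] = True

    def revoke(self, subject: str, data_point: str, purpose: str) -> None:
        self.grants[(subject, data_point, purpose)] = False

    def may_use(self, subject: str, data_point: str, purpose: str) -> bool:
        # No grant on record means no permission: default-deny, never default-allow.
        return self.grants.get((subject, data_point, purpose), False)

ledger = ConsentLedger()
ledger.grant("ananya", "name", "school enrollment")
print(ledger.may_use("ananya", "name", "school enrollment"))  # True
print(ledger.may_use("ananya", "name", "marketing"))          # False: never granted
ledger.revoke("ananya", "name", "school enrollment")
print(ledger.may_use("ananya", "name", "school enrollment"))  # False: revoked
```

Because permission is keyed to a purpose, the school in her example could use the name for enrollment without thereby acquiring any standing to reuse it for anything else.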
Moderator:
Thank you. Thank you so much for that, Ananya. And I think we have access to Nelly now, so we’re going to take the question from her online. Nelly, you may unmute your mic, please, and ask the question to our panelists.
Audience:
Thank you for letting me turn on my mic. The question that arose from your very interesting discussion is this: how can digital literacy and skills training be reimagined to empower marginalized communities and bridge the digital divide, ensuring that everyone has the necessary tools to fully participate in the digital world?
Moderator:
Can you please repeat the question, Nellie?
Audience:
Yes, of course. How can digital literacy and skills training be reimagined to empower marginalized communities and bridge the digital divide, ensuring that everyone has the necessary tools to fully participate in the digital world?
Moderator:
Okay, Tevin is going to take the question.
Ananya Singh:
I think, just to help Tevin answer the question, it basically means how we could program and structure digital literacy programs in a way that would, I assume, help people better navigate a world which is more decolonized. So how could digital literacy aid the process of decolonizing the Internet?
Tevin Gitongo:
Yes, I think for that, I’m a proponent of, and I keep reiterating, you have to put yourself in the shoes of the person. So I’m going to give a good example. We were having a discussion recently: in Kenya, we have a lot of women who sell food and groceries in these little tuck shops; whenever you go, you go buy groceries from them. And we were thinking, how do you enable them, for example, to use digital tools to support the sale of their products? That’s what we are trying to do with our projects. It has always been us telling them, come; but now it’s, how do we go to them? And how do you go to them at their level and work with their skill, because they really have a lot of skill, and just empower that? I think that’s what the challenge and discussion should be. And it’s also something that we should be thinking about: how do you go there, how do you work with them where they are? I can’t say we have the complete answer to that. It’s a learning process, but I’m a big proponent of: find people where they are. Don’t make them come to you, because that’s more burden. You look for them and work with them from where they are. Because one of the things that study was showing, even when we were talking to them, was how much knowledge they have. They do have a lot of it. For example, one of them was telling us: you know, you click on this, and I don’t know what I’m clicking on, because it doesn’t make sense when I read it. As Ananya just said, it’s the terms and conditions, and it’s, like, 30 pages. You just say I agree, and you move on. But they are cognizant of the fact that they’re giving away their data. So perhaps it’s about coming to them, and also breaking it down to a point that they understand.
Moderator:
I like that you mentioned that, because there’s a recent principle that I learned in digital development: you have to design with the user. Because at the end of the day, if you’re looking to benefit them, they need to be actively involved in the process. You need to know what their challenges are, what their perspectives are, what they think is going to benefit them, and include that in the process. And that’s fair too. I would like to continue with the recommendations that we’re getting from our panelists about what decolonizing digital rights should look like.
Ananya has given hers, so we’re gonna move on to Jonas online, who will share his. By the way, please note that I’m actually sharing my screen, and there’s a QR code where you’re supposed to leave your comments; it should lead to the Padlet, and it should be on the screen. Yes, that one. You can just scan it, or I’ll send the link in the chat for the online participants to make their comments as well. So Jonas, please share your recommendations on what decolonizing digital rights should look like.
Jonas Valente:
Thank you so much. I would say that decolonizing digital technologies involves not only decolonizing the use of digital technologies, but also the production process. And that’s why we need to incorporate the labor dimension into our decolonization agenda. This means ensuring not only basic standards of fair work, which is what we are assessing in our project, but a radically and structurally different work arrangement, where we don’t have asymmetries between international, national, and local population groups, and where workers are not exploited anymore. So I believe we need to incorporate this into our agenda, and to quote a Latin American philosopher called Enrique Dussel: it’s not only about decolonizing, but about liberating oppressed people and creating something radically new.
Moderator:
Thank you for that, Jonas. Shalini?
Shalini Joshi:
Yeah. I’m going to be very brief and say that in order to decolonize digital rights, it’s really important to look at who’s being included in the process of creating digital tools. We have to involve hyperlocal communities in creating data sets, something that I talked about earlier as well. We also have to make sure that there are people from marginalized communities who are involved in analyzing the data, annotating the data, and actually creating the technology, because it’s these people who understand the context, the language, and the issues much better than technologists and coders and developers sitting somewhere else. So involving the people in the creation of the technology, making processes more inclusive, ensuring that many, many languages are included in the way that we analyze data, all of that is really important.
Moderator:
Thank you. Thank you for that. And Pedro, what are your recommendations?
Pedro de Perdigão Lana:
Yeah, I already talked a bit about this in my last comments, but just to be very brief: I think that platforms should try to diversify a little bit more to adapt to those local cultures and scenarios, and countries that are historically more influential in directing how the internet is modeled should actively try to share these powers, these capacities. It’s not just about decolonizing the digital space, but preserving the internet as we know it, as a global network and a force for good. So, that’s it. Thanks for your attention, everyone.
Moderator:
All right, thank you. And we’ll take our last recommendation from our final panelist, Tevin.
Tevin Gitongo:
Perhaps my recommendation will be to ask ourselves four fundamental questions. Some of them have been alluded to. The first one is: who? Who’s developing the systems? The second one is: why are they doing it? In most cases, it’s for economic gain. If we’re being honest, the baseline of this whole conversation is economic gain, and what they stand to benefit. As Ananya said, data is the new sunlight, and let’s call it the politics of data. Everyone wants to be the ruler of data now. The third is: where are they being developed? Are they being developed where the marginalized people you’re targeting are? Because someone sitting in Silicon Valley, to be quite honest, is not really thinking of me as a Kenyan using their AI product. I am the last person on their mind, because of where they are. The last thing is: what? What is it for, at the end of the day? Yeah, thank you very much.
Moderator:
Yes, that’s very true. And I think all our panelists have shared very thought-provoking and insightful experiences and expertise on this topic. As we conclude this session today, I’d like to express my sincerest gratitude to our online and onsite panelists for their expertise and thought-provoking contributions. Your insights have been instrumental in deepening our understanding of the complexities that surround the decolonization of the Internet and technology. I’d also like to thank the audience, of course, both onsite and online, for your engagement, for your questions, and for being here today. Your participation has enriched our discussions. In closing, I would like us to remember that the journey towards a decolonized Internet and digital landscape is ongoing. It’s not static; it’s not something that’s already established. It’s ongoing, and it’s a learning process. It requires continuous reflection, dialogue, and calls to action. Tevin talked about who’s benefiting, what it’s for, economic gain, and all of that. And I think that together we can strive for a digital space that is inclusive and that respects and empowers all individuals and all communities, regardless of their background, regardless of their geographical location. We have to work together to create a future where the internet truly becomes a force for equality, justice, and liberation. Thank you, and that is it for this session. Thank you all.
Audience:
So we have another session in, I think, 22 minutes. I’ll take it from there. 22 minutes, yes. So we have another session happening here at 1730 Japan Standard Time. So I hope you’ll stay with us. If you want, you can quickly grab something to drink or eat meanwhile, and if you’re going outside, I would ask you to bring in your colleagues and friends to join us for the next session. Thank you very much for attending. Thank you.
Speakers
Jonas Valente
Speech speed
157 words per minute
Speech length
1618 words
Speech time
618 secs
Arguments
AI development and deployment is heavily dependent on human labor
Supporting facts:
- Activities required in data work include collection, curation, annotation, and validation
- Workforce for AI comes from predominantly global South countries
Topics: Artificial Intelligence, Human Labor
Working conditions for AI labor are generally precarious
Supporting facts:
- Workers are subject to low pay, excessive overwork, short-term contract and precarity, unfair management and lack of collective power
- Workers also suffer from sleep issues and mental health problems arising from strenuous work schedules
Topics: Artificial Intelligence, Labor Conditions
Assessment of specific AI companies is underway
Supporting facts:
- They are trying to show that the platform economy can be different
Topics: AI technologies, Platform economy
Cheap labor is crucial for AI development, and many companies utilize digital labor platforms that bypass labor rights and protections.
Supporting facts:
- Companies benefit from cheaper labor as they do not have to pay for health and safety measures or ensure labor rights such as minimum wage or freedom of association
Topics: AI Development, Digital Labor Platforms, Labor Rights
Platforms have found a way to circumvent digital labour and social protections
Supporting facts:
- This has led to companies hiring cheap labour, resulting in low pay and health and safety issues
Topics: Digital labour, Social protections, Exploitation
The Fair Work Project is working to improve conditions
Supporting facts:
- The Fair Work Project has assessed platforms in 38 countries
- Their assessment is based on the principles of pay, conditions, contracts, management and representation
Topics: Fair Work Project, Digital labour, Work conditions
Jonas Valente emphasizes on considering national and cultural context in digital technology production
Supporting facts:
- Bias and discrimination faced by users have been addressed by other speakers
- He highlights discrimination and the importance of considering local context
- He brings up the issues of national regulations on work
Topics: Fair Work Project, digital labor platforms, global platforms, bias, discrimination
Decolonizing digital technologies involves not only the use of digital technologies but also the production process.
Topics: decolonization, digital technologies, production process
We need to incorporate the labor dimension to our decolonization agenda, ensuring basic standards of fair work and a structurally different work arrangement where workers are not exploited.
Topics: labor dimension, decolonization, fair work, exploitation
Report
The analysis highlights several important points from the speakers’ discussions. Firstly, it is noted that the development and deployment of artificial intelligence (AI) heavily rely on human labor, particularly from countries in the global South. Activities such as data collection, curation, annotation, and validation are essential for AI work.
This dependence on human labor underscores the important role that workers from the global South play in the advancement of AI technologies. However, the analysis also reveals that working conditions for AI labor are generally precarious. Workers in this industry often face low pay, excessive overwork, short-term contracts, unfair management practices, and a lack of collective power.
The strenuous work schedules in the sector have also been found to contribute to sleep issues and mental health problems among these workers. These challenges highlight the need for improved working conditions and better protections for AI labor. One positive development in this regard is the Fair Work Project, which aims to address labor conditions in the AI industry.
The project evaluates digital labor platforms based on a set of fair work principles. Currently operational in almost 40 countries, the Fair Work Project rates platforms based on their adherence to these principles, including factors such as pay conditions, contract management, and representation.
This initiative seeks to improve conditions and drive positive change within the AI labor market. Another concern raised in the analysis is the exploitation of cheap labor within the development of AI. Companies benefit from the use of digital labor platforms that bypass labor rights and protections, such as minimum wage and freedom of association.
This trend, which is becoming more common in data services and AI industries, highlights the need for a greater emphasis on upholding labor rights and ensuring fair treatment of workers, particularly in the global South. Furthermore, the analysis underscores the importance of considering diversity and local context in digital technology production.
Incorporating different cultural expressions and understanding the needs of different populations are key factors in creating inclusive and fair digital labor platforms and global platforms. By doing so, the aim is to address bias, discrimination, and national regulations to create a more equitable digital landscape.
The analysis also acknowledges the concept of decolonizing digital technologies. This process involves not only the use of digital technologies but also examining and transforming the production process itself. By incorporating the labor dimension and ensuring basic fair work standards, the goal is to create a structurally different work arrangement that avoids exploitation and supports the liberation of oppressed populations.
In conclusion, the analysis highlights the challenges and opportunities surrounding AI labor and digital technology production. While the global South plays a crucial role in AI development, working conditions for AI labor are often precarious. The Fair Work Project and initiatives aimed at improving labor conditions are prominent in the discussion, emphasizing the need for fair treatment and better protections for workers.
Additionally, considerations of diversity, local context, and the decolonization of digital technologies are crucial in creating a more inclusive and equitable digital landscape.
Pedro de Perdigão Lana
Speech speed
168 words per minute
Speech length
1536 words
Speech time
549 secs
Arguments
Regulation is a significant threat to the Internet and may lead to fragmentation
Supporting facts:
- Governmental regulation is often stimulated by distinctions that are rooted in digital colonialism.
Topics: Internet, Regulation, Fragmentation
Larger multinational companies subtly impose their home country’s laws worldwide, irrespective of national laws.
Supporting facts:
- Examples include the DMCA extending the legal systems of American copyright reform globally.
Topics: Multinational Companies, Law, Digital Colonialism
Intellectual property is a crucial topic in the discussions of Internet regulations.
Supporting facts:
- In Brazil, a provisional measure was introduced to create obstacles for content moderation using copyright mechanisms.
Topics: Internet, Regulation, Intellectual Property
Platform diversification in terms of user experience and language accessibility may act as a tool against regulation that may lead to fragmentation.
Supporting facts:
- Diversification allows platforms to adapt to individual national legislation, preventing further tension and negativization
Topics: Platform Diversification, Language Accessibility
Multinational platforms do care about the legal and cultural specificities of the countries they operate in
Supporting facts:
- Generating conflicts with geocultural and legal particularities of the markets means less profits or more costs for these platforms
Topics: Multinational platforms, Legal and cultural specificities, Market
These concerns about legal and cultural specifics go as far as the immediate costs of this adaptation can be considered not too high
Supporting facts:
- The difficulty of measuring the indirect and long-term costs are a problem in this context
Topics: Legal and cultural specificities, Market adaptation, Costs
While platforms translated their main pages about intellectual property policies, not all detailed pages or interlinks were translated properly
Supporting facts:
- One of the platforms had only the most basic page translated
Topics: Translation, Intellectual property policies, Multinational platforms
Intellectual property is itself globally considered a human right
Topics: Intellectual Property, Human Rights
Human rights are not interpreted the same worldwide.
Supporting facts:
- Freedom of expression is a good example. Some cultures see it as a much broader right than others.
Topics: Human Rights
Sovereignty of national legal systems can cause jurisdictional conflicts
Topics: Sovereignty, National legal systems, Jurisdictional conflicts
User policies of platforms ought to be balanced and not biased towards a single perspective
Supporting facts:
- Copyright itself may be stronger or weaker when confronted with other fundamental rights, such as education or access to culture. Platforms need to re-inform their policies around such concepts.
Topics: User policies, Internet platforms, Bias
Internet platforms should diversify and adapt their operations to respect national legislations and cultural contexts in different countries to avoid splinternet
Supporting facts:
- The failure of platforms to diversify and respect national legislation and cultural context leads to favouring some regions over others
- The international nature of the Internet can be seen as a problem if not handled correctly
Topics: Internet governance, Cultural diversity, Balance of power, National legislation, International relations
Platforms should diversify more and adapt to local cultures
Topics: Internet, Digital Platforms, Cultural Diversity
Countries that are historically more influential in directing how the internet is modeled should actively share powers
Topics: Internet Governance, Power Sharing, Diverse Influence
Preserving the internet as a global network and a force for good is important
Topics: Internet Governance, Digital Good, Decolonization of Digital Space
Report
The analysis highlights several concerns about Internet regulation and its potential impact on fragmentation. It argues that governmental regulation, driven by the concept of digital colonialism, poses a significant threat to the Internet. This is because such regulations are often stimulated by distinctions that are rooted in historical power imbalances and the imposition of laws by dominant countries.
One example of this is seen in the actions of larger multinational companies, which subtly impose their home country’s laws on a global scale, disregarding national laws. For instance, the Digital Millennium Copyright Act (DMCA) is mentioned as a means by which American copyright reform extends its legal systems globally.
This kind of imposition from multinational companies can undermine the sovereignty of individual nations and lead to a disregard for their own legal systems. However, the analysis also recognizes the importance of intellectual property in the discussions surrounding Internet regulations.
In Brazil, for instance, a provisional measure was introduced to create barriers for content moderation using copyright mechanisms. This indicates that intellectual property is a crucial topic that needs to be addressed in the context of Internet regulations and underscores the need for balance in protecting and respecting intellectual property rights.
Another important aspect highlighted is platform diversification, which refers to the adaptation of platforms to individual national legislation and cultural contexts. It is suggested that platform diversification, particularly in terms of user experience and language accessibility, may act as a tool to counter regulations that could lead to fragmentation of the Internet.
By ensuring that platforms can adapt to different national legislations, tensions can be alleviated, and negative effects can be minimized. Pedro, one of the individuals mentioned in the analysis, is portrayed as an advocate for the diversification of internet content and platforms.
Pedro presents a case in which internet content-based platforms extended US copyright laws globally, enforcing an alien legal system. Thus, diversification is seen as a means to counter this threat of fragmentation and over-regulation. The analysis also explores the concern of multinational platforms and their attitude towards the legal and cultural specificities of the countries they operate in.
While it is acknowledged that these platforms do care about such specifics, the difficulty of measuring the indirect and long-term costs associated with this adaptation is raised. Furthermore, the discrepancy in the interpretation of human rights across cultures is highlighted.
Human rights, including freedom of expression, are not universally understood in the same way, leading to different perspectives on issues related to Internet regulation and governance. The importance of privacy and its differing interpretations by country are also acknowledged. It is suggested that privacy interpretations should be considered in managing the Internet to strike a balance between ensuring privacy rights and maintaining a safe and secure digital environment.
The analysis concludes by emphasizing the need for active power sharing and decolonization of the digital space. It underscores that preserving the Internet as a global network and a force for good is crucial. The failure of platforms to diversify and respect national legislation and cultural contexts is seen as a factor that may lead to regional favoritism and even the potential fragmentation of the Internet.
In summary, the analysis highlights the concerns about Internet regulation, including the threats posed by governmental regulation and the subtle imposition of home country laws by multinational companies. It emphasizes the importance of intellectual property in the discussions surrounding Internet regulations, as well as the potential benefits of platform diversification.
The analysis also highlights the need for active power sharing, the differing interpretations of human rights, and considerations for privacy. Overall, preserving the Internet as a global network and ensuring its diverse and inclusive nature are key priorities.
Ananya Singh
Speech speed
168 words per minute
Speech length
1898 words
Speech time
678 secs
Arguments
Our personal data is often exploited and used for profit without our consent
Supporting facts:
- Ananya Singh recounts personal experiences of her phone number and email being used without her consent
- She cites from ‘The Costs of Connection’, about how our everyday social acts are translated into quantifiable data for capital gains
Topics: Data Privacy, Data Colonialism, Online Exploitation
There is a need for more transparency and accountability in the handling of personal data
Supporting facts:
- She points out that the terms of service we sign up to every time we join a platform are often unclear, full of jargons and misunderstood
- She questions the contradiction between the personal nature of personal data and its public use
Topics: Data Privacy, Data Protection, Internet Regulations
Data colonialism is a new form of colonialism that aims to capture and control human life through the appropriation of data for profit
Supporting facts:
- Historical colonialism annex territories, their resources and the bodies that worked on them, data colonialism’s power grab focuses on the capture and control of human life.
- Data colonialism justifies what it does as an advance in scientific knowledge, personalized marketing or rational management.
Topics: Data Colonialism, Data Privacy, Data Protection
Individuals should take steps to minimize the amount of personal data they share online or with technology platforms
Supporting facts:
- Limiting social media presence can help with data minimization.
- Think twice before agreeing to terms and conditions that might require sharing of personal data.
Topics: Digital Minimalism, Data Protection, Data Privacy
Ananya Singh believes that every cloud has a silver lining, including data colonialism, which provides an opportunity to create systems rooted in ethics.
Supporting facts:
- Ananya believes in the concept of ownership by design, minimization, and anonymization, an option to be forgotten or revoke consent, and mechanisms for accountability and redressal.
Topics: Data Colonialism, Ethics, Opportunity
Digital literacy programs can aid in decolonizing the Internet
Topics: Digital Literacy, Decolonization
Report
The analysis features speakers discussing the exploitation of personal data without consent and drawing parallels to colonialism. They argue that personal data is often used for profit without knowledge or permission, highlighting the need for more transparency and accountability in handling personal data.
The speakers believe that the terms of service on online platforms are often unclear and full of jargon, leading to misunderstandings and uninformed consent. One of the main concerns raised is the concept of data colonialism, which is compared to historical colonial practices.
The speakers argue that data colonialism aims to capture and control human life through the appropriation of data for profit. They urge individuals to question data-intensive corporate ideologies that incentivise the collection of personal data. They argue that the collection and analysis of personal data can perpetuate existing inequalities, lead to biases in algorithms, and result in unfair targeting, exclusion, and discrimination.
In response, the speakers suggest that individuals should take steps to minimise the amount of personal data they share online or with technology platforms. They emphasise the importance of thinking twice before agreeing to terms and conditions that may require sharing personal data.
They also propose the idea of digital minimalism, which involves limiting one’s social media presence as a way to minimise data. The analysis also highlights the need for digital literacy programmes to aid in decolonising the internet. Such programmes can help individuals navigate the internet more effectively and critically, enabling them to understand the implications of sharing personal data and make informed choices.
Overall, the speakers advocate for the concept of ownership by design, which includes minimisation and anonymisation of personal data. They believe that data colonialism provides an opportunity to create systems rooted in ethics. However, they caution against an entitled attitude towards data use, arguing that data use and reuse should be based on permissions rather than entitlements or rights.
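To make “ownership by design” less abstract, the sketch below shows what minimisation plus pseudonymisation could look like in code. It is a minimal illustration in Python, not a description of any real platform’s pipeline; the record fields, the salt, and the salted-hash approach are all assumptions for the example, and salted hashing is pseudonymisation rather than true anonymisation.

    import hashlib

    # Hypothetical raw record as a platform might collect it.
    raw_record = {
        "name": "A. Example",
        "email": "user@example.org",
        "phone": "+00-0000000000",
        "city": "Delhi",
        "page_viewed": "/privacy-basics",
    }

    # Minimisation: keep only what the feature genuinely needs.
    NEEDED = {"email", "page_viewed"}

    def pseudonymise(value: str, salt: str = "rotate-and-guard-me") -> str:
        """Replace a direct identifier with a salted one-way hash."""
        return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

    minimal = {k: v for k, v in raw_record.items() if k in NEEDED}
    minimal["user_key"] = pseudonymise(minimal.pop("email"))
    print(minimal)  # e.g. {'page_viewed': '/privacy-basics', 'user_key': '1f3a...'}

Destroying the salt later breaks the link between user_key and the original email, which is one crude way to honour an option to be forgotten or to revoke consent.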
Some noteworthy observations from the analysis include the focus on the negative sentiment towards the unregulated collection and use of personal data. The speakers highlight the potential harm caused by data exploitation and advocate for stronger regulation and protection of personal data.
They also highlight the need for a more informed and critical approach to online platforms and the terms of service they offer. In conclusion, the analysis underscores the importance of addressing the exploitation of personal data without consent and the potential harms of data colonialism.
It calls for more transparency, accountability, and individual action in minimising data sharing. It also emphasises the need for critical digital literacy programmes and promotes the concept of ownership by design to create ethical systems.
Audience
Speech speed
158 words per minute
Speech length
519 words
Speech time
197 secs
Arguments
There are several stereotypes and issues regarding legal diversities, accessibility and privacy, which are not always respected due to economic patterns and interests worldwide.
Topics: stereotypes, legal diversities, accessibility, privacy, economic patterns, global interests
Certain issues such as privacy could also be considered as global rights or human rights.
Topics: privacy, global rights, human rights
Desire to understand which needs are local and which are global, and how populations plan to thrive in their country of origin rather than migrating to the Global North.
Topics: global needs, local needs, population migration
Reimagining digital literacy and skills training
Topics: Digital divide, Empowerment of marginalized communities, Digital literacy
Digital literacy and skills training needs to be reimagined to empower marginalized communities
Topics: Digital Literacy, Skills Training, Empowerment of Marginalized Communities
Bridging the digital divide is vital to ensure everyone has the necessary tools to fully participate in the digital world
Topics: Digital Divide, Inclusive Digital Participation
The journey towards a decolonized Internet and digital landscape is ongoing and requires continuous reflection, dialogue, and calls to action
Supporting facts:
- The discussion with all panelists deepened the understanding of the complexities that surround the decolonization of the Internet and technology
- The process includes questions about who benefits, economic gains, and other complexities
Topics: Decolonization of the Internet, Digital landscape
Striving for a digital space that is inclusive and empowers all individuals, regardless of their background or geographical location is necessary
Supporting facts:
- The idea of a future where the internet truly becomes a force for equality, justice, and liberation
Topics: Digital inclusivity, Empowerment
Report
The discussions revolved around several interconnected issues, including legal diversities, accessibility, privacy, and economic patterns. These topics were seen as not always being respected globally due to economic interests and the perpetuation of stereotypes. This highlights the need for increased awareness and efforts to address these issues on a global scale.
One of the arguments put forth was that privacy should be considered as a global right or human right. This suggests the importance of acknowledging privacy as a fundamental aspect of individual rights, regardless of geographical location or cultural context.
Another point of discussion was the need for a taxonomy that identifies specific local needs and how they relate to cultural, historical, or political characteristics. The argument advocates for better understanding and consideration of these factors to address the unique requirements of different communities and regions.
This approach aims to reduce inequalities and promote inclusive development. The distinction between local and global needs was also highlighted as crucial for effective population planning and reducing migration to the Global North. By focusing on empowering individuals to thrive in their country of origin, the discussion emphasized the importance of creating conditions that allow people to stay and contribute to their local communities.
The importance of reimagining digital literacy and skills training was emphasized as essential for empowering marginalized communities. This involves providing equitable access to digital tools and promoting inclusivity in digital participation. Bridging the digital divide was seen as necessary to ensure that everyone has the necessary tools and skills to fully participate in the digital world.
The discussions also delved into the decolonization of the Internet and the digital landscape. It was recognized that this is an ongoing journey that requires continuous reflections, open dialogue, and actionable steps. The complexities surrounding decolonization were explored in relation to factors such as economic gains and the question of who benefits from the current digital landscape.
Lastly, the need to strive for a digital space that is inclusive and empowers all individuals, regardless of their background or geographical location, was highlighted. This vision of a future in which the internet becomes a force of equality, justice, and liberation motivates efforts towards digital inclusivity and empowerment.
In conclusion, the discussions explored various critical aspects related to legal diversities, accessibility, privacy, and economic patterns. They underscored the importance of addressing these issues globally, recognizing privacy as a universal right, understanding local needs, bridging the digital divide, and advocating for a decolonized digital space.
The overall emphasis was on promoting inclusivity, reducing inequalities, and fostering empowerment in the digital age.
Moderator
Speech speed
169 words per minute
Speech length
3814 words
Speech time
1356 secs
Arguments
AI development and deployment is heavily reliant on human labor
Supporting facts:
- Human labor in AI is precarious, with insufficient arrangements regarding conditions such as pay, management and collectivization
- AI is not so artificial: it needs thousands of workers for activities like data collection, curation, annotation and validation
Topics: Artificial Intelligence, Workforce, Labor
Work distribution for AI development is primarily in the global South
Supporting facts:
- The distribution of work is an expression of the legacies of colonialism
- Big companies in the global North hire and develop these technologies using a workforce mainly in the global South
Topics: Work Distribution, Artificial Intelligence, Global South
Working conditions in AI prediction networks are frequently precarious
Supporting facts:
- Workers often have to deal with low pay, excessive overwork and job strain leading to health issues
- There are also issues with short-term contracts and precarity, unfair management and discrimination based on gender, race, ethnicity and geography
Topics: AI prediction networks, Working Conditions, Precarity
Lack of collective power for workers in AI prediction networks
Supporting facts:
- Workers often spend eight hours on unpaid tasks
- Colonialism, capitalist regimes and work arrangements contribute to these difficulties
Topics: Collective Power, Workers Rights
Internet content and platform diversification can be an important tool against pushes for regulation that may result in fragmentation.
Supporting facts:
- Governments often regulate the internet under the guise of digital sovereignty.
- Digital sovereignty is often justified by digital colonialism, which extends the legal systems of larger nations to every corner of the globe.
- Larger multinational companies often subtly impose alien legislation that doesn’t follow the standards of national law.
- Intellectual property, like the DMCA, is an example of this behavior.
Topics: digital sovereignty, intellectual property, internet regulation, fragmentation
National platforms often neglect to convey an image that follows national legislation, instead showcasing the internet as something dangerous or uncontrollable.
Supporting facts:
- Larger multinational companies often subtly impose alien legislation that doesn’t follow the standards of national law.
- Brazil, for example, has seen attempts to control online content via copyright mechanisms.
Topics: national sovereignty, intellectual property, internet regulation, fragmentation
Cheap labor is the cornerstone of AI development, enabling many companies to benefit while workers are not properly compensated or protected.
Supporting facts:
- Companies use digital labor platforms to circumvent social and labor protections, resulting in lack of minimum wage, health and safety protections for the workers.
- These practices are prevalent in the data services value chains, including AI.
Topics: Artificial Intelligence, Cheap Labor, Digital Labor Platforms, Workers’ Rights, Global South
There is a need to improve labor conditions, particularly in the global south, and ensure labor rights are respected.
Supporting facts:
- Companies are circumventing 19th-century workers’ rights by using digital labor platforms.
- Workers in the global south are particularly affected due to the lack of social protections and labor rights.
Topics: Labor Conditions, Labor Rights, Global South
Gender disinformation in South Asia is being documented for research and policy action.
Supporting facts:
- Women, trans individuals, and non-binary people are regularly targeted in South Asia.
- Disinformation is meant to silence marginalized individuals.
- Most gender disinformation focuses on women in politics and the public domain.
- Attacks can either be in the form of hate speech, misleading information or character attacks.
- NIDAN is developing a data set focused on instances of gender disinformation.
Topics: Gender Disinformation, South Asia, Online Hate Speech
Digital colonialism is about the capture and control of human life through the data extracted from it
Supporting facts:
- Data is considered the new oil, collected globally for profit
- Data is used to influence everything from advertising to elections
- Data privacy rights are often violated when personal data is collected, analyzed and used without knowledge or consent
Topics: Data Privacy, Colonialism, Data Control
Power to capture and control data remains concentrated in the hands of a few
Supporting facts:
- Data is collected and analyzed by a few powerful entities
- This results in lack of transparency, accountability and democratic control over data
Topics: Data Control, Inequality
Biases in collected data and in algorithms can lead to unfair targeting, exclusion, and discrimination
Supporting facts:
- Training emerging technology on biased data can lead to biases in algorithms
- These biased practices can manipulate and influence individuals’ behaviors, opinions, choices
Topics: Discrimination, Data Bias
Individuals should prioritize data minimization and be cautious in online spaces
Supporting facts:
- Control on sharing personal data online can avoid data misuse
- Practicing digital minimalism and thoroughly reading terms and conditions can also protect individuals from unwanted data exposure
Topics: Online Safety, Data Minimization
Global businesses need to be sensitive to geocultural and legal particularities where they operate to maintain profitability
Supporting facts:
- Geocultural and legal conflicts usually mean less profits or more costs
- Measuring indirect and long-term costs of adaptation in a fragmentation scenario can be difficult
Topics: global business, legal adaptability, geocultural sensitivity
Intellectual property conflicts with other human rights such as access to education and culture
Supporting facts:
- Pedro de Perdigão Lana believes that the interpretation of human rights varies globally
- He believes some cultural perspectives are being ignored due to unbalanced policies
Topics: Intellectual property, Human rights, Education, Culture
Importance of digital literacy and skills training for marginalized communities
Topics: Digital Literacy, Digital Skills, Marginalized Communities
Large entities must care about legal and cultural considerations
Supporting facts:
- Trainings have to be relevant and make sense to the context of those being trained
- Partnering with the right stakeholders is essential for successful training
Topics: Impactful training, Digital skills, Cultural understanding
Training should be aimed at knowledge transfer, not just ticking boxes
Supporting facts:
- Kenyan developers were able to learn from and teach European developers in creating AI chatbots
- Practical knowledge is often more useful and impactful than theoretical knowledge
Topics: Digital education, Access to information
Affordability and inclusion are important aspects of any impactful digital training or project
Supporting facts:
- Training and education must be available to all, regardless of economic status or disability
- Standardised ICT systems for persons with disabilities are being developed
Topics: Digital divide, Inclusion, Accessibility
Need to consider what is behind the digital technology production
Supporting facts:
- The Fair Work Project highlights discrimination and the consideration of the local context
- There’s a need to consider national and local contexts and diverse populations
Topics: Digital technology, Bias, Discrimination, National regulations, Work regulations
International privacy issues affect everyone globally, regardless of location.
Supporting facts:
- Issues of consent, data usage and transfer are raised both in European countries and in Kenya.
- The data commissioner’s office in Kenya actively deals with such issues.
Topics: Privacy, Data Protection
It is crucial to factor in local aspects, especially languages when developing AI products.
Supporting facts:
- Most natural language processing systems are skewed towards the global north.
- Not everyone speaks fluent English or Swahili, hence the need for products that cater to their needs.
Topics: Technology Development, Artificial Intelligence, Language Processing
The importance of adapting to national legislation and translating contents correctly to avoid a splinternet
Supporting facts:
- International relations are becoming increasingly tense and discourses against external threats are rising
- Platforms may be an instrument for expanding a country’s influence
Topics: International Relations, Online Privacy, Digital platforms
The issue of digital platforms preferring some regions to others
Supporting facts:
- Platforms not diversifying what they do at an international level, preferring some regions to others
Topics: Digital platforms, International Relations
Design with the user
Supporting facts:
- Include user in the process to know their challenges and perspectives
- Users’ opinions benefit the overall digital development
Topics: Digital Development, User Experience, Inclusivity
Terms and conditions transparency
Supporting facts:
- People usually click ‘agree’ without knowing what they’re agreeing to
- Users are cognizant of the fact that they are giving away their data
Topics: Digital Rights, Data Privacy
Decolonizing digital technologies involves decolonizing not only the use of digital technologies, but also the production process
Topics: Decolonization, Digital Technologies, Production Process
To decolonize digital rights, it’s crucial to consider who is involved in creating digital tools
Supporting facts:
- Inclusive processes are needed within technology creation
Topics: digital rights, decolonization, technology creation
The involvement of hyperlocal communities in creating data sets for digital technology is pivotal
Supporting facts:
- Hyperlocal communities understand context, language, and issues better than distant developers
Topics: hyperlocal communities, data sets, digital technology
People from marginalized communities should be involved in data creation, annotation and technology creation for a more inclusive digital rights
Supporting facts:
- Shalini Joshi advocates for the participation of marginalized communities in the tech development process
Topics: marginalized communities, inclusion, digital rights
Inclusive processes in technology creation involves the inclusion of many languages
Supporting facts:
- Multiple languages should be included in data analysis
Topics: inclusion, languages, technology creation
The journey towards a decolonized Internet and digital landscape is ongoing and requires continuous reflection, dialogue, and calls to action.
Topics: Decolonization of Internet, Digital Landscape, Dialogue, Reflections, Call to actions
Report
The analysis delves into the various aspects of the impact that AI development has on human labour. It highlights the heavy reliance of AI development on human labour, with thousands of workers involved in activities such as collection, curation, annotation, and validation.
However, the analysis points out that human labour in AI development often faces precarious conditions, with insufficient arrangements regarding pay, management, and collectivisation. Workers frequently encounter issues like low pay, excessive overwork, job strain, health problems, short-term contracts, precarity, unfair management, and discrimination based on gender, race, ethnicity, and geography.
This paints a negative picture of the working conditions in AI prediction networks, emphasising the need for improvements. The distribution of work for AI development is another area of concern, as it primarily takes place in the Global South. This not only exacerbates existing inequalities but also reflects the legacies of colonialism.
Large companies in the Global North hire and develop AI technologies using a workforce predominantly from the Global South. This unbalanced distribution further contributes to disparities in economic opportunities and development. The analysis also highlights the influence of digital sovereignty and intellectual property on internet regulation.
It argues that governments often regulate the internet under the pretext of digital sovereignty, a push frequently justified by pointing to digital colonialism, in which the legal systems of larger nations are extended to every corner of the globe and multinational companies subtly impose alien legislation that does not adhere to national standards.
Intellectual property, such as the DMCA, is cited as an example of this behaviour. To counter this, the analysis suggests that diversification of internet content and platforms can be an essential tool, safeguarding against regulations that may result in fragmentation.
Furthermore, the analysis emphasises the need for documentation and policy action against gender disinformation in South Asia. Women, trans individuals, and non-binary people are regularly targeted in the region, with disinformation campaigns aimed at silencing marginalised voices. Gender disinformation often focuses on women in politics and the public domain, taking the form of hate speech, misleading information, or character attacks.
The mention of NIDAN’s development of a dataset focused on gender disinformation indicates a concrete step towards understanding and addressing this issue. Digital literacy and skills training are highlighted as important factors in bridging the digital divide and empowering marginalised communities.
The analysis emphasises the importance of democratising access to digital education and ensuring that training is relevant and contextualised. This includes providing practical knowledge and involving the user community in the development process. Additionally, the analysis calls for inclusive digital training that takes into consideration the needs of persons with disabilities and respects economic differences.
The analysis also explores the broader topic of decolonising the internet and the role of technology in societal development. It suggests that the decolonisation of digital technologies should involve not only the use of these technologies but also the production process.
There is an emphasis on the inclusion of diverse perspectives in technology creation and data analysis to avoid biases and discrimination. The analysis also advocates for the adaptation of platform policies to respect cultural differences and acknowledge other human rights, rather than solely adhering to external legislation.
In conclusion, the analysis provides a comprehensive assessment of the impact of AI development on human labour, highlighting the precarious conditions faced by workers and the unequal distribution of work. It calls for improvements in labour conditions and respect for workers’ rights.
The analysis also raises awareness of the need to document and tackle gender disinformation, emphasises the importance of digital literacy and skills training for marginalised communities, and supports the decolonisation of the internet and technology development. These insights shed light on the challenges and opportunities in ensuring a more equitable and inclusive digital landscape.
Shalini Joshi
Speech speed
133 words per minute
Speech length
1247 words
Speech time
561 secs
Arguments
AI models have inherent biases, are context-dependent and promote stereotypes
Supporting facts:
- Experiments with generative AI have shown biases towards certain countries and cultures
- High-paying jobs were represented by lighter-skinned, male figures in AI visualizations
Topics: Artificial Intelligence, Bias, Stereotypes
Transparency is needed from AI systems and companies
Supporting facts:
- Companies are often secretive about the data they use to train AI systems
Topics: Artificial Intelligence, Transparency
AI-based translation services overlook hundreds of lesser-known languages
Supporting facts:
- AI models are usually trained with data that uses mainstream languages, often overlooking others
Topics: Artificial Intelligence, Translation Services, Languages
Preserving lesser-known languages is vital
Supporting facts:
- Such languages hold unique traditions and a culture’s identity
Topics: Languages, Culture
Women, trans people, and non-binary individuals in South Asia face online disinformation that aims to marginalize them further
Supporting facts:
- Shalini Joshi focuses on gender disinformation in the Asia-Pacific region, specifically South Asia.
- Online lies and hate speech are often used to silence or intimidate these groups.
- Such disinformation targets both public figures and everyday individuals.
Topics: Online Disinformation, Gender, South Asia
Decolonizing digital rights involves local and marginalized communities
Supporting facts:
- Hyperlocal communities should be involved in creating data sets
- Marginalized people understand the context, the language, and the issues more than technologists and coders
Topics: Digital Rights, Inclusion, Technology Development
Report
The analysis highlights several important points related to artificial intelligence (AI) and technology. Firstly, it reveals that AI models have inherent biases and promote stereotypes. This can result in inequalities and gender biases in various sectors. Experiments with generative AI have shown biases towards certain countries and cultures.
In one instance, high-paying jobs were represented by lighter-skinned, male figures in AI visualisations. This not only perpetuates gender and racial stereotypes but also reinforces existing inequalities in society. Secondly, the analysis emphasises the need for transparency in AI systems and companies.
Currently, companies are often secretive about the data they use to train AI systems. Lack of transparency can lead to ethical concerns, as it becomes difficult to assess whether the AI system is fair, unbiased, and accountable. Transparency is crucial to ensure that AI systems are developed and used in an ethical and responsible manner.
It allows for scrutiny, accountability, and public trust in AI technologies. Furthermore, the analysis points out that AI-based translation services often overlook hundreds of lesser-known languages. These services are usually trained with data that uses mainstream languages, which results in a neglect of languages that are not widely spoken.
This oversight undermines the preservation of unique cultures, traditions, and identities associated with these lesser-known languages. It highlights the importance of ensuring that AI technologies are inclusive and consider the diverse linguistic needs of different communities. Additionally, the analysis reveals that women, trans people, and non-binary individuals in South Asia face online disinformation that aims to marginalise them further.
This disinformation uses lies and hate speech to silence or intimidate these groups. It targets both public figures and everyday individuals, perpetuating gender and social inequalities. In response to this growing issue, NIDAN, an organisation, is implementing a collaborative approach to identify, document, and counter instances of gender disinformation.
This approach involves a diverse set of stakeholder groups in South Asia and utilises machine learning techniques to efficiently locate and document instances of disinformation. The analysis also highlights the importance of involving local and marginalised communities in the development of data sets and technology creation.
It emphasises that hyperlocal communities should be involved in creating data sets, as marginalised people understand the context, language, and issues more than technologists and coders. Inclusive processes that include people from different backgrounds in technology creation are necessary to ensure that technology addresses the needs and concerns of all individuals.
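The report does not describe NIDAN’s actual pipeline, but a minimal sketch of the machine-learning-assisted triage mentioned above might look like the following Python snippet, in which a simple classifier routes likely instances to human documenters. The toy posts, labels, and model choice are all assumptions for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy examples labelled by local reviewers (1 = possible gendered
    # disinformation, 0 = not). A real dataset would be far larger,
    # multilingual, and built with the hyperlocal communities above.
    posts = [
        "Fake photo claims the candidate attended a secret meeting",
        "The minister opened a new school in the district today",
        "She only won because of a fabricated scandal, share this",
        "Voter registration closes on Friday, check your documents",
    ]
    labels = [1, 0, 1, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(posts, labels)

    # The model only triages: high-scoring posts go to human
    # documenters, who decide what actually counts as disinformation.
    new_post = "New fake story targets the MP's family"
    score = model.predict_proba([new_post])[0][1]
    print(f"{score:.2f}  {new_post}")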
In conclusion, the analysis underscores the pressing need to address biases, promote transparency, preserve lesser-known languages, counter online disinformation, and include local and marginalised communities in the development of technology. These steps are crucial for creating a more equitable and inclusive digital world.
By acknowledging the limitations and biases in AI systems and technology, we can work towards mitigating these issues and ensuring that technology is a force for positive change.
Tevin Gitongo
Speech speed
218 words per minute
Speech length
2718 words
Speech time
749 secs
Arguments
Decolonizing the digital future by making technology work for people and enabling a rights-based democratic digital society
Supporting facts:
- Works at GIZ Kenya under the Digital Transformation Center, a project supported by the German government and Team Europe, working with the Kenyan government.
- Working on building capacities on data protection and IT security, fostering a data-driven economy, and decent job creation in the gig economy.
- Bridging the digital divide by ensuring no one is left behind, including youth, women, rural and urban populations, and persons with disabilities
Topics: Digital Rights, Technology, Democracy
Digital training needs to consider cultural context
Supporting facts:
- Need to work with stakeholders who are aware of ground realities
- Trainings without understanding of cultural context are ineffective
Topics: Digital Literacy, Skills Training, Marginalized Communities
Accessibility and affordability are crucial for impactful training
Supporting facts:
- Knowledge transfer should be prioritized over ticking boxes
- Example of bringing Kenyan developers and European developers together for knowledge exchange and development of AI chatbots
Topics: Digital Divide, Democratizing Knowledge
Inclusion of everyone in digital skills training is necessary
Supporting facts:
- Digital systems being developed need to cater to persons with disabilities
- Assistance in developing ICT standards for persons with disabilities in Kenya
Topics: Digital Inclusion, Persons with Disabilities
Privacy concerns related to personal data are universal; it affects people from both the global north and south
Supporting facts:
- Kenyans, both rural and urban, are increasingly aware and concerned about issues related to the protection of their data.
- Kenyans ask questions about why and where their data is being transferred, akin to concerns raised in European countries.
- The office of data commissioner in Kenya actively works on these issues
Topics: Privacy, Data Protection, AI, International Issues
Local languages and the specific nuances thereof must be taken into account when developing AI products
Supporting facts:
- Most natural language processing systems are designed with a focus on the global north, neglecting the unique aspects of local languages.
- Not everyone speaks fluent English or Swahili, necessitating products that cater to their linguistic needs.
- Addressing local languages in AI product design can contribute to bridging the digital divide
Topics: AI, Local Languages, Digital Divide, Natural Language Processes, Local Issues
Digital literacy can aid in the process of decolonizing the internet
Supporting facts:
- Example of women in Kenya selling groceries and being enabled to use digital tools for their business
- Need to find people where they are and empower their existing skills
Topics: Digital literacy, Decolonization, Internet, Empowerment
Four fundamental questions need to be asked when developing systems
Supporting facts:
- Who’s developing the systems?
- Why are they doing it?
- Where are they being developed?
- What is it for?
Topics: System’s development, Artificial Intelligence, Data politics
Report
During the discussion, the speakers emphasised the importance of decolonising the digital future in order to ensure that technology benefits people and promotes a rights-based democratic digital society. They highlighted the need for creating locally relevant tech solutions and standards that address the specific needs and contexts of different communities.
This involves taking into consideration factors such as cultural diversity, linguistic preferences, and social inclusion. The importance of stakeholder collaboration in the decolonisation of digital rights was also emphasised. The speakers stressed the need to involve a wide range of stakeholders, including government, tech companies, fintech companies, academia, and civil society, to ensure that all perspectives and voices are represented in the decision-making process.
By including all stakeholders, the development of digital rights frameworks can be more inclusive and reflective of the diverse needs and concerns of the population. Cultural context was identified as a crucial factor to consider in digital training programmes. The speakers argued that training programmes must be tailored to the cultural context of the learners to be effective.
They highlighted the importance of working with stakeholders who have a deep understanding of the ground realities and cultural nuances to ensure that the training programmes are relevant and impactful. The speakers also discussed the importance of accessibility and affordability in digital training.
They emphasised the need to bridge the digital divide and ensure that training programmes are accessible to all, regardless of their economic background or physical abilities. Inclusion of people with disabilities was specifically noted, with the speakers advocating for the development of digital systems that cater to the needs of this population.
They pointed out the assistance being provided in Kenya to develop ICT standards for people with disabilities, highlighting the importance of inclusive design and accessibility in digital training initiatives. Privacy concerns related to personal data were identified as a universal issue affecting people from both the global north and south.
The speakers highlighted the increasing awareness and concerns among Kenyans about the protection of their data, similar to concerns raised in European countries. They mentioned the active work of the office of data commissioner in Kenya in addressing these issues, emphasising the importance of safeguarding individual privacy in the digital age.
The speakers also emphasised the need for AI products and services to be mindful of both global and local contexts. They argued that AI systems should take into account the specific linguistic needs and cultural nuances of the communities in which they are used.
The speakers raised concerns about the existing bias in AI systems that are designed with a focus on the global north, neglecting the unique aspects of local languages and cultures. They stressed the importance of addressing this issue to bridge the digital divide and ensure that AI is fair and effective for all.
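One simplified way to act on this concern is to audit a product’s language coverage against the languages its intended users actually speak. The sketch below is illustrative only: the supported-language set, the language codes, and the survey shares are invented for the example.

    # Assumed product coverage and assumed user-survey results
    # (language code -> share of intended users).
    supported = {"en", "sw", "fr"}
    user_languages = {"sw": 0.40, "en": 0.25, "kik": 0.15,
                      "luo": 0.12, "som": 0.08}

    uncovered = {lang: share for lang, share in user_languages.items()
                 if lang not in supported}

    coverage = 1 - sum(uncovered.values())
    print(f"Covered share of users: {coverage:.0%}")
    print("Unserved languages:", ", ".join(sorted(uncovered)))

An audit like this makes the gap measurable: here, roughly a third of the assumed user base would be unserved, which is the kind of finding that can drive investment in local-language data.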
Digital literacy was highlighted as a tool for decolonising the internet. The speakers provided examples of how digital literacy has empowered individuals, particularly women in Kenya, to use digital tools for their businesses. They highlighted the importance of finding people where they are and building on their existing skills to enable them to participate more fully in the digital world.
One of the noteworthy observations from the discussion was the need to break down complex information, such as terms and conditions, to ensure that individuals fully understand what they are agreeing to. The speakers noted that people often click on “agree” without fully understanding the terms and emphasised the importance of breaking down the information in a way that is easily understandable for everyone.
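Part of that breaking-down can even be automated as a first pass. As a rough illustration (the sample clause and the 20-word threshold are invented, and no heuristic replaces human plain-language editing), a short script can flag terms-of-service sentences that are too long to read comfortably:

    import re

    terms = ("By accessing the Service you irrevocably consent to the "
             "collection, processing and onward transfer of personal data "
             "to our affiliates, successors and assigns in any jurisdiction. "
             "You can delete your account at any time.")

    MAX_WORDS = 20  # assumed plain-language guideline

    # Split on sentence-ending punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", terms):
        words = len(sentence.split())
        flag = "REWRITE" if words > MAX_WORDS else "ok"
        print(f"{flag:7} ({words:2d} words) {sentence}")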
Overall, the discussion emphasised the need to decolonise the digital future by placing people at the centre of technological advancements and promoting a rights-based democratic digital society. This involves creating inclusive tech solutions, collaborating with stakeholders, considering cultural context in training programmes, ensuring accessibility and affordability, addressing privacy concerns, and bridging the digital divide through digital literacy initiatives.
By adopting these approaches, it is hoped that technology can be harnessed for the benefit of all and contribute to more equitable and inclusive societies.