Online Linguistic Gender Stereotypes | IGF 2023 WS #237

12 Oct 2023 00:45h - 01:15h UTC


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Arnaldo de Santana

The analysis delves into the impact of the internet and society on gender norms and stereotypes, highlighting several key points. Firstly, it argues that the internet and society have the capacity to reproduce certain gender norms and stereotypes. These norms and stereotypes can be seen as power structures, with certain groups being placed in positions of power while others are exploited. The assignment of roles based on gender at birth also imposes certain developmental expectations.

The influence of the market on young internet users is another important aspect discussed in the analysis. It is noted that children and teens are heavily affected by market influences online. Specifically, the analysis highlights that young females are expected to act in a certain way to attract attention on the internet. This demonstrates how the market impacts the perspectives and behaviors of young internet users.

On a more positive note, the analysis stresses the need for a more participative and egalitarian development of the internet. It argues that the internet reflects power, violence, and societal standards, and breaking gender expectations and rules often brings about resistance. This highlights the importance of inclusivity and equal participation in shaping the development and structure of the internet.

The analysis also expresses concern about the impact of gender stereotypes on the daily life of the LGBTQI community. For instance, it notes that gendered stereotypes attach to speech varieties associated with lower-prestige groups, and negative characteristics are attributed to speakers on the basis of these stereotypes.

Turning to the realm of artificial intelligence (AI), the analysis acknowledges the potential of AI in bringing something new and different. However, it also cautions that AI could potentially reproduce structures of power and impose certain standards. This raises important questions about the values and biases of the creators of AI and the need for further research.

The analysis also draws attention to the effects of colonialism and power imbalances in internet spaces. It mentions the erasure of memories and lives that colonialism has brought about, imposing a dominant perspective. This highlights the importance of addressing colonialism and power imbalances in order to create more equitable internet spaces.

Furthermore, the absence of international legislation specifically addressing internet hate speech and gender stereotyping is highlighted. This raises concerns about the current legal framework and the need for international laws to combat these issues effectively.

In terms of addressing hate speech and stereotypes, the analysis suggests that breaking stereotypes may be an effective way to tackle hate speech. It points out that stereotypes are perceived as a root cause of hate speech, and challenging them could lead to positive change.

The analysis concludes by emphasizing the need for dialogue and innovation in challenging ingrained stereotypes. By fostering open and meaningful dialogue and promoting innovative ideas, it becomes possible to challenge and change deeply embedded stereotypes.

Overall, the analysis provides a comprehensive examination of the impact of the internet and society on gender norms and stereotypes. It highlights the need for inclusive and participative development, the challenges faced by marginalized communities, the potential of AI, the effects of colonialism, the absence of international legislation, the importance of breaking stereotypes, and the significance of dialogue and innovation.

Audience

The analysis of the given information reveals several key points and arguments related to language diversity, digital media, and societal issues. It is recognised that promoting language diversity in digital media is of great importance, especially for LGBTQIA+ communities, as it contributes to reducing inequalities (SDG 10: Reduced Inequalities). This recognition emphasizes the need to encourage debates on this topic, allowing for a more inclusive and diverse digital landscape.

In the context of digital content moderation, it is argued that the moderation process should consider the promotion of discourse. The example of the word “bicha” in Brazil is cited to demonstrate how its usage can change depending on the context, being employed both in negative contexts and contexts that promote identity and affirmation. This highlights the need for moderators to have a nuanced understanding of language and cultural contexts to ensure fair and inclusive moderation practices.

Another point of concern raised in the analysis is the potential for artificial intelligence (AI) to propagate stereotypical thinking. It is suggested that AI systems, if not properly designed and trained, may unintentionally perpetuate harmful stereotypes. This observation aligns with SDG 9 (Industry, Innovation, and Infrastructure), as it emphasizes the importance of considering the impact of technology on societal issues.

On the other hand, the analysis also highlights the potential benefits of AI in countering hate speech or violence. It is argued that AI can be used to create positive narratives that stand against such harmful behaviours, thereby promoting SDG 3 (Good Health and Well-being).

Furthermore, attention is drawn to the vulnerability of young girls on social media platforms. The analysis notes that platforms like TikTok and Instagram are commonly used by young girls to promote themselves, which unfortunately makes them more susceptible to online predators. This highlights the need for content regulation, such as moderating comments and monitoring language used on digital platforms, to protect youth (SDGs 4: Quality Education and 16: Peace, Justice, and Strong Institutions).

In conclusion, the analysis highlights the complex nature of digital media and its implications for various societal issues. It underscores the importance of promoting language diversity, encouraging discourse, safeguarding against harmful stereotypes, countering hate speech and violence, and protecting vulnerable young girls on digital platforms. Civil society is also seen as playing a vital role in defending youth, particularly young girls, in digital spaces. These insights shed light on the intricate interplay between digital media, language, technology, and the societal goals outlined in the Sustainable Development Goals.

Umut Pajaro Velasquez

The analysis examines the issue of gender diverse content suppression on social media platforms, focusing on TikTok. The study found that gender diverse individuals in Latin America felt compelled to alter their identities and content on TikTok to avoid being targeted by the algorithm. The platform’s algorithm demonstrated a bias against LGBTQI+ inclusive language and hashtags, resulting in the removal or shadow banning of their content. This raises questions about identity ownership in algorithmic systems.

Additionally, the study revealed that gender diverse users felt less accepted on TikTok due to limitations and self-censorship. LGBTQI+ and gender diversity-themed content was only deemed acceptable or visible on the platform when it aligned with established mainstream trends or had the support of influential figures. This exclusionary dynamic on TikTok creates an environment that further marginalizes gender diverse individuals.

In response, the analysis emphasizes the need for social media platforms, including TikTok, to establish clearer community standards regarding gender diverse content. Platforms should strive to create inclusive spaces that respect and protect the digital rights of traditionally underrepresented communities. Participants in the study called for a shift in these systems to protect historically marginalized communities and ensure consistency of standards regardless of identity or content alignment.

Furthermore, the analysis highlights the detrimental impact of online linguistic gender stereotypes on self-identity. Users often struggle to identify with the platform’s gender norms, leading to anxiety and discomfort. Some individuals stop using the platform altogether because they feel unable to express themselves authentically. This lack of acceptance and its impact on mental health and social interactions is a significant concern.

Overall, the analysis reveals the troubling suppression of gender diverse content on social media platforms, particularly on TikTok. It underscores the need for platforms to address biased algorithms, establish clearer community standards, and create inclusive spaces. Additionally, the detrimental effects of online linguistic gender stereotypes on self-identity and mental health are highlighted. The analysis calls for a more inclusive and diverse digital landscape that respects the rights of all individuals, regardless of gender identity.

Juliana Harsianti

Language plays a significant role in shaping individuals’ perception of themselves and others. The grammatical structure and vocabulary of a language can influence thinking, imagination, and reality. For instance, language can affect how people perceive gender and power dynamics. In certain languages like French and Spanish, a mixed gender subject defaults to the masculine form, reinforcing the perception of male superiority.

Moreover, language can be a powerful tool for online bullying, particularly targeting women, girls, and the LGBT+ community. Pejorative language and slurs are frequently used to harass and intimidate these groups, creating an unsafe online environment that discourages their active participation.

Machine translation, although useful, often defaults to gender stereotypes, assigning gendered pronouns and forms to professions along traditional lines, for example rendering "doctor" as masculine and "nurse" as feminine when the source language is gender-neutral. This perpetuates gender inequalities and hinders progress towards equality.
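The mechanism behind this default is straightforward to sketch: a model that resolves a gender-neutral source pronoun by picking whichever gendered form is most frequent in its training data will reproduce that data's occupational stereotypes. The following is a minimal illustration with entirely hypothetical corpus counts, not a depiction of any real translation system:

```python
# Toy illustration (hypothetical data): a naive "translator" that resolves
# a gender-neutral pronoun by corpus frequency inherits the corpus's biases.

# Hypothetical counts of (profession, gendered pronoun) co-occurrences
# in a skewed training corpus.
corpus_counts = {
    ("doctor", "he"): 820, ("doctor", "she"): 180,
    ("nurse", "he"): 90, ("nurse", "she"): 910,
}

def translate_pronoun(profession: str) -> str:
    """Pick the pronoun most often seen with this profession in the corpus."""
    candidates = {
        pronoun: n
        for (prof, pronoun), n in corpus_counts.items()
        if prof == profession
    }
    return max(candidates, key=candidates.get)

print(translate_pronoun("doctor"))  # "he"  -- the stereotype, inherited from data
print(translate_pronoun("nurse"))   # "she"
```

Real neural translation systems are far more complex, but the principle is the same: without debiasing or explicit handling of gender ambiguity, the statistically dominant form in the training data wins.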

To tackle these issues, promoting gender-neutral and inclusive language is crucial. This involves ongoing efforts and discussions within communities. By doing so, language can become more inclusive and fair, fostering an online world where everyone feels represented and valued.

Another effective approach is incorporating women’s perspectives in online content. Initiatives like “Wikigap” have successfully increased the presence and representation of women on the internet, enriching the overall content.

Moreover, addressing online hate speech requires empathy and community regulations. It is important to acknowledge the impact of hate speech and take appropriate actions to address it. Community regulations and a focus on empathy can help create a safer and more inclusive online environment.

In conclusion, language has a profound influence on perceptions, and it is important to address biases and stereotypes embedded within it. By promoting gender-neutral and inclusive language, incorporating women’s perspectives in online content, and fostering empathy and community regulations, we can create a more equitable digital world.

Dhanaraj Thakur

The extended analysis examines the gender digital divide and its connection to hate speech and AI tools. Research suggests that hate speech, violent language, and misinformation disproportionately affect women, leading to the gender digital divide. This highlights the importance of addressing these harmful practices and creating a more inclusive online environment.

Furthermore, the role of large language models like ChatGPT is discussed. These models heavily rely on English data predominantly authored by men, limiting their effectiveness in supporting non-English languages and perpetuating gender biases. Evaluating the impact of AI tools such as natural language processing and large language models is crucial to avoid reinforcing gender disparities.

Taking an intersectional approach is emphasized for understanding the severity of hate speech and misinformation. Women of color, particularly political candidates, are more likely to be targeted with online abuse and misinformation. Considering multiple dimensions of identity is essential in addressing the gender digital divide and developing inclusive solutions.

The analysis also highlights the gender gap in AI training data, with only 26.5% of ChatGPT’s training data authored by women. This disparity poses a significant problem, particularly in the education system and in industry, where gender-biased AI models are being incorporated. Addressing this gap is crucial to preventing the perpetuation of gender disparities.

Social media platforms play a vital role in shaping online experiences. The analysis suggests that these platforms should improve their design strategies to combat harmful content. Giving users more control over the content they receive can help them manage and mitigate the impact of negative content.

Additionally, greater privacy protections can reduce algorithmic amplification and content targeting. By implementing stronger privacy measures, the influence of algorithms in promoting harmful content can be diminished, benefiting the gender digital divide.

Data transparency is emphasized as another key aspect. The lack of insight into social media platforms’ operations hampers the ability of researchers, governments, and civil society activists to understand the issues and propose effective solutions. Platforms should provide more data and information to facilitate better understanding and the creation of impactful solutions.

The analysis also points out the influence of hate speech and gender stereotypes, particularly through online communities like the ‘manosphere’, which affects younger boys. Addressing this influence and educating young men and boys to promote healthier perspectives and behaviors is crucial in bridging the gender digital divide.

Lastly, self-reflection by men, especially cisgender men, regarding their online behavior is crucial. Raising awareness about the impact of hate speech and the spread of false information is essential in creating a more inclusive and respectful digital space.

In conclusion, the analysis highlights various factors contributing to the gender digital divide and underscores the impact of hate speech and AI tools. It emphasizes the need for inclusive approaches, bridging the gender gap in AI training data, enhancing social media design, strengthening privacy protections, promoting data transparency, and mitigating the influence of hate speech and gender stereotypes. Addressing these issues will help create a more equitable and inclusive digital landscape.

Luke Rong Guang Teoh

The analysis reveals several important points about linguistic gender stereotypes in online advertising and social media platforms, which perpetuate gender inequalities and reinforce traditional gender roles. Men are often associated with adjectives like strong, brave, competent, or bold, reinforcing stereotypes of dominance and logic, while women are associated with adjectives like emotional, understanding, sweet, and submissive. These stereotypical associations shape societal attitudes and contribute to gender inequalities.

Online advertisements are now personalised and tailored to specific audiences, including gender-based targeting. This means that linguistic gender stereotypes are used in targeted marketing and product positioning. The language used on social media platforms like Instagram also reflects gender biases. A study on Instagram captions found that certain adjectives were exclusively associated with women, while others were divided between genders. These biases impact how individuals are perceived and treated both online and offline.

Despite these issues, some brands are being more careful with gender characterisations, showing mixed gender associations with certain adjectives. This indicates progress in avoiding gender stereotypes in advertising and promoting gender equality. However, the gender divide in the digital world has been increasing since 2019, disproportionately affecting marginalised women such as the elderly and those in rural areas. This divide limits their access to and use of digital technologies, exacerbating gender inequalities.

Research on girls and young women under 18 in relation to the gender digital divide is lacking. Most data focuses on women over 18, leaving a gap in understanding the experiences and challenges faced by younger women and girls. More research is needed to address this gap and ensure their needs are met.

Furthermore, linguistic gender stereotypes online strongly influence women’s career choices. With the majority of jobs worldwide having a digital component, biased language on online platforms shapes women’s perceptions of career paths, limiting their potential and opportunities. This hinders progress towards gender equality in the workforce.

In conclusion, linguistic gender stereotypes in online advertising and social media perpetuate gender inequalities and reinforce traditional gender roles. Efforts are being made to address these stereotypes, but further progress is needed. The gender divide in the digital world is widening, particularly impacting marginalised women. Research on younger women and girls in relation to the gender digital divide is lacking, which must be addressed. Linguistic gender stereotypes influence career choices and opportunities for women, hindering progress towards gender equality in the workforce.

Manjet Kaur Mehar Singh

Discrimination towards the LGBTQ+ community in Malaysian advertisements is a pressing issue that demands attention. The online environment exacerbates these discriminatory practices, and steps need to be taken to address and improve the situation. Inclusive language can play a significant role in mitigating online discrimination, creating a more welcoming online space for everyone.

Promoting diversity through language is seen as a positive approach to combat discrimination by challenging stereotypes and biases. Guidelines should be put in place to promote unbiased and equal language usage while avoiding gendered assumptions. These guidelines can help individuals and organizations navigate the complexities of language in a sensitive, fair, and inclusive way.

Education plays a crucial role in raising awareness and promoting sensitivity towards language diversity. Starting from an early age, it is important to educate individuals about the power of language and how it can impact others. By fostering an understanding of the importance of inclusive language, future generations can grow up with a greater appreciation for diversity.

Unfortunately, the issue of linguistic bias and stereotypes is not adequately addressed in education in Malaysia. There is a clear need for proper training of educators to ensure they are equipped to promote diversity and equality in language. Without attention to this issue, discriminatory practices persist, limiting progress towards an inclusive society.

Concrete rules and regulations from the government regarding language usage to represent different groups are needed. Having clear guidelines and acts in place will provide a framework for promoting inclusivity and reducing discrimination. Presently, the absence of such rules hinders efforts to address linguistic bias and ensure fair representation.

In the workplace, training and awareness regarding language bias are essential. By providing education and facilitating discussions on bias and representation, companies can foster an inclusive and respectful environment. It is important that the expression of marginalized groups in the workplace is not dominated by one group, ensuring that all employees feel seen and valued.

Addressing discrimination towards the LGBTQ+ community in Malaysian advertisements requires a multi-faceted approach encompassing inclusive language, diversity promotion, educational initiatives, governmental regulations, and workplace training. By implementing these measures, society can move towards a more inclusive, equal, and respectful future.

Moderator

The meeting consisted of two rounds: speaker introductions and an open roundtable discussion. Participants had the opportunity to ask questions, which were collected and addressed later. Stella, associated with NetMission.Asia, Malaysia Youth IGF, ISOC Malaysia, and Kyushu University, served as the moderator.

The main focus was on linguistic gender stereotypes and their impact. These stereotypes are generalizations based on someone’s gender that are reflected in language. They can be observed in gendered pronouns, job titles, descriptive language, and conversational roles.

Linguistic gender stereotypes have negative effects. They shape societal attitudes, reinforce gender inequalities, and create expectations and limitations based on gender. They are observed in online advertisements, perpetuating traditional gender roles.

The discussion also addressed challenges faced by marginalized and LGBTQI communities. Gender is seen as a structure of power that affects different groups differently. Inclusive language, gender-neutral terms, and diversity in language are important for creating an inclusive society. Educating young people about diversity and the impact of linguistic stereotypes is crucial.

The meeting also highlighted the gender gap in AI training data and its implications. Online linguistic gender stereotypes affect self-identity and a sense of belonging, and contribute to online bullying. Promoting gender-neutral language and creating content from women’s perspectives are encouraged.

The need for algorithmic control on social media platforms to reduce negative content amplification was stressed. Transparency and data sharing by platforms are important for research and finding better solutions.

Overall, the meeting emphasized addressing linguistic gender stereotypes, promoting diversity in language, and combating discrimination and inequality. Legislative action, breaking stereotypes, and changing narratives are necessary for an inclusive society.

Júlia Tereza Rodrigues Koole

The analysis of the data presents several important findings relating to gender stereotypes, hate speech, and recruitment by radical groups. One significant observation is the use of linguistic gender stereotypes to mobilise specific demographics. This tactic involves the exploitation of language to reinforce societal norms and expectations associated with gender. By perpetuating these stereotypes, certain groups are able to manipulate individuals and garner support for their cause. This has been particularly evident in the Americas, with a specific focus on Brazil, where jokes and memes have been used to gamify hate and recruit for radical organizations.

Another noteworthy point is the targeted recruitment efforts made by radical groups, particularly targeting young males. Research conducted in Germany regarding in-service teacher awareness and a study conducted by a cyber psychologist in India both highlight the attempts made by extremist organizations to attract and radicalize young males. These findings emphasize the importance of recognizing and addressing the strategies employed by these groups to prevent the recruitment and radicalization of vulnerable individuals.

The analysis also brings attention to the classification of hate speech and the significance of combating its impact. A task group established by the Brazilian Ministry of Human Rights is actively working towards developing a framework to classify hate speech. This highlights a positive step towards reducing the prevalence and harm caused by hate speech, as it enables a targeted approach to addressing this issue.

Furthermore, the analysis highlights the rising reactionary demographic in Brazil, posing a threat to human rights, particularly targeting female youth leaders and expressing anti-feminist sentiment. The increase in this demographic underscores the need for continued efforts to counter hate speech and discrimination, especially towards women and gender diverse individuals.

The analysis also brings attention to the manifestation of hate speech and extremism through linguistic ridicule and mimicry of local dialects or speech patterns. Extremist groups in Brazil target various dialects, including popular, queer, and formally recognized dialects. This serves as a tool to mobilize youth while ridiculing the validity of these speech forms, often reducing them to derogatory terms such as ‘gay speech’. This highlights the multi-dimensional nature of hate speech, as it can manifest through linguistic mockery and the undermining of certain speech forms.

Online spaces, including social media platforms and study and gaming communities, can be particularly hostile towards women and gender diverse individuals due to linguistic gender stereotypes. Negative experiences and discrimination resulting from the perpetuation of these stereotypes can drive women and gender diverse people away from participating in these spaces. Those who remain may face increasingly hateful and violent experiences. Addressing and combating online gender stereotypes is crucial to ensuring inclusion and equality for all.

The impact of linguistic gender stereotypes extends beyond online spaces. Discrimination arising from these stereotypes can distort self-image and self-worth, potentially leading to various mental health issues. Moreover, these experiences perpetuate the notion that online spaces are hostile and exclusive, particularly for those who do not conform to specific gender expectations. This further underscores the importance of addressing online gender stereotypes to create a more inclusive and welcoming digital environment.

Education emerges as a pivotal factor in tackling hate speech and gender stereotypes. It is crucial for schools to address the main problems within their communities, which may include addressing physiological needs, providing comprehensive sexual education, or challenging societal roles of diverse genders. By investing in the next generation and prioritizing education, efforts can be made to create a more inclusive and equitable society.

Although the issue of gender-based hate speech may not be obvious to everyone, there is a need for increased participation beyond those who already openly oppose it. It is essential to engage individuals who may not yet be actively involved or vocal. Generating empathy and bringing these individuals closer to movements focused on creating a better world is crucial to making progress and fostering a society free from hate speech and discrimination.

In conclusion, the analysis provides valuable insights into the use of linguistic gender stereotypes, recruitment by radical groups, the classification of hate speech, the rising reactionary demographic, the targeting of local dialects, and the impact of linguistic gender stereotypes in online spaces. It highlights the importance of addressing these issues through education, increased participation, and efforts to combat hate speech and discrimination. By working towards these goals, a more inclusive and equitable society can be achieved.

Session transcript

Moderator:
Just to let everyone know, we’ll be going one round for each of our speakers, where they’ll have a chance to introduce their work and themselves. And then we’ll move on to the second round with an open roundtable discussion. And please feel free to drop questions in the chat box. Our online moderator, Bea, will be collecting these, and we will address the questions in a question and answer session at the end. All right. I see it’s 8.50. So good morning, good evening, good afternoon to everyone who’s joined. Thank you so much for taking the time to join us. My name is Stella, and I’m currently with NetMission.Asia, Malaysia Youth IGF, ISOC Malaysia, and Kyushu University. So I’ll be moderating today’s session, and it’s great to see everyone. So first off, I’d like to give the opportunity to welcome our first speaker for the session. On my right, that would be Luke Teo. Please take it away.

Luke Rong Guang Teoh:
Thank you, Stella. So I’ll just share my screen. Okay. Okay. So the topic of today’s workshop would mainly be online linguistic gender stereotypes. And you may be wondering, what does that mean? So basically, linguistic gender stereotypes are generalizations or assumptions that people make based on someone’s gender that are reflected in language. And these stereotypes include beliefs about the roles, behaviors, characteristics, and also abilities of individuals based on their gender. Now, these linguistic gender stereotypes can be reflected in different aspects of language, such as gendered pronouns, job titles, descriptive language, and also their conversational roles. And to just narrow down the scope and relate it towards the internet and my work currently, which is focusing on the adjectives, one part of language. So adjectives are one aspect of linguistic gender stereotypes. And according to Castillo-Mayon and Montes-Burgess, certain adjectives are commonly associated with women, for example, emotional, understanding, sweet, and submissive. So as you may assume or you may understand, these adjectives reinforce the stereotype that women are more emotional. On the other hand, adjectives like strong, brave, competent, or bold can also or are often associated with men. And this reinforces the stereotype of men being more dominant or logical. Now these adjectives create gender-based expectations and limitations. And these influence how individuals are perceived and treated both in the online and in the offline world. Such language, as you may assume or may understand, has the potential to shape societal attitudes and contributes to gender inequalities by reinforcing traditional gender roles and norms. Now you may be wondering, so where is the online or internet part of this workshop? Well, we’re getting to there. So linguistic gender stereotypes can also be observed in online advertisements. 
This idea actually came from myself studying my undergraduate degree at University of Science Malaysia, and I think on the panel today we also have Dr. Manjat, who was my supervisor for that course, and really guided me to make this research possible. So in the early 2000s, with the rise of data-driven advertising and targeting capabilities, online advertisements became increasingly personalized and tailored to specific audiences, including gender-based targeting. Despite increasing emphasis on gender equality in the social development goals of developed nations and its recognition as a fundamental human right, by the United Nations, studies revealed that gender stereotyping in advertising continues to endure. And according to Boyd 2021, linguistic gender stereotypes in advertising are used in targeted marketing and product positioning. As a result, to focus on a specific group of buyers, the producers persuade buyers by using the right choice of words regarding the product. So moving to my research, which was conducted with a small respondent group of 43. They were all aged between 21 to 22, so can be considered as Gen Z youth. And a total of 183 Instagram captions were selected from companies that I won’t name. And from those captions, 151 adjectives were shortlisted. And these are some of the adjectives that we asked the respondents their views on, and which gender or which genders do they feel that these adjectives describe the best. So what did the results show? Well, the results show that the majority of respondents have similar gender connotations for all of the 15 adjectives. And most of the adjectives have at least 50% of respondents each answering the same gender association. So this is a brief picture or overview of the results that we were able to get. And the participants of the questionnaire hold slightly more gender biases towards these adjectives compared to the qualitative study on previous literature that my team and I read through. 
However, there were instances where the respondents were very transparent about their gender biases, as with the adjectives sparkling and floral, which almost all the respondents thought conclusively represent only women. They felt that adjectives like sparkling or floral sell to women, and that you would not even use them to describe men. There were also situations where the participants had ambiguous gender associations with the adjectives, as with sophisticated and romantic, where the respondents’ associations were about evenly split between men, women, and both genders. So what about the way forward? There are similarly mixed-gender adjectives on both Instagram pages, which might be because those brands are slowly attuning to a more careful approach to gender characterization and opening up to the spectrum of ways people would like to identify. As for the perceptions of gender-stereotypical adjectives used in Instagram captions, the respondents conform to the gender stereotypes for some adjectives, seem conflicted in opinion for others, and have rejected the gender stereotypes associated with the rest. Seeing how language and culture are inextricably intertwined, it would be of great importance to include the role of language in bridging the gender digital divide. I’d just like to end with a quote: the tie of language is perhaps the strongest and most durable that can unite us. And I think I’ve taken up my time for now.

Moderator:
Thank you for listening, and I’m looking forward to the rest of the discussion. Thank you very much, Luke, for your brief overview and your research. I think it’s very interesting to see how we can relate it to our own experiences; most of us will have seen the different kinds of advertisements that our friends of different genders might receive. So that was the perspective from our Asia-Pacific youth. Now let’s move on to our next on-site speaker. We have with us Arnaldo de Santana. Sorry if the name is incorrect. But yes, please, your seven minutes starts now.

Arnaldo de Santana:
Thank you. Thank you, everybody. I’m Arnaldo. I’m from Brazil, representing the youth of Latin America and the Caribbean. I’m a researcher, but I’m also a lawyer and an internationalist, and I’m researching the LGBTQI community and some stereotypes that we face daily. At first, I came here to talk about some of the issues we face that are mostly linked to the specificities that the market puts on us. Gender can be read as a type of structure that gives power to some groups and puts others in a position to be, I don’t know, exploited. Also, as we are talking about stereotypes, I’d like to bring up the meaning of the scripts of gender that we face daily in our society. If you are a girl, or assigned as a girl when you are born, you have to fulfill certain developments that society has laid out for you, and if you go another way, society does not regard you well. The same goes when you are assigned as a boy. So a linguistic stereotype is a way that people react to speech varieties associated with lower-prestige groups, attributing negative characteristics to the speakers. And it all goes through a gender-structure perspective: who holds the power, and who can use this power to impose something on society. As minorities, we face problems daily, and especially nowadays, as we face some setbacks in society, it’s really important to talk about this. I’m here not presenting any slides, because I feel it’s better to open the possibility for all of us to talk about a development of the internet that does not carry these stereotypes into our days, and to try to build what we do in a more participative way. I will also reference some of my friends who are developing research about gender stereotypes, linguistic stereotypes, and how they impact tweens.
These are people aged between 8 and 16 years, children and teens, and the research looks at how the market influences their perspectives online. So we have some norms that are developed by society, and the internet reproduces them. If I am a girl on the internet, I have to develop my way of catching attention, especially if I am trying to get into the market as an influencer. I feel this relates a little to what we have today with the development of media industries that bring children to work as performers, and we face it daily. Nowadays in Brazil we have a discussion about how children who work from really early ages can handle having so much money. I feel I’m going a little off topic, but talking about this, we can see how our structures and our society are reflected on the internet, because when we talk about the internet, we also talk about power, about violence, and about patterns and standards. When you speak in ways regarded as low prestige, and when you put yourself in a position that breaks these unwritten rules, you go through a struggle in trying to get beyond the stereotypes. I feel this will be my first statement. Thank you, everybody.

Moderator:
Thank you so much, Arnaldo. Right on time. Thank you very much for the perspective coming from a completely different region. Now we’ll move to my other side, where we have another on-site speaker. We would like to hear next from Júlia Tereza Rodrigues Koole. So, yes, please, go ahead.

Júlia Tereza Rodrigues Koole:
Thank you, Stella, for giving me this opportunity to speak here at the IGF. My perspective and my narrative will be different: I will shift the scope and think about gender stereotypes that are used beyond simple prejudice. They are weaponized to mobilize specific demographics. In the Americas, and especially in Brazil, the white male demographic aged 14 to 35 is mobilized through the usage of linguistic gender stereotypes in an attempt to recruit them to radical and terrorist groups. Memes, jokes, and many types of content are used as a way to gamify hate. The perpetrator first proposes a game, simply a joke, and submits that joke in a public venue on the internet. We have seen a lot of this activity on younger platforms and in spaces related to the gaming community. Those jokes are used first to spot someone who is prone to prejudice, or prone to violence fueled by that prejudice, because we all might hold, and act on, prejudice. And there is another step. Based on the work of researchers such as Rakesh, a cyberpsychologist from India at the Rashtriya Raksha University who specializes in studying cyberterrorism through psychology, participating in these activities brings a reward, a psychological and chemical reward, to the male audience that is trying to diminish, demobilize, and attack mostly the female youth.
There is also a study at a German university, whose name I fail to recall right now, on trying to make in-service teachers aware: the majority of the audience is touched by the dynamics of the educational programs that try to convey an ethical guideline to the teachers, but there is a minority on whom the activities have no effect. They don’t even get that it is an activity meant to bring awareness to women’s questions and women’s problems, and an attempt to stop misogynistic behavior and attempts to diminish the power of women. My research studies a task group in the Brazilian Human Rights Ministry that tries to typify what hate speech is, and it’s a really important action taken by a government to try to categorize hate speech. We were trying to see whether this task group sees this gamification behavior in the youth, and it does recognize that this movement, this trend, is happening amidst our youth, our male youth. We found that although it didn’t specifically target the terrorist groups that seek to radicalize the youth with internet linguistic stereotypes, it does recognize that linguistic stereotypes and the internet can both participate in an infrastructure and a platform design that facilitates hate. So we have a demographic that enjoys what it is doing, that is apathetic to awareness discourses, and that is being co-opted into organized groups to demobilize protests and attack specific individuals, people who stand out, young female youth leaders. And this is all connected, leading to a rising reactionary demographic in my country, giving us a difficult, violent, and sad environment.
And I would like to encourage anyone who wishes to know better, or is dealing with that situation in their country, to reach out to me and the many others who are trying to strengthen human rights in the world.

Moderator:
Thank you so much, Julia, for your sharing. It’s very interesting to see how we’ve progressed from a more introductory stance on what linguistic gender stereotypes look like in real life to perhaps more extreme cases. I’d like to open the floor briefly for a short question. If anyone among our online participants who’ve been with us, or on site, has a quick question for our youth researchers about what they’ve presented so far on their efforts in researching online linguistic gender stereotypes, or any general question you’d like brought up in the roundtable immediately after our next few speakers. So yeah, a quick check around the room. Yes, please go ahead, the mic would be…

Audience:
Good morning, I am Wilson Guilherme. I am a non-binary person from Brazil, and I am part of the Youth Brazil delegation. I have a comment and a question. First, the comment, which is about how important it is to encourage debates about language diversity in digital media, and how much this especially affects LGBTQIA+ people. An important point about language is to think about the moderation of digital content, which above all needs to be done based on the context of the discourse. In Brazil, for example, we have the word bicha, which can be used in negative contexts, but which can also be used in contexts of identity promotion and affirmation. Language, when taken into account in moderation, can mitigate violations, but to the same extent I can foresee violence when it is related to the reframing of concepts. My question for the panel is: how can we reconcile content moderation agendas with the spaces and narratives of vulnerable communities such as LGBTQIA+ people and Black people, for example the Brazilian language Pajubá? Thank you.

Thank you very much. I think it’s a very interesting discussion, and thanks to the panel for initiating it. My point of view is that it is very true that social media is a kind of mirror of our regular social life and of what we are thinking, but it is probably even more reflective than a mirror. So let me share a fear. The fear is that we sometimes see how social media is being cultivated by various stereotyped ways of thinking, and how they propagate violence against various gender minority groups. Do you have any information, or can you shed some light afterwards, on how artificial intelligence is also propagating this kind of stereotyped behavior and violence against sexual minority groups?
The second thing is: do you see any possibility of creating counter-narratives to this kind of hate speech or violence that could actually be codified, some kind of positive narrative against these kinds of violence or against these gender stereotypes, and whether that is possible through artificial intelligence again? Thank you very much.

Moderator:
Right, thank you for the comments and questions from the floor. So I’ll let Julia go ahead.

Júlia Tereza Rodrigues Koole:
I would like to address Wilson’s question with the same report, which is in English. The report of recommendations for tackling hate speech and extremism in Brazil has a section about the grammar of hate speech. In Brazil, for context, we have many grammars: we have the formal grammar, we have a popular grammar, and we also have a queer grammar. And there is a targeting by those extremist groups to mimic and ridicule this grammar, which is highly based on the West African influence of the people who were kidnapped to Brazil in our colonial past, but whose heritage lived on in our grammar and our form of speech. This is being targeted as part of a project to mobilize the youth, with jokes and also protests against any approach or recognition that this grammar might receive in any public venue, maybe a social media platform, maybe television, maybe a radio program. They deny the validity of this way of speech, because it’s true, it’s sincere, and it’s the way we found to identify each other and to reorganize ourselves as a community. And this is being targeted and weaponized, labeled roughly as the gay way of saying things or the gay speech, but it’s much more than that, and much more complex, because it is not only us who use that kind of speech. The thing is that it doesn’t matter what the content is; the content is drained of meaning. They don’t care about the content, they care about the group that uses it. They will ridicule it, they will satirize it to spot people, people who might have a sensitivity to hate speech and be more gullible, ready to think they are changing society for the better by persecuting a group because of the way they speak. So yes, this report also recognizes this strategy of illegal groups to act and enact their wills and projects on the Brazilian community, but we are not restricted here to my country.
This is a specific example to enlighten the audience about the many aspects of online linguistic gender stereotypes.

Moderator:
Thank you so much, Julia, for your answer. The second question will be addressed by a later speaker, but first we’ll move on to our next speaker. Joining us online from Malaysia, we have with us Dr. Manjet Kaur. Please, if we could have the online speakers on the screen. And yes, your seven minutes starts now.

Manjet Kaur Mehar Singh:
Good morning, everyone. Am I loud and clear? Yep. Okay, thank you very much for giving me the opportunity to share some views here. Regarding the theme of today’s talk, online linguistic gender stereotypes, what I would like to focus on is this: we cannot deny that these discrimination issues exist online, but what we need to look into is the way forward. How can we address these problems? How can we improve the situation? We need to focus, for example, on inclusive language. Inclusive language has to acknowledge the diversity that the genders present. In the Asian context, for example in Malaysia, we talk about male and female, and when you talk about the LGBTQ group, there are also some sensitive issues, because of the racial and religious composition of the country, Malaysia being a Muslim country. So when it comes to LGBTQ rights, in terms of how they are represented in advertisements online, for example, there are sensitive issues and also some discrimination that exists. What is needed is inclusive language that is sensitive to all groups of people, no matter which gender category you are in, and that promotes equal opportunity for all of them. That’s basically very, very important. So how can we ensure that this is implemented in the online setting? Firstly, choose gender-neutral terms. This is sometimes very difficult. For example, in the presentation by Mr Liu just now, when you’re promoting products, doing advertisements for perfumes, if you do a survey, you will notice the kind of adjectives that are used. Are they more inclined to femininity, or do they promote masculinity? That again depends on who the product is targeted at.
However, how can we come up with a framework that addresses gender neutrality, even when you’re promoting a female-based product or a male-based product? These are issues that need to be considered; they are very, very important. Next, what I would like to say concerns the workplace context, in terms of sensitive language and the differences between male and female: how can you reinforce diversity? Earlier, I mentioned gender equality, but to have gender equality implemented 100% at the workplace is totally impossible. So we need to work towards the best we can, and that means introducing and enforcing diversity. If you look at the whole picture of online linguistic stereotypes, what if we frame part of it as promoting diversity? That’s the other side of the coin. On one side, we say it’s not fair to a particular group: online linguistic stereotypes, in terms of the classification of gender, can be considered a kind of harassment. But at the same time, you can also look at language as promoting diversity, diversity through language: the use of adjectives to describe femininity, to describe women, to describe men, to describe the third group or the fourth group. At times, we cannot simply say it is stereotypical; it can also promote diversity. You need its existence, but how it is used and how it is addressed is very, very important. It starts with education, with creating awareness among youngsters not to misjudge, but to respect the diversity that is presented through the language used to describe or label someone.
This will contribute to a sense of belonging for all groups of people. Let’s say I see a product being advertised, and it is described with certain words that I’m happy about, while another person looks at the same product from a different perspective in terms of how it is described. How do you create a sense of belonging for both parties, when one is going to use the product but is not happy with how it’s described, and another person is happy with how it’s described? It boils down again to early-age education: educating youngsters to be more sensitive about how they represent a product, teaching them to use words more responsibly, but at the same time to respect the diversity that comes with each gender, how each gender is labelled, how each gender is described, and so on. At the same time, we can make language less biased towards one gender. That is very, very important to avoid gendered assumptions. We always have these gendered assumptions that when you want to sell a women-based product or a male-based product, you have to use certain words to describe that particular group. Diversity can be there, but at the same time you need to ensure there is no bias. So what I would like to focus on here is coming up with a general language guideline: how you can ensure that, with diversity in the linguistic aspects used online, there is no discrimination or bias, while at the same time promoting the use of diverse linguistic elements.

Moderator:
So sorry to cut you off here, Dr. Manjet, but thank you very much. In the interest of time, we’ll move on to the next speaker, and we’ll come back to your points on the general language guideline, which I think is very interesting. We’d like to move on, perhaps cross-region, to our next online speaker. If you would please go ahead. Okay, sure, thank you.

Umut Pajaro Velasquez:
Hello everyone. First of all, I would like to give thanks for the invitation to this panel. My focus during this conversation is going to be more related to the user experience of gender-diverse people on social media, and how gender stereotypes actually affect the way content is displayed there. I focused my research especially on TikTok, because I wanted to understand the intersection of being young on social media when you also identify as gender diverse, for example non-binary, queer, or other identities outside the binary of male and female. One of the things I realized in doing this research is that most of the platforms that rely heavily on content moderation actually weaponize it against the content of people who identify as gender diverse, shadow banning or blocking that content more often than content from people who do not identify as gender diverse. So I asked people directly: I conducted 53 interviews with different people from Latin America, who told me their experience using the platform and what they had to do to align their identities to TikTok, presenting themselves in a fairly normative way to follow the expectations of the platform and of the algorithm, in order to remain present on the platform without problems and without being targeted by shadow banning or content censorship. Most of the censorship came from the use of specific hashtags related to the language that we use as an LGBTQI community or as gender-diverse people online. In most of the cases I studied, when they used specific hashtags or specific words, the content was sometimes less visible, or simply taken down from the platform without any reason.
And when they asked why the content was taken down, there wasn’t an explanation of why exactly it happened; they were only told it was against the rules of the platform. That is when I came up with the question of how exactly we can own an identity on a platform, within an algorithmic system that does not promote all the identities inside of it. And it’s a hard one to answer, I have to say. Most of the people said that, inside these platforms, they never fully express their identities and never fully feel accepted, because of the restrictions or the self-censorship they have to practice, on top of what they already face as queer people in their everyday life. So probably most of the content that you see about LGBTQI people on these platforms is related to a trend that was set by someone who is already relevant on the platform, because when someone who is already relevant creates content related to LGBTQI people or gender diversity, that content somehow becomes acceptable on the platform. When you try to propose other things, things they don’t consider part of a trend or something like that, that content doesn’t show as much as the rest, and the moderation becomes a way to restrict how people present themselves online. This leads to one of the many things they said: these platforms need to improve and be clearer about their community standards, in terms not only of language but also of the content that is allowed, because sometimes the content they portray in their profiles is very similar to content from gender-binary users, but somehow it does not show in the same way, or it is simply taken down without any reason. So that was one of the many things that they said.
Another thing, a general recommendation I tried to draw out after the many conversations I had with them, is that we need to find, as a community, a way to shift this system towards a space in which the identities of these historically marginalized and underrepresented communities actually make sense, and in which the mediation made by the different algorithms ensures the digital rights of the community: free of discrimination and censorship, without the fear of constantly having to clean or heal their spaces, or of constructing their identities around what the space defines as normal.

Moderator:
So sorry to cut in here, just in the interest of time. Right, thank you so much, it’s right on time for seven minutes. We’d like to move on to our next speaker for their seven minutes. Joining us again from online, we have Dhanaraj Thakur. So yes, please go ahead with your presentation.

Dhanaraj Thakur:
Okay, hello everyone. Can you see and hear me okay? Yep, all okay. Yes, great. Good. Thank you so much. Thank you to the organizers for the session, and hello everyone. My name is Dhanaraj Thakur. I am the research director at the Center for Democracy and Technology. I’m from Jamaica in the Caribbean, but I’m based in the United States. CDT is a tech policy organization, also based in the United States, that focuses on human rights in digital spaces. There are two main points I want to make with regard to the overall theme of this session. First, the ways language can be used for hate and to promote violence, which our previous speaker already alluded to, and the ways gender stereotypes can be leveraged in language to promote false information, are key aspects of the online information environment and contribute to the gender digital divide. The second point is that we often think of artificial intelligence tools, like natural language processing tools and large language models, in terms of the ways they can be used to address these problems, for example to clean up the kind of hate speech, violence, and misinformation that is targeted at women and other gender identities. I will argue that this can actually make the problem worse. So, to talk a bit more about the language of hate and mis- and disinformation: this kind of violent rhetoric and language, as well as mis- and disinformation, is predicated on gender stereotypes, which previous speakers described in better detail earlier, and all of it often has disproportionate impacts on women. There is research from many different countries around the world showing this to be the case. There is less research focusing on non-binary and trans people, but the research that exists shows that the problem could be even worse for those groups.
One important aspect of this is to take an intersectional approach: to look not just at gender but at other dimensions of identity. When we do that, we find that there are subgroups that are targeted even more with this kind of violent speech and this kind of mis- and disinformation. This leads to several kinds of impacts, one of which is a negative impact on the gender digital divide; it actually makes it worse, and again there is research, particularly in the Global South, that shows this. It undermines the political participation of women and other gender identities. It has serious economic and mental health impacts, and it has significant impacts on freedom of expression through chilling effects; in other words, it suppresses the speech of the people who are targeted, very often women in public life. One example I want to use is some research that we did to help illustrate this, focused on women of color political candidates in the 2020 U.S. election. Women of color is a term used in the U.S. to describe women of non-European descent, so Asian-American women, Latina women, African-American women, and others. We looked at data from Twitter during the 2020 elections, using a representative sample of all the candidates that ran at the federal, national level in that election, and we found a couple of things with regard to women of color candidates. Here I want to emphasize the intersectional approach, to illustrate how these kinds of hate speech and misinformation are targeted at particular groups of women, not women in general. What we found was that women of color political candidates were twice as likely as other candidates, including white women and white men, to be targeted with misinformation. They were four times as likely as white candidates to be subjected to violent abuse online.
And they were more likely than others to be targeted with a combination of false information and online abuse. I use this example to illustrate the severe impacts that particular women face online because of the way language is used in this hateful and violent way, as well as to propagate gender stereotypes and promote false information about women. The other issue I wanted to talk about is the use of AI, which someone in the audience asked about, and I’ll focus on large language models. Large language models, think ChatGPT, are essentially a machine learning technique that looks at large amounts of data, in this case text, and makes predictions about what kind of text the user wants to see. So with ChatGPT, you might put in a prompt, what day is it today, and based on all the training data it has available, it can make a guess. And to be clear, that’s all large language models do. They make guesses, very good guesses, but all they’re doing is making guesses or predictions. They’re not thinking, they’re not human, they’re just making guesses. The challenge for us comes when large language models are applied to non-English languages. Most models, like ChatGPT and many others, are based on data that’s available online; they look at the entire web and draw data from that. As we know, the majority of the web is in English, even though the vast majority of the world does not speak English, and this is a paradox and a problem. What that means is that there are many languages in the world which are referred to as “low-resource” languages, and I use quotes because I’m not sure that’s the appropriate term, but that is how computer scientists refer to them. In other words, there is not enough data available in those languages to support the training of these large language models.
Examples include Hindi, which is a very big language, Amharic, Telugu, Zulu, and so on and so on. These are not small languages in terms of population size, but they don’t have that much data available online. So because these languages are low-resource, the use of large language models in those cases won’t be as effective. And this is critical because it has implications for the use of these models to address some of the problems I mentioned earlier. The violent speech targeted at women and non-binary and trans people, and the misinformation targeted at women, particularly those in public life. What happens when we’re using non-English languages? These models, as a tool to solve the problem, will fall short. And here is the final point I want to end with, that in many of the countries where we talk about low-resource languages, many of the countries are in a global salt, where the digital divide exists, that is fewer people are online. There is a significant gender digital divide, which means that men are more likely to be online. Men are more likely to be online, they are producing more content online, which is the content that large language models use. So we have a vicious cycle that’s happening. The models are using content in these non-English contexts that are produced by men to propagate further stereotypes that can undermine and create further problems for the addressing problems like violence, feature and gender, mis- and disinformation. So I will stop here for now, and then we can talk further in subsequent discussions. Thank

Moderator:
Thank you very much. Right, thank you very much, Dhanaraj, for your sharing. It's really interesting that we addressed the question brought up earlier from the floor on what it looks like now to use AI to address this issue. And I really like that you mentioned how gender linguistic stereotypes essentially form a vicious cycle, in which the issue of the majority of content being in English also relates to the Global South and Global North divide, and ends up reproducing traditional gender stereotypes about who is more likely to be online. And so we have time for our

Juliana Harsianti:
final speaker, just for the first round, seven minutes, also joining us from online, Juliana. Hello. Thank you, Stella, Luke, and NetMission for inviting me to this interesting conversation. My name is Juliana, I'm from Indonesia. I usually introduce myself as an independent researcher, but this time I will wear my hat as a translator for Global Voices, a citizen journalism platform that also works on the multilingual internet. Okay, I will start with the fact that language has such an important role in building perception and image. Why? Because language can be seen as a form of magic that impacts the world. What we say and how we use language affects our thinking, imagination, and our reality. My language, Indonesian, doesn't have grammatical gender like Spanish or French, so I grew up without knowing how gendered language shapes perception. But that changed when I started to learn French and then Spanish. At that time, I realized that grammatical gender has a crucial impact on how the people who use the language perceive themselves. In both of those languages, the gender automatically changes to masculine when a plural subject includes mixed genders. When I talked to friends who speak those two languages, they said it makes them think that the masculine, or men, have a better position in the community, or are superior to the feminine, or women. Besides gender, language also has nuance. I will take an example from English. There are some words which have a negative connotation and are applied only to women and girls. Bossy, for example, has a pejorative meaning and is targeted at women and girls who try to lead a community or group. It makes people think the women and girls act like a boss. That never happens to men who try to lead in the community or group. Pejorative meanings also occur in other languages that don't have grammatical gender.
In Indonesia, for example, sadly, this pejorative language and slur words are used quite a lot in online bullying targeting women and LGBT groups, using these kinds of stereotypes. And what does this mean in the digital world? As I mentioned before, pejorative and slur language has been used to attack women, girls, and LGBT people when they are active on the internet. Several studies have shown that online bullying makes them less active on the internet. This is a negative development when it comes to minimising the gender digital divide. It means women, girls, and LGBT people will be less active and afraid to speak up in the digital world. Another case is about a function of the internet, specifically machine translation. There are some words that, when translated from other languages into English, are automatically translated with a masculine subject. For example, if I want to translate a sentence about a doctor, the result in English is "he is a doctor". But when it comes to a nurse or a secretary, the subject is feminine, a woman. Today we also have ChatGPT, which was mentioned in Dhanaraj's presentation, and other large language models that use AI to scrape and train on sources from the internet. Why has this become a problem? Because I am afraid that in the future it will discriminate against certain races, genders, and languages if we don't start to promote more gender-neutral and inclusive language. And what can we do as a community? We can provide constant input, discussion, and reflection to make language more inclusive and gender-neutral in digital and real life. I appreciate that machine translation is now more gender-neutral and doesn't associate certain words or job occupations with a certain gender. And that is a result of community input, with people constantly giving feedback to the machine translation companies.
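The kind of community audit Juliana describes, checking whether a translation system defaults gender-neutral occupations to gendered pronouns, can be sketched as a simple classifier over translated output. This is an illustrative sketch only: the `default_gender` helper and the example sentences are hypothetical, and in practice the strings would come from a real translation system's output.

```python
# Pronoun sets used to classify the gender defaulting of an English
# translation; possessives are included so "his"/"her" are caught too.
MASCULINE = {"he", "him", "his"}
FEMININE = {"she", "her", "hers"}

def default_gender(english_output: str) -> str:
    """Report which gendered pronouns a translated sentence defaulted to."""
    words = {w.strip(".,!?").lower() for w in english_output.split()}
    has_m = bool(words & MASCULINE)
    has_f = bool(words & FEMININE)
    if has_m and not has_f:
        return "masculine"
    if has_f and not has_m:
        return "feminine"
    return "neutral/mixed"

# Hypothetical outputs of the pattern Juliana reports: occupation words
# pulling the translation toward one gender.
print(default_gender("He is a doctor."))      # masculine
print(default_gender("She is a nurse."))      # feminine
print(default_gender("They are a teacher."))  # neutral/mixed
```

Running such a check over many occupation sentences, and reporting skewed defaults back to the translation provider, is one concrete form of the community feedback loop described above.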
In closing, I believe language is dynamic and will keep growing over time in the real and digital worlds. It needs constant work from communities, who also give input on making language more gender-neutral and more inclusive, so it can be fairer for everybody. Thank you, and I look forward to the discussion.

Moderator:
Thank you so much, Juliana. Very interesting to hear from your perspective in the translation industry. So we've heard from all our speakers for the first round, and I'd like to begin our second round of round table discussion. Perhaps I could start off with Dhanaraj with a question on what measures you feel can be taken to improve things, coming from your perspective having researched the impact and, in your case, the case against the potential of using AI. What do you think needs to be discussed more, particularly?

Dhanaraj Thakur:
Yes. Thank you for the question. I think this topic is precisely what needs to be discussed more, particularly within the industry, around what I'd call the gender gap in training data. I mentioned the problem in the Global South of the gender digital divide. There was a recent study from the University of Pittsburgh that looked at the training data used for ChatGPT. Let's take that as an example. It found that only 26.5% of that training data was authored by women. So the vast majority, almost three-quarters of the data, was authored by men. I often think about the implications of this. Consider how ChatGPT, or models like it, are now being incorporated, for example, in schools and in the education system, with that kind of gender gap in the training data, and what implications that will have for youth going forward. I think many of the other speakers have already pointed to some of these kinds of problems. There are questions to raise, therefore, particularly at a policy level, in the education system and in industry, about the gender gap in training data.

Moderator:
Right. Thank you so much, Dhanaraj. Since you mentioned schools, and we have an educator with us on our panel, I'd like to hop over to Dr. Manjet for your thoughts. You mentioned earlier that you were looking for a general language or linguistic guideline. How do you foresee this relating to what Dhanaraj mentioned about the data being used to train these large language models?

Manjet Kaur Mehar Singh:
Hi there again. Okay. Just now, what I mentioned was on the learning guidelines. So basically, coming from a country in Southeast Asia, Malaysia, which is very much governed by religious rules, an Islamic country, there are a lot of things, when it comes to this kind of bias, language usage, and stereotypical issues, that are swept under the carpet. So what is actually needed is to bring this out visibly, to be more open and explicit in terms of learning and development. That's very, very important. It starts with education from the beginning, because this is an issue that exists but is not being addressed, in the context of Malaysia, for example. So it has to start with education, and it has to start with the educators themselves. If the educators do not believe in equality and, at the same time, in promoting diversity, it will be a failure. So educators themselves must be trained on how they're going to learn, and how they're going to make their students learn and develop on this. Next, there are no clear rules, regulations, or policies set by the government on these matters. So it should be led from the top. That's very, very important. It should be top-down. When there are visible, clear rules and acts on how language is used to represent a particular group, there are rules that people can fall back on. For example, with advertising companies nowadays, if there's no auditing done and no clear rules communicated to them, nobody will care. So that is very, very important: leading from the top, having some rules, some policies in place. And next, the workplace itself.
At the workplace, for example in the advertising industry, and in all the other industries whose products relate to particular groups, there should be training for their employees, for their staff, to talk about these matters openly, to discuss them, and to have people of all groups sit down together and deliberate on the matter openly. There should be no criticism against any particular group, and no particular group should be neglected or marginalized. The discussion shouldn't be dominated, for example, by males only. So these are the things that are very, very important: training and awareness at the workplace, so that if learning and development did not work at school, this is where it happens.

Moderator:
Thank you so much, Dr. Manjet, for your perspective. And hopping over to some of our on-site panelists, maybe we can get Arnaldo: what do you think of what was mentioned earlier about the potential negative case against the use of AI currently to address this issue?

Arnaldo de Santana:
All right. I really think that this is something we need to research more, because I see that AI has the prospect of improving things and bringing something different and new, but it also reflects the people who develop it. So if we have structures of power, and we reproduce them to impose what is correct or what is not correct, it might not be quite applicable. I was also thinking about the question that Will sent us about marginalized groups talking in languages such as Pajubá. And it reminded me of the variation of Portuguese that came with the people who were colonized, and also of indigenous people speaking languages that were, like, borrowed from the world, especially in Brazil, where we don't know how to speak them, how to teach them, and how to keep them going. So I feel that we have to develop something, especially on the internet, that allows our existences to be more participative, so that we do not erase our memories and our lives just because colonialism is there to dictate what might be and who has the power. And this is always the cisgendered white male from Europe.

Moderator:
Right, thank you so much, Arnaldo. So that wraps up that question. We're moving on to our next question, which is: how can these online linguistic gender stereotypes result in negative experiences for youth, both online and in the real world? I'd like to start with our on-site speaker. Julia, would you be able to share on this question?

Júlia Tereza Rodrigues Koole:
It is hard to pinpoint, or to sort out where to start, because there are so many possibilities, and they are never positive, probably. In my line of study, we have minimum-damage outcomes, and we also have outcomes that end lives, and that end groups too, because this mobilization of online linguistic gender stereotypes on the internet can drive many women and many gender-diverse people, trans people specifically, away from spaces. That is probably the most recurring negative effect. The ones who power through, who do decide to move on and face the discrimination and be there, "there" meaning any social media or platform group, study group, or game community, if they decide to stay in that community, will over and over have gradually more hateful and more violent experiences, which can result in distortion of self-image and self-worth. It can lead to many mental illnesses, and it will always end up building in their minds that that is not a space for them, or that they should be under a specific expectation of what gender is and how they should behave and how they should talk.

Moderator:
Right, thank you so much, Julia. You mentioned the perception of self-worth, and I think it's really good to ask our speakers: how can such online linguistic gender stereotypes affect users' perceptions of their self-worth and value? And the secondary follow-up question is: what implications does this have for the current gender digital divide? I'd like to start this round of question and answer with your opinion, Umut.

Umut Pajaro Velasquez:
Okay, well, my research is about the effects that this kind of use of gender-stereotyped language has on people's self-identity. Most people told me that they never got to feel fully identified with the platform they were using, because most of the time they had to mold themselves into something that they aren't. So they developed problems like anxiety about how to present themselves online in a way that doesn't go against the community standards. Some issues came up, and sometimes people actually stopped using the platform because they never felt they belonged there; they felt left behind in all the conversations, and it became an issue in the way they socialized with the rest of their partners or the rest of the community, because they can't fully express themselves on the platform. So we see that the platforms' community standards, or the way they moderate content, are seen as not harmful, but actually they are when it comes to gender diversity, because people who do not fit the norms or the roles expected by the gender binary are affected in a way that they can't fully express themselves on the platform. And that has consequences for their mental health.

Moderator:
It's very, I mean, I think it's very enlightening that you mentioned that they never feel fully identified. And I guess the sense of belonging, which was also mentioned earlier by our panel, is really important to consider. So for the same question, I'd like to go over to Juliana: what are your thoughts on how such online linguistic gender stereotypes can affect self-worth and value?

Juliana Harsianti:
Okay. I will talk about the slurs and negative words that are addressed to women and girls, and how, as I mentioned, they affect how women decide to be active on the internet. So yes, as Umut said, it is important to address this kind of online bullying with certain negative words. I understand that languages have different nuances in different cultures, but if some language has a certain negative impact or negative meaning in some culture, maybe we can promote more inclusive or more gender-neutral language, so people feel safer expressing themselves on the internet. And the second point is about providing content on the internet with women's and girls' perspectives and profiles. Wikipedia, for example, has organized several WikiGap events. WikiGap translates and creates content about women, so the internet will have more content and more profiles about women and by women. So that is my opinion.

Moderator:
Right. Thank you very much, Juliana. As you mentioned, those are real-life examples of what generally happens online. And I think we have an intervention from Luke.

Luke Rong Guang Teoh:
To add on to that point: what we've been seeing is that the gender divide has been increasing ever since 2019. Despite the pandemic forcing a worldwide digital transformation, the emerging technologies and rapid advancements have still resulted in women being left behind. And global statistics, in my opinion, do not do this issue justice, as the gender divide worsens when considering further marginalized women, like the elderly or women in rural areas or other parts of the community. According to UNICEF, more than 90% of jobs worldwide have a digital component. And with most of the data on the gender digital divide being on women above 18, what we're seeing is that there's not enough research done on women and girls below 18. Basically, my point is that these online linguistic gender stereotypes will most definitely affect their perceptions of what jobs or careers they are, quote-unquote, able to choose or are supposed to be for them. Thank you.

Moderator:
Right, so thank you very much from our speakers for a round two of sessions. So we’re coming up to the last six minutes. I’d like to ask if there are any questions from our on-site participants and online as well. Right, go ahead, please.

Audience:
Hey, okay. So thank you. My name is Renata, I'm from the Brazilian Youth Delegation. First of all, thank you for the panel, really, really interesting. I want to make a somewhat tangential comment and discuss a bit about platform algorithms, especially on visual platforms, maybe related to what Umut Pajaro Velasquez mentioned before about TikTok and Instagram. We see young girls using platforms to promote themselves, using their own bodies as a commodity and being vulnerable to predators. So I was wondering what the panel thinks we can do to protect our youth on digital platforms considering this, how we can moderate comments and language, and how civil society can act in defense of our youth, especially young girls, on these visual platforms. Thank you.

Moderator:
Thank you for the question. Maybe, Dhanaraj, would you have any comments on that?

Dhanaraj Thakur:
Yes, thank you for the question. I'll make maybe three quick suggestions or thoughts. One is that there's a lot the platforms themselves can do. For example, you mentioned TikTok, Instagram, and others: they could better design their platforms to allow youth and other users to better control, or push back against, the kind of bad content they might receive. There's also the privacy issue and the targeted advertising model that all these platforms use. Having greater privacy protections on these platforms can reduce the degree of targeting, and therefore the degree of what I call algorithmic amplification that you'll observe on these platforms. For example, in a case of young girls being exploited, the degree to which the algorithms will promote that kind of content can be reduced on the platform side if there are changes to the design and to the incentives they have in place. And the last thing I'll say is that a lot of what happens on the platforms is still unclear, because as researchers, governments, and civil society activists, we don't have sufficient insight into what's happening on the platforms. What's important there is that the social media platforms provide more data, in a safe and secure way, for researchers to better understand what's happening, because then we ourselves could come up with better solutions to address some of the problems that the person in the audience raised.

Moderator:
Right, thank you very much, Dhanaraj, for your comment on that. I guess we can see that, hopefully in the future, we definitely need more representation from the private sector on such an issue. So I'd like to move on to a question we have from an online participant. Thank you very much, Tsukiho Kishida, for your question; I'll just read it out. "Thank you for sharing important perspectives. I am a master's student in sociolinguistics, researching cyberbullying in terms of communication. I was interested in the presentation. My question is: it's difficult to judge whether some hate speech stems from gender bias, because there are many factors in a context. Under this situation, how should we tackle hate speech arising from gender stereotypes?" So perhaps we'll have a youth perspective on this, and then we'll hop over to Dhanaraj again, or anyone else from the panel who wants to take this question. Maybe Umut, could we get your input on it? Okay. What should we do to tackle hate speech from gender stereotypes?

Júlia Tereza Rodrigues Koole:
Firstly, take care of youth. We need to bet on the next generations. Our age group, from 20 years old to 50 or 70 years old, is already dealing with too many problems that stem from education in early childhood. There are many ways to do that, and each school should study the main problems in its community. Sometimes girls have problems addressing their physiological necessities. Other communities have more problems talking about sexual education, and others have problems talking about the social place of women and men and other gender-diverse people. But also, you as an undergraduate can study how to generate empathy in people who are now disconnected from this scene. What can you do to bring them closer to you, to the topic, and to the subject? Because we also have a really apathetic population in our age group, among the undergraduates, the majors, the PhDs, who don't want to progress, who don't want to go any further, because they think it's obvious, like everybody deserves equal rights. But how can we captivate the audience who isn't opposed but isn't actually involved in the development of a better world?

Moderator:
Thank you, Julia. So we’ll just hop off to one of our online speakers, Dhanaraj, and then we’ll follow with Arnaldo’s intervention. Thank you.

Dhanaraj Thakur:
Great, thank you. I fully agree with Julia's response. I just wanted to add that, and in fact I think Julia mentioned this earlier, there is a group of younger boys and men that are heavily influenced by what researchers call the manosphere, this kind of bubble of hate speech and gender stereotypes that drives a lot of the hate that comes from them. I think a big issue here, then, is for young men, men, and boys, particularly cisgendered men like myself, to reflect and consider the impacts of hate speech and/or false information that we might share online. And as I said earlier, there has to be a degree of empathy. But I think starting with young men and boys is important.

Arnaldo de Santana:
I'd like to add that although we don't have any legislation that works internationally to address these patterns and to classify something directly as hate speech or as a gender stereotype, I feel that one can be used to identify the other. And probably, in the future, the way to tackle it must be by breaking the stereotypes. But nowadays I feel that it's not necessarily viable. We have so many entrenched stereotypes that we need to break and innovate on daily that I feel we need more time to talk about it. For now, one can be used to identify the other, so we can try to do better in the future.

Moderator:
Thank you so much, Arnaldo. So, just quickly reading out Umut's response in the chat: probably changing the narrative around the gender stereotype that is under attack, to generate a response that leaves no doubt that what is actually said is hate and not freedom of expression, and that it affects the human rights of women and gender-diverse people. So, really quickly, Juliana, if you could keep your comments under one minute.

Juliana Harsianti:
Okay, I'll make a short comment, because the other points were already mentioned by Julia and Dhanaraj. I think, yeah, it's been quite a challenge to beat hate speech in online spaces, because when some women and girls are attacked in online spaces, some people will say that it's just your feelings, so just don't take it to heart. But I think it will take empathy, and not regulation as law at the country level, but more community regulation: how things can be shared or talked about in the community, with action on the ground from community members online. Thank you.

Moderator:
Sorry to cut you off here, because we're over time by five minutes. Thank you, everyone, for joining our panel. If I could just get everyone to come in for a picture, if you could have your video on, Dhanaraj, Umut, Dr. Manjet, and Juliana. And if anyone else from the audience wants to join, that is also okay. We'll just get a quick picture with everyone. All right. Thank you. Thank you so much, everyone. Thank you so much for your sharing. I think it's really great that we had an opportunity to discuss this. We see a lot of future for this topic, and maybe we'll see everyone at a regional IGF or at the next IGF next year. So thanks again, and if you're interested in networking with any of the speakers afterwards, please feel free to contact the session organisers and look out for more from netmission.asia. We'll definitely be continuing the discussion on this topic, leading with a youth perspective from the youth ourselves. So thank you once again to our speakers joining us from all across the world, and to all our participants and panellists here.

Arnaldo de Santana

Speech speed

115 words per minute

Speech length

1022 words

Speech time

533 secs

Audience

Speech speed

135 words per minute

Speech length

597 words

Speech time

266 secs

Dhanaraj Thakur

Speech speed

177 words per minute

Speech length

2070 words

Speech time

702 secs

Juliana Harsianti

Speech speed

127 words per minute

Speech length

1105 words

Speech time

522 secs

Júlia Tereza Rodrigues Koole

Speech speed

96 words per minute

Speech length

1496 words

Speech time

931 secs

Luke Rong Guang Teoh

Speech speed

154 words per minute

Speech length

1129 words

Speech time

440 secs

Manjet Kaur Mehar Singh

Speech speed

128 words per minute

Speech length

1557 words

Speech time

727 secs

Moderator

Speech speed

182 words per minute

Speech length

1982 words

Speech time

654 secs

Umut Pajaro Velasquez

Speech speed

137 words per minute

Speech length

1167 words

Speech time

512 secs