Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27

9 Oct 2023 00:00h - 01:00h UTC

Event report

Speakers and Moderators

Speakers:
  • Connie Ledoux Book, Chair of the National Association of Independent Colleges and Universities and President of Elon University
  • Divina Frau-Meigs, UNESCO Chair for Knowledge and Future in the Age of Sustainable Digital Development and Professor at the Sorbonne Nouvelle University
  • Lee Rainie, Incoming Founding Director of Elon University’s new center on the digital future, previously longtime director of internet and technology research at Pew Research Center
  • Janna Anderson, Director of the Imagining the Internet Center and Professor of Communications at Elon University
  • Alejandro Pisanty, Internet Hall of Fame member and Professor of Internet Governance and the Information Society at the National Autonomous University of Mexico (UNAM)
  • Siva Prasad Rambhatla, Researcher, Speaker, and Former Professor of Anthropology and Leader of the Centre for Digital Learning, Training and Resources at the University of Hyderabad, India
  • Francisca Oladipo, Vice-Chancellor at Thomas Adewumi University, Nigeria, and Professor of Computer Science
  • Wei Wang, Member of the IGF Dynamic Coalition on Data and Artificial Intelligence Governance and Teaching Fellow at the Fundação Getulio Vargas (FGV) think tank in Brazil
  • Eve Gaumond, Research Associate at the University of Montreal’s Public Law Research Center
  • Renata de Oliveira Miranda Gomes, IGF Youth delegate to IGF 2023, selected to represent Brazil
Moderators:
  • Connie Book, Chair of the National Association of Independent Colleges and Universities and President of Elon University
  • Dan Anderson, Special Assistant to the President, Elon University

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Siva Prasad Rambhatla

The analysis highlights various important aspects of the impact of technology on education. Firstly, it emphasizes that technology is a medium that is guided by humans, and that it proved extremely useful in facilitating education during the COVID-19 pandemic. Humans play a crucial role in feeding into technology and guiding its development, and technology has been instrumental in enabling educational continuity while traditional in-person learning was disrupted.

However, another significant finding of the analysis is the existence of a digital divide that poses challenges to education. This digital divide is characterized by disparities in access to technology and online education resources. The research notes that not everyone has access to the necessary equipment and broadband connectivity, which hinders their ability to fully participate in online learning; an illustrative example is given of students who had to climb trees to receive internet signals. The divide was particularly pronounced during the COVID-19 pandemic, and it disproportionately affects individuals from disadvantaged backgrounds, exacerbating existing inequalities.

To address the educational needs and promote inclusivity, it is argued that education should be more inclusive, multicultural, and locally relevant. The analysis stresses the importance of adopting AI learning models that are designed to be inclusive of diverse perspectives and cultures. Furthermore, it highlights the need to recognize that subject learning cannot be universal and should be tailored to the specific cultural contexts and needs of different communities.

The analysis also sheds light on the challenges posed by generative AI, particularly in the context of copyright and plagiarism. It is pointed out that generative AI technology has the potential to bypass traditional learning processes and facilitate easy content generation, which can have negative consequences on the creative thinking ability of learners. This aspect raises concerns about copyright infringement and plagiarism, indicating the need for safeguards and ethical considerations in the use of generative AI in education.

On a positive note, the research suggests that AI technology can fill gaps in the shortage of teachers and instructors, and it also provides opportunities for innovative course design. However, it is emphasized that the design and implementation of AI technology should be approached with caution to fully harness its potential. This implies considering ethical implications, promoting transparency, and ensuring proper oversight to mitigate potential risks and biases that may be embedded in AI algorithms.

The analysis underscores the existence of a real and persistent digital divide, which is influenced by socioeconomic and cultural factors. It is observed that individuals with access to infrastructure and resources benefit more from digital advancements, while socioeconomic and cultural backgrounds contribute to the perpetuation of this divide. The presence of international groups is found to slightly reduce this divide, indicating the potential of collaboration and global initiatives to address the issue.

It is also highlighted that biases and discrimination in AI algorithms pose a significant challenge. The analysis acknowledges the existence of biases and discrimination in AI algorithms and emphasizes the need to address these concerns. The research does not provide specific supporting facts in this regard, but it implies that efforts should be made to identify and rectify biases to ensure fair and equitable outcomes.

A noteworthy observation from the analysis is the importance of governmental intervention and the involvement of private firms in bridging the digital divide and countering exclusions and biases. The research suggests that governments and private firms should invest in initiatives to reach larger sections of society and ensure that technology is accessible to all, regardless of their socioeconomic or cultural background. This would require strategic planning, substantial investment, and collaborations between various stakeholders to create a more inclusive and equitable educational landscape.

Finally, the analysis highlights the need for academics to propose alternatives to address biases in the digital medium. Further research and discussions are needed to explore innovative approaches and strategies that can mitigate biases and promote fairness in the use of technology in education.

In conclusion, while technology has played a valuable role in education, it is important to address the challenges posed by the digital divide, generative AI, biases in AI algorithms, and the need for inclusivity and local relevance. Governments, private firms, and academics all have a crucial role to play in ensuring that technology is harnessed ethically and equitably to enhance access to quality education for all.

Renata de Oliveira Miranda Gomes

The presence of digital platforms has significantly increased in higher education, particularly during the COVID-19 pandemic. These platforms have incorporated artificial intelligence (AI) to revolutionise the learning process and facilitate new ways of exchanging knowledge. One such tool, ChatGPT, has emerged as a valuable resource for enhancing learning experiences.

The speakers highlight the advantages of incorporating digital platforms and AI in higher education. Firstly, the ease of access and availability of digital platforms have made learning more accessible, especially during the pandemic when in-person classes were disrupted. Additionally, the incorporation of AI has allowed for innovative learning methods and the exploration of new ways to deliver educational content.

One of the major concerns expressed is the gap between students and educators in accepting these new platforms. Some resistance stems from the fear that these platforms may facilitate plagiarism or promote shortcuts in assignments. Overall, however, the stance on this point remains neutral, indicating a need for further dialogue and understanding between students and educators to address these concerns effectively.

Despite the concerns raised, ChatGPT emerges as a promising tool for learning. It has the potential to save time by generating bullet-point summaries or highlights of reading material. Moreover, its use can foster the development of critical thinking and analytical skills among students.

The speakers emphasize the importance of collaboration between educators and students in the effective use of AI in education. They highlight the significance of users influencing AI’s functionality and tailoring it to meet specific learning requirements. This collaboration can lead to a more beneficial and effective integration of AI in education, ensuring its positive impact on achieving SDG 4: Quality Education.

Furthermore, the inclusion of AI in initial learning processes is seen as an important step towards transforming education. The state of Piauí has taken a notable stride by including AI in its high school curriculum, making it the first state in Brazil to do so. This initiative demonstrates the potential for AI to enhance teaching and learning methodologies at an early stage.

Overall, the speakers express a positive sentiment towards incorporating digital platforms and AI in education. They acknowledge the potential benefits of these technologies in improving access to quality education and fostering a more innovative and effective learning environment. With further collaboration, dialogue, and understanding, the successful integration of AI in education can be realised, ultimately contributing to the achievement of SDG 4: Quality Education.

Lee Rainie

Elon University is taking a stand in upholding the essential principles for the Internet and Artificial Intelligence (AI), which are crucial for safeguarding human rights, autonomy, and dignity. The university is diligently following these principles, which bring time-tested truths to the age of AI. By doing so, they ensure that the development and use of AI technology align with ethical considerations and respect for individual freedoms.

As the influence of AI spreads, universities like Elon recognize the need to study and disseminate insights about how this technology affects people. They understand that AI technologies can enhance or even surpass our cognitive capacities and are becoming a prominent part of our lives. Therefore, it is essential for higher education institutions to promote new literacies and best practices to empower individuals and equip them with the skills needed to navigate this AI-driven world.

In the age of AI and smart technologies, human traits such as critical thinking, sophisticated communication, teamwork, and emotional resilience are becoming increasingly valuable. These unique qualities distinguish humans from AI and need to be honed. Universities like Elon acknowledge this and emphasize the importance of identifying and exploiting these distinctively human traits and talents. By doing so, individuals can find their place in a world where AI is becoming more integrated into various aspects of society, including the workforce.

It is crucial to recognize that AI should serve humans and not the other way around. This principle is advocated by experts like Mr. Rainie, who emphasizes the importance of domesticating technologies to serve our needs and enhance the well-being of individuals and communities. Acknowledging and implementing this principle ensures that AI technology is developed and utilized in a manner that prioritizes and respects the interests, autonomy, and dignity of human beings.

In conclusion, Elon University’s commitment to upholding the principles for the Internet and AI is commendable. Their efforts in studying the impact of AI on society and promoting new literacies and best practices are crucial in preparing individuals for an AI-driven future. Recognizing the distinctively valuable human traits in the age of AI and advocating for AI technology to serve humans are essential for maintaining a balance between technological advancement and human well-being.

Audience

The discussion centered around the use of Artificial Intelligence (AI) in education and its potential impact. One argument highlighted the gap in expectations between university administrators and students regarding the use of AI. The law faculty at Leiden University expressed opposition to AI implementation, revealing a negative sentiment. On the other hand, the argument in support of using AI with caution emphasized proper attribution and the need to address misinformation. It advocated for alerting students about the dangers of misinformation and displayed a positive sentiment towards AI in education.

Another concern raised during the discussion focused on the potential for AI to worsen the digital divide, particularly among marginalized groups. This concern was exemplified by the significant digital disparities in countries like Bangladesh. It was feared that AI would primarily benefit technologically advanced individuals, further marginalizing those without access. This argument conveyed a negative sentiment towards AI, suggesting that it could exacerbate inequalities.

The discussion also emphasized the importance of embracing technology in education and ensuring AI is accessible for lifelong learning and marginalized groups. It stressed the need to integrate AI in lifelong learning while addressing the challenges faced by certain demographics in accessing AI-based public services. This perspective showed a positive stance towards AI, advocating for inclusivity and reduced inequalities.

Additionally, the lack of sensitivity and ethical standards in AI development by STEM professionals was criticized. The argument highlighted a negative sentiment towards the apathy or lack of interest among STEM workers in developing AI ethically. This raised concerns about the ethical implications of AI development and the need for stringent ethical standards.

Furthermore, there was a call for diversifying AI engines beyond corporate control. This view expressed a neutral sentiment, advocating for the exploration of open-source alternatives and diversification of AI engines. The aim was to move away from the dominance of corporate entities in AI development.

In conclusion, the discussion on AI in education highlighted various arguments and concerns. While there was an expectation gap between university administrators and students regarding AI, there was also support for using AI with caution and proper attribution. The potential exacerbation of the digital divide and the importance of inclusivity and accessibility in lifelong learning were significant considerations. Additionally, the lack of sensitivity and ethical standards in AI development by STEM professionals raised concerns. There was also a call for diversifying AI engines beyond corporate control. These insights shed light on the complex considerations and diverse opinions surrounding the use of AI in education.

Connie Book

In their discussion, the speakers emphasise the importance of taking into account human well-being and inclusivity in the face of artificial intelligence (AI) advancements. They argue that while AI can bring about many benefits and innovations, the focus must always be on the welfare of individuals and society as a whole. To achieve this, they stress the need for strong policies and regulations to guard against the negative consequences that AI can potentially have.

The speakers advocate for digital inclusion, asserting that access to AI technologies should be a right for all, particularly within educational institutions. They believe that universities and colleges play a crucial role in ensuring that AI is not only accessible to everyone but also integrated into the educational curriculum. They call on the higher education community to become active advocates for digital inclusion, providing opportunities for individuals to gain knowledge and understanding about AI.

Furthermore, the speakers assert that teaching and learning are experiencing significant transformations as a result of AI. They highlight the importance of academic leaders in shaping these changes by creating policies and designing new approaches to education that incorporate AI technologies. Faculty members are encouraged to adapt to these advancements and collaborate in the development of innovative teaching methods.

The need to prepare learners for the ongoing AI revolution is another key point addressed by the speakers. They stress that education must go beyond imparting theoretical knowledge and focus on equipping individuals with practical skills that enable them to adapt to the rapidly changing landscape of AI. They believe that by fostering a mindset of lifelong learning and providing hands-on experiences, individuals can be better prepared for the challenges and opportunities brought about by AI.

In conclusion, the speakers highlight the importance of prioritising human well-being, inclusivity, and education in the era of AI. They call for the implementation of strong policies, digital inclusion, and collaboration within the educational community to ensure that AI advancements benefit everyone and do not leave anyone behind. They urge universities and colleges to lead the way in incorporating AI technologies into the curriculum and preparing learners for the ever-evolving AI landscape. By doing so, they believe that individuals can be empowered to thrive in a world marked by accelerated change and innovation.

Francisca Oladipo

The analysis focused on several key aspects of artificial intelligence (AI) and its impact on education, ethics, policy-making, diversity, and continuous learning. The speakers argued for the role of universities in providing a comprehensive AI education that goes beyond technical skills. They stressed that AI students should be encouraged to study subjects like philosophy, finance, healthcare, and social sciences to develop a well-rounded understanding of AI’s applications in various fields.

In terms of ethics, the speakers acknowledged the importance of safeguarding against the abuse and misuse of AI. They emphasized the need to promote ethical AI practices and educate individuals on the ethical implications of AI. It was suggested that ethical AI education should be incorporated into AI curricula and training programs to ensure that future AI professionals possess the knowledge and skills to develop responsible AI solutions.

Another key point raised during the analysis was the importance of engaging with policymakers. The speakers highlighted the need for continuous advocacy to effectively communicate the potential benefits and challenges of AI to policymakers. They also stressed the need for collaboration between AI experts and policymakers to develop responsible AI governance frameworks that address societal concerns and ensure the ethical and safe use of AI technologies.

Promoting diversity and inclusion within the AI field was another noteworthy argument made by the speakers. They highlighted that AI has applicability across all fields and is not limited to computing. Thus, it was suggested that the AI field should be more inclusive and diverse, encouraging participation from individuals with diverse backgrounds and perspectives. The speakers emphasized the importance of including arts and humanities in AI education to foster social good and ensure that AI technologies benefit all segments of society.

Lastly, the speakers underscored the significance of continuous learning in the rapidly evolving landscape of AI. They pointed out that AI is evolving rapidly, and professionals in the field must keep pace with the latest advancements and developments. Continuous learning was identified as a key factor in staying updated and maintaining the relevance of AI professionals.

In conclusion, the analysis highlighted the multifaceted dimensions of AI education, ethics, policy-making, diversity, and continuous learning. The speakers advocated for universities to play a central role in providing comprehensive AI education, incorporating ethics into AI curricula, engaging with policymakers for responsible AI governance, promoting diversity and inclusion in the field, and emphasizing the importance of continuous learning to keep abreast of the evolving AI landscape.

Wei Wang

The analysis provides a comprehensive overview of the implications of artificial intelligence (AI) and emphasizes the necessity of legal considerations. It draws on Wei Wang’s research, which primarily focuses on global AI governance. Wang’s work acknowledges the crucial impact of AI on higher education and underscores the need for legal frameworks to address this issue.

Another critical aspect highlighted in the analysis is the data supply chain of AI, which intersects with three legal areas. Data protection emerges as a priority, as AI services rely on personal data for training. The analysis mentions that investigations have been conducted globally to examine the use of personal data by AI services. Notably, Italy has been at the forefront of such inquiries.

Furthermore, AI services raise concerns regarding research integrity and content safety. The analysis points out the challenges posed by fake citation links in AI services, which can compromise the credibility of research findings. Additionally, there are worries about the use of unverified information in machine learning processes. These concerns highlight the need for safeguards to maintain the integrity of research and ensure content safety.

The analysis also draws attention to the impact of AI services on copyright law. Specifically, it argues that AI services challenge our traditional understanding of fair use. Litigation experiences related to AI services have raised questions about the fairness of generative AI services in terms of copyright infringement. This observation underscores the need to reevaluate and adapt existing copyright laws to keep pace with advancements in AI technology.

In conclusion, the analysis highlights the importance of legal considerations in relation to the implications of AI. It emphasizes the need for data protection, research integrity, content safety, and fair use in copyright law. These findings provide valuable insights into the various legal aspects that must be addressed to harness the benefits of AI while ensuring ethical and responsible AI practices across diverse domains.

Divina Frau-Meigs

The speakers in this discussion emphasise several key points about AI. Firstly, they argue that there is a need to resist the panic and fear surrounding AI systems and the possibility of them developing super intelligence that surpasses human intelligence. Instead of succumbing to these concerns, they advocate for a human-centred approach to AI development. By keeping humans at the focus of AI technology, it can be harnessed to benefit society rather than posing a threat.

Moving on, the speakers assert that media and information literacy are crucial in understanding AI. They highlight the importance of education that familiarises individuals with media and information, narrowing the knowledge gap and enabling them to acquire the necessary competencies to comprehend AI. By enhancing their literacy in this area, people can make informed decisions and be better equipped to engage with AI technologies.

Another pertinent point emphasised by the speakers is the need for proper guardrails in AI education. While some guardrails are currently proposed by AI systems, there is an acknowledgment that they can be bypassed. Therefore, universities are encouraged to develop their own solutions to provide teachers and learners worldwide with the necessary guardrails. This will help establish a responsible and ethical framework for AI education.

Furthermore, the speakers stress the importance of source reliability and ethically sourced data in AI. They note that currently, there is a lack of ethically sourced data and a lack of consensus on the use of data scraping and models. This highlights the need for a careful and thoughtful approach to ensure that AI systems are built on reliable sources of data and adhere to ethical considerations.

Lastly, the speakers advocate for a focus on explainable AI. They argue that it is crucial to have access to the motivations behind the creation of AI systems and to validate their operations. By having transparency and explainability, AI technologies can be more trustworthy and accountable.

In conclusion, this discussion underscores the importance of taking a human-centred approach to AI development, fostering media and information literacy, implementing proper guardrails in AI education, ensuring source reliability and ethically sourced data, and prioritising explainable AI. By addressing these key points, individuals and society as a whole can navigate the realm of AI in an informed and responsible manner, maximising its potential benefits while mitigating potential risks.

Alejandro Pisanty

In an article discussing the role of universities in the era of Artificial Intelligence (AI), Alejandro Pisanty highlights the importance of approaching this technological advancement in a rational manner and resisting panic. He firmly believes that universities should serve as the depositories of rational thought. Pisanty argues that in order for universities to adapt to the AI age, they need to ensure their relevance. He suggests that they play a major role in the mainstream of things and develop a solid academic system with reasonable infrastructure and faculties.

However, the article also raises concerns about brain drain in universities. Pisanty points out that higher-paying jobs in AI development at companies are attracting researchers away from academia. This brain drain is seen as a cause for concern, as it affects the quality of education and research at universities. Researchers also tend to move to places where they can actually conduct experiments and get their work published.

Regarding ethical considerations in AI, the Institute of Electrical and Electronics Engineers (IEEE) is developing a set of standards for ethical AI. However, translating these ethical codes or laws to AI developers is proving to be challenging. The difficulty lies in avoiding subjectivity and effectively implementing ethical standards in the development of AI systems.

Furthermore, Pisanty highlights the need to resist panic in the face of AI advancements. He suggests the development of tools to analyze conduct online, as problems in the digital realm often have a human and social element. Pisanty himself has developed a tool for analyzing online conduct, emphasizing the importance of addressing online misconduct proactively.

Universities also face the challenge of addressing the lack of pre-university ethical and mathematical education. It is seen as crucial for universities to cultivate ethical consciousness and mathematical competence among students, as a lack of these fundamental skills poses a significant challenge to education.

In conclusion, universities are encouraged to approach the AI era rationally and resist panic. The article emphasizes the need for universities to ensure their relevance in the AI age by playing a major role, developing a solid academic system, and addressing the challenges posed by brain drain. The development of ethical standards for AI and tools to analyze online conduct are also deemed essential. Additionally, universities must focus on cultivating ethical consciousness and mathematical competence in students to meet the demands of the AI age.

Eve Gaumond

The use of Artificial Intelligence (AI) in education has both positive and negative impacts. On one hand, AI has the potential to greatly improve the quality of education. It can provide students with personalised learning experiences, tailored to their individual needs and learning styles. This has the potential to enhance student engagement and motivation, leading to better learning outcomes. The ability of AI to analyse large amounts of data can also enable educators to identify areas where students may be struggling and provide timely interventions to support their learning.

However, on the other hand, there is a lack of data that supports the notion that personalised learning actually increases retention of information. While AI may be able to deliver content in a customised manner, it does not necessarily guarantee that students will retain the information more effectively. Some argue that the hype around educational technology (EdTech) can be akin to “modern snake oil” – promising transformative effects without concrete evidence to back it up. In fact, there are concerns about the negative impacts of EdTech, such as increased screen time, decreased social interaction, and the potential for data breaches that compromise student privacy.

Another important aspect to consider is the regulation of data collection and usage in education. The ‘datafication’ of students’ lives, starting from an early age and continuing throughout their academic journey, has raised concerns about the potential encroachment on students’ privacy and autonomy. The collection, storage, and analysis of vast amounts of data about students can have a discouraging effect on their engagement in meaningful formative experiences. It is crucial that policies and regulations are in place to prevent harm and protect students’ freedom in the context of data collection and usage in education.

In conclusion, while AI has the potential to revolutionise education by improving its quality and providing personalised learning experiences, there is a need for critical examination of its impacts. The positive effects of AI in education are not guaranteed and should be constantly scrutinised. Additionally, regulations must be in place to ensure the responsible and ethical collection and usage of student data. It is essential for stakeholders in higher education to understand AI sufficiently well to ask relevant questions and make informed decisions about its implementation.

Session transcript

Connie Book:
of Elon University in North Carolina, USA, and chair of the National Association of Independent Colleges and Universities in the United States. That organization represents 1,000 private and independent colleges. This is my second time at IGF and the 12th time that Elon University has sent a delegation to this important global gathering. Our engagement at IGF since 2006 has been through our Imagining the Internet Center, Elon’s public research initiative focused on the impact of the digital revolution on individuals and institutions. We have a booth over in the village and our team is recording video interviews at IGF, and I encourage you to take a few moments to stop by and share your thoughts with us at some point this week. Today’s launch event highlights the urgent issues related to artificial intelligence and higher education. We are releasing a substantive position statement titled, Higher Education’s Essential Role in Preparing Humanity for the Artificial Intelligence Revolution. If you work at a college or university, you know how timely and important this statement is. The statement introduces six holistic principles and calls for the higher education community to be included as an integral partner in AI development and AI governance. The statement provides a framework for leaders at colleges and universities around the world as they develop strategies to meet the challenges of today and tomorrow. At Elon University, faculty are adapting the statement as they create policies on AI and design new approaches to teaching and learning. In writing this statement, we worked with higher ed leaders, scholars, and faculty members from around the world to synthesize ideas from authoritative sources on AI. I want to thank everyone who spent time considering this statement and contributing their thoughts and support. Today, more than 130 distinguished academic leaders and organizations from 42 countries are initial signatories to the document. And we invite you to join them. Study the document on our website and sign on if you wish. There are printed copies available for those in the room today and our moderator will post a link for remote participants. Let’s briefly look at these six principles. First, principle number one: people, not technology, must be at the center of our work. As we adapt to AI, human health, dignity, safety, and privacy must be our first considerations. Two: digital inclusion is essential in the age of AI. We must be advocates and ensure that people at our universities and colleges and beyond gain access to these technologies and are educated about AI. Principle three: digital and information literacy is no longer optional for universities. We must prepare all learners, no matter what their discipline, to learn and act responsibly with AI and other digital tools. Digital literacy gives us power, and that must be part of every post-secondary education. Principle number four: teaching and learning are already undergoing dramatic change because of AI, and we must carefully navigate the use of these tools in education, using them transparently and wisely, and protecting the interests of students and faculty members. Principle number five: we are just at the beginning of the AI revolution, so we must prepare all learners for a lifetime of growth and help them gain hands-on skills to adapt to accelerating change.
Principle six: this final principle has to do with AI research and development, research conducted in higher education institutions around the world. These powerful technologies carry great rewards and great risk, and therefore great responsibility. We need strong policies in place to guard against negative consequences of digital tools that could go beyond human control. These are our core principles, and this sets the stage for a great discussion by our distinguished panelists today. After their remarks, we will open the floor for all to share their thoughts on higher education’s role in advancing the future of humanity in the AI age. Let’s begin with Mr. Lee Rainie, who spent the past 24 years as Director of Internet and Technology Research at the Pew Research Center in Washington, DC. We’re very excited that Lee has joined Elon University to lead our continuing research on imagining the digital future. Lee, please get us started today.

Lee Rainie:
Thank you so much, President Book. It’s a pleasure to be here and to be associated with this really important initiative. We believe that the six principles for the internet and artificial intelligence in our global petition are essential for maintaining human rights, human autonomy and human dignity. The principles bring time-tested truths to the age of artificial intelligence. There is evidence aplenty that societies advance as their educational systems emphasize how people’s adoption of new skills can help them become smarter, as people discover new ways to create, connect and share, as diverse populations are given the wherewithal to control how new technologies are used, and as people adjust their lives to the emerging practices that the new technologies afford, including lifelong learning. As President Book noted, we at Elon University think that institutions of higher education can be the vanguard of civil society forces that enable beneficial changes for humanity. Since the earliest universities were created centuries ago, they have cultivated the grandest purposes of humankind, discovering and advancing knowledge, training leaders, promoting active citizenship and, yes, critiquing the societies around them and sounding warnings as troubles loom. Importantly, we know that as technology revolutions spread, one of the major jobs of universities is to pass along the best ideas and most effective strategies for learning new literacies, especially to other institutions and those involving children in particular. Clearly, we are at a singular moment now as AI spreads through our lives. In the past, tools and machines were created to enhance or surpass the physical capacities of humans. The advent of AI for the first time brings technologies that enhance or surpass our cognitive capacities. This revolution will cause a big sort that will force us humans to identify and exploit the traits and talents that are unique to us and make us distinctively valuable. What will be the differentiators between what we can do and what our machines can do? How can we domesticate these technologies to make sure they serve us and not the other way around? At Elon, we are planning to be in the forefront of universities studying and disseminating insights about how AI is affecting people. We have an ambitious agenda of fresh research that will build on several decades of exploration of digital trends and future pathways for digital innovation. In fact, we are gathering data right now in a survey of experts and a separate survey of the general population in the United States to explore how both groups’ views about possible benefits and harms of artificial intelligence are going to unfold in the coming years. We will be releasing those findings in early 2024. Beyond that research, these are some of the questions that will guide our work in the age of artificial intelligence, metaverses and smart environments. What are the new literacies that people would be wise to learn? They might include things like media and information literacy, judging the accuracy and inaccuracy of information and making the right decisions based on it. Data literacy, privacy literacy, algorithmic literacy, creative and content creation literacy. In addition, we at Elon seem destined to explore how well we are doing to hone our singularly valuable human characteristics, meaning things like problem solving, and hierarchical decision making that makes pattern connections and builds decision trees about how to move forward.
Critical thinking, sophisticated communication and the ability to persuade, which machines can’t yet do. The application of collective intelligence and teamwork, especially in diverse environments. The benefits of grit and a growth mindset. Flexibility, especially in fluid creative environments, and emotional resilience. In the end, big issues await exploration. What are the signposts and measures of human intelligence? What are the qualities leaders must possess? How do people live lives of meaning and autonomy? What is the right relationship between us and our ever more powerful digital tools? Our past studies have shown that there is a wide range of answers to questions like those, and yet there is a universal purpose driving people’s answers. They want us to think together to devise solutions that yield the greatest possible achievements with the least possible pain. Thank you so much for your interest. Please feel free to reach out to me here or find me in our booth in the exhibit hall. If you’re interested in furthering this campaign, signing our petition and maybe getting involved with us, we are always on the hunt for new partners, new collaborators and new ideas. Again, my thanks, President Book.

Connie Book:
Thank you, Lee. We now have two distinguished speakers who are joining us remotely. First is Professor Divina Frau-Meigs, who helped with the research and writing of this statement and connected us with thought leaders around the world. She teaches and researches at Sorbonne Nouvelle University in Paris and has been quite active for years with UNESCO and at IGF. Dr. Frau-Meigs, you’re up.

Divina Frau-Meigs:
Hello, everybody. Thank you very much for having me so far away. It’s two o’clock in the morning in Paris, but it’s really worth it to be with you and for me to return to IGF, as I saw it being born since I participated in the World Summit on the Information Society in 2005, representing academia for the Civil Society Bureau of the summit. And I’ve worked on and followed these topics ever since, from the beginning of social media in 2005 to what we could call now the beginning of synthetic media. And this is maybe one of the tags I will take. Before that, I wanted to thank Janna and Dan Anderson, as well as Lee Rainie and Elon University, for including me in drafting the document and fine-tuning it. And I wanted to stress the importance also of IAMCR, my NGO, the International Association for Media and Communication Research, an NGO with UNESCO observer status, which has fully supported all elements of the statement and added a statement of its own. And I think, I hope that one of the impacts of this big statement by us all and contribution to IGF will also encourage other entities to make their own, because we each and all have to appropriate what we feel is going on with the internet and make sure that the cultural diversity of our universities continues, so that we don’t fall under two problems. One would be a kind of homogeneity brought by the control of some sources and some types of AI models in the world, therefore creating more digital divides. And the other one, which is something I think we all feel, is that as researchers we have to resist the panic, the current panic about AI systems and the fact that they could produce a super intelligence that is more intelligent than us. I think we all agreed, as we discussed and went around the world, that this has to remain human-centered and that actually the humanities have a possibility of being back, not just STEM, as fields, because more than ever we need to be human-centered and get down to what it really is to be human. So I represent also, it’s true, a network of researchers at UNESCO called MILID, the Media and Information Literacy and Intercultural Dialogue network of universities, where we also try to think through these items. We push, of course, for media and information literacy first because it permits a kind of familiarity that allows us then to move to AI literacy. So one of the focuses of how to go about it for us would be to go with familiarity, so that people don’t have the feeling there’s a huge gap before getting all these competences. So as to prevent the panic and, on the contrary, leave a space for understanding and for adoption, we need to lift fear and anxiety. And for that, we have to go also at policy level. And I think for us, we would emphasize, and that’s the nice thing about the six items that we’ve put in there, they can all be unpacked. They can all be unpacked and updated. So if I were to unpack and update our work in a continuous way, I would say that one of the most important things is proper guardrails for teachers and students. And we know, and research has shown, that the guardrails proposed currently by AI systems, tech companies, can be bypassed. So this is a problem. And we as universities have to come up with our solutions for teachers and learners worldwide. Also, we need explainable AI.
It’s probably one of the most important elements, because we have to have access to the motivations for creating AI systems, for funding AI, for the validity of the AI; the scraping of the data has to be lawful, unbiased, safe, because that’s how we can ensure proper decision-making. And we know at the moment that there’s no really ethically sourced data. They’re not consensual. The models of data scraping are not consensual, especially in certain parts of the world like Europe, where I come from, and where we have a feeling that there is a lot of violation. And for us at university and in research and teaching, source reliability and ethically sourced data are crucial. We can’t let go of fake information, fake news, including those produced by the synthetic media that are coming up, without being worried about the proliferation of pseudosciences. And this undermines the whole remit of our university and our research approaches. So I would call for a lot of reflection on source reliability, because we probably are facing a new kind of source, a source that is not a primary source, nor a secondary source, with these AI models. So these are elements that I wanted to put into the discussion. And soon, at the moment it’s under embargo because it’s not out yet, but UNESCO will release, during Media and Information Literacy Week at the end of October in Jordan, its approach on AI and media and information literacy. And I hope you’ll see that it buttresses everything that is being done here. Clearly at IGF level, we would support, I think all of us, the creation of a body on information and AI, with all stakeholders, and especially, of course, universities and researchers, because we probably are the best place to facilitate the relatively asymmetrical dialogue right now between the edtech companies and the AI edtechs, which are becoming extremely proprietary, extremely commercial, and what we would like to have as independent research spaces that are universities and policy-making spaces. So definitely at IGF, you guys who are there could push for the creation of a global body of this kind. This is actually more or less being delineated at the UN, but IGF could be a very good space for continuous discussion about these items that I’ve underlined, like source reliability, AI explainability, and of course, all of this within our human, very human rights. Thank you very much.

Connie Book:
Thank you, Dr. Frau-Meigs. Lots to consider there. Thank you for those thoughtful remarks. We are honored today to be joined by Internet Hall of Fame member Alejandro Pisanty. Dr. Pisanty is a legendary leader in global Internet governance circles. He is a professor of Internet governance at the National Autonomous University of Mexico. Dr. Pisanty, please give us your thoughts on the future role of higher education in the AI age.

Alejandro Pisanty:
Thank you, Professor Book. Can you hear me well? Sorry, it is awful manners to begin a speech by correcting the previous speaker, but legend? That’s Divina Frau-Meigs. That’s Elon University. Legend? That’s Janna Anderson and Lee Rainie. And I don’t want to continue with the list because it’s very long. I’m very honored, and I hope, Professor Book, that you realize how highly many of us think of the effort that Elon University has made. You really made a world-worthy mark with Janna Anderson’s and Lee Rainie’s work with the Internet Governance Forum. They have done so much, from having students over, documenting by video things that no one even thought were worth recording, which now stand as a document of their deep thoughts and understanding, to identifying leaders and bringing young people in. I followed a few of them, of your former students, who have become really brilliant media analysts or figures or communicators. So they have increased the aura of Elon University to immense heights. This is really, really wonderful. So thank you for supporting this work.

Connie Book:
Thank you, Dr. Pisanty. That’s very nice.

Alejandro Pisanty:
Thank you. That’s really amazing. I come from a very large university. It’s very hard for us not to look at things through a lens of size, and Elon is especially remarkable when we see that you have done far more than universities like mine with probably 20 times as many students as you have. We add two zeros to your numbers. I want to enter now the subject matter of this speech, make it very brief, and try to make it concise. First, I join Divina Frau-Meigs, one of my most admired figures in this world, from the era that she has mentioned, from the early times of the World Summit on the Information Society, when people like her and IAMCR were championing these alternative views to state-controlled media or to the large private interests. At the time, it was mostly media and carriers, network operators, who needed opposing, and we have now a much broader spectrum and a much more complex one, because we simultaneously need to oppose and platformize many of the entities that are now considered troublesome. I want to join her statement in particular on resisting the panic. I think that the first thing that universities have to do, universities and schools all over, is sober up and tell everybody: sober up, calm down, cool down, look at this rationally. What are we, if not the depositories of rationality, of rational thought, not of the truth, but of the way of approaching whatever becomes the truth and letting it be built on fact and reason? That’s, I think, the very first thing. I have a second question here for the universities. I want to thank Janna Anderson and Lee, and Janna particularly because she did much of the follow-up, for sharing with me early drafts of what has now become this statement. And I was a little bit shocked at the beginning because I thought it was conceiving of the universities in a very partial and small role, in a corner of things, when they should be part of the mainstream and even the leading edge of things. First world universities, let me abbreviate things by saying just advanced-economy or first world universities, are seeing now what we have suffered in developing countries for decades, if not centuries, which is a brain drain. One of the things that you are so concerned about comes from the fact that AI development pays a lot better in companies than it does in universities.
Universities were sort of the sancta sanctorum, where even the winters were weathered out. Even the several AI winters were weathered out by universities, where this slow research kept going on. Algorithms were developed. The mathematics was developed; not only the computational technique, but the basic math of neural networks was developed in academia. And we’re suddenly out of our best people because they are working for companies which have not only large funding, but the other thing that drives researchers, the opportunity to actually do it. When our researchers leave, when our PhD students leave for the US or for Europe or Japan, they’re not only looking for a place which will pay a better salary, but they’re looking for a lab that is actually equipped for work, where they can actually do the measurements, do the experiments, get them published. It’s significance, it’s impact, it’s actually doing the thing that moves them. And you are suffering the same thing now. There’s just a new echelon of that. So the question here, and I’ll stop with that question for this intervention, is this: the most expensive thing we have in developing countries, the highest cost we incur, is the cost of not doing. The cost of not having developed a solid academic system with tenure, with infrastructure, with diversity. The cost of not developing a government that is rationally driven, that creates policies with continuity on an evidence basis, that invokes rights, invokes pragmatism. We never know where we actually are. So rights are invoked as a way of pulling the handbrake, instead of finding a way of invoking rights, not for the other guys to go faster, but for us to be able to go as fast or faster. So that cost of not doing is now being clearly manifested in the shortcomings that the universities are trying to overcome with this statement. Thank you.

Connie Book:
Thank you, Dr. Pisanty. Really interesting. Calm down, cool down. So next we have Dr. Francisca Oladipo, Vice-Chancellor and Professor of Computer Science at Thomas Adewumi University in Nigeria. Dr. Oladipo?

Francisca Oladipo:
Thank you very much, and thank you for the opportunity, Elon University. Speaking from the perspective of an African university and an African researcher, we are probably still just catching up with the rest of the world. But then, as with any emerging technology, or like everyone else experiencing something new for the first time, there is that risk of a wrong adoption, or even the possibility of abuse. And so I believe that most of our roles as universities should be centered around the educational aspect of artificial intelligence. Look at not just interdisciplinary education, but also interdisciplinary collaboration: AI is applicable in practically every field. So AI researchers should not think of just collaborating with subject-level experts; students in the field of AI should be made to study other subjects like philosophy, finance, healthcare, and social sciences, to give them some basic domain knowledge. Universities also need to promote ethical artificial intelligence and do a lot of education around ethical AI for students, to guard against that abuse and misuse. And then there are a lot of questions in society about the role of AI in education and in the educational space. So it is not just about educating the student; there is also a need to educate society generally, maybe through seminars, or handbills, or, you know, to have a town and gown on artificial intelligence. The curriculum these days needs to be centered around AI, because whether we like it or not, it’s going to be with us for a very long time. I mean, it’s always been here, but the awareness is now higher. So most of the curriculum, whether it’s in the humanities, or in the arts, or sciences and technology, even medicine, needs to build around AI to ensure AI literacy for everyone. As universities, we need to do a lot of advocacy to engage with policymakers. We can contribute our expertise to responsible artificial intelligence governance, but how can we effectively do this if we don’t engage with policymakers and do a lot of public outreach? We must continue to promote more diversity and inclusion. In Nigeria, we see AI as more of, oh, it’s for you computer people, but it is no longer the case. Students in the arts use ChatGPT now to get answers. They use other online AI tools, for one reason or another, to listen to research papers and so on. So there is always that indirect application of AI across every field. And so we need to be more inclusive, to embrace everyone and not make AI look like it’s only for computing people. When we talk about AI for social good, the people primarily at the center of ensuring social good are mainly in the arts and humanities. They’re the ones that study behavior. They’re the ones that look into issues and how different factors affect people. So it is important that these people are also included in the study of AI. There is a need for every one of us to engage in continuous learning. The fast pace at which AI is emerging now, with the large language models, means that before we know it, something new is out there. We all need to continue to learn to keep up, keep abreast, and be able to educate others. Thank you all very much for this opportunity. Again, I’m sorry, it’s 1 to 3 a.m. in Nigeria, and pardon me.

Connie Book:
Yes, it’s very, very late. I know, Dr. Oladipo, thank you. Now joining us remotely from India is Dr. Siva Prasad Rambhatla. He is a retired professor and leader of the Centre for Digital Learning, Training and Resources at the University of Hyderabad. Dr. Rambhatla.

Siva Prasad Rambhatla:
Very, very good morning or good night, good afternoon, wherever we are. I must thank Professor Janna Anderson for this opportunity. Let me say, because I’m an anthropologist, I don’t know if I’m going to be able to answer all the questions, but I’m going to try my best. I look at it differently. Technology is a medium which we, as humans, feed into. We, as humans, guide it. Our biases are also put into it. When I am looking at the field of education, education is one of the areas where access is a challenge for a large number of people who have been denied it on account of their poor economic condition. If you look at the statistics of education in many countries, especially in the Global South, we must remember there is a large disparity between the Global South and the Global North. In the Global South, those who have no access to education are from the disadvantaged sections. During COVID-19, digital technology, especially online education technologies, played an interesting role. After that, AI and other technologies are really useful. What we find is that this itself has thrown up new challenges for academics. When I say new challenges for academics, you find a major problem that lies in the digital divide: access to the equipment, access to the technology. Many of the people, especially children and others, during COVID-19 never had broadband connectivity. Some of them were climbing trees to catch the signals. It was such a horrible thing. Online courses also need to be designed and articulated in a way that captures the minds of the learners. That is also a big challenge. What we find is a lack of skills and of the ability to design courses using multimedia or even the kind of new technologies that people are using. Designing them in an imaginative way to keep the attention of the learners is an important thing. That is where we even try to train the teachers or the persons who are designing the courses; capacity building is one of the important things that we need to undertake. We need many specialists, including experts from the visual media, to sensitize the online content and course developers. This is where AI technology is trying to fill the lacunae of a shortage of teachers or instructors. The moment you design it carefully, it can fill the gap, but it only partly fills the knowledge gap. The challenge posed today is from generative AI, especially ChatGPT, and this raises issues of copyright, plagiarism and other issues. There are some tools developed to detect whether content is taken from other sources, online sources. That is where the other problems come in: copyright, data sovereignty, and security. This is where these tools are impeding creative thinking among the younger learners. They try to bypass the process of learning. They can ask for the content, and the content writing becomes easier. It does not help them to think. The challenges are real, and they require multidisciplinary approaches. Another important thing is that education has to be inclusive and multicultural. It has to be more local. We need to have local AI models of learning, because the subject cannot be universal. Most of the things are local. We need to make people learn better. Thank you very much.

Connie Book:
Thank you. Next to speak is Wei Wang, a doctoral student at the University of Hong Kong School of Law and a teaching fellow at the FGV think tank in Brazil. Mr. Wang?

Wei Wang:
Thank you so much. Thank you for having me here, and thank you to everybody for coming. I am sorry that I cannot be physically with you in Japan, but I am excited to be here virtually. I will brief you on some legal aspects, but before moving to the legal aspects of AI’s implications for higher education, I have some very general points as well. As the chair has mentioned, we are part of the IGF Dynamic Coalition on Data and Artificial Intelligence Governance. Our first research report on global AI governance will be launched, probably tomorrow; if you are interested in this topic, you can get a hard copy as well. Some of my colleagues propose a data supply chain of artificial intelligence, and this supply chain is relevant to three legal aspects. The first is data protection, for sure. As you may know, some AI services use personal data for training, and data protection authorities around the world have begun to investigate those AI services; Italy’s authority, I think, was the first to do so. The second area is so-called content safety. The most significant issue here is what we might call machine hallucination: for example, if you use some AI services, their citation links are fake. That definitely produces a lot of challenges, for example for research integrity. The third area is copyright. I am currently volunteering on a mechanism that I think is good; it is a contractual mechanism for copyright. There is a lot of litigation around AI services, and I think that will be a big issue, because those services are challenging our perception of fair use in copyright law. Many years ago there was a case about the scanning of books, and the judges thought it was fair use. What about generative AI services in the near future? I think these are the three areas. Thank you so much for having me.

Connie Book:
Thank you. We will now hear from Eve Gaumond, a law researcher at the University of Montreal. Her research focuses on the impact of artificial intelligence on higher education, and she is currently working on that research here in Japan. Good morning.

Eve Gaumond:
Thank you very much. I would like to thank you for inviting me to comment. I would like to build upon three elements contained in the statement: improving teaching, improving learning, and increasing literacy. I will walk through these three elements in order to make the following point: it is crucial that people who develop and deploy AI in higher education understand it sufficiently well to ask relevant questions. Let’s start with enhancing learning and teaching. AI has the potential to improve the quality of education. It can help create personalized learning experiences, so students can learn at their own pace, focusing on their strengths or weaknesses. It can also contribute positively to the student-teacher relationship; some educators report that they use data analytics to reach out to students who are suddenly disengaging from their classes. But these positive impacts are far from guaranteed. Even though AI promoters say that personalized learning increases retention of information, there is no data that supports that claim. Oftentimes, EdTech looks like modern snake oil, and modern snake oil can have real negative impacts. The datafication of students’ lives can discourage them from engaging in meaningful formative experiences. That is especially worrisome when we know that data collection starts early on and continues to follow students through high school and university. Some students, for instance, may refrain from writing essays about controversial topics out of fear that it might limit future opportunities; they forgo the formative experience of engaging with challenging ideas. College students may refuse an invitation to go to the bar on a Monday night because geolocation data can be used to predict their likelihood of success at school, or their risk of dropping out, and that can influence their admission to grad school or their scholarship applications. It can prevent people from engaging in meaningful formative experiences. Remember when you were in college: these are the things that promote human flourishing. What if an immigration officer can access an immigrant student’s class attendance data, for instance? Is that really what we want for higher education? Is it really fully promoting the development of the human personality, as international human rights law says it should? I don’t know. But these are questions we ought to be asking, and this is why it is so crucial that professors and university administrators understand how AI and data work, so that they can ask relevant questions. What kind of data is being collected? What is it used for? Who can access it? Only professors, or third parties as well? And if third parties can access it, what for? So this is why I believe the statement is so interesting and so important, particularly principles 4.1 and 3, because they can contribute to protecting students’ freedom. That’s it. Thank you.

Connie Book:
Our final panelist is Renata de Oliveira Miranda Gomes. She is an IGF 2023 youth delegate representing Brazil who recently earned a master’s degree in communication at the University of Brasília, and she is here with us today. Welcome, Renata.

Renata de Oliveira Miranda Gomes:
Thank you. Thank you so much. Good morning. I would like to give thanks for the opportunity to participate in this panel as a youth representative. I am part of the Brazilian youth delegation this year, and I have been studying for some time how we use the Internet, and specifically digital platforms, to communicate science. I will be mindful of my time here and pass to the main point I wanted to bring to the debate: how new digital platforms are extremely present in higher education. I believe the COVID-19 pandemic showed us this quite significantly. During a time of social isolation, we had to adapt quickly to a new way of learning and exchanging knowledge, and AI was certainly very much part of it. But I believe there is still a gap between students and educators when we think about the acceptance of new platforms and ways of learning, and I will give an example that resonates a bit with what Professor Oladipo mentioned just now. ChatGPT can be used as a tool for learning in multiple ways. I am aware of, and agree with, arguments pointing out that ChatGPT can facilitate plagiarism or cutting corners when producing assignments. However, and I was discussing this with some friends from the Brazilian delegation, ChatGPT can also make our lives easier. For example, at the postgraduate level we are faced with long, long lists of reading materials, and although ChatGPT does not substitute for comprehensive reading and understanding of a text, it can certainly help by producing bullet-point highlights and save us some time. So it can also be a tool to develop critical thinking and analytical skills. My argument here is that educators and students should work together, and the principles presented here propose to find solutions that can help all parties involved. Specifically, I wanted to point out principle number five: learning about technology is an experiential, lifelong process. New platforms such as AI depend much more on the users than on the software itself, so it is crucial that we educate ourselves and work collaboratively to ensure the outcome is the best possible. This is why I believe these spaces of debate are so important. In Brazil, the approximation between AI and education is going beyond the scope of higher education as well. For example, the state of Piauí recently announced that it is working to include AI in the state’s high school curriculum; it will be the first state in Brazil to do so. This is a great way to begin the dialogue about good platform usage from the initial learning processes. I think this is pretty much what I had to bring to the debate for now, but I look forward to discussing it further with you. Thank you for the opportunity.

Connie Book:
Thank you, Renata. We now want to engage the community here with us and broaden our conversation, so we are going to open it up for questions. There are microphones at the table; the floor is yours. Does anyone have any questions? Yes. Please say your name and your affiliation.

Audience:
My name is Christa Tobler. I am a professor of European Union law at two universities, Basel in Switzerland and Leiden in the Netherlands. I would like to react to the point made by the youth delegate just a moment ago. I can absolutely underwrite that: in my experience, this gap in expectations exists. I can see it, for example, at my Dutch university, Leiden University, where the law faculty is currently trying to formulate an AI policy. They have not yet quite managed it, but for the time being they have said that they are actually against using it. My students, of course, are from a wholly different generation. They are all digital natives; they know how to use these things, and they want to use them. So I can see the gap you are talking about. In one of my courses where people have to write an essay, I have personally taken the approach suggested by our own department that deals with these matters, which is to alert students to the possibilities and the dangers, especially in the legal field. You may all be well aware that a lot of wrong legal information is provided by these models. So you alert them to the dangers, but you also tell them that yes, they can use it, because it makes no sense to say no; it is just not realistic, in my opinion. So I have followed the approach of telling them: yes, you can use it, but with proper attribution. In your papers, you have to state whether or not you have used AI and how you have used it. I think this is a better approach because, as I said a moment ago, it is totally unrealistic to expect that people will not use it. It is also not clever, because, as you said quite rightly, there are positive elements in these systems, and we should use them in a positive sense. So thank you for your contribution; Renata, I believe, was your name. It entirely reflects what I have seen in my work. Thank you. I think we had another question here, yes? Thank you very much. My name is Nazmullah Hassan, and I come from Bangladesh. I work with an NGO called ActionAid Bangladesh, so I will take the liberty of bringing the discussion down to the grassroots a little, since I work with communities and with excluded and marginalized groups. In our country there is a huge digital divide between urban and rural areas, and in addition across age groups and generations, and based on gender, between men and women. So I was thinking: there is still a huge digital divide, yet we are talking about AI in universities. If AI becomes a more and more pertinent technology in our lives, how much will the divide increase, and how will people be excluded and further marginalized? Some people will be super tech people, using AI and other technologies and getting more and more opportunities, access and rights, everything. I imagine public services will be based on AI in the future. Then how will people like us in the Global South, living in very remote places, have their basic rights, say to education, health and other services?
Sometimes we think that technology will come and that we definitely need to embrace it, this is for sure; but how can people also acquire the knowledge and the skills through lifelong learning? What kinds of tools and curricula are educational institutions developing for communities and excluded groups, so that they are not left behind and can also become part of these new technologies? I do not know who could reflect on that, but this is the point that came to my mind. Thank you very much. Thank you.

Connie Book:
Would anyone like to react to that? Yes, feel free to open the mic.

Siva Prasad Rambhatla:
I think this is exactly what I have been talking about. The digital divide has many layers, because it has to do with socioeconomic backgrounds, cultural backgrounds, nearness to or distance from towns and cities, and infrastructure. Those who have this infrastructure are the ones who will benefit, and those who do not have it will not. So the digital divide is real. In fact, some people now say that it has come down, and it is true that it has come down as internet availability has reached some of the remote areas, but there are still problems. That is one aspect. The second aspect is the algorithms that are written: the biases in them reflect the kinds of discrimination and exclusion that exist, and the moment you use them, you perpetuate them. Generative AI and other such forms are a real challenge, because the question is how we counter these biases and these exclusions. This is where academics have to think about alternatives. One way is using traditional media, but the reach of traditional media is lower, whereas the technology we have can reach large sections of society; but then governments have to intervene, governments have to invest, and even some private firms have to invest. This is the alternative; there is no other way.

Connie Book:
Thank you. Any final question? Yes, I think the microphone is right there; they will turn it on for you.

Audience:
My name is Julia. I am a youth delegate for Brazil. I am here with my colleagues, and I am very proud to participate in Renata’s and the group’s presentation panel, for she is also a colleague of ours in our youth delegation. But jumping to my question: I asked myself during this presentation how the participants see, and act on, building sensibility and empathy into the ethics of using AI. And is there a connection with using different engines for AI, that is, not feeding everything into a single corporation’s model, but looking at different engines from different groups and corporations, open-source as well as closed-source, and diversifying? Could that diversity of engines help build sensibility and address the ethics problem? I see a problem here, because there is a lot of apathy: STEM academics and STEM practitioners, not necessarily academics only, who are uninterested in developing and working with AI to ethical and moral standards.

Connie Book:
Thank you. Dr. Pisanty, that goes right to one of your observations. Would you like to respond?

Alejandro Pisanty:
Yes, thank you. At the last count, a few months ago, around 1,300 ethics codes for AI around the world had been collected, and there must be ten times as many that have not been collected, because no one cares anymore. Some of them are very solid. They were built from the ground up, starting from an inventory of ethical systems, by the Institute of Electrical and Electronics Engineers, the IEEE, which is now developing a set of standards for ethical AI that companies and governments can use to guide the development of systems and the assessment of systems. One problem these have is that it is very hard, first, to avoid subjectivity. You can look at the whole thirty-page document on ethically aligned AI that is age-appropriate for children, and in the end it comes down to a value judgment: someone has to judge whether something is appropriate for a 13-year-old but not for a 13-and-a-half-year-old. That is one problem. The other is that it is very hard to bring these codes, or the law for that matter (some people say ethical codes are a way to avoid the law, to escape strict legal observance), down to the person who is actually doing the coding, who is actually selecting data and deciding how the system is developed and what data goes into it. That has to have a large contribution from the universities. In our exercises, we challenge our students at all levels, from the people doing the hard computer science and coding all the way to the students, as was mentioned, using ChatGPT for their essays. We have to work on that, and we cannot solve it at the university level alone. If our students arrive from high school, from pre-university education, without this ethical grounding and without the mathematical competence, universities face a huge challenge in compensating for 18 years of non-education. This, again, goes to the cost of not doing it. And one other contribution here: as I said, I second Divina Frau-Meigs’s statement about resisting the panic, but I do not only say, okay, please calm down. I think we can develop tools. I will bring in a little plug for a tool I have developed, which is not for AI but can be extended to it. When you look at all the panics around the internet, and also the ways the internet is seen as a panacea, a cure-all, you can see that most of the things we either like very much or dislike very much that happen on the internet have a human, social, pre-online or offline component, plus a disruptive, sometimes radically revolutionary change that comes through the internet. Take phishing and Wikipedia, the bad and the good: phishing is simple fraud hugely enabled by the internet, and Wikipedia is plain human, warm-hearted cooperation, the will to share knowledge, made big. So we have six elements there: identity, scale, trans-jurisdictional border crossing, barrier lowering, friction reduction, and the management of humankind’s memory and forgetfulness. We can analyze every conduct or project that we like or dislike online, divide it into these pieces, reassemble it, and then decide: where do you want your ethical code? Where do you want your police? Where do you want to change human minds? Because if you do not change human minds, you will not stop having fraud.
You will not stop people trying to cheat people, and people falling for cheats. So let us not blame the internet, and let us not blame artificial intelligence, or a very small niche thing called ChatGPT, without looking at this broader picture, and, as I said, rationally. This may be too Cartesian, and we still need some fluffiness and some fuzziness, but this is the kind of tool we can have. Final point: universities can contribute to this in an institutional way. We have been providing our individual academic and technical contributions, but institutions have their own role, one that transcends the activism that sometimes comes with situated academic social science, and one that can bridge with the technical community that is actually doing all this development. Thank you.

Connie Book:
Thank you, Dr. Pisanty, and we could not agree more. That is why we think an articulated set of principles is the place to begin this work in higher education, and I love Dr. Frau-Meigs’s encouragement for each organization to make the principles its own, so that we have diversity of thinking around them. We have reached the end of our time, and I would like to conclude with an invitation: please go to our webpage, see the list of signatories, and consider adding your name. This will give our statement more reach and credibility. Our site will provide updates as the statement reaches new audiences and begins to influence institutions around the world. Thank you all for your participation in our event today and for your support of this important initiative. Thank you.

Speech statistics

Speaker                             Speed (wpm)   Length (words)   Time (secs)
Alejandro Pisanty                   166           1840             665
Audience                            175           1052             360
Connie Book                         137           1418             620
Divina Frau-Meigs                   137           1130             495
Eve Gaumond                         126           581              276
Francisca Oladipo                   151           675              269
Lee Rainie                          147           861              351
Renata de Oliveira Miranda Gomes    177           605              205
Siva Prasad Rambhatla               127           995              470
Wei Wang                            103           403              234