Open Forum #30 Harnessing GenAI to transform Education for All
Session at a Glance
Summary
This panel discussion focused on the impact of generative AI on education, particularly from a global perspective. The panelists, representing diverse backgrounds including academia, law, and policymaking, explored various aspects of AI’s integration into educational settings.
Key topics included the challenges of detecting AI-generated content in academic work, with conflicting views on the effectiveness of current detection tools. The discussion highlighted concerns about academic integrity and the need to adapt assessment methods to account for AI use. Panelists emphasized the importance of teaching students to use AI tools responsibly and ethically, rather than simply trying to prevent their use.
The conversation also addressed the potential for AI to exacerbate existing digital divides between the Global North and South. Panelists stressed the need for equitable access to AI technologies and the importance of capacity building in developing countries. They discussed strategies for integrating AI into curricula and teacher training programs in the Global South.
Legal and ethical considerations were explored, including copyright issues related to AI training data and the need for clear guidelines on AI use in academic settings. The panel also touched on the potential benefits of AI in making educational resources more accessible in developing countries.
The discussion concluded with reflections on how Africa can benefit from and contribute to AI development in education, highlighting initiatives like the Nelson Mandela African Institution of Science and Technology. Overall, the panel emphasized the need for a balanced approach to AI in education, recognizing both its potential benefits and challenges.
Key points
Major discussion points:
– The use of generative AI in education, including benefits and challenges for teachers and students
– Detecting AI-generated content in academic settings and issues around academic integrity
– Intellectual property concerns related to training and using generative AI
– The digital divide between the Global North and South in access to and use of AI tools
– Strategies for ethically integrating AI into education, especially in developing countries
Overall purpose:
The goal of this discussion was to explore the impacts of generative AI on education from multiple perspectives, including technical, ethical, legal, and policy viewpoints. The panel aimed to consider both opportunities and challenges, with a focus on implications for developing countries.
Tone:
The overall tone was analytical and solution-oriented. Panelists offered critical perspectives on current approaches but also proposed constructive ideas for moving forward. There was a shift towards the end to focus more on opportunities and capacity building in the Global South, ending on a more optimistic note about the potential for AI to enhance education globally if implemented thoughtfully.
Speakers
– Jingbo Huang: Director of United Nations University Research Institute in Macau
– Antonio Saravanos: Associate Professor of Information System Management, New York University
– Eliamani Isaya Laltaika: Judge of High Court of Tanzania, Faculty member at Nelson Mandela University
– Mike Perkins (SFHEA): Associate Professor and Head of Center for Research and Innovation, British University Vietnam
– Mohamed Shareef: Director of Government and International Relations at OCSICA, former Minister of State from the Maldives
Additional speakers:
– None identified
Full session report
Expanded Summary: The Impact of Generative AI on Education – A Global Perspective
This panel discussion, organized by the UN University, brought together experts from diverse backgrounds to explore the multifaceted impact of generative AI on education, with a particular focus on global implications. The panelists, representing academia, law, and policymaking, delved into the challenges and opportunities presented by AI integration in educational settings worldwide.
Use of Generative AI in Education
The discussion began with the acknowledgement that generative AI is already being widely used in educational contexts. Mohamed Shareef presented findings from the Maldives, where educators are already utilizing AI tools but often lack the necessary knowledge and training to do so effectively. He noted differences between K-12 and higher education teachers in their approach to AI, with the latter group showing more openness to its use. This highlighted a crucial need for capacity building and professional development in AI literacy for educators.
Antonio Saravanos shared his teaching approach, which focuses on helping students understand AI’s capabilities and limitations. He emphasizes the importance of reframing generative AI as a tool for deeper understanding, rather than simply a means of producing answers. Saravanos encourages students to critically evaluate AI-generated content and use it as a starting point for further research and analysis.
The reliability of AI detection tools in academic settings emerged as a contentious point. Mike Perkins argued strongly against the use of current AI detection tools, explaining that they are not sufficiently reliable for accusing students of plagiarism. He highlighted the potential harm to students’ academic careers and the risk of false positives, emphasizing the need for more nuanced approaches to assessment and evaluation in an AI-enabled world.
Generative AI and the Global South
A significant portion of the discussion centered on the implications of generative AI for the Global South. Mohamed Shareef raised concerns about the potential for AI to exacerbate existing digital divides between the Global North and South. He used a vivid metaphor, comparing generative AI to “Red Bull” that gives “digital transformation wings”, highlighting the risk of widening gaps between those with and without access to these powerful tools.
To address these challenges, Antonio Saravanos advocated for the development of local AI tools and solutions in the Global South to avoid dependence on technologies from the Global North. This approach could help build local capacity and ensure that AI solutions are tailored to specific regional needs and contexts.
Mike Perkins proposed an AI assessment scale framework as a potential tool for ethically integrating AI into education in the Global South. This framework provides a structured approach for educators to introduce AI tools in a manner that maintains academic integrity and promotes equitable access, considering factors such as AI literacy, infrastructure, and cultural context.
Eliamani Isaya Laltaika, representing the Nelson Mandela African Institution of Science and Technology, emphasized the need for partnerships between Africa and the Global North to build AI and STEM capacity. He highlighted the potential benefits of AI in making educational resources more accessible in developing countries, noting how AI could help overcome barriers to accessing international research publications.
Changing Educational Approaches for AI
The panelists agreed on the need to adapt educational approaches to effectively incorporate AI. Saravanos argued that educators should focus on teaching students to use AI effectively, rather than attempting to ban its use. This approach acknowledges the inevitability of AI in education and the workplace, preparing students for a future where AI literacy will be crucial.
Perkins stressed the importance of redesigning assessments to focus on skills that AI cannot replicate, such as critical thinking, problem-solving, and creativity. This shift in assessment strategy could help maintain the relevance and integrity of education in an AI-enabled world.
Laltaika called for the development of ethical guidelines for AI use in education at all levels. Such guidelines could help address concerns about academic integrity, copyright, and equitable access to AI tools.
Shareef advocated for the integration of AI and digital literacy into curricula across disciplines. This approach would ensure that students are prepared to navigate an increasingly AI-driven world, regardless of their field of study.
Legal and Ethical Considerations
The discussion also touched on important legal and ethical considerations surrounding AI in education. Laltaika highlighted the need to revisit copyright frameworks to address the use of copyrighted material in AI training. This issue raises complex questions about intellectual property rights in the age of AI and requires careful consideration to balance the needs of content creators and AI developers.
Unresolved Issues and Future Directions
While the discussion provided valuable insights and potential strategies for integrating AI in education, several unresolved issues remain. These include:
1. Developing effective and ethical methods for evaluating AI-generated content in academic work
2. Ensuring equitable access to advanced AI tools in the Global South
3. Balancing copyright protections with the use of copyrighted material in AI training
4. Retaining skilled AI professionals in developing countries
The panelists suggested several action items to address these challenges, including:
1. Developing ethical guidelines for AI use in education at institutional and national levels
2. Integrating AI and digital literacy into curricula across disciplines
3. Redesigning assessments to focus on skills AI cannot replicate
4. Investing in IT infrastructure in developing countries to improve access to AI tools
5. Establishing public-private partnerships and collaborations between the Global North and South for AI capacity building
In conclusion, the panel emphasized the need for a balanced approach to AI in education, recognizing both its potential benefits and challenges. The discussion highlighted the importance of global collaboration, ethical considerations, and adaptive educational strategies to ensure that the integration of AI in education promotes equity, enhances learning outcomes, and prepares students for an AI-enabled future.
Session Transcript
Jingbo Huang: Good. Channel 2. Okay, let's start. PowerPoint, please. Okay, so welcome. Good afternoon, everyone. I hope everybody had a good lunch and a good coffee break. Today's session is about generative AI and education: how can generative AI transform education for all? We take a different approach, a system approach, because for the issue of generative AI there are different perspectives to look at it. There is the technical perspective, and from our end it is more of a whole-society and multi-stakeholder approach. I will explain why I say it this way. But first, I have to introduce my organization; that is my job. My name is Jingbo Huang, director of a United Nations University research institute in Macau. How many of you have heard of UNU, the UN University? Oh, one, two. Great. Three have heard of it. Wonderful. Thank you. UN University headquarters is in Tokyo, and we have 13 research institutes in 12 different countries. We are the UN, but we also have an academic identity, so at UN University we do research as well as training and education. Our different institutes cover different expertise; those are their locations, as you can see on the map. The institute I am heading is UNU Macau. It specializes in digital technologies and the Sustainable Development Goals. We have been around for more than 30 years, and recently we have been working more on AI, AI governance, and AI ethics, in addition to digital tech and gender, cybersecurity, growing up online, et cetera. So we have a huge portfolio, and if you are interested, we can talk later. And today's approach, as I mentioned, is a system, multi-stakeholder approach to generative AI and education.
So when we talk about generative AI, we certainly will look at the system itself. But beyond the system, the technical background is, I would say, usually less important than the people, so people have to be at the center. Let's look at the people picture. On the right side, you can see there are teachers, certainly, and teachers have been taking advantage of these tools, trying to develop personalized education using generative AI. We also have the learners, the students, and maybe nowadays we talk about lifelong learners; actually, everybody has been using it. We also look at the schools and school administrators, for example universities, now that generative AI drastically transforms how we learn, how we teach, and what kind of curriculum would be relevant. How do we train people for the future generation? Those are the questions that university administrations need to think about. If we look at the bigger outer ecosystem, we also need to look at the policymakers, the Ministry of Education for example, and the regulators. And of course there are parents; some of you, being parents, understand that sometimes you want to know what your children are doing online and with generative AI. We also have the technology companies, who are the ones actually developing these technologies. With this people map, I am very happy to introduce our panelists, because we actually represent all the roles here; we also have researchers. This is what I mean by a system approach, or a whole-society approach, to discuss generative AI and education. So I would like to introduce our wonderful panel, in alphabetical order. First is Antonio Saravanos, whom you will see online later. He is an associate professor of information system management at New York University.
And sitting next to me is Dr. Eliamani Laltaika. He is a judge of the High Court of Tanzania and also a faculty member of Nelson Mandela University, and he is from Tanzania. Then we have Professor Mike Perkins. Sorry, how do I pronounce it? Yeah, that's fine. Okay. He is an associate professor and head of the Center for Research and Innovation at British University Vietnam. And we have Mr. Mohamed Shareef, who is the director of government and international relations at OCSICA and a former Minister of State from the Maldives. Unfortunately, Dr. Kaohsiung cannot make it today, so we will stay with a five-person panel. We will have our presentations, and then later I would highly encourage you to interact with us, ask us questions, and share your best practices; I will invite you to speak. Let me take a seat. Can you still hear me? Yeah. Mine is breaking up; I cannot hear myself. The first set of questions I would like to ask Mohamed. Mohamed has been a state minister, a researcher, a higher education administrator, and is now a private sector leader. Recently you have been working closely with K-12 educators in the Maldives. Would you please share how educators in the Maldives use generative AI in their classrooms, and what concerns and difficulties they encounter?
Mohamed Shareef: Thank you. Well, let me start by thanking you for your presence here. I know there are many, many sessions going on out there, but you have come here to hear from us. As Jingbo alluded to, I had the opportunity during the last year, year and a half, to interact with educators in the Maldives, mostly K-12 educators, but also faculty from the main universities. Over that period there has been increasing interest among educators in generative AI. For practitioners like myself and many of you, I'm sure, generative AI has really sparked this interest, and everyone is looking to see how they can be supercharged with AI. The same is true in the Maldives. Now, the Maldives, you may not know, is a small island developing nation. It is an upper-middle-income nation, so not a least developed nation, but it is challenged in technology adoption. Over the last year I interacted with about 270 educators. The first thing I asked was: are you familiar with what generative AI is? About 50% of them said, yeah, I have some idea what it is. Maybe they are not so familiar with what "generative" means, but they have a sense that this is something they need, something they want. Then I asked: do you use it? And what I found was quite surprising: nearly 85% of educators in the Maldives already use it, though not as I hoped they would. So I asked them: what do you do with it? Oh, AI can make beautiful slides. This is the first thing, because creating slides is a big headache for educators, and with AI you can just give it your notes and it will create the bullet points; if you have a better AI, it will even put it all into PowerPoint or whatever tool you want. The point is that teachers are already taxed in terms of the time they have.
But what I found interesting was that about 15% of educators, both in higher education and in K-12, were already using generative AI on a daily basis to teach or to aid them in their teaching duties. This was surprising, because I didn't expect that they would be using it beyond casual, exploratory work. What is even more surprising is what happened when I asked about their concerns. I thought they would be concerned that generative AI could replace them. But when I asked K-12 teachers, their top concern was that they don't have the knowledge to leverage generative AI, and they don't have the training opportunities to upskill themselves. Their second concern was that access to AI is limited. Their third concern was accuracy. They are the teachers, so when they create something, they know when it is not accurate; but imagine a math teacher trying to teach something in English. So they are really concerned about the accuracy of AI. And at the bottom of the list, maybe only 2% of the 270 respondents told me they have any concern about being replaced. I think because they put their own capacity at the top, they feel there is no risk of being replaced, though one can see that there is. But when I asked higher education faculty, there was a contrast. The top concern for them is plagiarism and cheating, which is only about the fifth or sixth concern for K-12 teachers. For the higher education faculty, the question is: how am I going to assess these students when they are going to be using AI, when they may try to pass AI work off as their own? So there is definitely a lot of concern.
But there is also a lot of general demand. In the Maldives, the two topics educators are asking for are AI and how to keep children safe online, cybersecurity. These topics have been in high demand, and I think they go hand in hand. So this is the view from a developing country, and I see a lot of scope here, especially given that educators put at the top of their concerns their own capacity, alongside the issues of plagiarism and how we can assess students who use AI. Thank you.
Jingbo Huang: Thank you, Mohamed. This is very interesting. Technician, can you please bring up Zoom? We will invite our second speaker. Since Mohamed mentioned plagiarism and the question of how faculty members are going to assess their students, let's invite Dr. Antonio Saravanos. As a professor, researcher, and computer scientist, you can probably easily discern when your students submit work produced by generative AI, ChatGPT for example. How do you teach your students to exercise judgment so they can better use generative AI to enhance learning? Antonio, please.
Antonio Saravanos: So you bring up an excellent point. It's actually quite easy to detect the use of ChatGPT or another artificial intelligence, specifically an NLP (natural language processing) model, at the novice level, at the student level. For example, this semester I was teaching an intro to programming course, and I would repeatedly see submissions where the solution used elements of the language I was teaching, Python, that we hadn't yet covered. And when you have a discussion with the students, it's clear that they don't really understand the material. So it's easy to catch them, and there are many, many ways to catch the use of AI. For example, when students submit essays, you see them citing resources that don't exist; it's quite common for ChatGPT to just make up references. So someone more experienced can catch them out. As an educator, I recognize that the rise of gen AI tools like ChatGPT is both a challenge and an opportunity in an academic environment. The teaching approach that I have adopted focuses on reframing the challenges as opportunities in order to empower students, guiding them to use gen AI not as a shortcut for producing answers, but as a tool to deepen their understanding, creativity, and problem-solving abilities. Because whether we want it or not, when they go into industry and leave the university, they will be relying on this tool, so they need to be able to use it effectively. I have many dimensions to this, and we are a bit short on time, so I would say my foundation begins with helping the students understand the capabilities and limitations of generative AI. The first thing is to make sure they understand that AI tools are not an omnipotent source of knowledge and that they have inherent flaws. We need to begin with that.
And once they have that, we can move forward. To illustrate this, I present case studies in class where the AI outputs contain mistakes, biases, or fallacies, and these examples become teaching moments, first emphasizing the importance of the human element. So I may have students generate a solution to a coding problem with ChatGPT, and then the class critiques the solution with me, identifying mistakes. You can generalize this exercise: anything where a gen AI response is compared to some authoritative source, like a peer-reviewed article, highlighting the discrepancies, the challenge being to identify where what the AI produced is flawed or incomplete. This is what the AI gave us; how do we tell that there is a mistake in it? Building these metacognitive abilities, thinking critically, is where it's at. Hopefully this answers the question.
Jingbo Huang: Thank you, Antonio. So Antonio has been embracing generative AI in his teaching. Let's move on to Mike. One of your research interests is academic integrity. Would you please share with us some strategies for detecting AI-generated content in academic settings? Are the current tools effective? What are your insights on the responsible and ethical use of gen AI tools in academia? Thanks very much.
Mike Perkins: I'm just going to start off by saying: how can we detect it? You can't. And I'm going to disagree with what Antonio has said there. Educators cannot effectively detect the use of gen AI tools. There have been several studies which have demonstrated this. Earlier this year, a University of Reading study found that 94% of test submissions which were produced using gen AI sources were not detected during the marking process. I carried out experiments earlier than this. We created a series of gen-AI-produced assessments using GPT-4. We then slipped these into the piles of the faculty marking them, gave them generative AI detection tools, and said: just tell us if you spot any assessments that have been created using AI. Performance was extremely low in terms of people being able to pick this up. Antonio mentioned ChatGPT making up fake sources. Originally, with ChatGPT 3.5, yes, that was true. It is getting less and less true now. And now we have new tools such as Google's Deep Research, released last week, which actually carries out an agent-based search, draws on real web sources, and will produce a full literature review for you. So this story that you can always tell when AI tools have been used is simply not true anymore. I would really strongly say: if you think you are spotting a piece of work created through gen AI, you may be wrong. Now you might say, well, okay, I've got an AI detection tool. I've got ZeroGPT, I've got Turnitin. Also wrong. From other research that I have carried out, and from many researchers in the academic and technical field, there is now actually a consensus that these tools are not suitable for accusing students of committing plagiarism.
Now you might say, well, these software companies tell me they have a 98% accuracy rating. Okay, so you have a thousand students. How many students are you going to accept that you falsely accuse of plagiarism? You mark them as zero, you make them redo a course; maybe they fail an assessment, maybe they have to drop out of university. Is that acceptable to you? Certainly not to me. And the research that I have been carrying out highlights time and time again that it is actually the students already in a precarious situation at their institution who are most at risk. Maybe they are neurodivergent, maybe they are English-as-a-second-language speakers. And these are the students who write in the style that people say, oh, that's gen AI. People write in lists, or in a certain structured way, that yes, gen AI tools do sometimes replicate. But this is because these are standardized forms of producing text, and especially when you are an ESL speaker, you have often been taught to write in this particular way, using certain words in a certain format. So what ends up happening is you say these students have been caught using gen AI tools and they are cheating, and they haven't, but then they suffer some really severe consequences. We have also got to consider broader issues of inclusivity, equity, and access with these tools. Because you can take gen AI output, even output that is detected as gen-AI-produced, and with a few simple techniques turn it into text that will not be detected by any AI text detector. We carried out this research: we created pieces using gen AI tools and tested them against the seven most popular and most research-backed AI text detectors, and we found that they had very low accuracy to begin with, a 44% accuracy rating for unchanged text. But if you are a student who wants to cheat and get away with something, that is not how you use AI.
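Perkins' point about the "98% accuracy" claim can be made concrete with a back-of-envelope sketch. The cohort of 1,000 students and the 98% figure come from his remarks; the reading of 98% accuracy as a 2% false-positive rate, and the assumption that 90% of submissions are fully human-written, are illustrative assumptions added here, not data from the session:

```python
# Back-of-envelope illustration of the false-accusation arithmetic.
# Assumptions (hypothetical, for illustration only):
#   - a vendor-claimed "98% accuracy", read here as a 2% false-positive rate
#   - a cohort of 1,000 students, 90% of whom wrote their work entirely themselves
students = 1000
false_positive_rate = 0.02   # 100% - 98% claimed accuracy
honest_share = 0.90          # assumed share of fully human-written submissions

honest_submissions = int(students * honest_share)          # 900 honest students
falsely_accused = round(honest_submissions * false_positive_rate)

print(falsely_accused)  # 18
```

Even under these assumptions, roughly 18 honest students in a cohort of 1,000 would be flagged, which is why a "98% accuracy" claim offers little reassurance in a formal academic misconduct process.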
You don't just copy and paste your prompt output and hand that to the teacher, saying, there you go. If you do, you are probably a struggling student who needs more support, not to be told: you've cheated, we're going to throw you out of university. But give me 15 minutes and a piece of text, and I will make that text completely undetectable. It might be a thousand words, it might be 2,000 words. We demonstrated this with a few simple prompts, integrated directly into our creation prompts without manual editing, just by saying things like: write this in a more complex way. Add some spelling errors. Make this less complex. Make this sound more human. Add some burstiness. Change the sentence length. Change the paragraph length. What you are doing here is effectively causing temperature changes in the underlying model. If you have API access, you can actually set the temperature for the model, and you will find a higher temperature gives you higher variation. We are talking about stochastic models here, which try to predict the next word in the sequence. If you just change that up, add in some additional words, and rewrite some sections, you are not going to get a detection result that would be acceptable in any formal academic integrity process. If you take a look at the Guardian or Observer yesterday, I was quoted there talking about exactly this subject; it is a really interesting article about these challenges. It is the students who get falsely accused, and these are the ones who are struggling. Or it is students who do admit to taking some shortcuts, but is that their fault? If they are using ChatGPT or other gen AI tools to do the assessment, why haven't you changed your assessment? Why haven't you changed your assessment to account for these tools? They have been out for two years now. What's going on?
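The temperature mechanism Perkins alludes to can be sketched in a few lines. This is not the internals of any particular model; it is a generic softmax-with-temperature calculation over invented next-token logits, showing why a higher temperature flattens the output distribution and so produces more varied word choice:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by 1/temperature: higher temperature flattens the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    # Shannon entropy in nats: higher entropy means less predictable sampling.
    return -sum(p * math.log(p) for p in probs if p > 0)

# Toy logits for four candidate next words (illustrative numbers only).
logits = [4.0, 2.0, 1.0, 0.5]
cool = softmax(logits, temperature=0.5)  # sharper: near-deterministic choice
warm = softmax(logits, temperature=1.5)  # flatter: more varied word choice

print(entropy(cool) < entropy(warm))  # True
```

Sampling from the flatter, high-temperature distribution selects lower-ranked words more often; that extra variation is what degrades the statistical signatures AI text detectors rely on.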
So, I think there are some really big changes that need to be made in education more broadly to recognize these tools and see how we can actually integrate them. Thank you.
Jingbo Huang: Thank you, Mike. So, we have been talking about academic integrity. Now, let's move on to our lawyer, Eliamani, an expert in intellectual property. Gen AI tools can be trained on items protected by IP, and there is significant legal uncertainty over whether the training, use, and outputs of AI tools represent IP infringements. What are some implications for education? Thank you very much.
Eliamani Isaya Laltaika: Thank you very much, Dr. Huang, for that question. Just before I go into the IP question, I want to appreciate the last speaker, the professor, for really opening this up. As a judge, I was trying to imagine getting a case where a student is suing the university after being accused of using gen AI to produce their thesis and being denied their PhD: I cannot graduate because this work is said to be from gen AI. I am a judge; I need to do justice not only to the student, but to the university and to the universe. And, surprisingly, the professor has said it is impossible to detect. That is an interesting entry point into what I want to say, because we are hyping AI to the extent that we lose the things we were working for and try to reinvent the wheel. Copyright has been at the center of education, and copyright is at loggerheads with gen AI for several reasons. First, there is a saying in my language which means: whenever you come across something very impressive, you should be sure that someone has toiled to make it so. So any time you put a prompt into ChatGPT or any other generative AI and get a wonderful text that meets your expectations, to the extent that it cannot even be detected, as the professor said, you should be sure that it borrows heavily from what existed before. The data used to train ChatGPT and other AIs allegedly involved heavy violation of copyrighted work. Many people think: yes, this looks like a chapter in my book. I am a professor of sociology, and somebody has just put in a prompt and gotten the whole five, six years of my work. So cases are going to court, with people saying ChatGPT has violated copyright. That has an impact on education. Secondly, there is the issue of attribution and ownership. Academics are known all over the world for generously acknowledging other people's work.
I was reading one paper when I was coming here, and this professor has cited so many works; on one page, half a page is footnotes, saying this is from so-and-so. We are not seeing that in ChatGPT. It does not say this comes from Dr. Huang's university in Macau. No, no, no. And thirdly, there are overly restrictive laws coming as a result of ChatGPT and other AIs. Now we are being very reactive; we are moving away from the established principles of copyright towards extremely restrictive laws. What can be done to strike the right balance? These are just my opinions; they are not binding. Some people think whatever a judge says is binding, but here I am speaking just as an academic, because, as I was introduced, I still teach; I was appointed to the bench from academia and I still retain my position. Number one, we must revisit copyright frameworks to ensure that there is some sort of remuneration for those whose works have been used to train AI, and we should encourage open licensing: if your work is being used out there, you should be compensated. I very much support monetary compensation for authors and creatives, because if we don't do that, we are shooting ourselves in the foot; computers and AI will continue to feed on our literary and cultural texts for a long time. I also think there is a need to establish ethical guidelines at every level: at the university level, at the ministry level, as the former minister has wonderfully stated. He has spent a long time with educators, because he speaks like one of them about the challenges of preparing slides and so on. I also think there should be promotion of public-private partnership, because the government cannot succeed alone by trying to enact laws. It is only by learning from those who are directly affected by these laws and regulations that we can get them right.
I need to say just a little bit, for the next two minutes or so, about how AI is a blessing in disguise for developing countries. In Tanzania, for example, researchers get money from the state and then publish in international journals that you cannot access. I can see through Google a paper published by my professor, but I am required to pay to access it. It's data from my country; it's the knowledge of my professor, who was trained by taxpayers from my country; yet the company has restricted this knowledge from me, so there was no equity. Now I can see AI coming forcefully and saying: okay, I don't want to promote these laws; I want equity. Yes. Thank you very much.
Jingbo Huang : Eliamani has already started touching on issues related to generative AI in the Global South. In the second round of questions, we will focus more on the Global South perspective. As you know, in the UN, "leave no one behind" is a central value, so let's look at gen AI from the Global South perspective. The first question goes again to Mohamed. You used to be a policymaker in the Maldives. Would you please reflect on how generative AI has created digital divides between the Global North and Global South? What should policymakers consider to help reduce the divides and promote more equitable access to gen AI, particularly in small island countries like the Maldives?
Mohamed Shareef: The potential of gen AI is undoubtedly huge. Today, everyone expects, and should rightly expect, gen AI to support us in achieving the Sustainable Development Goals. If not for AI, how would we have survived the pandemic? AI is already supporting us in the darkest of our times. What I fear most is that AI opens a new front for transformation practitioners working in the Global South: a new front in the war on the digital divide, because we are already facing a lot of challenges trying to catch up with the rest of the world. Now, suddenly, there is a Red Bull in the mix. For me, gen AI is like a Red Bull: it gives digital transformation wings. Suddenly, it's sprinting. Those who can have this Red Bull, how are you going to catch up with them? In the Global South, we are only just getting a grip on it. So what can we do? The digital divide is multifaceted, but there are three aspects we usually talk about: access to technology, economic disparities, and educational gaps. On all three, the developed world has a huge advantage. The developed world is investing its wealth today in AI and, in particular, in generative AI. Not only that, there is a huge educational gap, because they are investing in the ecosystems of the developed part of the world; we are losing even our smartest brains to the developed world, and this further exacerbates the educational gap. And access remains extremely limited: I'm sure you have already heard that 26% of the globe still remains offline, and 50% of that is in the Asia-Pacific. Island communities are particularly impacted. So what would I suggest, as somebody who has long been a practitioner in digital transformation in the developing part of the world? I would say we've got to find a narrative where even developing countries, even the least developed countries, prioritize investment in IT infrastructure.
In the Maldives, over the last five years, we've gone from having just one submarine cable connecting us to the rest of the world to five submarine cables. We've gone from geostationary satellite internet to real internet. We've invested a lot to make sure that we are connected in every way possible, from under the water and from the sky down. But then you've got to make sure AI and digital literacy are in the curriculum. This is extremely important, but it is extremely hard as well. I am actually working with higher education in particular to develop AI modules that are multidisciplinary and taught to every student: from nurses to finance specialists, AI needs to be taught. And then we've got to develop an ecosystem where we can retain our smartest rather than lose them to the West. So governments and private industry need to partner; we alone cannot do it, as His Excellency has pointed out. And of course, nation states as well as educational institutions should come up with policies for the ethical use of AI. We cannot just jump into AI without proper national and institutional guidelines that safeguard our data and our privacy. And finally, we cannot do it alone. This is why we are very glad to be working alongside institutions like the UNU. We've got to work together if we are going to bridge the AI divide and not let it divide us even further. Thank you.
Jingbo Huang : Since Mohamed already mentioned capacity building, the next question, for Antonio, is this: if you were teaching teachers from the Global South to integrate generative AI tools into their classrooms, what would you tell them about the technical nature of gen AI tools to help them understand the benefits and limitations? So we are talking about teacher education in the Global South.
Antonio Saravanos: An excellent question. I would begin by highlighting that there are two sides to the tool. On the one hand, we have the solution being used by teachers to make them more productive, as other panelists also mentioned: generating slides, generating these types of resources, perhaps assessments and so on. In that sense, it's quite powerful, and there needs to be training for that. The other half is: how can we use it in assignments for the students? How can the students use it to make themselves more productive? So one point is understanding that there are these two dimensions. The other, again mentioned by other panelists, is the digital divide. Luckily, there are free tools that one can use, but it's important to recognize that they are restricted. This goes a bit tangentially, but it would be quite wise for academics to work together to figure out ways to gain access to the more advanced solutions, and also to develop their own local tools that might not be as limited. A lot of the work is open source, so can we run our own AI solutions locally, with support, and learn and develop in that area, so as not to be left behind? If I were teaching teachers from the Global South, I would begin by highlighting the technical nature of these tools: their benefits and their limitations. A good starting point is: what is gen AI in general? What can it be used to generate: text, images, music, code? So they get a good perspective of everything that is possible to do, because sometimes one is limited to the use cases they have heard from others. I think a good overview is a great starting point, and then highlighting the advantages and disadvantages.
So it can summarize, explain, and create content, but it doesn't think or understand. Even though it makes one think it is thinking, it's not really like a human; as was mentioned by another panelist, it's just guessing probabilities: what should come next? In that sense, talk about what the open solutions are and what the paid solutions are. One can use DALL-E for images; Google Colab has an AI feature to generate code. So there's a lot out there. And not everything will be appropriate for every instructor; it depends on what subject matter you're teaching and how the AI works, as was mentioned before. It may not be easy to catch someone plagiarizing using ChatGPT in sociology, but it may be easier if it's an intro to programming course. So it really depends on the context. I see I'm running a bit short on time, so I'll stop there, but I'm happy to expand on the conversation offline if anyone is interested. Thank you, Antonio. May I please ask our technicians to put the PowerPoint back up again?
Jingbo Huang : And so the next question is for Mike. You developed the AI assessment scale, which allows gen AI to be integrated into educational assessments while promoting academic integrity and the ethical use of these technologies. Would you please tell us more about it?
Mike Perkins: Sure. Thank you very much. So I was earlier telling you how it's really not feasible to say, well, we're just going to tell that students have used gen AI tools. But especially not if you're in the Global South, because these tools also cost a lot of money. The most accurate detection tools, which could be used to have these conversations, are going to be the ones that are most expensive, and in the Global South you may not be able to afford them. So what's the alternative then? What can we do to change things up? Well, what I've developed is a framework for how we can actually introduce gen AI tools in an ethical way into assessment settings. What this is, is a conversation starter between academics and students to say, look, we know that gen AI tools exist. We can't put the genie back in the bottle, as much as some academics would probably like to, and say, let's just go back to where we were. So what we have is a situation where, in the last two years, academics have been saying, oh, these students are cheating using gen AI, yet still setting the same essay questions they've set for 20 years. But we're beyond that point. Now is the time to change, and the AI assessment scale is a tool primarily for assessment redesign. It's a way to say, look, what are the important things we need to change? So this starts off right from the very beginning, where we say, look, there are some times where we can't use any AI at all.
Now, if you are a medical student, and you are training future nurses and doctors, you want to ensure that when that student graduates and becomes a doctor, they actually know the fundamental biological aspects of a human. So how are you going to test that, then? Well, you're not going to say, here's an assignment: write me about the human heart. You're going to put them in an exam, or a face-to-face assessment situation, or a one-to-one viva, or a presentation, and say, tell me about this situation. There's a corpse: demonstrate that you know how to dissect it before I'm going to let you practice on somebody else. You're not going to give them an assignment saying, tell me about how you would cut up this corpse. And hopefully, when they graduate, they're not going to be dealing with corpses; they're going to be able to deal with live humans. So sometimes we need this fundamental knowledge to be tested, and that's when we say, look, there are some times where there's no AI, and this is a secured assessment. But as soon as we go away from a secured assessment, it is no longer possible to control whether AI is used. So if you can't control how the students are using gen AI tools, what you need to do is change your assessments so that they focus on the things that you want to train them on. So, for example, at level two, we're talking about process-based assessments. Say you're a writing instructor, and you want to teach students how to plan an essay. So you ask, what tools can you use to help you plan an essay? And then you submit that as your assessment, and then we explore it.
Because when students graduate, they are going to be asked to do tasks by their employers, and what we want is for them to be able to produce that output. We don't say, oh, well, you're not allowed to use the internet to do your job, or you're not allowed to use your gen AI tools. So we've got to train students how to use them effectively for different situations. At the next level, we might say we want to have AI as a collaborator. We want to train students how to use gen AI tools to draft text, to adjust what they are creating, maybe even to give them feedback on their work. What we're looking at is this co-creation element, rather than trying to say, oh, well, you wrote that part, and the AI wrote that part. I write using gen AI tools, and by the time I've finished writing my journal paper, I can't tell which part I wrote by myself and which part the AI wrote, and it's all my ideas and it's all my voice. So the idea is that we're training students to maintain their voice and to maintain a critical approach to what the best way to use AI is. And then we can go beyond that. There are some times where we want students to use AI tools specifically, and therefore we want to assess how well they're actually using gen AI tools. So rather than just saying, oh, yeah, you can use AI for this, we say, you must use AI for this, or show me your use of this tool to solve this final problem. We call this level full AI, and it used to be the final level of the scale. Then we recognized that technology is changing so rapidly that we need to recognize the increasing multimodal use of AI, and that's why we have this final AI exploration level. In this AI exploration level, what we're looking at is solving problems that we don't necessarily even know exist yet.
How can we use gen AI to solve problems that have been created by gen AI, or to do things in a different way: to fundamentally use gen AI in an element of co-design and co-working between an educator, a student, and a gen AI tool to actually solve something new? Now, we're not going to be talking about K-12 students here, but we may be talking about an IB final project, undergraduate dissertations, master's students, PhD students. So this is how we can bring all of these together into this five-point scale, which can hopefully support students in using AI in a different way. And that's how I think we can use it in the Global South. It's a free framework; it doesn't require any licensing. If you want to take this and adapt it, we have tools available. This one on the bottom: you can download a translation; we have this translated into 12 different languages already, with more to come. And we also have the design assets linked there; it's just a Canva asset that you can change and adapt to your own context, because not everybody is the same. Every country has different requirements, so we've got to be able to change accordingly. So that's some information about the AI assessment scale.
Jingbo Huang : Thanks. Thank you, Mike. I think the tools are very useful. And thinking about the Global South, the capacity building for teachers and students would probably be more of a challenge than in the Global North; that's just my impression. So the last question, but not least, goes back to our legal expert, Eliamani. Africa, where you are from, will be the future of the world; I really believe so. It is expected to contribute 62% of global population growth over the next 25 years. Nigeria and Tanzania, specifically Tanzania, where you are from, are expected to grow their populations by 50 to 90%. How can Africa benefit from and contribute to the development of gen AI in education?
Eliamani Isaya Laltaika: Thank you very much for that question, which is very close to my heart. I'll start with the similarities between Tanzania and Nigeria as an entry point to the question. Both Tanzania and Nigeria are hosts to Nelson Mandela African Institutions of Science and Technology. It is unfortunate that many people don't know that the late Nelson Mandela had a vision of science, technology, and innovation powering the next generation of Africans and increasing competitiveness on the continent. As a result, four institutions were established and named after him, and I come from the one in Arusha, the Nelson Mandela African Institution of Science and Technology; that's why you can see that I'm an adjunct faculty member there. This has been our way of positioning ourselves to benefit from the global innovation landscape, and also to contribute to it. The one in Arusha was established in 2011, and the one in Nigeria in 2007; you can Google them and you will see that quite a number of world-class innovations have been happening in these institutions, contributing meaningfully towards empowering the next generations of Africans. Our current president, a female head of state we are very proud of, Samia Suluhu Hassan, has championed STEM education, science, technology, engineering, and mathematics, as the way to prepare the next generation of Tanzanians, and Africans in general, to contribute meaningfully to innovation.
And just last week, she reshuffled the cabinet, and when addressing the nation on that, she told the Minister of ICT, who I'm told is on his way to attend this, that his ministry has been narrowed so that he can focus specifically on ICT, to ensure that he explores whatever is going to help Tanzania improve and compete on all fronts in terms of ICT. We have a long way to go, and this is how I want to finish my contribution: by asking everyone to ensure that you give a hand to the Global South. Personally, I studied my master's and PhD in Germany on a generous scholarship from the Max Planck Society, and many of my agemates studied in the US and the UK. We are not seeing this happening anymore on a larger scale. So we still need bridges to be built so that the North and the South can share expertise and knowledge. That's how we can position ourselves not only to benefit, but also to contribute meaningfully to gen AI and STEM in general.
Jingbo Huang : Thank you, Eliamani. Well, conscious of time, the session has to end, but I would like to encourage all of you who would like to exchange ideas with the panel members: please come up and we can have individual conversations. And for those online, sorry that we cannot accommodate your questions; feel free to write to us and we can have an exchange later. Thank you very much to our panelists, and thank you for being here and listening to the session. Thank you.
Mohamed Shareef
Speech speed
124 words per minute
Speech length
1399 words
Speech time
672 seconds
Educators in Maldives already using generative AI, but lack knowledge and training
Explanation
Mohamed Shareef found that 85% of educators in Maldives are already using generative AI, but their primary concern is lack of knowledge and training opportunities to leverage it effectively. This indicates a need for capacity building in AI for educators in developing countries.
Evidence
Survey of 270 educators in Maldives showing 85% usage of generative AI and concerns about lack of knowledge and training
Major Discussion Point
Use of Generative AI in Education
Agreed with
Antonio Saravanos
Mike Perkins
Eliamani Isaya Laltaika
Agreed on
Need for AI education and capacity building
Generative AI risks exacerbating existing digital divides between Global North and South
Explanation
Mohamed Shareef argues that generative AI is creating a new front in the digital divide, as developed countries invest heavily in AI infrastructure and education. This widens the gap between the Global North and South in terms of access, economic disparities, and educational opportunities.
Evidence
Comparison of AI investments and educational ecosystems between developed and developing countries
Major Discussion Point
Generative AI and the Global South
AI and digital literacy should be integrated into curricula
Explanation
Mohamed Shareef emphasizes the importance of incorporating AI and digital literacy into international curricula. He suggests developing multidisciplinary AI modules for all students, regardless of their field of study, to prepare them for the AI-driven future.
Evidence
Work with higher education institutions to develop AI modules for various disciplines
Major Discussion Point
Changing Educational Approaches for AI
Antonio Saravanos
Speech speed
123 words per minute
Speech length
1062 words
Speech time
515 seconds
Need to reframe generative AI as a tool for deeper understanding, not just producing answers
Explanation
Antonio Saravanos advocates for reframing generative AI as a tool to enhance understanding, creativity, and problem-solving abilities, rather than a shortcut for answers. He emphasizes the importance of teaching students to use AI effectively as they will rely on it in their future careers.
Evidence
Examples of using case studies with AI-generated outputs containing mistakes to teach critical thinking
Major Discussion Point
Use of Generative AI in Education
Agreed with
Mohamed Shareef
Mike Perkins
Eliamani Isaya Laltaika
Agreed on
Need for AI education and capacity building
Educators should focus on teaching students to use AI effectively, not banning it
Explanation
Antonio Saravanos argues that educators should guide students in using generative AI as a tool to deepen their understanding and problem-solving abilities. He emphasizes the importance of preparing students for future careers where AI tools will be prevalent.
Evidence
Examples of incorporating AI-generated solutions into classroom discussions and critiques
Major Discussion Point
Changing Educational Approaches for AI
Agreed with
Mike Perkins
Agreed on
Redesigning educational approaches for AI integration
Need to develop local AI tools and solutions in Global South to avoid dependence
Explanation
Antonio Saravanos suggests that academics in the Global South should work together to develop local AI tools and solutions. This approach can help overcome limitations of free tools and reduce dependence on expensive solutions from the Global North.
Evidence
Mention of open-source AI solutions that can be run locally with support
Major Discussion Point
Generative AI and the Global South
Mike Perkins
Current AI detection tools are not reliable for accusing students of plagiarism
Explanation
Mike Perkins argues that current AI detection tools are not suitable for accusing students of plagiarism due to their inaccuracy. He emphasizes the risk of falsely accusing students, particularly those from disadvantaged backgrounds or with language barriers.
Evidence
Research showing low accuracy rates of AI detection tools and the risk of false accusations
Major Discussion Point
Use of Generative AI in Education
Differed with
Antonio Saravanos
Differed on
Effectiveness of AI detection tools in academic settings
AI assessment scale framework can help ethically integrate AI into education in Global South
Explanation
Mike Perkins presents an AI assessment scale framework for ethically integrating AI into educational assessments. This framework provides a conversation starter between academics and students, offering a way to redesign assessments to accommodate the reality of AI tools.
Evidence
Description of the AI assessment scale with different levels of AI integration in assessments
Major Discussion Point
Generative AI and the Global South
Agreed with
Mohamed Shareef
Antonio Saravanos
Eliamani Isaya Laltaika
Agreed on
Need for AI education and capacity building
Assessments need to be redesigned to focus on skills AI can’t replicate
Explanation
Mike Perkins argues for redesigning assessments to focus on skills that AI cannot replicate. He suggests moving away from traditional essay questions to more practical, process-based assessments that test students’ ability to use AI tools effectively.
Evidence
Examples of assessment redesign using the AI assessment scale framework
Major Discussion Point
Changing Educational Approaches for AI
Agreed with
Antonio Saravanos
Agreed on
Redesigning educational approaches for AI integration
Eliamani Isaya Laltaika
Speech speed
117 words per minute
Speech length
1289 words
Speech time
657 seconds
Copyright frameworks need to be revisited to address use of copyrighted material in AI training
Explanation
Eliamani Isaya Laltaika argues that copyright frameworks need to be revisited to ensure proper attribution and compensation for works used in AI training. He emphasizes the importance of striking a balance between encouraging innovation and protecting creators’ rights.
Evidence
Discussion of copyright violations in AI training and the need for attribution in AI-generated content
Major Discussion Point
Use of Generative AI in Education
Ethical guidelines for AI use in education needed at all levels
Explanation
Eliamani Isaya Laltaika emphasizes the need for establishing ethical guidelines for AI use in education at various levels. He suggests that these guidelines should be developed at university, ministry, and government levels to ensure responsible AI use.
Major Discussion Point
Changing Educational Approaches for AI
Africa needs partnerships with Global North to build AI/STEM capacity
Explanation
Eliamani Isaya Laltaika argues that Africa needs continued partnerships with the Global North to build capacity in AI and STEM fields. He emphasizes the importance of scholarships and knowledge-sharing to position Africa to both benefit from and contribute to global AI development.
Evidence
Personal experience with Max Planck Society scholarship and examples of Nelson Mandela African Institutions of Science and Technology
Major Discussion Point
Generative AI and the Global South
Agreed with
Mohamed Shareef
Antonio Saravanos
Mike Perkins
Agreed on
Need for AI education and capacity building
Agreements
Agreement Points
Need for AI education and capacity building
Mohamed Shareef
Antonio Saravanos
Mike Perkins
Eliamani Isaya Laltaika
Educators in Maldives already using generative AI, but lack knowledge and training
Need to reframe generative AI as a tool for deeper understanding, not just producing answers
AI assessment scale framework can help ethically integrate AI into education in Global South
Africa needs partnerships with Global North to build AI/STEM capacity
All speakers emphasized the importance of educating both teachers and students about AI, its proper use, and integration into the curriculum.
Redesigning educational approaches for AI integration
Antonio Saravanos
Mike Perkins
Educators should focus on teaching students to use AI effectively, not banning it
Assessments need to be redesigned to focus on skills AI can’t replicate
Both speakers argue for adapting educational methods to incorporate AI tools effectively rather than trying to ban or detect their use.
Similar Viewpoints
Both speakers highlight the potential for AI to widen the gap between developed and developing countries, emphasizing the need for collaboration and support from the Global North.
Mohamed Shareef
Eliamani Isaya Laltaika
Generative AI risks exacerbating existing digital divides between Global North and South
Africa needs partnerships with Global North to build AI/STEM capacity
Unexpected Consensus
Limitations of AI detection tools in education
Mike Perkins
Eliamani Isaya Laltaika
Current AI detection tools are not reliable for accusing students of plagiarism
Copyright frameworks need to be revisited to address use of copyrighted material in AI training
While coming from different perspectives (education and law), both speakers highlight the inadequacy of current tools and frameworks to address AI-related challenges in education and copyright.
Overall Assessment
Summary
The speakers generally agreed on the importance of AI education, the need to redesign educational approaches, and the challenges faced by the Global South in AI adoption.
Consensus level
Moderate to high consensus on the main issues, with implications for a collaborative, global approach to integrating AI in education while addressing equity concerns.
Differences
Different Viewpoints
Effectiveness of AI detection tools in academic settings
Antonio Saravanos
Mike Perkins
Unfortunately, it’s quite easy to detect the use of ChatGPT or another artificial intelligence, specifically an NLP (natural language processing) model, at the novice level, at the student level.
Current AI detection tools are not reliable for accusing students of plagiarism
Antonio Saravanos believes it’s easy to detect AI use in student work, while Mike Perkins argues that current AI detection tools are unreliable and risk false accusations.
Unexpected Differences
Perception of AI in developing countries
Mohamed Shareef
Eliamani Isaya Laltaika
Generative AI risks exacerbating existing digital divides between Global North and South
Africa needs partnerships with Global North to build AI/STEM capacity
While both speakers discuss AI in developing countries, their perspectives differ unexpectedly. Shareef focuses on the risks of AI widening the digital divide, while Laltaika emphasizes the opportunities for partnerships and capacity building in Africa.
Overall Assessment
Summary
The main areas of disagreement revolve around the detection of AI-generated content in academic settings, the approach to integrating AI in education, and the perception of AI’s impact on developing countries.
Difference level
The level of disagreement among speakers is moderate. While there are clear differences in some areas, there is also general agreement on the importance of adapting education to incorporate AI. These differences highlight the complexity of integrating AI in education globally and the need for nuanced approaches that consider both opportunities and challenges.
Partial Agreements
Both speakers agree on the need to change educational approaches for AI, but differ in their specific recommendations. Saravanos focuses on using AI as a tool for deeper understanding, while Perkins emphasizes redesigning assessments to test skills AI can’t replicate.
Antonio Saravanos
Mike Perkins
Need to reframe generative AI as a tool for deeper understanding, not just producing answers
Assessments need to be redesigned to focus on skills AI can’t replicate
Similar Viewpoints
Both speakers highlight the potential for AI to widen the gap between developed and developing countries, emphasizing the need for collaboration and support from the Global North.
Mohamed Shareef
Eliamani Isaya Laltaika
Generative AI risks exacerbating existing digital divides between Global North and South
Africa needs partnerships with Global North to build AI/STEM capacity
Takeaways
Key Takeaways
Generative AI is already being widely used in education, but educators often lack proper training and knowledge to use it effectively
Current AI detection tools are unreliable for identifying AI-generated content in academic settings
Generative AI risks exacerbating existing digital divides between the Global North and South
Educational approaches and assessments need to be redesigned to effectively integrate AI tools
Copyright frameworks and ethical guidelines need to be updated to address AI use in education
Partnerships between the Global North and South are needed to build AI capacity in developing regions
Resolutions and Action Items
Develop ethical guidelines for AI use in education at institutional and national levels
Integrate AI and digital literacy into curricula across disciplines
Redesign assessments to focus on skills AI cannot replicate
Invest in IT infrastructure in developing countries to improve access to AI tools
Establish partnerships between Global North and South for AI capacity building
Unresolved Issues
How to effectively detect AI-generated content in academic work
How to ensure equitable access to advanced AI tools in the Global South
How to balance copyright protections with the use of copyrighted material in AI training
How to retain skilled AI professionals in developing countries
Suggested Compromises
Use AI assessment frameworks that allow controlled integration of AI tools into education rather than banning their use
Develop local, open-source AI solutions in the Global South to reduce dependence on expensive proprietary tools
Revisit copyright frameworks to allow some use of copyrighted material in AI training while providing compensation to creators
Thought Provoking Comments
Educators cannot effectively detect the use of gen AI tools. There have been several studies which have demonstrated this.
speaker
Mike Perkins
reason
This challenges the common assumption that AI-generated content can be easily detected, introducing a significant problem for academic integrity.
impact
It shifted the conversation from how to detect AI use to how to adapt education systems to work with AI. It led to discussion of changing assessment methods and integrating AI into curricula.
AI is a blessing in disguise in developing countries. In Tanzania, for example, researchers get money from the state and then they publish in international journals. You cannot access them.
speaker
Eliamani Isaya Laltaika
reason
This provides a unique perspective on how AI could potentially democratize access to knowledge in developing countries.
impact
It broadened the discussion to consider the global implications of AI in education, particularly for the Global South. It led to further exploration of digital divides and equitable access to AI technologies.
For me, gen AI is like a Red Bull. It gives digital transformation wings. Suddenly, it’s sprinting. Those who can have this Red Bull, how are you going to catch up with them?
speaker
Mohamed Shareef
reason
This vivid metaphor effectively illustrates the potential for AI to widen existing digital divides.
impact
It focused the discussion on the urgent need for policies and strategies to ensure equitable access to AI technologies, particularly in developing countries.
What I’ve developed is a framework for how we can actually introduce gen AI tools in an ethical way into assessment settings.
speaker
Mike Perkins
reason
This offers a practical solution to the challenges of integrating AI into education while maintaining academic integrity.
impact
It moved the conversation from identifying problems to discussing concrete solutions, providing a framework for educators to adapt to the reality of AI in education.
Overall Assessment
These key comments shaped the discussion by challenging assumptions about AI detection, highlighting global inequalities in AI access, and proposing practical solutions for integrating AI into education. The conversation evolved from identifying challenges to exploring nuanced perspectives on AI’s impact in different global contexts and discussing concrete strategies for ethical AI integration in education. This progression deepened the analysis and broadened the scope of the discussion beyond initial concerns about academic integrity to encompass global educational equity and the transformation of assessment methods.
Follow-up Questions
How can educators effectively assess students’ work when generative AI tools are widely available?
speaker
Mohamed Shareef
explanation
This is a key concern for higher education faculty as they struggle to determine if students are submitting their own work or AI-generated content.
What are effective strategies for teaching students to use generative AI tools responsibly and ethically?
speaker
Antonio Saravanos
explanation
As generative AI becomes more prevalent in education and industry, students need guidance on how to leverage these tools appropriately.
How can assessment methods be redesigned to account for the existence of generative AI tools?
speaker
Mike Perkins
explanation
Traditional assessments may no longer be effective, so new approaches are needed to evaluate student learning in the age of AI.
What legal and ethical frameworks are needed to address copyright and intellectual property issues related to generative AI in education?
speaker
Eliamani Isaya Laltaika
explanation
The use of copyrighted materials to train AI models raises complex legal questions that need to be resolved.
How can the digital divide between the Global North and Global South be addressed in relation to generative AI technologies?
speaker
Mohamed Shareef
explanation
Ensuring equitable access to AI tools and education is crucial to prevent widening global inequalities.
What strategies can be employed to retain talented AI researchers and practitioners in developing countries?
speaker
Mohamed Shareef
explanation
Preventing brain drain is important for building local AI capacity in the Global South.
How can generative AI tools be effectively integrated into educational curricula in resource-constrained environments?
speaker
Antonio Saravanos
explanation
Adapting AI education for contexts with limited technological infrastructure is crucial for global equity.
What role can international partnerships play in supporting AI education and research in Africa?
speaker
Eliamani Isaya Laltaika
explanation
Collaboration between Global North and South institutions could help build AI capacity in developing regions.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online