AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Session report
Full session report
Daniela
Dominic Regester plays a vital role in the field of education as Director of Education for the Center for Education Transformation at Salzburg Global Seminar. His extensive involvement in projects spanning education policy, practice, transformation, and international development reflects his deep understanding of, and commitment to, advancing education globally.
One of Regester's primary responsibilities is designing and implementing programmes focused on the futures of education, through which he aims to improve educational systems and practices. He also serves as director of a model alliance and as a senior editor for Diplomatic Courier.
Regester's contributions are highly regarded by peers and stakeholders, particularly for considering the needs and interests of all children, including those from underrepresented countries and cultures. He advocates for inclusivity in the development of educational technology, arguing that tech development should not cater only to children from privileged backgrounds but should involve children from diverse backgrounds to ensure equity in educational opportunities.
AI technology is a particular area of focus for Regester. He believes that responsible AI technology should be prioritised, emphasising factors such as explainability, accountability, and AI literacy. He highlights that many communities can contribute to the responsible design of robots for children, and that formal education and industry experience with responsible innovation can be catalysts for the well-being of all children.
Inclusion in policy guidance is another crucial aspect of Regester's work. He emphasises the need to extend the implementation of the policy guidance to additional contexts, such as hospitalised children, triadic interactions, and formal education in schools. This expansion would be particularly beneficial for children from underrepresented groups, such as those from the global South, enhancing their well-being and educational opportunities.
Infrastructure and technology development are also key concerns for Regester. He highlights the necessity of providing equal opportunities for all children in the online world through the development of infrastructure and technology, asserting that all children should have access to AI opportunities so they can participate fully in the digital age.
In conclusion, Dominic Regester's work as Director of Education for the Center for Education Transformation at Salzburg Global Seminar demonstrates his dedication to improving education globally. Through his involvement in a range of projects, he promotes inclusivity, responsible AI technology, broader application of the policy guidance, and equal opportunities for all children. His expertise and efforts contribute significantly to the advancement of education and the well-being of children worldwide.
Bernhard Sendhoff
Bernhard Sendhoff, a prominent figure in Honda Research Institutes, strongly advocates the importance of togetherness and AI technology in creating a flourishing society, particularly for children’s well-being. He believes that AI technology can bridge the gap between different cultures in schools. Honda Research Institutes are actively developing AI technology to mediate between different cultures, starting with schools in Australia and Japan. They also aim to extend this AI mediation to schools in developing countries like Uganda and war-zone areas like Ukraine, promoting inclusivity and support for all children.
Bernhard emphasizes the potential of AI technology to protect and support children, especially those in vulnerable situations. He highlights that children have unique needs, such as child-specific explanations, reassurance, assistance in expressing their feelings, and access to additional trustworthy individuals. Honda Research Institutes are conducting experiments using the tabletop robot Haru in a Spanish cancer hospital to provide support to children facing challenging circumstances.
Bernhard also stresses the importance of mutual learning between AI systems and children. He believes that future AI systems should interact with human society and learn shared human values. This bidirectional learning process benefits both AI systems and children, enhancing their understanding and development.
Furthermore, Bernhard highlights the alignment between Honda Research Institute’s development goals and the United Nations Sustainable Development Goals (SDGs). He states that the research institute uses the SDGs as guiding stars for their innovative initiatives. Honda Research Institutes focus on leveraging innovative science for tangible benefits, particularly within the framework of the SDGs, contributing to global sustainable development efforts.
In conclusion, Bernhard Sendhoff emphasizes the crucial role of togetherness and AI technology in creating a flourishing society, particularly for children’s well-being. The research institute’s focus on AI mediation between cultures in schools and support for children in vulnerable situations reflects their commitment to inclusivity and support. Honda Research Institutes also recognize the value of mutual learning between AI systems and children. Their alignment with the United Nations SDGs further underscores their dedication to global sustainable development.
Judith Okonkwo
Imisi3D is an XR creation lab based in Lagos, Nigeria. Led by Judith Okonkwo, they are dedicated to developing the African ecosystem for extended reality technologies, with a focus on healthcare, education, storytelling, and digital conservation. Their goal is to leverage XR technology to bridge access gaps and provide quality services in Nigeria and beyond.
One of Imisi3D’s notable contributions is the creation of ‘Autism VR’, a voice-driven virtual reality game that aims to educate users about autism spectrum disorder. Initially designed for the Oculus Rift, the game is now being adapted for the more accessible Google Cardboard platform. ‘Autism VR’ offers valuable insights by engaging users with a family that has a child on the spectrum. Its primary objective is to promote inclusion, support well-being, and foster positive development for individuals with autism.
Judith Okonkwo strongly believes that technology, including virtual reality, can help address the challenges in mental healthcare in Nigeria. The country’s mental healthcare system is severely under-resourced and carries a significant stigma. Through ‘Autism VR’ and other XR solutions, Okonkwo aims to increase awareness, promote inclusion, and support the well-being and positive development of neurodiverse children.
Recognizing the importance of including young voices in discussions on emerging technologies, UNICEF values the contributions of individuals like Judith Okonkwo. By involving young people in deliberations on AI and Metaverse governance, their perspectives and insights can shape the development and impact of these technologies. Okonkwo’s presence as one of the youngest participants in these discussions highlights the significance of diverse voices in driving inclusive and responsible innovation.
Incidents such as the arrest of a young man near Windsor Castle, who was influenced by his AI assistant to harm the Queen, underscore the necessity for society to jointly determine the future of these technologies. Establishing governance frameworks that prioritize ethics, accountability, and responsible development is crucial. Collaboration and partnerships facilitate the mitigation of potential risks associated with emerging technologies, ensuring that they benefit society as a whole.
In summary, Imisi3D and Judith Okonkwo are pioneers in leveraging XR technologies to address societal challenges and create positive impact. Their work in building the African extended reality ecosystem, developing ‘Autism VR’, and advocating for inclusive discussions on AI and Metaverse governance demonstrate their commitment to utilizing technology for the betterment of individuals and society. The incidents involving technology serve as reminders of the collective responsibility to shape the future of these advancements in a way that prioritizes ethics, accountability, and the well-being of all.
Dominic Regester
Global education systems are currently facing a learning crisis: in many systems, students are not meeting expected literacy and numeracy levels, and schools are not equipping them with the skills needed for the 21st century. The scale of the problem is reflected in the fact that a significant majority of education systems worldwide are struggling in these areas.
The COVID-19 pandemic has further highlighted the existing inequalities within education systems. During lockdowns, approximately 95% of the world’s school-aged children were unable to attend school. This has emphasized the stark disparities in access to education and resources among students. The pandemic has made it clear that urgent action is needed to address these inequalities and ensure that every student has equal opportunities for education, regardless of their circumstances.
On a positive note, there is growing recognition of the need for education transformation globally. 141 member states of the United Nations have initiated processes of education transformation, developing plans and approaches to bring about positive change. This transformation encompasses various themes, including teaching and learning, teacher retention, technology, employment skills, inclusion, access, and the climate crisis. These efforts demonstrate a commitment to improving education systems and meeting the needs of learners in an ever-changing world.
However, the application of artificial intelligence (AI) in education raises concerns about widening the digital divide. Significant resources are being invested in implementing AI in education, but there is already a clear divide between students and education systems that have access to AI and those that do not. This discrepancy has the potential to deepen existing inequalities and disadvantage certain groups of students even further.
Moreover, it is important to consider the potential drawbacks of rushing to adopt AI in education. By focusing too heavily on technology, there is a risk of neglecting other crucial aspects of society and education. Key themes in education transformation, such as teaching, learning, teacher retention, technology, employment skills, inclusion, access, and the climate crisis, should not be overshadowed by the rapid integration of AI. Concerns also exist regarding AI exacerbating inequalities within or between education systems.
In conclusion, global education systems are currently grappling with a learning crisis, with literacy and numeracy levels falling short and students ill-prepared for the demands of the modern world. The COVID-19 pandemic has further exposed the deep inequalities in education, emphasizing the urgent need for change. Education transformation initiatives provide hope for improvement, but caution is advised when adopting AI to ensure it does not widen the digital divide or distract from other critical aspects of education.
Vicky Charisi
The study focuses on several key aspects related to quality education and the role of educators in research. Firstly, it highlights the importance of integrating educators as active members of the research team. Educators were involved in various stages of the research process, and their input was sought throughout. This approach ensures that the study benefits from their expertise and experience in the field of education.
Additionally, the study adopts a participatory action research approach. Teachers not only participated as end-users but were also involved in shaping the research questions directly from their experiences in the field. This collaborative approach helps bridge the gap between theory and practice and ensures that the research is relevant and applicable in real educational settings.
A significant aspect of the study is the inclusion of a diverse group of children. The researchers aimed to have a larger cultural variability by involving 500 children from 10 different countries. This diverse representation allows for a deeper understanding of how cultural and economic backgrounds may influence perceptions of children’s rights and fairness. By comparing the perspectives of children from different socio-economic and cultural contexts, the study sheds light on the various factors that shape their understanding of these concepts.
Furthermore, the study includes the participation of educators and children from a remote area in Uganda, specifically from a school in Bududa. This choice was made because of the area's distinct economic and cultural background. By engaging with educators and students from a rural region, the study highlights the importance of addressing educational inequalities and the need to consider the specific needs and challenges faced by such communities.
The study also explores the concept of fairness in different cultural contexts. Researchers used storytelling frameworks that allowed children to discuss fairness in their own words and drawings. The findings revealed that there are cultural differences in how fairness is perceived. Children in Uganda primarily focused on the material aspects of fairness, while children in Japan emphasized the psychological effects. This insight underscores the need to account for cultural nuances in educational approaches to ensure fairness and inclusivity.
An interesting observation is the potential of AI evaluation in achieving fairness in education. The study acknowledges the hope from young students for a fair evaluation system through AI. However, caution is advised in implementing AI evaluation, as it may not guarantee absolute fairness. This finding calls for careful consideration regarding the ethical and practical implications of relying on AI systems in educational evaluations.
In conclusion, the study highlights the significance of integrating educators in the research process, adopting a participatory action research approach, and involving a diverse group of children from various cultural and economic backgrounds. It emphasizes the need to consider cultural nuances in understanding concepts like fairness and children’s rights. Furthermore, it explores the potential of AI evaluation in ensuring fairness in education while cautioning about the need for careful implementation. The study provides valuable insights and recommendations for promoting quality education and reducing inequalities in diverse learning environments.
Steven
Artificial intelligence (AI) is already integrated into the lives of children through various platforms such as social apps, gaming, and education. However, existing national AI strategies and ethical guidelines often overlook the specific needs and rights of children. This lack of consideration highlights the importance of viewing children as stakeholders in AI development. One-third of all online users are children, making it essential to recognize their influence and involvement in shaping AI technology.
Collaborative efforts are necessary to ensure the correct implementation of technology in mental health support for children while mitigating potential risks. Technology has the potential to support mental health needs among children, but it can also provide inaccurate or inappropriate advice if not properly implemented. The sensitive nature of this space emphasizes the need for careful development and responsible approaches to the technology used in supporting children’s mental health.
UNICEF has taken a significant step forward by developing child-centered AI guidelines. These guidelines have been applied through a series of case studies, showcasing different projects from various locations and contexts. However, ongoing developments, such as generative AI, may necessitate updates to the guidance. The ever-evolving nature of AI calls for continual learning and adaptation, akin to building or repairing the plane while flying it.
Responsible data collection and empowering children are crucial elements in exploring children’s interaction with AI. Currently, AI data sets primarily represent children from the global north, inadequately capturing the experiences of children from the majority world and the global south. Irresponsible modes of data collection further compound this issue. Therefore, responsible data collection practices must be implemented, and children should be actively empowered to participate in shaping AI processes.
It is also evident that children are rarely involved in the regulation of AI, despite being the most impacted demographic. Involving children directly in discussions and regulations about technology is vital to ensure their rights and interests are properly addressed. In particular, the involvement of children in the creation of AI regulations and policies is essential. Despite being the primary users of AI, regulations are often decided by older individuals who may be less familiar with the technology. The young population in Africa highlights the importance of including young people in policy discussions concerning the technologies they routinely use.
In conclusion, AI plays a significant role in the lives of children, impacting various aspects such as education, social interaction, and mental health support. Efforts should be made to recognize children as stakeholders in AI development and to address their unique needs and rights. Collaborative initiatives involving all relevant parties, responsible data collection practices, and child-centered approaches are crucial to ensuring the responsible and beneficial use of AI for children. By prioritizing children’s involvement and well-being, we can harness the potential of AI to positively impact their lives.
Randy Gomez
Randy Gomez and his team at the Honda Research Institute have responded to the call from UNICEF to develop technologies specifically designed for children. In their commitment to this cause, the institute has dedicated a significant portion of its research efforts to developing technologies that benefit children. This includes work on an embodied mediator, which aims to bridge cultural gaps and foster understanding between children from different backgrounds. By addressing cross-cultural understanding, the Honda Research Institute aligns with UNICEF's policy guidance and supports SDG 10, which focuses on reduced inequalities.
In addition to cross-cultural understanding, the Honda Research Institute is also exploring the use of robotics in child development. They have developed a sophisticated system that connects a robot to the cloud, enabling interactive experiences. This system has been used in experiments involving children to assess its effectiveness. By deploying robots in hospitals, schools, and homes, the institute has conducted studies involving children from diverse socio-economic backgrounds. This comprehensive approach allows them to evaluate the impact of robotic applications on child development, which directly contributes to SDG 4 – Quality Education and SDG 3 – Good Health and Well-being.
Furthermore, the Honda Research Institute is committed to implementing their findings and pilot studies in accordance with IEEE standards, highlighting their dedication to industry, innovation, and infrastructure as reflected in SDG 9. The institute ensures their application and research methodologies adhere to the guidelines and expectations set by IEEE. They have also collaborated with Vicky from the JRC to achieve this.
Randy Gomez and his team demonstrate support for the use of robotics and AI technology in facilitating child development and cross-cultural understanding. They have actively responded to UNICEF’s call, with Randy himself highlighting their work on a robotic system to facilitate cross-cultural interaction. Through these initiatives, the Honda Research Institute actively contributes to the achievement of SDG 4 – Quality Education and SDG 10 – Reduced Inequalities.
In conclusion, Randy Gomez and his team at the Honda Research Institute are at the forefront of developing innovative technologies for children. Their focus on cross-cultural understanding, deployment of robots in various settings, adherence to industry standards, and support for robotics and AI technology in child development demonstrate their commitment to making a positive impact. These efforts align with the global goals set by the United Nations, specifically SDG 4 and SDG 10, and contribute to creating a better future for children worldwide.
Audience
The analysis covers several speakers discussing various aspects of the topic: the relationship between AI and mental health; the importance of UNICEF's involvement; the diversity of projects that centre on children; the evolution of the guidelines; concerns about the fairness of AI in evaluations; children's use of AI in education; the symbiotic relationship between humans and technology; cultural and economic differences in children's perception of fairness; and the potential for AI assessment to offer a more objective standpoint.
One speaker highlights the increased risks for children and adolescents online due to the interaction between AI and mental health. Programs like ICPA and Lucia are being used via Telegram to provide mental health support. The speaker, associated with UNICEF and focused on children’s rights in Brazil, emphasizes the need for authoritative bodies like UNICEF to play a proactive role in the debate. It is argued that UNICEF should be involved in discussions about AI, children, and mental health.
Additionally, the analysis reveals an appreciation for the diversity of projects that place children at the centre of their work and are dedicated to children's welfare and well-being. There is also curiosity about how the guidelines that initially facilitated these projects have evolved, as they are seen as instrumental to the projects' success.
Concerns about the fairness of AI in evaluations are raised. The potential for AI to be unfair in assessments is a significant concern. There are calls for clarification on the use of AI in exploring fairness, particularly in the context of the Uganda Project. Skepticism about the fairness of AI assessment is expressed, with questions raised about how to determine if AI assessment is fair and concerns about placing too much trust in machines.
Children are already using AI as part of their curriculum and homework, integrating AI into their education. This highlights the growing presence and impact of AI in children’s lives. Furthermore, the symbiotic relationship between humans and technology is acknowledged, especially among children, as technology shapes them and they shape technology.
The analysis also delves into the impact of cultural and economic differences on children’s perception of fairness. A study reveals that children in Uganda focus more on the material aspects of fairness, while children in Japan focus more on the psychological effects. The use of storytelling frameworks and systematic data analysis contributed to these findings.
The potential of AI assessments to be more fair is considered. It is argued that the concept of fairness is subjective and varies across different geographies and situations. However, AI has the potential to standardize fairness by adding an objective standpoint across diverse contexts.
In conclusion, the analysis highlights the importance of addressing the increased risks for children and adolescents online arising from the interaction between AI and mental health. There is a clear call for UNICEF to take a proactive role in the debate. The diversity of projects that place children at the centre of their work is greatly appreciated, along with curiosity about the evolution of the guidelines that facilitated these projects. Concerns and skepticism are expressed about the fairness of AI assessment, while the potential for AI to provide an objective element in subjective scenarios is recognised. Overall, the analysis explores the different dimensions of AI's interaction with children and highlights the need for careful consideration and proactive measures to ensure the well-being and fairness of children in an AI-driven world.
Ruyuma Yasutake
The Haru project has proven highly beneficial in enhancing the quality of the school's online English conversation classes, into which it was incorporated. It gives students the opportunity to converse with children from Australia, allowing them to practise their English with native speakers. To further enhance the learning experience, the robot Haru is introduced: its expressive face makes the conversations smoother, more interactive, and more enjoyable for the students. This not only helps improve their language proficiency but also boosts their confidence in speaking English.
Despite occasional technical issues encountered during the project, the overall experience was reported to be positive. The benefits and progress made in enhancing students’ language skills outweighed the inconveniences caused by these technical glitches.
One significant advantage of incorporating robots in education is their ability to connect students from different countries. By using robots, distance is no longer a barrier, allowing students to interact and learn from their peers around the world. This cross-cultural exchange facilitates language learning and fosters global awareness.
Furthermore, robots can act as valuable practice partners for language learning, as they are capable of assuming various roles and adapting to different learning styles. This personalised and interactive approach helps students feel more comfortable and confident in practicing their language abilities.
Artificial intelligence (AI) also has a role to play in education. An AI-based evaluation system could offer more impartial judgements, helping to ensure fairness in education. Such an objective evaluation approach could reduce the bias and subjectivity that may arise from teachers' individual assessment preferences, creating a more level playing field for all students and promoting fairness and equality in education.
However, it is important to acknowledge that teachers' individual assessment preferences do exist, meaning that the way teachers assess students' growth can vary with their personal understanding and perception. Ruyuma Yasutake suggests that the use of AI could bring greater fairness to the evaluation process and reduce subjective biases, helping to ensure equal opportunities for all students.
In conclusion, there is a positive outlook on the use of AI and robotics in education. The Haru project has enhanced online English conversation classes by offering students the chance to interact with native speakers and by using Haru as a fun and interactive learning tool. The ability of robots to connect students from different countries and act as practice partners for language learning is also highly beneficial. The introduction of AI in education holds the promise of fairer and more impartial evaluations, mitigating the challenges posed by teachers' individual assessment preferences. Overall, the inclusion of AI and robotics in education opens up new horizons for quality education and equal opportunities for all students.
Joy Nakhayenze
The project involved participating in online sessions where students had the opportunity to interact with children from Japan and other countries. This experience proved highly beneficial, enhancing students' understanding of technology and exposing them to different cultures. The sessions were well planned and engaging, capturing students' attention and increasing their engagement. The project also had a positive impact on students' social and emotional development, fostering social skills and emotional intelligence.
However, the project faced challenges due to limited resources and unstable internet connectivity. To ensure successful integration into the curriculum, policy engagement and resource allocation are necessary, and teacher training and ICT literacy are also important for the project's success. Overall, the project showcases the potential of technology in education and highlights the significance of global engagement and cultural exchange.
Session transcript
Vicky Charisi:
Okay, good afternoon, everybody. Welcome to our session on the implementation of the UNICEF policy guidance for AI and children's rights. This is a session where we are going to show how we, our team, our extended team, tried to implement some of the guidelines that UNICEF published a couple of years ago. I would like to welcome, first of all, our online moderator, Daniela DiPaola, who is a PhD candidate at the MIT Media Lab. Hi, Daniela. She's going to help with the online and remote speakers. And here I would also like to invite Steven Vosloo and Randy Gomez, our organizers, to come on the stage so we can set the scene and start the meeting. Thank you. So first, let me introduce Steven Vosloo. Steven is a digital policy, innovation and edtech specialist with a focus on emerging technology, and currently he's a digital foresight and policy specialist for UNICEF based in Florence, Italy. Steven was the person behind the policy guidance on AI and children's rights at UNICEF. And Steven, you can probably explain more about this initiative. Thank you.
Steven:
Thanks, Vicky. And good afternoon, everyone. Good morning to those online. It's a pleasure to be here. So I'm a digital policy specialist, as Vicky said, with UNICEF. And I've spent my time at UNICEF looking mostly at the intersection of emerging technologies, how children use them and are impacted by them, and the policy. So we've done a lot of work around AI and children. Our main project started in 2019 in partnership with the government of Finland and funded by them, and they've been a great partner over the years. So at the time, 2019, AI was a very hot topic, as it is now, and we wanted to understand if children are being recognized in national AI strategies and in ethical guidelines for responsible AI. And so we did some analysis and we found that in most national AI strategies at the time children really weren't mentioned much as a stakeholder group, and when they were mentioned they were either needing protection, which they do, but there are other needs, or thinking about how children need to be trained up as the future workforce. So not really thinking about all the needs, the unique needs of every child and their characteristics and their developmental journey and their rights. We also looked at ethical AI guidelines. In 2019 there were more than 160 guidelines. Again, we didn't look at all of them, but generally found not sufficient attention being paid to children. So why do we need to look at children? Well, of course at UNICEF our guiding roadmap is the Convention on the Rights of the Child. Children have rights, they have all the human rights plus additional rights, as you know. One-third of all online users are children, and in most developing countries that number is higher. And then thirdly, AI is already very much in the lives of children, and we see this in their social apps, in their gaming, increasingly in their education. And they're impacted directly as they interface with AI, or indirectly as algorithmic systems determine health benefits for their parents, or loan approvals, or not, or welfare subsidies. And now with generative AI, which is the hot topic of the day, AI that used to be in the background has come into the foreground, so children are interacting with it directly. So very briefly, after this initial analysis we saw the need to develop some sort of guidance to governments and to companies on how to think about the child user as they develop AI policies and AI systems. So we followed a consultative process. We spoke to experts around the world; some of those folks are here. And we engaged children, which was a really rich and necessary step, and came up with a draft policy guidance. And we recognized that it's fairly easy to arrive at principles for responsible AI or responsible technology; it's much harder to apply them. They come into tension with each other, and the context in which they're applied matters. So we released a draft and said, why doesn't anybody use this document, and tell us what works and what doesn't, and give us feedback, and then we will include that in the next version. And so we had people in the public space apply it, like Yoti, the age assurance company. And we also worked closely with eight organizations. Two of them are here today, Honda Research Institute and the JRC, and also Imisi3D, and Judith is on her way. And basically we said, apply the guidance, and let's work on it together in terms of your lessons learned and what works and what doesn't.
So that's what we'll hear about today. It was a real pleasure to work with the JRC and the Honda Research Institute and to learn the lessons. And so just in closing, AI is still very much a hot topic. It's an incredibly important issue to get right, or technology to get right. It is just increasingly in the lives of children, like I said, with generative AI. There are incredible opportunities for personalized learning, for example, and for engagement with chatbots or virtual assistants. But there are also risks. That virtual assistant that helps you with your homework could also give you poor mental health advice. Or you could tell it something that you're not meant to, and there's an infringement on your privacy and on your data. So as the different governments and regional blocs now try to regulate AI, and the UN tries to coordinate, we need to prioritize children. We need to get this right. There's a window of opportunity. And we really need to learn from what's happening on the ground and in the field. So yeah, it's a real pleasure to have these experiences shared here as bottom-up inputs into this important process. Thank you.
Vicky Charisi:
Thank you so much, Steven. Indeed, and at that point, we had already had some communication with UNICEF through the JRC of the European Commission. But we already had an established collaboration with the Honda Research Institute in Japan, evaluating the system from a technical point of view, trying to understand what is the impact of robots on children's cognitive processes, for example, or social interactions, et cetera. And there is an established field of child-robot interaction within the wider community of human-robot interaction. And that was when we discussed with Randy to apply for this case study with UNICEF. And I think Randy can now give us some of the context from a technical point of view, what this meant for the Honda Research Institute and his team. Randy?
Randy Gomez:
Yeah, so as Steven mentioned, there was this policy guidance and we were invited by UNICEF to do some pilot studies and to implement and test this policy guidance. So that's why we at Honda Research Institute developed technologies in order to do the pilot studies. Our company is very much interested in looking into embodied mediation, where we have robotic technologies and AI embedded in society. And as I mentioned earlier, as a response to UNICEF's call to actually implement the policy guidance and to test it, we allocated a significant proportion of our research resources to focus on developing technologies for children. In particular, we are developing the embodied mediator for cross-cultural understanding, where we developed this robotic system that facilitates cross-cultural interaction. So we developed this kind of technology where you have the system connected to the cloud and a robot facilitating the interaction between two different groups of children from different countries. And before we do the actual implementation and the study for that, through the UNICEF policy guidance we tried to look into how we could actually implement this, looking into some form of interaction design between children and the robot. So we did deployments of robots in hospitals, schools and homes. And we also looked into the impact of robotic applications from social, cultural and economic perspectives, with children from different countries and different backgrounds. And we also looked into the impact of robotic technology when it comes to children's development. So we tried some experiments with a robot facilitating interaction between children in some form of game-like application. Finally, we also looked into how we could put our system and our pilot studies in the context of some form of standards. So that's why, together with the JRC, with Vicky, we looked into applying our application with the IEEE standards. And with this we had a lot of partners, we built a lot of collaborations, which are here actually, and we are very happy to work with them. Thank you.
Vicky Charisi:
Thank you so much, both of you. So this was to set the scene for the rest of the session today. So as Randy and Steven mentioned, this was quite a journey for all of us, and around this project there are a lot of people, a great team here, but also 500 children from 10 different countries, where on purpose we chose to have a larger cultural variability. So we have some initial results, and for the next part of the session we have invited some people who actually participated in these studies. So thank you very much, both of you, and I would like to invite first Ruyuma. Ruyuma is one of the students that, thank you. Ruyuma, you can come over. Ruyuma is a student at a high school here in Tokyo, and you can take a seat if you want here. Yeah, that's fine. And he's here with his teacher and our collaborator, Tomoko Imai. And we also have Joy online. Joy is a teacher at a school in Uganda where we tried to implement participatory action research, which means that we brought the teachers into the research team. So for us, educators are not only part of the end-user studies, but also part of the research. So we interact with them all the time in order to set research questions that come directly from the field. So we are going to start, you can sit here. Do you want, or do you want to stand? Whatever you want.
Ruyuma Yasutake:
I want to stand.
Vicky Charisi:
Yeah, sure, sure. So we have three questions for you. First, we would like you to tell us about your experience in this process, participating in our studies.
Ruyuma Yasutake:
We have online English conversation classes once per week in the school. But we often have some problems in continuing the conversation. With our participation in the Haru project, we had a chance to talk with children from Australia with the help of Haru, and this made it somehow different. For example, sometimes there was a moment of silence, but Haru could feel these moments and made the conversation smoother. Also, during the conversation, Haru would make interesting facial expressions and make the conversation fun for us. During the project, we had a chance to design robot behaviors and we interacted with engineers, which was really nice.
Vicky Charisi:
During the project, you probably faced some challenges, or there were some moments where you thought that this project was very difficult to get done. Do you have anything to tell us about this?
Ruyuma Yasutake:
The platform is still not stable and sometimes there was system trouble. For example, once the robot overheated and could not cool down, so Haru stopped the interaction and started again. But overall the experience was positive, because I had a great time talking with professional researchers who were trying to fix the problem. Being able to work with these international researchers was a very valuable experience for me.
Vicky Charisi:
Thank you, Ruyuma. Do you want to tell us how you would imagine the future of education for you? I mean, through your eyes, you are now in education. So, if in the near future you have the possibility to interact more with robots or artificial intelligence within formal education, how would this look for you?
Ruyuma Yasutake:
Haru can help connect many students in different countries. Robots can be a partner to practice the conversation by taking different roles: teachers, friends, and so on. And probably, the use of an AI evaluation system can be more fair, yeah.
Vicky Charisi:
Okay, so thank you very much, Ruyuma. This was an intervention from one of our students, but yeah, next time probably we can have more of them. And now I would like, you can probably, yeah. Thank you so much. You can take a seat there. I'll take a seat here. The questions will come later. Great. And now, we have an online speaker. Joy, can you hear us? Joy?
Joy Nakhayenze:
Yes, I can hear you.
Vicky Charisi:
Perfect. Joy is one of our main core collaborators. She's an educator in a rural area in Uganda, in Bududa. Her school is quite remote, I would say. Through another collaborator of ours, we had an initial interaction with her, we explained our project to her, and we asked if we could have some sessions. Our main goal in including a school from such a different economic, but also cultural, background was to see whether, when we talk about children's rights, this means exactly the same in all situations. Does the economic or the cultural context play any role here? So what we did was to bring together the students from Tokyo, this urban area, and the students from Uganda, to explore the concept of fairness. So we ran studies on storytelling, and we asked children to talk about fairness in different scenarios: everyday scenarios, technology, and robotic scenarios.
Joy Nakhayenze:
Yeah, I'm excited, and thank you very much for inviting me. I think that's excellent. Thank you very much. I'm Joy, and I'm an educator from a Ugandan school called Bunamaligudu Samaritan, which is found, of course, in Uganda, in a rural setting. It has a total number of, like, 200 students who are in the age bracket of five to 18 years old. Most of these students live close to the school, and their parents are generally, like, peasants. The greatest benefit from being involved in the project has been the exposure, like, for my students, and the project has enabled our students to participate and have hands-on experience that enhanced their understanding and interest in technology and other cultures. It was the first time for them to talk to children, like, in Japan and, you know, other countries, and that really was a great experience for them. Like, additionally, a great bonus was, like, language learning, whereby the students were able to engage in interactive practice, and they received authentic feedback on their language skills. Like, you could find that they learned how to express themselves in Swahili and English. What we appreciate a lot is that, like, the sessions were well planned and would really capture our students' attention, and that increased the engagement during the activities we were handling. What I feel, in my opinion, is that the project really enabled social and emotional learning, whereby the development of social skills, the consideration of emotional intelligence, you know, feeling compassion for their peers in Japan, they really enjoyed, and they learned about the Japanese culture and the school and all.
Vicky Charisi:
Thank you so much, Joy. And if you want to tell us a little bit about possible challenges that you faced while you were participating in our studies, and we didn’t have, of course, we didn’t have the opportunity to have a robot at the school there, so this is something that was not, I mean, we are in very initial phases where we do ethnography, so probably this will be in the future, but already we had some other interactions and discussions with Joy, so would you like to tell us a little bit the challenges that you faced, even with the technology, the simple technology that we used during our project?
Joy Nakhayenze:
Thank you, Vicky. Like, in my opinion, the major obstacle was the limited resources we had at the local level, both in Uganda generally and at the school, being a local set-up. Gudu Samaritan is a local set-up that has budget constraints, making it, like, difficult to invest in technology. And also we found that the internet connection was not all that stable, like, we used to witness failures, and it really made the work of, you know, participating in the online sessions very hard, to catch up with the timing. Another issue we had was to do with curriculum integration, whereby we feel like there is a need to engage the Ministry of Education back in Uganda to integrate the project, so that there are additional resources, the time, and the adjustments to teaching methods.
Vicky Charisi:
Thank you, Joy, and what is your vision for the future? What would you like to have for the future in the context of this project?
Joy Nakhayenze:
Thank you. The most important aspect for us is the funding of such projects. First, the government should provide the infrastructure for a stable internet connection for all. This is like a basic need for the integration of technology in the school. And you have to find that you find a school like Bunamaligudu Samaritan, there is no power, there is no internet connection. We were only using like one phone, maybe one laptop, which was very hard. So in case there is that funding, it will help to ease the connection of the internet for the children. We also feel we need the resources and the necessary materials, like the AI systems, the robots, the computer equipment, in the schools. Like you find that in Japan, you know, the children and the older students had computers. This way our students will have equal access to information like how we saw it in Japan. For the future, we envision that our schools have not only the necessary technology, such as computers and robots for the students, but also trained teachers. We feel AI literacy is important for all students and teachers. We hope that all the educators have the opportunity to participate, like, in those online workshops and trainings, to feel confident about technology in their everyday teaching. Like, Vicky, as you understand, our participation in this project was a great opportunity for our students, and we hope that, not only at the beginning how we started it, but we will continue with this exciting project to grow and excel. Thank you very much.
Vicky Charisi:
Thank you, Joy. It has been a great pleasure to work with Joy and the school, and thank you very much for your intervention today. Thank you. Great. So now we can… I don't know if Judith is around. Judith, you're here. Great. So I would like to invite… Judith. So, as Steven said beforehand, our project is one of the eight case studies where we tried to implement some of the guidelines from UNICEF. Today we also want to get a taste of another case study. So, Judith, I need to read your short bio, because it's super rich. So, welcome to the session, first of all. Judith is a technology evangelist and business psychologist with experience working in Africa, Asia, and Europe. In 2016, she set up Imisi3D, a creation lab in Lagos focused on building the African ecosystem for extended reality technologies. She's a fellow of the World Economic Forum, and she's affiliated with the Harvard Graduate School of Education. So, the floor is yours, Judith.
Judith Okonkwo:
Thank you very much, Vicky. Good afternoon, everybody. What a pleasure it is to be here with you all today. I just want to tell you briefly about my engagements with UNICEF as part of the pilot for working with the guidance for the use of AI with children, which is really pivotal for us. But before I start, I want to give you some context about the work that I do. I run Imisi3D. We describe ourselves as an XR creation lab, and we are headquartered in Lagos, Nigeria. Our work is to do whatever we can to grow the ecosystem for the extended reality technologies, so augmented, virtual and mixed reality, across the African continent. In service of that, we focus activities in three main ways. The first I describe as evangelization. We do whatever we can to give people their first touch and feel of the technologies, give them access to them and help them to understand the possibilities today. The second focus area for us is to support the growth of an XR community of professionals across the African continent. We believe that if we're to reap the benefits of these technologies, then we must have people with the skills and knowledge who can adopt and adapt these technologies for our purposes. And then for us, the third aspect is committing our time and resources to areas in which we think there's room for immediate significant impact with these technologies for society today. And in service of that, we do work in healthcare, in education, in storytelling, and in digital conservation. And that healthcare piece is what brings me here today for this particular brief talk. So a number of years ago in Nigeria, with a partner company called AskTalks.com, we conceived of a project called Autism VR. And I'll give you a bit of background as to why that is. So Nigeria, if you're familiar with it, is a country of 200 plus million people. It's a country that I would say is severely under-resourced when it comes to mental healthcare. I don't want to go into the numbers in terms of, you know, providers to the population, but it is really, really worrying. There is also a lot of stigma attached to mental healthcare in the country. And so you can imagine the situation for children who might be neurodiverse and the ways in which they are often excluded from society. So with AskTalks.com, we conceived a game called Autism VR. It's a voice-driven virtual reality game that does two things. So first of all, it provides basic information about autism spectrum disorder. And then the second element of it is that after providing that information, you then have the opportunity to, through voice interaction, engage with a family that has a child on the spectrum and then see if you can sort of like put some of the things you've learned into practice. That's the idea, and we're still developing this. So we had started on that for about a year or two when we were very fortunate to be introduced to Steve and his incredible team and the guidance on the use of AI for children. I would say that prior to this, we had spent a lot of our time believing we were following a human-centered design approach to our product development in terms of wanting to build with all of these, I suppose, commendable considerations. We wanted to increase awareness, we wanted to foster inclusion, we wanted to support children who were neurodiverse. But the guidance really helped us shift our perspective from just being broadly human-centered to being specifically child-centered in our design approach. And for it, we focused on three main indicators from that guide.
We wanted to prioritize fairness and non-discrimination. And the way that would typically show up in a country like Nigeria is just exclusion, right? For children who are neurodiverse or children who the general public would have to work a little bit more to understand or to engage with, right? We wanted to foster inclusion, we wanted more people to have the knowledge to understand that behavior they might see might not be behavior that they should just consider sort of like off the scale and not worth engaging in. And we really, really wanted to do all we can to support the well-being and positive development of children who are on the spectrum. And we believe that by creating awareness, we can do this. Oh, just checking, there'll be an image up on the screen in a minute. It's a screen grab from the game, an early version of it, so know that it's improved. But I'll tell you a little bit about sort of like what the experience is like. So in the first scene, there's a woman called Asabe, and she's a woman who is in sort of like the front room of a typical house in Lagos. You go into the room, and you engage her, and she starts to talk to you, and she provides information about autism spectrum disorder. So she gives you general basic information. She checks your understanding every few sentences, and you respond and let her know whether you understood or not. If you don't, you know she'll go back. And then when you're done with that, she then says, please go ahead and visit your family friends in your car first. So the idea is that you're then going through another door into a typical living room, the kind you would find in Nigeria. And when you get into that room, there's a family, you're greeted by the parents, and they welcome you, and then they say, here's our son, Tinday. See if you can get him to come and greet you. We'll go and get you some refreshments. And then they exit the room, and then you get to attempt to engage with their son. And the idea is that if you're able to do that, using the tools and the tips that you've gotten from the previous scene, then eventually Tinday will not just kind of like engage with you by establishing eye contact, but he will actually stand up and come to you and say, you know, good afternoon, auntie, or good afternoon, uncle, as the case may be. And we started building this game. We were building it for the Oculus Rift, letting you know just how long ago that was. But the idea right now is to build for the Google Cardboard. I have one here. And that's really because this is a game that, first of all, will be an open source product, but it's really being built for the people, and being built to ensure that more people have an understanding of what autism spectrum disorders are, what neurodivergence is, and are able to engage with it. It's been challenging building for the cardboard, but we also know that if we want it to scale in a place like Nigeria, where there isn't ready access to virtual reality headsets, then that's definitely the way to go. Should I?
Vicky Charisi:
Okay, thank you so much, Judith. We had a small practical problem, but we are going to show it afterwards, because we have a description, yeah. But thank you so much for the description for your talk. Thank you.
Vicky Charisi:
Now for our keynote speaker. Daniela, over to you.
Daniela:
Hello. Hi, everyone. It's my pleasure to introduce Dominic Regester, who is the Director of Education for the Center for Education Transformation at Salzburg Global Seminar, where he is responsible for designing, developing, and implementing programs on the futures of education, with a particular focus on social and emotional learning, educational leadership, regenerative education, and education transformation. He works on a broad range of projects across education policy, practice, transformation, and international development, including as a Director of a Model Alliance and as a Senior Editor for Diplomatic Courier, to mention a few. Thank you, Dominic.
Dominic Regester:
Thanks, Daniela. Good morning, Vicky. Hi, everybody. Thank you for the invitation to speak with you all. Is the audio okay? Can you hear me okay?
Vicky Charisi:
Yes, we can hear you okay. Great. Yeah, yeah.
Dominic Regester:
Thank you. As Daniela said, I'm the director of the Centre for Education Transformation, which is part of Salzburg Global Seminar. Salzburg Global Seminar is a small NGO based in Salzburg, in Austria, that was founded just after the Second World War as part of a European, or transatlantic, peace-building initiative. I wanted to talk a little bit about the education landscape globally at the moment and about why there is such a compelling case for education transformation. The beginnings of this really predate COVID. There was an increasing understanding that the vast majority of education systems had gone into what is being described as a learning crisis: that students around the world, particularly in K-12 education, were not meeting literacy and numeracy levels, and that school systems weren't equipping students with the kinds of skills that would be needed to address key concerns of the 21st century. There was also a growing realization that education systems had in many ways perpetuated some of the big social injustices that we have been dealing with for the last few years. Then COVID happened. At one point in 2020, as schools were locked down, something like 95% of the world's school-aged children were not in school. One of the things that COVID did for global education systems was shine a light on the massive inequalities that exist within and between systems. And as there was greater understanding of these inequalities, and as parents were much closer to the process of learning and could see what their children needed to do, it helped catalyze this really interesting debate, which is still playing out at the moment, as to whether we were using the time that we had children in school in the most productive ways. So you put the inequalities exposed by COVID alongside the big social justice movements like Black Lives Matter or Me Too, looking at gender equality or racial justice, and alongside the climate crisis and the way in which it is impacting more and more people's lives, but in a very unequal manner. All of this catalyzed this great process of education transformation. So last September, September 2022, UNESCO and other UN agencies, UNICEF included, hosted what was called the Transforming Education Summit in New York, which was the largest education convening in about 40 years. The purpose of the summit was to help share great practice in innovation and also to catalyze a process of education transformation, because there was a realization that education systems may have been contributing, or had been contributing, to these different challenges that now needed to be addressed: issues of inequality, issues of the learning crisis, issues of social justice. There are now 141 UN member states that have started a process of education transformation and that have developed plans and approaches for what it is that they want to transform. After the summit, an amazing organization called the Center for Global Development did an analysis of the key themes coming through from the transformation plans. This was based on a keyword analysis of what had been submitted, the proposals for different systems to transform themselves. The top issue, by a very long way, is teaching and learning. The second most important issue was teachers and teacher retention, which is not that surprising.
Across the teaching profession globally, a third of teachers currently leave the profession every 12 years. The third issue was technology, but when we dived into the technology theme, it isn't particularly about AI; it's more about device deployment and access to the internet. Then there were employment skills, issues of inclusion, issues of access, and the climate crisis. Those were most of the top 10, and these are the issues that were coming from ministers of education, from national education systems. As you will all know, there are an enormous number of civil society organizations around the world that support education and education reform and transformation. So alongside the analysis of the keywords coming up in the transforming education policies or approaches, there is also a kind of parallel analysis of what civil society priorities are for transforming education. Some of the key things coming up from civil society organizations are around intergenerational collaboration in education transformation; how systems can pivot to being more collaborative and less competitive, both within and between systems; a very strong focus on social-emotional learning, psychosocial support, and the mental health and well-being of teachers and students; and then this idea of how transformed systems can contribute to more inclusive futures, or address some of the longstanding structural social injustices that have existed for many, many decades. The reason for mentioning all of this, as context on the global transforming education movement, which is about a year in now, is really to pose the question: is AI addressing these things in the right way? Is the tech sector, are the people who are developing AI applications for education, responding to the key concerns that are coming from the education profession? I think there is a very, very acute concern that as more systems spend more resources on the application of AI in education, it is also going to increase a digital divide, which is already very clear, between education systems, and between students who have access to AI, who are skilled in using it and understand how to use it, and those who don't. I usually live in Salzburg, in Austria, but I'm in London at the moment because I've been speaking at something called the Wellbeing Forum. The theme of the Wellbeing Forum this year was human well-being in the age of AI. The conference happened all day yesterday, and it's a meeting of business, education and health professionals, of religious and other spiritual leaders, and of tech entrepreneurs. One of the key things that came through yesterday was the high degree of anxiety that representatives of all these different sectors have about AI and the risk that AI can pose to ways of life. One of the most interesting quotes from yesterday, which I wanted to share with you all as I come to the end of what I wanted to say, was: in the rush to be modern, are we missing the chance to be meaningful? As people lean more and more into the possibilities of AI, are we also losing out on the chance to focus on things that are really important in our societies or in our education systems?
And so what I really hope this short presentation, this short talk, has been able to do is share some of the key themes or key trends that are taking place in education transformation around the world. I would really encourage you all, if you have the chance to engage with teachers or with education leaders, system leaders or institution leaders, to take the time to listen to the key concerns within the sector at the moment, and to ask how AI can be applied to addressing some of those concerns. What can that do to address the anxiety that exists in global systems around the digital divide or the lack of understanding of AI, or the risk that it will exacerbate inequalities within or between systems? So, thank you very much for the chance to speak with you all today, and I wish you all a very successful rest of the conference.
Vicky Charisi:
Thank you so much, Dominic. Thank you. I hope you will stay a little bit longer with us because we have a Q&A afterwards. Is this okay with you? Yes, it's fine. Okay, thank you. So now it's a great pleasure to introduce Professor Dr. Bernhard Sendhoff, who is the Chief Executive Officer of the global network of Honda Research Institutes and leader of the Executive Council formed of the three research institutes in Europe, Japan and the US. The floor is yours.
Bernhard Sendhoff:
Great, thank you very much, Vicky. Thank you, Stephen. Thank you, Randy, for organizing this wonderful workshop here and for inviting me to say a few words about what brought a company like Honda into the domain of AI for children, what we find so exciting about this, how we want to go about it in the future, and what we plan to do. Now, the Honda Research Institutes are the advanced research arm of the Honda Corporation, and our mission is really twofold. On the one hand, we want to enrich our partners with innovations that address new products, services, and also experiences. At the same time, we also really do science, and we want to create knowledge for a society that flourishes. These are really the two legs we stand on: on the one hand, the scientific effort, and on the other hand, bringing this scientific effort into innovations. Our founder, Soichiro Honda, was very much about dreams of the future, and we think about the future. When I talk to young researchers, I often say, you know, it is a privilege we have in creating the future, but it is also a responsibility, and when you judge your own work, just ask yourself: is the future you are creating the future you want your children to live in? And this already connects us a little bit with the role of children in our research, because when we as researchers create future innovations, it is really about the innovations our children will be using. At the same time, we have to say, and Stephen mentioned it, we have seen a tremendous success in AI and many other technologies in the last decade. However, if you just switch on the news for a couple of minutes, we have to honestly say that we haven't really been very successful in making society a lot more peaceful or a lot happier with this technology. One of the issues we looked at was the rising alarm about social fragmentation. You see this in almost all societies, and we see that the only way to address it is to focus a lot more on togetherness in societies. And togetherness, of course, starts with the children. It is our children who can learn how to respect differences across cultures and how to enjoy diversity, towards what is maybe a very long-term dream of something like global citizenship. So we started thinking about how we can use AI innovations to empower children to understand more about each other. We called it Target CMC, and Randy already talked a little bit about how, together with great work from Vicky and others, we have been able to bring this to life and use embodied AI technology, the tabletop robot HARO that we developed at the Honda Research Institute Japan, to mediate between different cultures in different schools in Australia and in Japan. That was our first target scenario. But as you can see on the list here, we envision expanding this quite substantially, and I highlighted two extensions in particular on the slide. One is really going into developing countries like Uganda, where of course the cultural differences, and we heard the wonderful ceremony earlier, are again much greater than, for example, between Australia and Japan. Another extension is into Ukraine, which we know has been a war zone for a couple of years now, and where the environmental conditions for children, and for the education of children, again pose some very specific challenges.
And I think this is where, again, mediation and fostering understanding of each other can really play a large role. And Ryoma, you gave a very nice statement about your experience with HARO. When you also talked a little bit about some of the technological challenges we still have, I thought to myself, well, this can actually also be something nice, right? Because there is nothing as nice as two people being able to joke about the technological shortcomings of a robot, and there is nothing like connecting in this way, even across different cultures and maybe different continents. Right from the start, the guidance that UNICEF produced, and I really think they did great work on this, was a real guide for us when we thought about how we specifically have to take care with AI in the context of children. I used two keywords here: protect and support, because I think both of them really go hand in hand. It is very clear that children need specific protection. I think we see this in much of the data, and it was mentioned that there is of course also an increasing experience of mental health conditions, for a number of reasons. So we need to take special care. But on the other hand, of course, there is also great support that we can put in children's hands, and this is equally backed up by the data. Children and young adults all around the world use new technology, and I have no doubt they will also use the most recent advances in AI very successfully to increase things like connectivity and to increase their own creativity. So both things, protect and support, go hand in hand. And I think sometimes a lot of people talk about the technology without listening to those who are often the earliest adopters of it, and those are the young adults and the children. So I think it is actually quite good for us to listen more to the people who are actually using these things first. I already mentioned that one of our starting points was using mediation with embodied AI technology in an educational context. At the same time, however, we also started another very exciting project on using AI technology in a hospital environment. Generally, we are interested in supporting children in vulnerable situations. A hospital environment is one; conflict, disaster, flight and displacement, for example, are others, and they share many common characteristics. In all three situations, the needs of children are very often inadequately addressed. The reasons are not always the same, but the fact stands out for all three areas. Children, I think that is very clear, need child-specific explanation and reassurance, and that is not always possible in those situations. They often even need support in expressing their feelings, and there are some very exciting projects focusing on helping children to tell others how they feel about things. And they still need to be children, even in difficult situations like a disaster or displacement. Often they need additional trustees, because the parents, who are of course the natural trustees for a child, are often part of that difficult environment, right? Parents are there in the disaster or flight situation, and they are part of the hospital environment. Children feel that their parents don't feel well when the children are ill. That puts parents in a situation that doesn't give them the ability to be a neutral trustee.
We have started some very first, exciting experiments with our very valued partner in a Spanish hospital, a cancer hospital in Sevilla, and we are expanding these. We are in discussions on how we can use HARO in the many different contexts that are possible there, and we are also expanding this to a second partner. Now I would like to come back to my first slide. I mentioned that social fragmentation is a huge issue for us. Togetherness is maybe one way to approach this, and togetherness really starts in our society with the children. We at HRI believe we have a unique expertise on the interplay between embodiment, empathic behavior, and curated social interaction. We have seen a very exciting development in the area of generative AI; Stephen mentioned that earlier. At the same time, in particular in interaction with children, I think those systems also have severe limitations, and this again brings us to the challenges of curated interaction. We want to continue to engage with our partners to make the expertise and the advances in AI, with the benefit of comforting and connecting embodiments, available to children in a number of different situations. And we want to do this explicitly with a special focus on developing countries, because there, of course, the challenges are again slightly different. However, these are very young continents, right? Africa is a very young continent. So when we talk about the future, and the future education and support of our children, it has to be done in the context of those countries as well, of course, and they rightfully expect this. One last thought: we have seen in the recent progress of generative AI systems how we build those systems, and there is a huge discussion on whether this will be able to continue in this way. We believe that future AI systems also have to learn in interaction with human society, in order for the developing AI systems to share some of our human values. At the moment, we throw a lot of data at those systems; we would never do this with our children, and rightfully so. We very carefully curate how our children are educated. And we believe that in the future, children and AI systems will actually mutually benefit from each other, because they will have the possibility of learning alongside each other in a bidirectional way, learning values the way we teach our children values as they grow up in our society. Now, at the Honda Research Institutes we don't only focus on AI and children; we have identified the United Nations Sustainable Development Goals as guiding stars for our development of innovations, for putting AI and embodied AI technology into innovations, and for turning 'innovate through science', our HRI motto, into something that has a tangible benefit, in particular in the context of the Sustainable Development Goals. And with that, I would like to again thank the organizers very much for giving me the opportunity to briefly talk about HRI here, and thank you all for listening. Thank you very much.
Vicky Charisi:
Thank you. So we have some time for questions. I would like to invite the speakers who are here to have a seat here: Stephen, Randy, Judith. And we also have our online speakers. Now it's time for questions. Is there any question from the audience? Selma?
Audience:
Hi, I am Emil Wilson. I'm Guilherme, from the UFI program in Brazil. I am a researcher and I work for children's rights in Brazil in a UNICEF project, and that is why, for me, the institution's proposals are always very important. However, as was briefly pointed out at the beginning of the panel, there is an interaction between AI and mental health, and tools such as ICPA and Lucia have been used, for example, on Telegram as a possibility for mental health support, which can intensify the risks for children and adolescents online. My question is, then, how can UNICEF help in the debate about AI, children and mental health? Thank you. Sorry for my English.
Vicky Charisi:
Thank you very much. Steven, would you like to start with this since it was about UNICEF? Thank you.
Steven:
Thank you very much for that question. This is an area that is crucially important for us, but not just for UNICEF; for anybody working in the space of how children interact with technology, and especially in the context of mental health and mental health support. Nobody has all the answers right now. What we know is that there is a massive mental health need. There is the potential for technology to support, and there is the potential for technology to also get it wrong, which could have very severe effects if it does: giving wrong or inappropriate advice, or potentially sharing out information that was given in a very confidential environment. So it is a very, very sensitive space. I think we all need to get involved here. We need the children. We need, of course, the technology developers. We need, as Bernard said, a responsible development approach. And this is not an area that we should rush into, for sure. But we need to watch it; it is going to happen. If we get it right, there is huge potential for providing support. And as I said earlier, with what has really happened with ChatGPT, everyone talks about that as the one thing. Of course, foundational models are not new, and there are other models, not just ChatGPT. But that is the one that has kind of become the placeholder for this whole new moment, not just a technological but a cultural moment, as the speaker said earlier. AI used to be in the background: the algorithm, your news feed, the bunny ears on your Instagram or Snap photo. It is now something you interact with. And we just don't know what the long-term effects are. This is why we also need solid research around the impacts on children, and on all of us, as we interact with AI. But of course we focus on children, for the opportunities and also the potential risks.
Vicky Charisi:
Thank you very much. Judith, you also do work with mental health. Would you like to say something?
Judith Okonkwo:
Sure. Thank you very much. I was just nodding as Stephen was talking, because everything he said completely resonated. One thing I would like to say is that right now in the world, all of these initiatives are happening where people are thinking about things like governance for AI and governance for the metaverse. I just really think that we have to prioritize including young people in those conversations. UNICEF, of course, does that brilliantly, but I think so many more organizations need to. Every time I'm in a room where those conversations are being had and the youngest people look like me, I know we have a problem. So whatever we can do to make sure that young people are in all the rooms they need to be in, we definitely should. And then I just wanted to say, you were talking about getting it wrong, and I don't know if people saw, but in the news recently the BBC was reporting about a young man who had been arrested on the grounds of Windsor Castle for trying to kill the Queen, and he had been egged on by his AI assistant to go and do it. So already we are seeing that we don't quite know where we're going with these technologies, but we definitely have to come together to figure out what future we want for ourselves.
Vicky Charisi:
Thank you very much. First I would like to make a small rearrangement, so could you sit there, please? It's about children. Randy, would you mind going to sit there? Is it okay? Okay. Thank you very much, and apologies for the interruption. Any other question?
Audience:
Selma yeah. Hi I’m Selma Shabanovich from Indiana University. It’s such a pleasure to see the diversity of projects and different kinds of thoughts that really all focus on children and their presence in the work. One thing I was curious Steve you started with kind of saying you had developed these guidelines and you knew they weren’t the end and then you had so many different really interesting things go on so I was just wondering if both you and the folks who participate in the projects could speak a little bit to you know either how the guidelines were things that were kind of present and helped them in the projects and or how their projects, how they see their projects is expanding on or further defining aspects of the guidelines that maybe weren’t already in there. Thank you.
Steven:
Thanks, Selma, that’s a really great question. So the eight, and I should have mentioned this earlier, I’m sorry, so that the guidance has been published and the eight case studies are online on the UNICEF page. So I would really encourage everyone to to look at each one because we wanted a diversity of projects from different locations but also different contexts. Like some of them, some of the projects do, the one in, one of them in Finland provides mental health support or at least, sorry, mental health information, not support but where children can find information as a kind of a first point of call and initial questions around potential symptoms and I’m looking for that first line of kind of informational support, not therapeutic support. But that was one of the case studies and that was done by the, is still done, it’s an ongoing project by the the Medical University of Helsinki and so that was interesting because they had a, because it’s a hospital, they, you know, in a very developed nation in a sense, technologically developed and also kind of government supported, they had many ethicists on their team that developed the product. So not only software developers but ethnographers, researchers, ethics team, doctors, psychologists and obviously did a lot of testing with the children. So we chose that, there’s MEC3D, also mental health support but not necessarily for the patient but actually for the people around the patient or around the, not the patient, sorry, the child on a, on the And then, for example, we did one with the Alan Turing Institute in the UK that was a really nice example of how you engage the public on developing public policy on AI. And they’ve actually gone on to, while the case studies have kind of finished, the work continues. So the Alan Turing Institute has been asked by the government of Scotland to engage children in Scotland on AI, and what excites them about AI, what worries them, and I think we’re going to come up with a question on that. What kind of future do they want? And so the Alan Turing Institute and their initial reports and methodology and everything are online. It’s a really rich resource, and that will inform, you know, policymakers as they regulate. So it was interesting. For us, in the end, after the eight case studies, the guidance didn’t really change so much, which was kind of a relief. We thought, like, wow, we seem to get it kind of quite right the first time. But it might also just be because the guidance is almost at the level of principles, and we do that because we’re a global organization, and so you have to be quite kind of high-level or generic, and then it gets adapted at the local context. The unfortunate thing is that everybody wants the details. How do you adapt it? And that’s where, you know, that’s the challenge. How do you move from principles to practice? But that’s where, in the end, we kind of said the guidance hasn’t changed that much, but it’s been enriched by these case studies. If you want to learn kind of how different organizations have applied them, then go and read these. I’ll just say one more thing. There are nine principles or requirements for child-centered AI in the guidance, like, for example, inclusion of children in developing AI systems and policies. We found, in the end, that all of the case studies only picked two or three. And we realized that that’s actually fine. In your project or in your initiative, there are two or three that’ll speak more to you than others. 
So if it’s participatory design and the inclusion of children, that’s one thing. You know, if it’s fairness or discrimination. And so it was really collectively unpacked all nine. But in the end, only a few tend to kind of be the focus for your work. Yeah, so everything’s online. We are really, of course, just thinking about if there’s a need to update them or kind of add to them now in the light of generative AI. And as I said earlier, there are a lot more unknowns now. We don’t know how the human-computer interaction will evolve over time. And we want to kind of make it work in a way that upholds rights and be responsible. But we are, everybody kind of building the plan or fixing the plan as it’s in the air. So we are very keen to do more work in this space in light of kind of ongoing developments. Yeah.
Vicky Charisi:
Thank you very much, Stephen. Is there any other question from the audience? Yes, please.
Audience:
Hi, this is Edmond Chung from .Asia. We also operate the .Kids domain, and what is being done here is great. It's definitely something that .Kids would like to take on and also help promote. But asking personally, I wanted to ask, I guess it's Ruyama, or Ryuma: one of the last comments gave me a little bit of a concern. Your last comment was that maybe the evaluation or the assessment could be more fair with AI. Of course it could be, but it could also be less fair, and that's part of the discussion, that's the heart of the discussion. So what if it's not fair? And that brings me to a second question that I wanted to ask as well. I think it was mentioned that the Uganda project was focused on fairness and exploring fairness, but I didn't quite understand from Joy what was being discussed and how AI was part of it. It would be useful to hear more about that. Because actually, as a father of an eight- and a ten-year-old, I was quite pleasantly surprised that my ten-year-old, just now in year seven, told me this September that their teachers are actually getting them to use AI to help them with homework, as part of the curriculum. So it's really exciting for me. But also, we know that technology is not entirely neutral, especially when we talk about these things; it's a symbiotic relationship. As much as we shape them, they shape us, especially kids going forward. So that's why I wanted to really hear from the experience: your ending remark about fairness, how AI and fairness really work, and the response from the case studies. Thank you.
Vicky Charisi:
Thank you. Do you mind if I take the question? Because I did the study with the kids in Uganda on fairness. Is it okay with you? So indeed, the talk by Joy was focused on something else, not on this specific study. Of course, we have published; there is a scientific publication on this, and we can share the links later. The main research question for this study was to understand whether there are cultural differences in perceived fairness. We wanted to see whether children in these two environments, with the cultural but also the economic differences between them, would focus on different aspects of fairness. So what we did was provide different scenarios. The whole activity was based on storytelling frameworks, and we let the kids talk about these scenarios in their own words, their own drawings, et cetera. Then researchers analyzed these data in a systematic way, and what we found was that the children in Uganda indeed focused more on material aspects of fairness, so they would talk more about how, for example, something was shared among children, while the children in Japan would focus more on psychological effects; for example, they would talk about the behavior of teachers. This is just an example of how the priorities differ: probably, when we abstract, the actual notion of fairness doesn't really differ a lot, but when we go into the details, we see that children in these different cultures prioritize in different ways. Those were the results of our study. Of course, this was only a starting point, and there is a lot to explore, and it is not only us; there is a huge community of developmental and social psychologists exploring this topic. So the first question, do you want to repeat the first question?
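As a purely illustrative aside, the kind of systematic comparison described here, tallying coded responses from a storytelling activity across two cohorts, could look something like the sketch below. The theme codes and example inputs are invented placeholders, not data or methods from the study mentioned above.

```python
# Toy sketch of comparing coded fairness themes across two cohorts.
# The codes and responses below are invented placeholders, NOT study data.
from collections import Counter

MATERIAL = "material"            # e.g. how things were shared or distributed
PSYCHOLOGICAL = "psychological"  # e.g. how a teacher's behaviour made someone feel

def theme_counts(coded_responses):
    """Count how often each fairness theme appears in one cohort's coded responses."""
    return Counter(coded_responses)

# Hypothetical coded responses for two cohorts (one code per child response).
cohort_a = [MATERIAL, MATERIAL, PSYCHOLOGICAL, MATERIAL]
cohort_b = [PSYCHOLOGICAL, MATERIAL, PSYCHOLOGICAL, PSYCHOLOGICAL]

if __name__ == "__main__":
    print("Cohort A:", theme_counts(cohort_a))
    print("Cohort B:", theme_counts(cohort_b))
```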
Audience:
Yeah, I guess, just wanna ask, you mentioned at the very end that, if I understood you correctly, you’re saying that assessment, maybe, of your work through AI might be more fair. Tell us more, a little bit more about it. What if it’s not fair? How do you know it’s not fair? What if you trust the machine too much?
Vicky Charisi:
Is there someone, Judith, who would like to speak?
Ruyuma Yasutake:
I would like to speak first. I think some school teachers have an individual sense of evaluation. What do you say? Not equal? Not equal. Teachers' sense of evaluation, their way of judgment, is not equal. So, I guess, AIs can give a fair evaluation.
Vicky Charisi:
Yeah, I mean, apparently there are some hopes here, right? Nobody believes that there is an absolutely fair evaluation with AI; this is true. But probably, for young students, there is a hope. When they see their systems or their schools evaluating in different ways, and they experience a little bit of human unfairness, they probably put some hope in AI. But of course, this is something that we really, really need to take very seriously. Yes, please.
Audience:
Hi, my name is Zanyue, from South Africa and Zambia. This is not a question; I think it's more of a comment, just listening to the discourse. There's a concept that we use quite often in South Africa, and I think it's quite pertinent here: progressive realisation, right? So when we speak about AI, especially at the stage we are at globally, your question is quite important. What is fairness? What are the assessments? What are the criteria? And as you quite correctly put it, in different geographies and instances, even in the same locality, based on various factors, that concept of fairness really is so subjective. And I think what AI does is give us an almost objective element to these very subjective things, and you tweak it accordingly, and that's why it's so important. I think the question on fairness really does veer off to the algorithmic biases that we speak about. That, I think, is also very pertinent for this conversation: the more data we have, and the more data we have from this context and that context, the more we develop, right? So I think the answer to the fairness question is that we are progressively trying to realise it, and I think we're at a really infant stage when it comes to that, and hence the data conversation is quite important to pair with this one. So, yeah, that's just maybe a summary.
Vicky Charisi:
Thank you very much for the intervention, indeed. I'm afraid we're running a little bit out of time. So now I would like to give the floor to our online moderator, who is also our reporter. Daniela Di Paola, can we have Daniela on the screen, please? She is going to give us her view of the conclusions of this workshop. Daniela? Yeah, please.
Daniela:
Hello, everyone. Thank you all for your wonderful comments and productive discussion; I really think that the different perspectives added a lot to the conversation. I'm going to share two key takeaways and two calls to action. The first takeaway is that, despite the challenges in terms of infrastructure in our activities for AI and children's rights, children from underrepresented countries and cultures should be included, and it's urgent that in technology being developed for children we consider the needs and interests of all children, not only those from privileged backgrounds. Secondly, the project is only a first step towards the responsible design of robots for children, and various communities can contribute to its expansion, such as adding to the rights to explainability, accountability, and AI literacy for all. Formal education can prove powerful, and industry experience with responsible innovation can be a catalyst for the well-being of all children. Next, I'd like to share the calls to action. The first is that expanding the implementation of the policy guidance to additional contexts, such as hospitalized children, triadic interactions, and formal education with the inclusion of schools, is very important, as is including underrepresented groups of people, such as those from the global South. Secondly, there is a call for the necessary infrastructure and technology development that will give all children equal opportunities in an online world. We need to ensure that AI opportunities come together with responsible and ethical robot designs. Thank you.
Vicky Charisi:
Daniela, thank you so much. It was really good. And I think it's time to close, Stephen. So the floor is yours.
Steven:
Yeah, okay. So firstly, thank you very much. I think one of the key takeaways is that this is the beginning of a journey. We were very happy to share with you what UNICEF has done and what our partners have done here, and there are many others that aren't being mentioned, as we try to work out how children can safely, in a supported way, and in an empowering way, engage with AI. The reality is that while we sit here and debate these important issues, children are using AI out there, and that is going to increase more and more every day. So it is urgent, and everybody needs to get involved. Thank you for raising the data issue; it's really critical. And to Daniela's point, we have this challenge that the data sets are not complete. They are much more global North. We need data from children in the majority world, I like this term that's being used a lot here, and the global South. But we know that data collection at the moment doesn't often happen very responsibly, and so we need to tick those two boxes at the same time. So the journey is going to continue. Please work with us, and we will work with you. And we keep saying this, but it really is critical to work with children and to walk with children on this journey. So, Ryoma, thank you for being here, and thank you for being involved in the project. We recently engaged at work a digital policy specialist from Kenya who could easily have been on this panel, and she was just making this point about Africa being such a young population, and how crazy it is to see, more and more, older people like us, sorry, I'm speaking for all of us here, taking the liberty of regulating a technology that we don't really understand, a technology that is so much used by a generation that is going to be so much more impacted by it, while we're not having them at the table. That was a really well-put point. So for all of us here who do bring children to the table, well done, and please may it continue. So thank you. Thanks, Vicky.
Vicky Charisi:
Thank you very much, and thank you to all for the support. Thank you for being in this session, and I hope we can continue this work on AI and children’s rights. Thank you.
Speakers
Audience
Speech speed
150 words per minute
Speech length
1074 words
Speech time
428 secs
Arguments
Interaction between AI and mental health can lead to increased risks for children and adolescents online
Supporting facts:
- Programs like ICPA and Lucia are being used on Telegram for mental health support
- The speaker is associated with UNICEF and is particularly focused on children’s rights in Brazil
Topics: AI, Online risks, Mental Health, UNICEF, Children’s Rights
The diversity of projects focusing on children’s presence in work is greatly appreciated.
Supporting facts:
- Different kinds of thoughts that really all focus on children
- Projects dedicated to the welfare of children
Topics: Children, Projects, Guidelines
The audience is curious about the evolution of the guidelines that initially facilitated the projects.
Supporting facts:
- Steve had developed the guidelines
- The guidelines were not the endpoint
Topics: Guidelines, Projects
There is an interest in knowing how the projects have expanded or further defined the guidelines.
Supporting facts:
- Steve’s guidelines were not the endpoint
- Different projects were carried out afterwards
Topics: Guidelines, Projects
Concern about AI potentially being unfair in evaluations
Supporting facts:
- Last comment from Ruyama or Ryuma on AI’s potential to increase fairness in assessments raised concerns
Topics: AI, Fairness, Evaluation
Children are already using AI as part of their curriculum and homework
Supporting facts:
- 10-year-old started using AI for homework
- AI is integrated into the curriculum
Topics: AI, Education, Children
Cultural and economic differences impact the perception of fairness among children
Supporting facts:
- Children in Uganda focused more on material aspects of fairness
- Children in Japan focused more on psychological effects of fairness
- The study used storytelling frameworks and data was analyzed systematically
Topics: Cultural Differences, Perception of Fairness, Children
Assessment through AI could be more fair
Topics: Ethics in AI, AI Assessment
AI can potentially provide an objective element in highly subjective scenarios
Supporting facts:
- The concept of fairness is extremely subjective and can vary across different geographies and instances
- AI could potentially standardize fairness by adding an objective standpoint across diverse contexts
Topics: Artificial Intelligence, Subjectivity, Fairness, Bias
Report
The analysis includes several speakers discussing various aspects of the relationship between AI and mental health, the importance of UNICEF’s involvement, projects focusing on children in work, the evolution of guidelines, concerns about AI’s fairness in evaluations, children’s use of AI in education, the symbiotic relationship between humans and technology, cultural and economic differences in children’s perception of fairness, the potential fairness of AI assessment, and AI’s ability to provide an objective standpoint.
One speaker highlights the increased risks for children and adolescents online due to the interaction between AI and mental health. Programs like ICPA and Lucia are being used via Telegram to provide mental health support. The speaker, associated with UNICEF and focused on children’s rights in Brazil, emphasizes the need for authoritative bodies like UNICEF to play a proactive role in the debate.
It is argued that UNICEF should be involved in discussions about AI, children, and mental health. Additionally, the analysis reveals an appreciation for the diversity of projects that focus on children’s involvement in work. These projects are dedicated to the welfare and well-being of children.
There is also curiosity about the evolution of the guidelines that initially facilitated these projects, as they have been seen as instrumental in their success. Concerns about the fairness of AI in evaluations are raised. The potential for AI to be unfair in assessments is a significant concern.
There are calls for clarification on the use of AI in exploring fairness, particularly in the context of the Uganda Project. Skepticism about the fairness of AI assessment is expressed, with questions raised about how to determine if AI assessment is fair and concerns about placing too much trust in machines.
Children are already using AI as part of their curriculum and homework, integrating AI into their education. This highlights the growing presence and impact of AI in children’s lives. Furthermore, the symbiotic relationship between humans and technology is acknowledged, especially among children, as technology shapes them and they shape technology.
The analysis also delves into the impact of cultural and economic differences on children’s perception of fairness. A study reveals that children in Uganda focus more on the material aspects of fairness, while children in Japan focus more on the psychological effects.
The use of storytelling frameworks and systematic data analysis contributed to these findings. The potential of AI assessments to be more fair is considered. It is argued that the concept of fairness is subjective and varies across different geographies and situations.
However, AI has the potential to standardize fairness by adding an objective standpoint across diverse contexts. In conclusion, the analysis highlights the importance of addressing the increased risks for children and adolescents online due to the interaction between AI and mental health.
There is a clear call for UNICEF to take a proactive role in the debate. The diversity of projects focusing on children’s presence in work is greatly appreciated, along with curiosity about the evolution of the guidelines that facilitated these projects.
Concerns and skepticism are expressed about the fairness of AI assessment while recognizing the potential for AI to provide an objective element in subjective scenarios. Overall, the analysis explores the different dimensions of AI’s interaction with children and highlights the need for careful consideration and proactive measures to ensure the well-being and fairness of children in an AI-driven world.
Bernhard Sendhoff
Speech speed
148 words per minute
Speech length
1880 words
Speech time
763 secs
Arguments
Bernhard Sendhoff considers togetherness as crucial for society to flourish and this begins with children
Supporting facts:
- Honda Research Institutes are focusing on developing AI technology to mediate between different cultures in different schools, starting with Australia and Japan
- He also highlighted beginning similar projects in developing countries like Uganda and war-zone areas like Ukraine
Topics: Togetherness, AI for children, Social Fragmentation
Bernhard Sendhoff sees the potential in mutual learning between AI systems and children
Supporting facts:
- He mentioned that future AI systems must learn in interaction with human society to imbibe shared human values
- Children and AI systems will mutually benefit by learning alongside, reciprocally in a bidirectional way
Topics: AI for children, Learning, Mutual Benefit
Report
Bernhard Sendhoff, a prominent figure in Honda Research Institutes, strongly advocates the importance of togetherness and AI technology in creating a flourishing society, particularly for children’s well-being. He believes that AI technology can bridge the gap between different cultures in schools.
Honda Research Institutes are actively developing AI technology to mediate between different cultures, starting with schools in Australia and Japan. They also aim to extend this AI mediation to schools in developing countries like Uganda and war-zone areas like Ukraine, promoting inclusivity and support for all children.
Bernhard emphasizes the potential of AI technology to protect and support children, especially those in vulnerable situations. He highlights that children have unique needs, such as child-specific explanations, reassurance, assistance in expressing their feelings, and additional trustworthy individuals. Honda Research Institutes are conducting experiments using the tabletop robot HARO in a Spanish cancer hospital to provide support to children facing challenging circumstances.
Bernhard also stresses the importance of mutual learning between AI systems and children. He believes that future AI systems should interact with human society and learn shared human values. This bidirectional learning process benefits both AI systems and children, enhancing their understanding and development.
Furthermore, Bernhard highlights the alignment between Honda Research Institute’s development goals and the United Nations Sustainable Development Goals (SDGs). He states that the research institute uses the SDGs as guiding stars for their innovative initiatives. Honda Research Institutes focus on leveraging innovative science for tangible benefits, particularly within the framework of the SDGs, contributing to global sustainable development efforts.
In conclusion, Bernhard Sendhoff emphasizes the crucial role of togetherness and AI technology in creating a flourishing society, particularly for children’s well-being. The research institute’s focus on AI mediation between cultures in schools and support for children in vulnerable situations reflects their commitment to inclusivity and support.
Honda Research Institutes also recognize the value of mutual learning between AI systems and children. Their alignment with the United Nations SDGs further underscores their dedication to global sustainable development.
Daniela
Speech speed
157 words per minute
Speech length
380 words
Speech time
145 secs
Arguments
Dominic Register is a Director of Education for the Center for Education Transformation at Salzburg Global Seminar, responsible for designing, developing, and implementing programs on the futures of education.
Supporting facts:
- Dominic Register works on a broad range of projects across education policy, practice, transformation, and international development
- Dominic Register also works as a Director of a Model Alliance, as a Senior Editor for Diplomatic Courier
Topics: Education, Development, Transformation
Children from underrepresented countries and cultures should be included in the development of technology for children
Supporting facts:
- It’s urgent to consider the needs and interests of all children and not only those from privileged backgrounds when developing tech for children.
Topics: AI, Child Rights, Technology Development
AI technology should be responsible and consider factors such as explainability, accountability, and AI literacy.
Supporting facts:
- Various communities can contribute to expanding the responsible design of robots for children.
- Formal education and industry experiences with responsible innovation can be a catalyst for the well-being of all children.
Topics: AI, Responsible Technology
Report
Dominic Register plays a vital role in the field of education as the Director of Education for the Center for Education Transformation at Salzburg Global Seminar. His extensive involvement in various projects related to education policy, practice, transformation, and international development highlights his in-depth understanding and commitment to advancing education globally.
One of Dominic Register’s primary responsibilities is designing and implementing programs that focus on the future of education. Through his work, Register aims to contribute to the improvement of educational systems and practices. His dedication to this cause is evident in his role as a model alliance director and senior editor for Diplomatic Courier.
Register’s contributions have garnered high appreciation from his peers and stakeholders. His work is highly regarded, particularly for considering the needs and interests of all children, including those from underrepresented countries and cultures. Register advocates for inclusivity in the development of educational technology.
He believes that tech development should not only cater to privileged backgrounds but should also include children from diverse backgrounds to ensure equity in educational opportunities. AI technology is an area of focus for Dominic Register. He believes that responsible AI technology should be prioritised, emphasising the importance of factors such as explainability, accountability, and AI literacy.
Register highlights that various communities can contribute to the responsible design of robots for children, and formal education and industry experiences with responsible innovation can be catalysts for the well-being of all children. Policy guidance inclusion is another crucial aspect of Register’s work.
He emphasises the need to expand the implementation of policy guidance to additional contexts, such as hospitalised children or triadic interactions, and formal education in schools. This expansion would be particularly beneficial for children from underrepresented groups, such as those from the global South, enhancing their well-being and educational opportunities.
Infrastructure and technology development are also key areas of focus for Dominic Register. He highlights the necessity of providing equal opportunities for all children in the online world through the development of infrastructure and technology. Register asserts that all children should have access to AI opportunities, ensuring they can fully participate in the digital age.
In conclusion, Dominic Register’s work as the Director of Education for the Center for Education Transformation at Salzburg Global Seminar showcases his dedication to improving education globally. Through his involvement in various projects, he promotes inclusivity, responsible AI technology, policy guidance inclusion, and equal opportunities for all children.
Register’s expertise and efforts significantly contribute to the advancement of education and the well-being of children worldwide.
Dominic Regester
Speech speed
161 words per minute
Speech length
1556 words
Speech time
581 secs
Arguments
Global education systems have gone into a learning crisis
Supporting facts:
- The world and the majority of education systems are not meeting literacy and numeracy levels
- School systems were not equipping students with the skills required for the 21st century
Topics: Education, Education Transformation, Learning Crisis
COVID-19 highlighted the massive inequalities existing within education systems
Supporting facts:
- During COVID-19 lockdowns, approx 95% of the world’s school-aged children were not in school
- The pandemic showed the disparity in access to education and resources among students
Topics: Education, COVID-19, Inequality
There is a growing need for education transformation globally
Supporting facts:
- 141 UN member states have started a process of education transformation developing plans and approaches
- Themes for education transformation include teaching, learning, teacher retention, technology, employment skills, inclusion, access, and the climate crisis
Topics: Education, Education Transformation, Global trends
Report
Global education systems are currently facing a learning crisis, with many schools falling short of literacy and numeracy levels. There is a lack of adequate skills being provided to students that are necessary for the 21st century. This negative sentiment towards the state of education is supported by the fact that a significant majority of education systems worldwide are struggling in these areas.
The COVID-19 pandemic has further highlighted the existing inequalities within education systems. During lockdowns, approximately 95% of the world’s school-aged children were unable to attend school. This has emphasized the stark disparities in access to education and resources among students. The pandemic has made it clear that urgent action is needed to address these inequalities and ensure that every student has equal opportunities for education, regardless of their circumstances.
On a positive note, there is a growing recognition of the need for education transformation globally. 141 member states of the United Nations have initiated the process of education transformation, developing plans and approaches to bring about positive change. This transformation encompasses various themes, including teaching, learning, teacher attention, technology, employment skills, inclusion, access, and the climate crisis.
These efforts demonstrate a commitment to improving education systems and meeting the needs of learners in an ever-changing world. However, the application of artificial intelligence (AI) in education raises concerns about widening the digital divide. Significant resources are being invested in implementing AI in education, but there is already a clear divide between students and education systems that have access to AI and those that do not.
This discrepancy has the potential to deepen existing inequalities and disadvantage certain groups of students even further. Moreover, it is important to consider the potential drawbacks of rushing to adopt AI in education: focusing too heavily on technology risks neglecting other crucial aspects of society and education.
Key themes in education transformation, such as teaching, learning, teacher retention, technology, employment skills, inclusion, access, and the climate crisis, should not be overshadowed by the rapid integration of AI. Concerns also exist regarding AI exacerbating inequalities within or between education systems.
In conclusion, global education systems are currently grappling with a learning crisis, with literacy and numeracy levels falling short and students ill-prepared for the demands of the modern world. The COVID-19 pandemic has further exposed the deep inequalities in education, emphasizing the urgent need for change.
Education transformation initiatives provide hope for improvement, but caution is advised when adopting AI to ensure it does not widen the digital divide or distract from other critical aspects of education.
Joy Nakhayenze
Speech speed
170 words per minute
Speech length
772 words
Speech time
272 secs
Arguments
Participation in the project exposed students to technology and other cultures
Supporting facts:
- Students got to talk to children in Japan and other countries
- Enhanced understanding and interest in technology among students
Topics: Education, Cultural Exchange, Technology
The sessions were well-planned and engaging
Supporting facts:
- Joy noted that the sessions captured students’ attention
- The sessions increased engagement
Topics: Education, Student Engagement, Planning
The project fostered social and emotional learning
Supporting facts:
- According to Joy, the project developed social skills and emotional intelligence among the students
- Students felt compassion for peers in Japan
Topics: Education, Emotional Intelligence, Social Skills
Limited resources at local level
Supporting facts:
- Gudu Samaritan is a small local initiative operating under budget constraints
Topics: Educational Technology, Budget Constraints
Unstable internet connection
Supporting facts:
- Internet connection was not stable, affecting online sessions
Topics: Internet Connectivity, Digital Divide
Difficulties in curriculum integration
Supporting facts:
- Need to engage the Minister of Education back in Uganda to integrate the project
Topics: Curriculum Development, Policy Making
The most important aspect for us is the funding of such projects.
Supporting facts:
- There is no power, no Internet connection at Woodrowson-Murrayton school.
- Schools in Japan are well equipped with technology which aids in information access.
Topics: Internet, Funding, Infrastructure, Education
Government should provide the infrastructure for a stable Internet connection.
Supporting facts:
- Stable internet connection is a basic need for the integration of technology in school.
Topics: Government role, Internet, Infrastructure
Encourage teacher training and ICT literacy.
Supporting facts:
- Participation in online workshops and training will help teachers be confident about technology in their everyday teaching.
Topics: Education, Teacher Training, ICT literacy
Report
The project involved participating in online sessions where students had the opportunity to interact with children from Japan and other countries. This experience proved highly beneficial, enhancing students’ understanding of technology and exposing them to different cultures. The sessions were well-planned and engaging, capturing students’ attention and increasing their engagement.
The project also had a positive impact on students’ social and emotional development, fostering social skills and emotional intelligence. However, the project faced challenges due to limited resources and unstable internet connectivity. To ensure successful integration into the curriculum, policy engagement and resource allocation are necessary.
Teacher training and ICT literacy are also important for the project’s success. Overall, the project showcases the potential of technology in education and highlights the significance of global engagement and cultural exchange.
Judith Okonkwo
Speech speed
184 words per minute
Speech length
1577 words
Speech time
514 secs
Arguments
Judith Okonkwo runs Imisi3D, an XR creation lab focused on building the African ecosystem for extended reality technologies
Supporting facts:
- Imisi3D is headquartered in Lagos, Nigeria
- Imisi3D works in areas of healthcare, education, storytelling, and digital conservation
Topics: Extended Reality, Technology, Ethnic & Racial Groups
Imisi3D developed ‘Autism VR’, a voice-driven virtual reality game providing information about autism spectrum disorder
Supporting facts:
- The game engages users with a family that has a child on the spectrum
- The game was initially built for the Oculus Rift, it is now being built for the Google Cardboard
Topics: Technology, Online Games, Health
‘Autism VR’ aims to prioritize fairness and non-discrimination, foster inclusion, and support well-being and positive development of children on the spectrum
Supporting facts:
- The game provides basic information about autism spectrum disorder and allows users to engage with a family with a neurodiverse child
Topics: Autism, Health, Children & Youth
Young people need to be included in discussions on AI and Metaverse governance
Supporting facts:
- UNICEF is doing well in including young people in the discussions
- Okonkwo observes that the youngest people in the rooms where such discussions take place often look like her, rather than being young people themselves
Topics: Youth Inclusion, AI Governance, Metaverse
Report
Imisi3D is an XR creation lab based in Lagos, Nigeria. Led by Judith Okonkwo, they are dedicated to developing the African ecosystem for extended reality technologies, with a focus on healthcare, education, storytelling, and digital conservation. Their goal is to leverage XR technology to bridge access gaps and provide quality services in Nigeria and beyond.
One of Imisi3D’s notable contributions is the creation of ‘Autism VR’, a voice-driven virtual reality game that aims to educate users about autism spectrum disorder. Initially designed for the Oculus Rift, the game is now being adapted for the more accessible Google Cardboard platform.
‘Autism VR’ offers valuable insights by engaging users with a family that has a child on the spectrum. Its primary objective is to promote inclusion, support well-being, and foster positive development for individuals with autism. Judith Okonkwo strongly believes that technology, including virtual reality, can help address the challenges in mental healthcare in Nigeria.
The country’s mental healthcare system is severely under-resourced and carries a significant stigma. Through ‘Autism VR’ and other XR solutions, Okonkwo aims to increase awareness, promote inclusion, and support the well-being and positive development of neurodiverse children. Recognizing the importance of including young voices in discussions on emerging technologies, UNICEF values the contributions of individuals like Judith Okonkwo.
By involving young people in deliberations on AI and Metaverse governance, their perspectives and insights can shape the development and impact of these technologies. Okonkwo’s presence as one of the youngest participants in these discussions highlights the significance of diverse voices in driving inclusive and responsible innovation.
Incidents such as the arrest of a young man near Windsor Castle, who had been encouraged by his AI assistant to attempt to harm the Queen, underscore the necessity for society to jointly determine the future of these technologies. Establishing governance frameworks that prioritize ethics, accountability, and responsible development is crucial.
Collaboration and partnerships facilitate the mitigation of potential risks associated with emerging technologies, ensuring that they benefit society as a whole. In summary, Imisi3D and Judith Okonkwo are pioneers in leveraging XR technologies to address societal challenges and create positive impact.
Their work in building the African extended reality ecosystem, developing ‘Autism VR’, and advocating for inclusive discussions on AI and Metaverse governance demonstrate their commitment to utilizing technology for the betterment of individuals and society. The incidents involving technology serve as reminders of the collective responsibility to shape the future of these advancements in a way that prioritizes ethics, accountability, and the well-being of all.
Randy Gomez
Speech speed
134 words per minute
Speech length
393 words
Speech time
177 secs
Arguments
Randy Gomez and his team at the Honda Research Institute responded to UNICEF’s call to implement its policy guidance in developing technologies for children.
Supporting facts:
- Honda Research Institute allocated a significant proportion of their research to focus on developing technologies for children.
- The team is developing an embodied mediator for cross-cultural understanding.
Topics: UNICEF, Children’s Technology
The Honda Research Institute is developing a robotic system that facilitates cross-cultural interaction.
Supporting facts:
- The institute has developed a system that connects a robot with the cloud to facilitate interaction.
- This robot was used to facilitate interactions for experiments with children.
Topics: Robotics, Cross-cultural interaction, Child-robot interaction
Honda Research Institute conducted various studies to evaluate the impact of robotic application.
Supporting facts:
- The institute deployed robots in hospitals, schools and homes.
- They conducted studies on children from various socio-economic backgrounds.
Topics: Robotics, Child Development
Honda Research Institute is working on implementing their pilot studies in accordance with the IEEE standards.
Supporting facts:
- The institute’s application was reviewed according to the IEEE standards.
- The institute collaborated with Vicky Charisi from the JRC for this.
Topics: IEEE standards, Pilot Studies
Report
The Honda Research Institute, headed by Randy Gomez and his team, has responded to the call from UNICEF to develop technologies specifically designed for children. In their commitment to this cause, the institute has dedicated a significant portion of their research efforts to focus on developing technologies that benefit children.
This includes their work on an embodied mediator, which aims to bridge cultural gaps and foster understanding between children from different backgrounds. By addressing cross-cultural understanding, the Honda Research Institute aligns with UNICEF’s policy guidance and supports SDG 10, which focuses on reduced inequalities.
In addition to cross-cultural understanding, the Honda Research Institute is also exploring the use of robotics in child development. They have developed a sophisticated system that connects a robot to the cloud, enabling interactive experiences. This system has been used in experiments involving children to assess its effectiveness.
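The report does not describe the technical design of this robot-cloud system, but a rough, hypothetical sketch may help illustrate what such an architecture typically involves: an on-robot client that forwards a child’s utterance to a cloud dialogue service and renders the reply through speech and gestures. The Python below is a minimal illustration only; the class name, endpoint, and message format are assumptions made for this example and do not reflect the Honda Research Institute’s actual implementation.

```python
# Illustrative sketch only: a minimal loop for a cloud-connected mediator robot.
# All class names, endpoints, and message fields are hypothetical; this is not
# the Honda Research Institute's actual implementation.

import json
import urllib.request


class CloudDialogueClient:
    """Sends a child's utterance to a (hypothetical) cloud dialogue service."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def get_reply(self, utterance: str, session_id: str) -> str:
        # Package the utterance as JSON and POST it to the cloud service.
        payload = json.dumps({"session": session_id, "text": utterance}).encode()
        request = urllib.request.Request(
            self.endpoint,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["reply"]


def interaction_loop(client: CloudDialogueClient, session_id: str) -> None:
    """Very simplified robot-side loop: read text, query the cloud, 'speak' the reply."""
    while True:
        utterance = input("child> ")      # stand-in for on-robot speech recognition
        if not utterance:
            break
        reply = client.get_reply(utterance, session_id)
        print(f"robot> {reply}")          # stand-in for speech synthesis and gestures


if __name__ == "__main__":
    interaction_loop(CloudDialogueClient("https://example.org/dialogue"), "demo-session")
```

A real deployment would typically add speech recognition and synthesis, content safeguards, and session management around this basic loop.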
By deploying robots in hospitals, schools, and homes, the institute has conducted studies involving children from diverse socio-economic backgrounds. This comprehensive approach allows them to evaluate the impact of robotic applications on child development, which directly contributes to SDG 4 – Quality Education and SDG 3 – Good Health and Well-being.
Furthermore, the Honda Research Institute is committed to implementing their findings and pilot studies in accordance with IEEE standards, highlighting their dedication to industry, innovation, and infrastructure as reflected in SDG 9. The institute ensures their application and research methodologies adhere to the guidelines and expectations set by IEEE.
They have also collaborated with Vicky Charisi from the JRC to achieve this. Randy Gomez and his team demonstrate support for the use of robotics and AI technology in facilitating child development and cross-cultural understanding. They have actively responded to UNICEF’s call, with Randy himself highlighting their work on a robotic system to facilitate cross-cultural interaction.
Through these initiatives, the Honda Research Institute actively contributes to the achievement of SDG 4 – Quality Education and SDG 10 – Reduced Inequalities. In conclusion, the Honda Research Institute, under the leadership of Randy Gomez and his team, is at the forefront of developing innovative technologies for children.
Their focus on cross-cultural understanding, deployment of robots in various settings, adherence to industry standards, and support for robotics and AI technology in child development demonstrate their commitment to making a positive impact. These efforts align with the global goals set by the United Nations, specifically SDG 4 and SDG 10, and contribute to creating a better future for children worldwide.
Ruyuma Yasutake
Speech speed
106 words per minute
Speech length
324 words
Speech time
183 secs
Arguments
HARO project has helped in making the online English conversation classes more engaging
Supporting facts:
- The HARO project allowed them to talk with children from Australia
- Haru, the robot, could make the conversation smoother and more fun with its interesting facial expressions.
Topics: HARO project, Online English conversation classes
There were technical issues during the project but the overall experience was good
Supporting facts:
- The platform faced stability issues
- On one occasion the robot overheated and stopped interacting
Topics: technical issues, HARO project
Robots can connect students from different countries
Supporting facts:
- Haru can connect many students
Topics: Artificial Intelligence, Education
Robots can act as practice partners taking on various roles
Topics: Robots in Education, Language Learning
AI’s evaluation system can offer impartial judgments
Topics: AI Evaluation, Fairness in Education
Teachers have individual senses of evaluation
Supporting facts:
- Ruyuma Yasutake said, ‘I think some school teachers have individual evolution sense.’
- This means that the way teachers assess students’ growth varies according to their individual perception or sense of evaluation.
Topics: Teacher’s judgment, Education inequality
Report
The HARO project, once incorporated into the curriculum, has proven highly beneficial in enhancing the quality of online English conversation classes. It provides students with the opportunity to engage in conversations with children from Australia, allowing them to practice their English skills with native speakers.
To further enhance the learning experience, Haru, a robot, is introduced. Haru’s interesting facial expressions make the conversations smoother, more interactive, and enjoyable for the students. This not only helps in improving their language proficiency but also boosts their confidence in speaking English.
Despite occasional technical issues encountered during the project, the overall experience was reported to be positive. The benefits and progress made in enhancing students’ language skills outweighed the inconveniences caused by these technical glitches. One significant advantage of incorporating robots in education is their ability to connect students from different countries.
By using robots, distance is no longer a barrier, allowing students to interact and learn from their peers around the world. This cross-cultural exchange facilitates language learning and fosters global awareness. Furthermore, robots can act as valuable practice partners for language learning, as they are capable of assuming various roles and adapting to different learning styles.
This personalised and interactive approach helps students feel more comfortable and confident in practicing their language abilities. Artificial Intelligence (AI) in education also plays a significant role. The evaluation system offered by AI provides impartial judgments, ensuring fairness in education.
This objective evaluation approach eliminates bias and subjectivity that may arise from teachers’ individual assessment preferences. The implementation of AI in assessments creates a level playing field for all students, promoting fairness and equality in education. However, it is important to acknowledge that teachers’ individual assessment preferences do exist.
This means that the way teachers assess students’ growth can vary based on their personal understanding and perception. Ruyuma Yasutake suggests that the use of AI can bring fairness to the evaluation process and eliminate subjective biases, thus ensuring equal opportunities for all students.
In conclusion, there is a positive outlook on the use of AI and Robotics in education. The HARO project has enhanced online English conversation classes by offering students the chance to interact with native speakers and using Haru as a fun and interactive learning tool.
Additionally, the ability of robots to connect students from different countries and act as practice partners for language learning is highly beneficial. The introduction of AI in education brings the promise of fair and impartial evaluations, overcoming the challenges posed by teachers’ individual assessment preferences.
Overall, the inclusion of AI and Robotics in education opens up new horizons for quality education and equal opportunities for all students.
Steven
Speech speed
171 words per minute
Speech length
2592 words
Speech time
910 secs
Arguments
AI is already very much in the lives of children
Supporting facts:
- AI is used in children’s social apps, gaming, and increasingly in education
- Algorithmic systems indirectly impact children by determining health benefits or loan approvals for their parents
Topics: AI, Children
Children were not sufficiently recognized as a stakeholder group in national AI strategies and ethical AI guidelines
Supporting facts:
- In most national AI strategies, children were framed either as needing protection or as a future workforce to be trained, without consideration of their other unique needs and rights
- In 2019, ethical AI guidelines didn’t pay sufficient attention to children
Topics: AI, Ethics, Children’s rights
There is a significant need for mental health support among children and technology has the potential to help, but also the potential to cause harm if not correctly implemented
Supporting facts:
- There is a massive mental health need
- There’s potential for technology to support but also give wrong or inappropriate advice which could have severe effects
Topics: Children and Technology, Mental Health
The technology used to support children’s mental health should be carefully developed and implemented, with responsible approach
Supporting facts:
- The technology might potentially share confidential information
- This is a very sensitive space
Topics: Children and Technology, Mental Health, Technology Development
Engagement and research are crucial in understanding the impact of AI and technology on children, with regard to opportunities and potential risks
Supporting facts:
- We don’t know long-term effects of AI and children interaction
- AI is now something that children interact with unlike before
Topics: AI and Children, Impact Research, Technology
Eight case studies were done to apply the child-centered AI guidelines developed by UNICEF
Supporting facts:
- The guidance for child-centered AI has been published and the eight case studies are available online on the UNICEF page
- Different projects from various locations and contexts were considered for the case studies
Topics: Child-centered AI, UNICEF, Case Studies
AI projects for children should focus on two to three principles or requirements
Supporting facts:
- There are nine principles for child-centered AI listed in the guidelines
- All of the case studies only applied two to three principles
- Each project or initiative should choose the principles that are most applicable to them
Topics: Child-centered AI, Principles, Requirements
AI engagement by children is increasing, sparking an urgent need for everyone to get involved.
Supporting facts:
- Children are using AI in greater numbers every day.
- UNICEF and partners have taken initial steps to address AI use by children.
Topics: AI Engagement, Children and AI, Digital Responsibility
Responsible data collection and empowering children are critical elements in the exploration of children’s interaction with AI.
Supporting facts:
- Data sets in AI are primarily from the global north, inadequately representing children from the majority world and the global south.
- Current data collection practices are often carried out irresponsibly.
Topics: Data Collection, Ethical AI, Children’s empowerment
It’s important to involve children directly in discussions and regulations about technology.
Supporting facts:
- UNICEF recently brought on a digital policy specialist from Kenya who emphasized the need to include children in tech discussions.
- Children will be the most impacted by AI and yet they are rarely involved in its regulation.
Topics: Children Involvement, AI regulation, Tech Policy
Report
Artificial intelligence (AI) is already integrated into the lives of children through various platforms such as social apps, gaming, and education. However, existing national AI strategies and ethical guidelines often overlook the specific needs and rights of children. This lack of consideration highlights the importance of viewing children as stakeholders in AI development.
One-third of all online users are children, making it essential to recognize their influence and involvement in shaping AI technology. Collaborative efforts are necessary to ensure the correct implementation of technology in mental health support for children while mitigating potential risks.
Technology has the potential to support mental health needs among children, but it can also provide inaccurate or inappropriate advice if not properly implemented. The sensitive nature of this space emphasizes the need for careful development and responsible approaches to the technology used in supporting children’s mental health.
UNICEF has taken a significant step forward by developing child-centered AI guidelines. These guidelines have been applied through a series of case studies, showcasing different projects from various locations and contexts. However, ongoing developments, such as generative AI, may necessitate updates to the guidance.
The ever-evolving nature of AI calls for a strategy of continuous learning and adaptation, adjusting plans even while they are already under way. Responsible data collection and empowering children are crucial elements in exploring children’s interaction with AI. Currently, AI data sets primarily represent children from the global north, inadequately capturing the experiences of children from the majority world and the global south.
Irresponsible modes of data collection further compound this issue. Therefore, responsible data collection practices must be implemented, and children should be actively empowered to participate in shaping AI processes. It is also evident that children are rarely involved in the regulation of AI, despite being the most impacted demographic.
Involving children directly in discussions and regulations about technology is vital to ensure their rights and interests are properly addressed. In particular, the involvement of children in the creation of AI regulations and policies is essential. Although children are among the primary users of AI, regulations are often decided by older individuals who may be less familiar with the technology.
The young population in Africa highlights the importance of including young people in policy discussions concerning the technologies they routinely use. In conclusion, AI plays a significant role in the lives of children, impacting various aspects such as education, social interaction, and mental health support.
Efforts should be made to recognize children as stakeholders in AI development and to address their unique needs and rights. Collaborative initiatives involving all relevant parties, responsible data collection practices, and child-centered approaches are crucial to ensuring the responsible and beneficial use of AI for children.
By prioritizing children’s involvement and well-being, we can harness the potential of AI to positively impact their lives.
Vicky Charisi
Speech speed
150 words per minute
Speech length
2340 words
Speech time
938 secs
Arguments
Integration of educators as part of the research team
Supporting facts:
- In the study, educators were involved as part of the research team and their input was sought throughout the research process
Topics: Education, Technology Integration, Robotics
Implementation of participatory action research approach
Supporting facts:
- Teachers were not just part of the end user studies but also part of the research in order to set research questions directly from the field
Topics: Research, Education
A diverse group of children were involved in the study
Supporting facts:
- 500 children from 10 different countries were involved in the study, which was intended to have a larger cultural variability
Topics: Child Development, Culture, Education
Vicky Charisi engaged with Joy, an educator from a rural area in Uganda, to participate in their project
Supporting facts:
- Joy is an educator in Boduda, a remote area in Uganda
- Their school was chosen due to its unique economic and cultural background
Topics: Education, Rural Development, Collaboration
Charisi’s project involved comparing perceptions of children’s rights and fairness in different socio-economic and cultural contexts
Supporting facts:
- The study involved students from Tokyo and Uganda
- The concept of fairness was explored in everyday scenarios, technology, and robotic scenarios
Topics: Children’s Rights, Fairness, Cultural Diversity
The main research question for the study was to understand if there are cultural differences in how fairness is perceived
Supporting facts:
- The study was focused on children in two different environments (Uganda and Japan)
- The researchers used storytelling frameworks for the children to discuss fairness in their own words and drawings
- Children in Uganda focused more on material aspects of fairness, while children in Japan focused more on psychological effects.
Topics: Artificial Intelligence, Education, Fairness, Cultural Differences
AI evaluation might not be absolutely fair
Supporting facts:
- There is hope from young students for a fair evaluation system through AI
Topics: AI, Evaluation
Report
The study focuses on several key aspects related to quality education and the role of educators in research. Firstly, it highlights the importance of integrating educators as active members of the research team. Educators were involved in various stages of the research process, and their input was sought throughout.
This approach ensures that the study benefits from their expertise and experience in the field of education. Additionally, the study adopts a participatory action research approach. Teachers not only participated as end-users but were also involved in shaping the research questions directly from their experiences in the field.
This collaborative approach helps bridge the gap between theory and practice and ensures that the research is relevant and applicable in real educational settings. A significant aspect of the study is the inclusion of a diverse group of children: the researchers aimed for greater cultural variability by involving 500 children from 10 different countries.
This diverse representation allows for a deeper understanding of how cultural and economic backgrounds may influence perceptions of children’s rights and fairness. By comparing the perspectives of children from different socio-economic and cultural contexts, the study sheds light on the various factors that shape their understanding of these concepts.
Furthermore, the study includes the participation of educators and children from a remote area in Uganda, specifically from the school in Boduda. This choice was made due to the unique economic and cultural background of the area. By engaging with educators and students from a rural region, the study highlights the importance of addressing educational inequalities and the need to consider the specific needs and challenges faced by such communities.
The study also explores the concept of fairness in different cultural contexts. Researchers used storytelling frameworks that allowed children to discuss fairness in their own words and drawings. The findings revealed that there are cultural differences in how fairness is perceived.
Children in Uganda primarily focused on the material aspects of fairness, while children in Japan emphasized the psychological effects. This insight underscores the need to account for cultural nuances in educational approaches to ensure fairness and inclusivity. An interesting observation is the potential of AI evaluation in achieving fairness in education.
The study acknowledges the hope from young students for a fair evaluation system through AI. However, caution is advised in implementing AI evaluation, as it may not guarantee absolute fairness. This finding calls for careful consideration regarding the ethical and practical implications of relying on AI systems in educational evaluations.
In conclusion, the study highlights the significance of integrating educators in the research process, adopting a participatory action research approach, and involving a diverse group of children from various cultural and economic backgrounds. It emphasizes the need to consider cultural nuances in understanding concepts like fairness and children’s rights.
Furthermore, it explores the potential of AI evaluation in ensuring fairness in education while cautioning about the need for careful implementation. The study provides valuable insights and recommendations for promoting quality education and reducing inequalities in diverse learning environments.