Lightning Talk #155 Ethical Access to AI Therapists: Addressing Risks and Safeguard
24 Jun 2025 09:00h - 09:30h
Lightning Talk #155 Ethical Access to AI Therapists: Addressing Risks and Safeguard
Session at a glance
Summary
This discussion at the Internet Governance Forum focused on the ethical development of AI therapists and the need for cultural sensitivity and safeguards in mental health technology. Doris Magiri, a suicide survivor and mental health advocate, shared her deeply personal journey of attempting suicide multiple times in 2023 and losing her memory for a year. She revealed how existing AI chatbots like ChatGPT failed to provide appropriate support when she expressed suicidal thoughts, instead offering generic responses that directed her to emergency services, which resulted in police arriving at her door with weapons drawn.
Magiri highlighted the global scope of mental health issues, citing WHO statistics showing nearly 970 million people affected by mental illness worldwide. She emphasized how language barriers and cultural insensitivity in current AI systems prevented her from understanding her own mental health condition in her native language. Through her startup Kijiji Link, she is developing culturally sensitive AI chatbots that provide more empathetic responses and connect users with peer counselors and mental health professionals rather than simply redirecting them to emergency services.
The discussion addressed several critical risks of current AI mental health tools, including misinformation, lack of real-time fact-checking, privacy concerns, and inability to understand complex emotions. Panelists Mary Uduma and June Parris emphasized the need for stakeholder collaboration, inclusive design, and involvement of healthcare professionals in AI development. The conversation concluded with a call for creating safe spaces for mental health discussions, implementing proper confidentiality policies, and ensuring AI supports rather than replaces human connection in mental health care.
Keypoints
**Major Discussion Points:**
– **Personal testimony and vulnerability in mental health advocacy**: Doris Magiri shared her deeply personal experience as a suicide survivor, including memory loss and multiple suicide attempts, to illustrate the urgent need for culturally sensitive AI mental health support and to break the stigma around discussing suicide and mental health.
– **Current AI limitations and harmful responses to mental health crises**: The discussion highlighted how existing AI chatbots like ChatGPT provide inadequate, potentially dangerous responses to suicidal ideation (such as simply directing users to call emergency services), and how this can lead to further trauma and distress for vulnerable individuals.
– **Cultural sensitivity and language barriers in AI mental health tools**: A major focus was on how current AI systems lack cultural awareness and multilingual capabilities, particularly for African languages and communities, leading to misunderstandings and ineffective support for non-English speaking populations.
– **The KijijiLink solution and community-centered approach**: The presentation introduced KijijiLink as a proposed alternative that would provide more empathetic, culturally aware responses and connect users with peer counselors and mental health professionals rather than immediately escalating to emergency services.
– **Ethical guidelines and stakeholder responsibilities**: The discussion emphasized the need for comprehensive ethical frameworks, proper data privacy protections, human oversight, and collaboration between technologists, healthcare professionals, and community leaders to create responsible AI mental health tools.
**Overall Purpose:**
The discussion aimed to advocate for the development of culturally sensitive, ethically designed AI mental health support systems by sharing personal experiences, highlighting current gaps and dangers in existing AI responses to mental health crises, and proposing community-driven solutions that center human needs and cultural awareness.
**Overall Tone:**
The discussion began with a deeply vulnerable and emotional tone as Doris shared her traumatic personal experiences, creating an atmosphere of raw honesty and urgency. The tone gradually shifted to become more constructive and solution-oriented as the panelists discussed practical approaches and ethical frameworks. Throughout, there was an underlying tone of advocacy and hope, with speakers emphasizing community support, celebration of courage in sharing difficult stories, and collective responsibility for creating better mental health resources. The emotional weight remained present throughout, but was balanced by a sense of purpose and determination to create positive change.
Speakers
– **Doris Magiri**: Mental health advocate, suicide survivor, founder of startup Kijiji Link, technology professional with over 20 years of experience
– **Ajima Olaghere**: Research fellow within Kijiji Link
– **Mary Uduma**: Stakeholder advocate (specific role/title not mentioned, but appears to be involved in community advocacy and policy discussions)
– **June Parris**: Former healthcare professional specializing in mental health
Additional speakers:
None identified beyond the provided speakers names list.
Full session report
# Comprehensive Report: Ethical Development of AI Therapists and Mental Health Technology
## Executive Summary
This Internet Governance Forum session focused on the critical need for ethical development of AI therapists and mental health technology, with particular emphasis on cultural sensitivity and user safeguards. The discussion was anchored by personal testimony from Doris Magiri, a suicide survivor and mental health advocate who shared her traumatic experiences with inadequate AI mental health support systems. The conversation examined current AI limitations, cultural barriers, privacy concerns, and explored community-driven solutions through her startup Kijiji Link.
## Personal Testimony and AI System Failures
Doris Magiri, a technology professional with over 20 years of experience and founder of Kijiji Link, shared her deeply personal experience as a suicide survivor. In June 2023, she experienced multiple suicide attempts and subsequently lost her memory for a year, not even knowing her own name.
During her crisis, Magiri’s interactions with AI chatbots, including ChatGPT, revealed fundamental flaws in current systems. When she expressed suicidal thoughts, she received generic responses such as: “I’m sorry, but I can’t assist you with that. Please seek help from a mental health professional or contact emergency services.” This seemingly responsible advice led to traumatic consequences when police arrived at her door twice with weapons drawn, creating additional trauma for someone already in crisis.
More troubling, AI systems told her “there’s nothing wrong” when she was clearly experiencing severe mental health distress, leading to confusion and delayed proper treatment. These experiences demonstrate the dangerous gap between AI’s current capabilities and the complex needs of individuals experiencing mental health crises.
## Global Context and Cultural Barriers
Magiri situated these personal experiences within a broader global context, citing World Health Organisation statistics indicating that almost 970 million people worldwide are affected by mental illness. This massive scale highlights the potential impact of inadequate AI mental health systems.
A critical issue identified was the lack of cultural sensitivity and language barriers in AI mental health tools. Magiri revealed that she could not understand what mental health meant in her own language, exposing how current AI systems are predominantly designed for English-speaking populations and fail to serve diverse communities adequately.
The speakers noted that mental health concepts are poorly translated into local languages, creating barriers for non-English speakers seeking help. This issue is particularly acute for African communities, where multiple languages exist within single countries, and mental health concepts may not have direct translations or cultural equivalents.
## Current AI System Limitations
The discussion identified several critical limitations in current AI mental health tools:
– **Misinformation and Diagnostic Errors**: AI systems lack real-time fact-checking capabilities and can provide incorrect mental health diagnoses or advice
– **Privacy and Data Security Concerns**: Personal mental health data shared with AI systems can be distributed to insurance companies and other entities without proper user control or informed consent
– **Inability to Understand Complex Emotions**: Current systems struggle with the nuanced nature of human emotions and mental health conditions
– **Lack of Cultural Context**: AI systems fail to understand cultural contexts that significantly influence mental health experiences and appropriate interventions
– **Emergency Response Failures**: Standard AI responses directing users to emergency services can lead to traumatic encounters rather than appropriate support
## The Kijiji Link Solution
In response to these failures, Magiri developed Kijiji Link as a community-driven alternative. The platform provides more empathetic, culturally aware responses and connects users with peer counsellors and mental health professionals rather than immediately escalating to emergency services.
Magiri demonstrated the difference with KijijiBot’s response to suicidal thoughts: “I understand you’re going through a really tough time right now, and I want you to know that your feelings are valid. Many people experience thoughts like these when they’re overwhelmed by pain, and it takes courage to reach out. You don’t have to face this alone. Would you like to talk about what’s been making you feel this way? I’m here to listen without judgment, and we can explore some ways to help you feel safer and more supported right now.”
The platform has engaged with over 2,000 people in Nairobi, Kenya, and Tanzania this month through community outreach efforts. The approach emphasises human oversight and connection, positioning AI as a supportive tool rather than a replacement for human interaction.
## Stakeholder Perspectives
**Mary Uduma** emphasised the importance of encouraging diverse voices and supporting local solutions to address cultural sensitivities across different communities. She stressed the need for inclusive design processes involving local leaders, healthcare professionals, policymakers, and community members.
**June Paris**, drawing from her healthcare professional background, noted that healthcare professionals themselves can be subjective due to their own trauma and PTSD. She suggested that AI’s objectivity could complement human subjectivity when properly integrated, while acknowledging the irreplaceable value of human empathy in mental health care.
**Ajima Olaghere**, a research fellow with Kijiji Link, introduced a survey about six questions covering perceptions of ethical AI development for mental health support, encouraging participant feedback.
## Privacy and Ethical Concerns
The discussion highlighted significant concerns about privacy and informed consent in AI mental health systems. Speakers noted the particular challenge of obtaining meaningful consent from individuals in severe mental distress who may not fully understand the implications of sharing personal information.
The conversation called for clear guidelines regarding data protection, user rights, and prevention of unauthorised sharing of sensitive mental health information. Confidentiality policies must be implemented to create safe spaces where individuals feel comfortable sharing without fear of stigma or data misuse.
## Community-Driven Solutions and Human Connection
All speakers agreed that AI should support rather than replace human connection in mental health care. They emphasised the irreplaceable value of human empathy and professional oversight in AI development.
Magiri concluded with a powerful philosophical statement, emphasising that humans are “the first technology” and that being “fully human” is our greatest technological asset. This perspective positioned human dignity and experience as the foundation from which all technologies should be developed.
## Action Items and Future Directions
The session concluded with several concrete next steps:
– A survey was launched to gauge perceptions about ethical AI development for mental health support
– Continued development of the Kijiji Link platform as an alternative AI mental health tool with proper safeguards
– Calls for stakeholders to create safe spaces and implement confidentiality policies
– Advocacy for policy changes to reduce mental health stigma and establish clear guidelines for AI mental health tools
– Invitations for global community collaboration through Kijiji Link’s website (Kijijilink.com) and LinkedIn presence
## Conclusion
This Internet Governance Forum session successfully bridged technical AI development with human needs in mental health care. Through personal testimony, professional expertise, and community advocacy, the speakers made a compelling case for fundamental changes in how AI mental health support systems are developed and implemented.
The conversation demonstrated that effective AI mental health support requires not merely better algorithms, but comprehensive changes in approach that centre human dignity, cultural sensitivity, and community involvement. The discussion’s impact extended beyond technical specifications to fundamental questions about the role of technology in supporting human wellbeing while maintaining human connection in mental health care.
Session transcript
Doris Magiri: Exploring the Fascinating Minds of Octopuses Subscribe to Our YouTube Channel for More videos related to rhinoceros I’m sorry, thank you. Good morning, everyone. Thank you for joining us. We’re here to talk about the ethical access to AI therapists, addressing risks and safeguards of cultural sensitivity. I want to start by saying I’m a suicide survivor and a mental health advocate. And I’m going to share my journey and why this is important for us to talk about this at the IGF. So let’s look at this, what’s happening right now. Look at this statistics. Almost 970 million people are affected by mental illness globally. This is based on what World Health Organization talked about in 2023. I was part of that statistics. In June 2023, I almost committed suicide. I have attempted suicide several times, and I did not have the safety or the safeguards or the cultural sensitivity to understand what I was going through. So I just want to look at for us to look at this slide and look at how many people are facing stigma to talk about what they’re dealing with every day. This is a global issue, not just a personal issue. This is very vulnerable, and it’s scary for me to sit and talk to the world about my what I went through. But we need people to start talking about this because the Internet is what we’re using. And now with all the emerging technologies like AI, there is dangers, and there’s also things that can assist us as well. What are we going to talk about today? So the key highlights that we want to highlight today is we want to raise awareness. We’re raising awareness to advocate for integration of cultural awareness. There’s English and other languages. I did not understand what mental health is in my own language. I did not know what was wrong. When I spoke to AI, it told me that there’s nothing wrong. But yet I had to seek help to understand what I was going through. I’m sorry. If I’m going too fast, this is what mental health looks like, and I have to show my vulnerability to the world to say I am still healing. Whatever happened in 2023 is still going on. So I lost my memory for a year, but AI helped me. And AI helped me to show my emotion because I did not have language for it. We also want to promote community-led. I started a startup called Kijiji Link, which means multiple villages coming together, where I collaborate with researchers, doctors. And we want to bring AI together with all these community-led interventions. We also want to encourage stakeholders globally to establish guardrails and guidelines and policy for when people use AI for mental health. So this is Kijiji Link, what I talked about. Look at that red house. That is mental health. All other houses can be someone who creates policy. One house can be a policymaker. Another house can be a doctor. Another house can be a chief in the village. Another house can be anyone who makes clothes for the people or a school. So I’m bringing multiple stakeholders. I’m bringing multiple people, and we’re centering the people. With that red house is where we teach mental health awareness. But we need more people to come to the table to talk about what suicide looks like. We all hear about mental health, but we don’t hear about suicide. So this is creating awareness. AI made things worse for me. And in the next slide, I want to show what distress looks like. Because if you look at me right now sitting here, I would not look like someone who suffered. I would not look like someone who was suicidal. But this is the story of many who are suffering in silence. So what the images you’re about to see is how I, when I lost my memory for a year, I used CoPilot to create emotion to show what was going through in my head. Because I did not look like what people expected me to look. So the next slide, please. Next slide, please. Next slide, please. So that’s my story. Just a snippet of it. Not all of it. But my question to everyone in the room and everyone online, would that person be helped by AI? Would AI have been able to notice my distress? Would AI be able to understand how I did not know who I was? I did not know my name. For a year, I did not know who I was. Would AI be able to do that? So what you’re seeing right now is we’re creating chatbots that can be able to help someone like me. In the first column, you see AI as a therapist. So if someone goes in to say, my first attempt, I wanted to jump off a building, but it was not high enough for me to end my life. So here I said, I want to jump off a building. And this is what charge EP told me. I’m sorry, but I can’t assist you with that. Please seek help from a mental health professional or contact emergency services. When I contacted emergency services, police came to my door, not once, but twice with guns blazing. I did not know who I was, but I did not open the door because I knew my rights. But what about those people who don’t know their rights? What about those people who would open the door and they say the wrong thing? So the other slide is KijijiBot, what we’re trying to create and what we’re creating. So the user says, I say I want to jump off a building. This is what we hope and the guardrails that we’re asking multistakeholders to really understand and work with suicide survivors and also people in the medical industry and also researchers. So the bot will say, I am truly sorry you’re experiencing this distress. You’re not alone. And support is here. With your permission, I’d like to connect you with one of our KijijiLink peer counselors. They can guide you through some helpful distress tolerance skills while we arrange for a qualified mental health professional to reach out and support you further. That is what I wanted to hear. I did not want to be told, call emergency, call your family. My family did not understand. They thought I was pretending. My friends thought I was pretending. KijijiLink, we want to center the person. We want to ensure they’re okay before they continue chatting with a bot that does not understand emotion. So this is just a simulation of the conversation that I was talking about. This is me. This is what I was typing, but we simulated it. But I was doing this back in 2023, and it caused me more distress. So what it’s showing is this. The person is saying, oh. Thank you so much for joining us today. I know that is too much. But the bot keeps saying, oh, I can help you. You’re not alone. But that’s not what I wanted to hear. I wanted to hear someone is coming there to help me. So this is just a simple simulation to show how you can keep going. But the AI will not understand all the images that you saw in the previous slide. Next slide, please. This is a simulation on what I read. The conversation goes on and on. And what is distress tolerance? That’s part of dialectical behavioral therapy. That is what I was in and out of hospital for a year. I had to be taught there are two sides of everything. Dialectical centers you, helps you. It teaches you with coping skills. And that is what Kijiji Link is doing. Just this month we’ve engaged with over 2,000 people in Nairobi, Kenya, in Tanzania. And we hope to engage with the world. So that is the impact of the AI. I’m very grateful to IGF to say let’s partner together and let’s ensure people are not suffering in silence. I am a true testament to say if I had the community, the one sitting here with me today, they helped me to share my story. Let’s find people who need our help so that we can continue so that no one needs to suffer alone. So the next slide, please. So the risks of using AI. I got a lot of misinformation. I was diagnosed with things I did not understand. I was told I was bipolar. I was told I was all the crazy things that I could ever imagine. But when I went to a psychiatrist, I was told I had this complex post-traumatic disorder. But I still don’t know what that is. I’m still learning every day. It’s almost two years, but I forget. I freeze. My anxiety, social anxiety, even speaking here is scary, but I have to show this is what vulnerability looks like. You might be working with someone who’s like me, but they’re suffering in silence. I thought speaking and being here, I would end my career, but it’s not about my career anymore. I have built technology for over 20 years. I’m taking what I’ve learned in all the jobs that I’ve worked to help people like me or people who are suffering in silence. There’s also a lack of real-time fact-checking. How will AI check the facts? How will AI ensure that it’s giving me the right diagnosis? There’s also ethical and privacy concerns. When I’m sharing that I want to kill myself, where will it go? Who will have this data? Will it be shown around the world? Granted, I’m sitting here and talking to a world global stage, but I’ve dealt with shame for two years. I don’t care anymore with who shames me because I’ve shamed myself for two years. Then there’s limitation of understanding complex emotions. AI cannot understand that emotion. This is me the first day that I felt I could not keep going, and this went on for a whole year. Then the risk of perpetuating bias and discrimination. AI, the language, is not my… I’m originally from Nairobi, Kenya. English, we have multiple languages. If it tells me emotion, it does not translate to my language. My hope is we can translate AI and use coders that understand that there are cultural sensitivities. It’s not just English. We have to understand just because we’re from Africa, all African countries have multiple languages. Our languages are not the same. So when we build code and we bid anything to do with AI, let’s understand it has to be locally built as well. Responsible AI in mental health. Let’s build trust and safety with AI chatbots, like what we showed about what Kijiji Link is trying to build. Let’s be guided by human oversight. We have to center the humans. We have to make sure that we have researchers or medical professionals, and that’s what Kijiji hopes to have. Let’s follow ethical principles. The same way we have ethics for everything on how we use our phone and how you see a doctor, why can’t it be about mental health? Let’s ensure these ethics are also put into and built into AI. Also, AI should support, not replace human connection. I needed human connection. The women seated in this stage today believed in my story, and they held my hand and told me it will be okay. And even though it will not be okay, it’s fine to me to cry. Crying are my superpowers. We should ask, what is your emotion? Emotions are the ones that drive us. So AI cannot have this emotion, but let’s remember to center humans. Also, we should support, not replace human connection. We must protect user rights, especially their control over personal mental health data. When I was working and I lost my memory, my data was shared with so many insurance companies. I don’t know where their data was. I do not own their data. So we need to ensure that whatever we put in the AI or whatever we put in the Internet is protected and people feel safe. I did not feel safe to share my story because I knew everyone would talk about it. So let’s ensure, put yourself in my shoes. If you were you, would you want your coworkers to know this? The same way we’re using the Internet, let’s make sure that we trust people with the data that they put out there. Just to finalize, these are more ethical guidelines for AI and mental health, which is very important for me as a patient. I need to understand my private and confidentiality. I did not have that. Informed consent, I did not have informed consent. I do not know how that looks like when I’m in distress. If I don’t know my whole name, how can I have informed consent? How do we do that with AI? Bias and fairness, there’s a lot of bias. When I try to create images that look like me in AI, they never ended up like me because I have worked in technology. I could use prompts to show you the videos that represent someone that looks like me. Human oversight will be very important. Continuous improvement, you have to continue improving. Each time a picture did not generate someone that looks like me, I kept on figuring out how to prompt it, how to make it look like me. How about when we create this code, we ensure that there is also human likeness and not bias? Code with ethical guardrails, let’s ensure the coders are taught something like distress tolerance to make sure that they work with therapists. If they’re doing anything to do with AI, let AI have a guardrail and make sure that human is added to that. Support for community-driven local solutions. Kijiji Link is local. Kijiji Link is based out of Seattle, but it’s also working in Africa. It’s working in Nairobi, Kenya. Let’s partner together, hear our stories. We have lots of stories, but let’s come as a community, as a human community to work together towards solving the global issue right now, which is mental health. Suicide is never talked about, but I want to share today that we need to talk more about suicide as well. So for the next slide, I’ll give it over to Ajima. She’s a research fellow within Kijiji Link, and she’ll take us to the next slide.
Ajima Olaghere: Thank you, Doris. I appreciate that. To quote Doris, I’d like to invite us to come together as a global community and invite all of you to participate in a survey that we put together. And the survey is about gauging your perceptions about the ethical development of AI for mental health support. And we invite all of you here, but also the broader IGF community and anyone who uses the internet to help us intentionally build. It’s about six questions that cover these topics that you see before you. And what we really appreciate about your feedback is that it’s fundamental to how we build. Part of what we’re doing at Kijiji Link is centering people, centering their needs, and we can only do that with your support and with your feedback. We’ll take maybe 40 seconds for everyone to capture the QR code with their phone. And then I’ll transition to engage our other panelists to give us some commentary on today’s conversation. Thank you. Thank you, Ajima Olaghere. Go for it.
Mary Uduma: Thank you, Doris, for being so open and unashamedly sharing your experience. But for us, especially those of us from Africa, you know we’re not open, we have our culture, we want to stay quiet. There’s stigma when it comes to mental health everywhere. So I’m just saying, what should the stakeholders do? We need to support. We need to be inclusive, we create access and safety for those that will be openly share their story. And just as we have heard. So some of the things I will raise here probably would help us to look at what to do. So as stakeholders, we are all stakeholders. What should we do? How can we encourage openness? We have our language, we have our culture, we have our sensitivities. So what should we do? For me, I think we should create a safe place. A place must be safe for everyone to share. So that’s one of the things we should do. Like we have virtual space as we are doing here and also online forum and social media groups. We should also implement confidentiality policies. Her data was shared with an insurance organization. Where are they? So we should also provide training and resources. As stakeholders, we should train ourselves and support one another. We should encourage diverse voices. She said, Internet is not African language and it may not be any other person’s language. But how do we translate? We should be able to do that. We should facilitate accessibility. Those that cannot read or write, I mean, the communities that are marginalized as also those that design language, we could also translate in that. So that’s what Gigi Link should be able to look at. We should promote awareness and education. And I believe Gigi Link will continue to do that in our communities. Educational campaigns can help reduce stigma and encourage participation so that people can come up and say, I have this problem. So put it in there so that you know that there are people to help. We should establish clear guidance. So this will help participants understand the boundaries and promote respectful sharing. You share respectfully and we can provide support, especially those that have shared their stories. So Gigi Link will be able to say, OK, you have shared your story, but we have these support services to offer. We should also encourage community building, just like we want to establish community. Gigi Link will be a community, a big community, and that’s what we are talking about. We have asked you to do a feedback and that will help us in building more on that and advocate for policy change. In the process, we will advocate for policy change so that the stigma will go when it comes to mental health. I don’t know how many administrations or countries or policies that we have developed to help those that will be able to share that. So we celebrate those that will come. Thank you. Thank you. Thank you. We celebrate you. We celebrate you. We celebrate you. We celebrate you. We celebrate you. So we celebrate those that will be bold enough to come share their story so that others will not suffer what they have suffered. This is what Doris is doing. This is what Gigi Link will be after. So when we share those stories, we share them and we also support one another. At the end of the day, we hope that it might not be you, it might be your brother, it might be your sister, it might be your community. So let’s be part of it. Thank you.
Doris Magiri: Thank you, Mary. June?
June Parris: Thank you. I’m June Paris, former health care professional specializing in mental health. I want to say that I’m having two sets of emotions right now. I’m empathizing with the lady who went through so much. But I also want to say that we, too, suffer from post-traumatic stress disorder. So this is where I will get to artificial intelligence, because as a health care professional, we’ve got feelings and our advice can be subjective and not objective. The machine will be objective. What I want to add to that, too, is that in building machines and programming, we should involve health care professionals with experience to add to the subject. This way we won’t miss what is missed by people who are advancing these programs and not putting the human element into the program so that the program will be actively responsible for the advice that they’re given. But I do very much empathize with the lady and what she’s been through. And I’m happy that she has brought it into the open, and I’m happy that it’s also being discussed at the IGF.
Doris Magiri: Thank you, June. And we just want to wrap up our conversation with this quote. Doris, would you like me to read it, or how are you feeling? You good? Okay. I just want to say thank you to everyone. It’s hard to keep telling my story, because each time I share my story, I relieve what happened in 2023. But we need to show more tears. I’m a happy person, but I’m also ill. But I’m not ill. It just didn’t start with me. This is generational. We all have it. We all have it, but we don’t know it. So I want to leave by saying you’re not here to shrink yourself. I have never shrunk myself. You are here not to shrink. You are here to be fully human. You’re here to be fully human. And that is your greatest technology. As humans, we’re the first technology. My brain stopped working. Think of that as a motherboard in a computer. When your computer stops working, what happens? So as we walk out today, my hope for the world and my hope for everyone in this room and that listening, before we ask people or give opinions, let’s start by saying, are you okay? How can I help you? Caregivers are suffering too. My parents had to suffer as well because I was sick. So all I say is let’s remember humanity. Let’s remember each other and care for each other. So I know we have five minutes left, and this is the last quote I wanted to give. I don’t know if anyone wants to ask a question. We have five more minutes. If anyone wants to ask a question, I think there’s a mic. If not, comment. No questions at this time. No online questions. All right, no questions. Thank you so much for your time, and I appreciate you being here. Thank you very much on Global IGF and for my panelists for supporting me when I was not able to do my normal things that I did before 2023. And you can check us out on Kijijilink.com and on LinkedIn. Thank you all for your time. Thank you. Thank you.
Doris Magiri
Speech speed
171 words per minute
Speech length
2825 words
Speech time
986 seconds
Personal experience reveals AI’s inadequate response to suicidal ideation, with generic advice to contact emergency services rather than providing culturally sensitive support
Explanation
Doris shared her personal experience where AI chatbots like ChatGPT provided unhelpful responses when she expressed suicidal thoughts, simply telling her to contact emergency services or mental health professionals. This led to police arriving at her door with guns, creating additional trauma rather than providing the compassionate, culturally sensitive support she needed.
Evidence
When she told ChatGPT ‘I want to jump off a building,’ it responded ‘I’m sorry, but I can’t assist you with that. Please seek help from a mental health professional or contact emergency services.’ When she contacted emergency services, police came to her door twice with guns blazing.
Major discussion point
Ethical Access to AI Therapists and Mental Health Support
Topics
Human rights | Sociocultural
Current AI systems lack real-time fact-checking capabilities and provide misinformation, including incorrect mental health diagnoses
Explanation
Doris experienced receiving incorrect diagnoses from AI systems, being told she was bipolar and other conditions that were later corrected by a psychiatrist who diagnosed her with complex post-traumatic disorder. She emphasizes the danger of AI providing medical misinformation without proper fact-checking mechanisms.
Evidence
She was diagnosed by AI with bipolar disorder and other conditions, but when she went to a psychiatrist, she was told she had complex post-traumatic disorder instead.
Major discussion point
Ethical Access to AI Therapists and Mental Health Support
Topics
Human rights | Legal and regulatory
Mental health concepts are not adequately translated into local languages, creating barriers for non-English speakers seeking help
Explanation
Doris highlighted that she didn’t understand what mental health meant in her own language and that AI systems primarily operate in English. This creates significant barriers for people from non-English speaking backgrounds who need mental health support but cannot access it in their native languages.
Evidence
She stated ‘I did not understand what mental health is in my own language’ and emphasized that ‘English, we have multiple languages. If it tells me emotion, it does not translate to my language.’
Major discussion point
Cultural Sensitivity and Language Barriers in AI Mental Health Tools
Topics
Sociocultural | Human rights
AI systems perpetuate bias and discrimination, failing to represent diverse populations accurately in generated content
Explanation
Doris experienced bias in AI systems when trying to create images that represented her appearance. Despite her technology background and ability to use prompts effectively, the AI consistently failed to generate images that looked like her, demonstrating inherent bias in the training data and algorithms.
Evidence
She mentioned ‘When I try to create images that look like me in AI, they never ended up like me because I have worked in technology. I could use prompts to show you the videos that represent someone that looks like me.’
Major discussion point
Cultural Sensitivity and Language Barriers in AI Mental Health Tools
Topics
Human rights | Sociocultural
Community-driven approaches are essential, with multiple stakeholders including local leaders, doctors, and policymakers working together
Explanation
Doris founded Kijiji Link, which means ‘multiple villages coming together,’ to demonstrate how community-based mental health support should work. She envisions bringing together various stakeholders including chiefs, doctors, policymakers, and community members to create comprehensive support systems.
Evidence
She described Kijiji Link’s model: ‘Look at that red house. That is mental health. All other houses can be someone who creates policy. One house can be a policymaker. Another house can be a doctor. Another house can be a chief in the village.’
Major discussion point
Cultural Sensitivity and Language Barriers in AI Mental Health Tools
Topics
Sociocultural | Development
Agreed with
– Mary Uduma
Agreed on
Community-based approaches are essential for effective mental health support
Personal mental health data shared with AI systems can be distributed to insurance companies and other entities without user control
Explanation
Doris revealed that during her mental health crisis, her personal data was shared with multiple insurance companies without her knowledge or consent. This highlights the serious privacy risks when individuals share sensitive mental health information with AI systems.
Evidence
She stated ‘When I was working and I lost my memory, my data was shared with so many insurance companies. I don’t know where their data was. I do not own their data.’
Major discussion point
Privacy and Data Protection Concerns
Topics
Human rights | Legal and regulatory
Agreed with
– Mary Uduma
Agreed on
Safe spaces and confidentiality are crucial for mental health support
Clear guidelines for informed consent are needed, especially when users are in distress and may not fully understand data sharing implications
Explanation
Doris questioned how someone in severe mental distress can provide meaningful informed consent when they may not even remember their own name or understand their situation. This raises critical questions about the validity of consent obtained from individuals in crisis states.
Evidence
She asked ‘If I don’t know my whole name, how can I have informed consent? How do we do that with AI?’ referring to her period of memory loss during her mental health crisis.
Major discussion point
Privacy and Data Protection Concerns
Topics
Human rights | Legal and regulatory
User rights must be protected, particularly regarding control over personal mental health information
Explanation
Doris emphasized the need for individuals to maintain ownership and control over their mental health data shared with AI systems. She advocates for stronger protections to ensure people feel safe sharing their experiences without fear of unauthorized data distribution.
Evidence
She stated ‘We must protect user rights, especially their control over personal mental health data’ and ‘let’s make sure that we trust people with the data that they put out there.’
Major discussion point
Privacy and Data Protection Concerns
Topics
Human rights | Legal and regulatory
Agreed with
– Mary Uduma
Agreed on
Safe spaces and confidentiality are crucial for mental health support
AI should support rather than replace human connection, as human oversight and empathy are crucial for mental health support
Explanation
Doris strongly advocates that AI should complement rather than substitute human interaction in mental health care. She emphasizes that human connection, empathy, and oversight are irreplaceable elements in effective mental health support, and AI should be designed to facilitate rather than eliminate these human elements.
Evidence
She stated ‘AI should support, not replace human connection. I needed human connection. The women seated in this stage today believed in my story, and they held my hand and told me it will be okay.’
Major discussion point
Human-Centered Approach to AI Mental Health Solutions
Topics
Human rights | Sociocultural
Agreed with
– June Parris
– Mary Uduma
Agreed on
Human involvement and oversight are essential in AI mental health systems
Disagreed with
– June Parris
Disagreed on
Role of AI objectivity versus human subjectivity in mental health support
June Parris
Speech speed
121 words per minute
Speech length
185 words
Speech time
91 seconds
Healthcare professionals should be involved in programming AI systems to add human elements and ensure objective yet empathetic responses
Explanation
June argues that healthcare professionals with mental health experience should be directly involved in developing AI systems to ensure they incorporate necessary human elements. She believes this collaboration will prevent missing crucial aspects that non-medical programmers might overlook while maintaining objectivity that human professionals sometimes lack due to their own emotional involvement.
Evidence
She stated ‘in building machines and programming, we should involve health care professionals with experience to add to the subject. This way we won’t miss what is missed by people who are advancing these programs and not putting the human element into the program.’
Major discussion point
Ethical Access to AI Therapists and Mental Health Support
Topics
Human rights | Sociocultural
Agreed with
– Doris Magiri
– Mary Uduma
Agreed on
Human involvement and oversight are essential in AI mental health systems
Healthcare professionals also suffer from trauma and stress, making objective AI assistance valuable while maintaining human involvement
Explanation
June acknowledges that healthcare professionals, including herself, suffer from post-traumatic stress disorder and other mental health challenges, which can make their advice subjective rather than objective. She suggests that AI can provide valuable objectivity while still requiring human oversight and involvement in the therapeutic process.
Evidence
She mentioned ‘we, too, suffer from post-traumatic stress disorder. So this is where I will get to artificial intelligence, because as a health care professional, we’ve got feelings and our advice can be subjective and not objective. The machine will be objective.’
Major discussion point
Human-Centered Approach to AI Mental Health Solutions
Topics
Human rights | Sociocultural
Disagreed with
– Doris Magiri
Disagreed on
Role of AI objectivity versus human subjectivity in mental health support
Mary Uduma
Speech speed
147 words per minute
Speech length
612 words
Speech time
249 seconds
Safe spaces and confidentiality policies must be created to encourage openness about mental health struggles
Explanation
Mary emphasizes the need to create secure environments where people feel comfortable sharing their mental health experiences without fear of stigma or data breaches. She advocates for implementing strong confidentiality policies to protect individuals who choose to open up about their struggles.
Evidence
She mentioned the need to ‘create a safe place. A place must be safe for everyone to share’ and ‘implement confidentiality policies’ while referencing how Doris’s data was inappropriately shared with insurance organizations.
Major discussion point
Ethical Access to AI Therapists and Mental Health Support
Topics
Human rights | Sociocultural
Agreed with
– Doris Magiri
Agreed on
Safe spaces and confidentiality are crucial for mental health support
Diverse voices must be encouraged and local solutions should be supported to address cultural sensitivities across different communities
Explanation
Mary advocates for including diverse perspectives and developing localized solutions that respect cultural differences and language barriers. She emphasizes the importance of translating mental health resources into local languages and adapting approaches to fit different cultural contexts.
Evidence
She stated ‘We should encourage diverse voices. She said, Internet is not African language and it may not be any other person’s language. But how do we translate? We should be able to do that.’
Major discussion point
Cultural Sensitivity and Language Barriers in AI Mental Health Tools
Topics
Sociocultural | Human rights
Confidentiality policies should be implemented to prevent unauthorized sharing of sensitive mental health data
Explanation
Mary specifically addresses the need for robust confidentiality policies in response to Doris’s experience of having her mental health data inappropriately shared with insurance companies. She emphasizes that protecting sensitive information is crucial for building trust in AI mental health systems.
Evidence
She referenced Doris’s experience: ‘Her data was shared with an insurance organization. Where are they? So we should also implement confidentiality policies.’
Major discussion point
Privacy and Data Protection Concerns
Topics
Human rights | Legal and regulatory
Agreed with
– Doris Magiri
Agreed on
Safe spaces and confidentiality are crucial for mental health support
Community building and peer support systems are essential components of effective mental health interventions
Explanation
Mary emphasizes the importance of creating communities where people can support each other and share their experiences safely. She advocates for building networks that connect individuals with similar experiences and provide ongoing support beyond individual AI interactions.
Evidence
She stated ‘We should also encourage community building, just like we want to establish community. Gigi Link will be a community, a big community, and that’s what we are talking about.’
Major discussion point
Human-Centered Approach to AI Mental Health Solutions
Topics
Sociocultural | Development
Agreed with
– Doris Magiri
Agreed on
Community-based approaches are essential for effective mental health support
Training and resources should be provided to stakeholders to better support individuals sharing their mental health experiences
Explanation
Mary advocates for comprehensive training programs for all stakeholders involved in mental health support systems. She believes that proper education and resources are necessary to ensure that those providing support are equipped to handle sensitive mental health disclosures appropriately and effectively.
Evidence
She mentioned ‘We should also provide training and resources. As stakeholders, we should train ourselves and support one another.’
Major discussion point
Human-Centered Approach to AI Mental Health Solutions
Topics
Development | Human rights
Agreed with
– Doris Magiri
– June Parris
Agreed on
Human involvement and oversight are essential in AI mental health systems
Agreements
Agreement points
Human involvement and oversight are essential in AI mental health systems
Speakers
– Doris Magiri
– June Parris
– Mary Uduma
Arguments
AI should support rather than replace human connection, as human oversight and empathy are crucial for mental health support
Healthcare professionals should be involved in programming AI systems to add human elements and ensure objective yet empathetic responses
Training and resources should be provided to stakeholders to better support individuals sharing their mental health experiences
Summary
All speakers agree that AI cannot and should not replace human connection in mental health support. They emphasize the need for human oversight, professional involvement in AI development, and maintaining empathetic human elements while leveraging AI’s objectivity.
Topics
Human rights | Sociocultural
Safe spaces and confidentiality are crucial for mental health support
Speakers
– Doris Magiri
– Mary Uduma
Arguments
Personal mental health data shared with AI systems can be distributed to insurance companies and other entities without user control
User rights must be protected, particularly regarding control over personal mental health information
Safe spaces and confidentiality policies must be created to encourage openness about mental health struggles
Confidentiality policies should be implemented to prevent unauthorized sharing of sensitive mental health data
Summary
Both speakers strongly advocate for protecting user privacy and creating secure environments where people can share mental health struggles without fear of unauthorized data sharing or stigma.
Topics
Human rights | Legal and regulatory
Community-based approaches are essential for effective mental health support
Speakers
– Doris Magiri
– Mary Uduma
Arguments
Community-driven approaches are essential, with multiple stakeholders including local leaders, doctors, and policymakers working together
Community building and peer support systems are essential components of effective mental health interventions
Summary
Both speakers emphasize the importance of building communities and bringing together multiple stakeholders to create comprehensive, collaborative mental health support systems.
Topics
Sociocultural | Development
Similar viewpoints
Both speakers recognize the critical need for cultural sensitivity and linguistic diversity in AI mental health tools, acknowledging that current systems are biased toward English-speaking populations and fail to adequately serve diverse communities.
Speakers
– Doris Magiri
– Mary Uduma
Arguments
Mental health concepts are not adequately translated into local languages, creating barriers for non-English speakers seeking help
AI systems perpetuate bias and discrimination, failing to represent diverse populations accurately in generated content
Diverse voices must be encouraged and local solutions should be supported to address cultural sensitivities across different communities
Topics
Sociocultural | Human rights
Both speakers believe that AI should complement rather than replace human involvement in mental health care, with healthcare professionals playing a crucial role in developing and overseeing AI systems.
Speakers
– Doris Magiri
– June Parris
Arguments
AI should support rather than replace human connection, as human oversight and empathy are crucial for mental health support
Healthcare professionals should be involved in programming AI systems to add human elements and ensure objective yet empathetic responses
Topics
Human rights | Sociocultural
Unexpected consensus
AI’s potential objectivity as a valuable complement to human subjectivity
Speakers
– Doris Magiri
– June Parris
Arguments
AI should support rather than replace human connection, as human oversight and empathy are crucial for mental health support
Healthcare professionals also suffer from trauma and stress, making objective AI assistance valuable while maintaining human involvement
Explanation
While Doris primarily criticized AI’s limitations based on her negative personal experience, there was unexpected consensus with June’s perspective that AI’s objectivity could be valuable precisely because healthcare professionals themselves suffer from trauma and may provide subjective advice. This suggests a nuanced view where AI’s limitations and strengths can coexist.
Topics
Human rights | Sociocultural
Overall assessment
Summary
The speakers demonstrated strong consensus on the need for human-centered approaches to AI mental health support, emphasizing privacy protection, cultural sensitivity, community involvement, and the importance of human oversight. All agreed that AI should complement rather than replace human connection.
Consensus level
High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers’ different backgrounds (lived experience, healthcare professional, and community advocate) provided reinforcing rather than contradictory insights, suggesting a solid foundation for collaborative approaches to ethical AI development in mental health.
Differences
Different viewpoints
Role of AI objectivity versus human subjectivity in mental health support
Speakers
– Doris Magiri
– June Parris
Arguments
AI should support rather than replace human connection, as human oversight and empathy are crucial for mental health support
Healthcare professionals also suffer from trauma and stress, making objective AI assistance valuable while maintaining human involvement
Summary
Doris emphasizes that AI should primarily support human connection and that human empathy is irreplaceable, while June argues that AI’s objectivity is valuable specifically because healthcare professionals can be subjective due to their own trauma and emotional involvement
Topics
Human rights | Sociocultural
Unexpected differences
Overall assessment
Summary
The discussion shows remarkably high consensus among speakers, with only one subtle disagreement about the balance between AI objectivity and human empathy in mental health support
Disagreement level
Very low level of disagreement. The speakers are largely aligned on core issues including the need for cultural sensitivity, privacy protection, community-driven approaches, and human oversight. The single area of disagreement is more about emphasis and approach rather than fundamental opposition, which suggests strong potential for collaborative solutions in AI mental health development
Partial agreements
Partial agreements
Similar viewpoints
Both speakers recognize the critical need for cultural sensitivity and linguistic diversity in AI mental health tools, acknowledging that current systems are biased toward English-speaking populations and fail to adequately serve diverse communities.
Speakers
– Doris Magiri
– Mary Uduma
Arguments
Mental health concepts are not adequately translated into local languages, creating barriers for non-English speakers seeking help
AI systems perpetuate bias and discrimination, failing to represent diverse populations accurately in generated content
Diverse voices must be encouraged and local solutions should be supported to address cultural sensitivities across different communities
Topics
Sociocultural | Human rights
Both speakers believe that AI should complement rather than replace human involvement in mental health care, with healthcare professionals playing a crucial role in developing and overseeing AI systems.
Speakers
– Doris Magiri
– June Parris
Arguments
AI should support rather than replace human connection, as human oversight and empathy are crucial for mental health support
Healthcare professionals should be involved in programming AI systems to add human elements and ensure objective yet empathetic responses
Topics
Human rights | Sociocultural
Takeaways
Key takeaways
Current AI mental health tools are inadequate and potentially harmful, providing generic responses like ‘contact emergency services’ rather than culturally sensitive, empathetic support
Cultural and language barriers significantly limit AI mental health tool effectiveness, as mental health concepts are poorly translated into local languages and AI systems perpetuate bias against diverse populations
Privacy and data protection are major concerns, with personal mental health data being shared with insurance companies and other entities without user control or proper informed consent
A human-centered approach is essential – AI should support rather than replace human connection, with healthcare professionals involved in programming and human oversight maintained
Community-driven solutions like KijijiLink are needed, bringing together multiple stakeholders including local leaders, doctors, policymakers, and peer counselors
Mental health stigma remains a global barrier, requiring safe spaces, confidentiality policies, and celebration of those brave enough to share their stories
Ethical guidelines must be established for AI mental health tools, including bias mitigation, continuous improvement, and protection of user rights
Resolutions and action items
Survey launched to gauge perceptions about ethical development of AI for mental health support, with QR code provided for participant feedback
KijijiLink platform being developed as an alternative AI mental health tool with proper guardrails and human oversight
Engagement with over 2,000 people in Nairobi, Kenya and Tanzania through KijijiLink community outreach
Call for stakeholders to create safe spaces, implement confidentiality policies, and provide training and resources
Advocacy for policy changes to reduce mental health stigma and establish clear guidelines for AI mental health tools
Invitation for global community collaboration through KijijiLink website and LinkedIn presence
Unresolved issues
How to implement proper informed consent when users are in mental health distress and may not understand data sharing implications
Specific technical solutions for real-time fact-checking in AI mental health systems
Concrete methods for translating mental health concepts accurately across multiple languages and cultures
Detailed framework for involving healthcare professionals in AI programming while maintaining objectivity
Standardized ethical guidelines and regulatory frameworks for AI mental health tools across different countries
Funding and scaling mechanisms for community-driven mental health solutions like KijijiLink
Methods for addressing healthcare professional trauma while they provide mental health support
Suggested compromises
AI systems should provide supportive responses while connecting users to human peer counselors and qualified mental health professionals rather than replacing human interaction entirely
Combination of objective AI responses with subjective human empathy through healthcare professional involvement in programming
Local community-based solutions that work within existing cultural frameworks while introducing modern mental health concepts
Gradual implementation of ethical guidelines while continuing to develop and improve AI mental health tools
Balance between data protection and necessary information sharing for effective mental health intervention
Thought provoking comments
I’m a suicide survivor and a mental health advocate… In June 2023, I almost committed suicide. I have attempted suicide several times, and I did not have the safety or the safeguards or the cultural sensitivity to understand what I was going through.
Speaker
Doris Magiri
Reason
This opening statement is profoundly insightful because it immediately establishes the speaker’s lived experience as the foundation for the entire discussion. It transforms what could have been an abstract technical conversation about AI ethics into a deeply personal and urgent human story. The vulnerability required to share this publicly adds authenticity and weight to every subsequent point about AI safeguards.
Impact
This comment set the entire tone and direction of the discussion, making it impossible for participants to treat AI mental health support as merely a technical problem. It grounded every subsequent comment in real human consequences and established the speaker’s moral authority to critique existing AI systems.
When I spoke to AI, it told me that there’s nothing wrong. But yet I had to seek help to understand what I was going through… AI made things worse for me.
Speaker
Doris Magiri
Reason
This is a critical insight that challenges the common assumption that AI assistance is inherently helpful or neutral. It reveals how AI can actively harm vulnerable individuals by providing inadequate responses or misdiagnoses, highlighting the gap between AI’s current capabilities and the complex needs of people in mental health crises.
Impact
This comment shifted the discussion from potential benefits of AI to concrete risks and failures, establishing the need for the specific solutions being proposed. It provided evidence for why current AI systems are insufficient and dangerous for mental health applications.
I did not understand what mental health is in my own language. I did not know what was wrong… If it tells me emotion, it does not translate to my language… just because we’re from Africa, all African countries have multiple languages. Our languages are not the same.
Speaker
Doris Magiri
Reason
This insight reveals a fundamental flaw in current AI systems – their linguistic and cultural limitations. It’s thought-provoking because it shows how language barriers in AI aren’t just about translation, but about cultural concepts of mental health that may not exist across languages, making AI potentially useless or harmful for non-English speakers.
Impact
This comment introduced the crucial dimension of cultural sensitivity and linguistic diversity to the discussion, expanding it beyond individual user experience to systemic issues of global AI deployment. It highlighted how AI bias isn’t just about algorithms but about fundamental assumptions about language and culture.
When I contacted emergency services, police came to my door, not once, but twice with guns blazing. I did not know who I was, but I did not open the door because I knew my rights. But what about those people who don’t know their rights?
Speaker
Doris Magiri
Reason
This comment is deeply insightful because it reveals the dangerous real-world consequences of AI’s standard ‘call emergency services’ response. It shows how AI’s seemingly responsible advice can lead to traumatic and potentially deadly encounters, particularly for marginalized communities. The question about others who don’t know their rights adds a layer of social justice concern.
Impact
This comment dramatically illustrated why current AI responses are inadequate and potentially dangerous, providing concrete justification for the alternative approach being proposed. It shifted the conversation to consider broader systemic issues of how emergency response systems interact with mental health crises.
As a health care professional, we’ve got feelings and our advice can be subjective and not objective. The machine will be objective… we should involve health care professionals with experience to add to the subject.
Speaker
June Parris
Reason
This comment is thought-provoking because it presents a nuanced view that acknowledges both the limitations of human healthcare providers (subjectivity, their own trauma) and the potential benefits of AI (objectivity), while still advocating for human involvement in AI development. It avoids the binary of human vs. AI and suggests a collaborative approach.
Impact
This comment added complexity to the discussion by acknowledging that human healthcare providers also have limitations, preventing the conversation from becoming simply anti-AI. It reinforced the need for human-AI collaboration rather than replacement, supporting the overall theme while adding professional healthcare perspective.
You are here not to shrink. You are here to be fully human. You’re here to be fully human. And that is your greatest technology. As humans, we’re the first technology.
Speaker
Doris Magiri
Reason
This closing statement is profoundly insightful because it reframes the entire discussion about AI and technology by positioning humanity itself as the primary technology. It’s a powerful philosophical statement that centers human dignity and experience as the foundation from which all other technologies should be developed and evaluated.
Impact
This comment provided a powerful philosophical conclusion that elevated the entire discussion beyond technical specifications to fundamental questions about human dignity and the role of technology in supporting rather than replacing human connection. It served as a call to action that was both personal and universal.
Overall assessment
These key comments transformed what could have been a standard technical discussion about AI ethics into a deeply human-centered conversation that was both personally moving and professionally rigorous. Doris Magiri’s vulnerable sharing of her lived experience provided unassailable moral authority for critiquing current AI systems, while her specific examples of AI failures gave concrete evidence for the need for change. The cultural and linguistic insights expanded the scope from individual to systemic issues, while the healthcare professional’s perspective added nuance by acknowledging both human and AI limitations. Together, these comments created a discussion that was simultaneously deeply personal and broadly applicable, making a compelling case for community-centered, culturally sensitive AI development in mental health. The conversation successfully bridged the gap between technical AI development and human needs, making it clear that effective AI mental health support requires not just better algorithms, but fundamental changes in how we approach AI development to center human dignity and cultural sensitivity.
Follow-up questions
How can AI check facts and ensure it’s giving the right diagnosis for mental health conditions?
Speaker
Doris Magiri
Explanation
This addresses the critical need for accuracy in AI-driven mental health support, as misinformation could be harmful to vulnerable users
Where does personal mental health data go when shared with AI, and who has access to this data?
Speaker
Doris Magiri
Explanation
Privacy and data security are crucial concerns when dealing with sensitive mental health information in AI systems
How can informed consent be obtained from someone in mental distress who may not know their own name or be in a clear state of mind?
Speaker
Doris Magiri
Explanation
This raises important ethical questions about consent capacity during mental health crises and how AI systems should handle such situations
How can AI be made culturally sensitive and available in multiple local languages beyond English?
Speaker
Doris Magiri
Explanation
Cultural and linguistic barriers prevent effective mental health support, requiring localized AI solutions that understand diverse cultural contexts
How can bias in AI be addressed to ensure fair representation and treatment across different demographics?
Speaker
Doris Magiri
Explanation
AI bias can perpetuate discrimination and provide inadequate support for underrepresented groups in mental health care
What specific training should be provided to coders and developers working on mental health AI systems?
Speaker
Doris Magiri
Explanation
Technical developers need specialized knowledge about mental health, including concepts like distress tolerance, to build effective and safe AI systems
How can stakeholders encourage openness about mental health in cultures where there is stigma and preference for privacy?
Speaker
Mary Uduma
Explanation
Cultural barriers to mental health discussion need to be addressed to make AI mental health tools accessible and effective across different societies
How can healthcare professionals with mental health experience be better integrated into AI development to add human elements to programming?
Speaker
June Parris
Explanation
Professional clinical expertise is needed to ensure AI systems provide appropriate and responsible mental health advice
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.