Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Caio Machado
In the discussion about the impact of artificial intelligence (AI), several key areas were highlighted. The first area of focus was the importance of data quality, model engineering, and deployment in AI systems. An example provided was the COMPAS case, where an algorithmic tool used for risk assessment began being used to determine the severity of sentences. This case illustrates the potential consequences of relying on AI systems without ensuring the accuracy and quality of the underlying data and models.
Another concern was how AI tools become the infrastructure for accessing information. It was noted that, similar to how Google search results differ based on the keywords used, it becomes harder to verify and compare information when it is presented as a single, compact answer by a chatbot. This raises questions about the reliability and transparency of the information provided by AI systems.
The lack of accountability in AI systems was identified as a major issue that can contribute to the spread of disinformation or misinformation. Without proper proofreading mechanisms and quality control, distorted perceptions of reality can arise, leading to potential harm. It was argued that there should be a focus on ensuring accountability and fairness at the AI deployment level to mitigate these risks.
Furthermore, the discussion highlighted the need for more inclusive and ethical approaches to handling uncertainty and predictive multiplicity in AI models. It was emphasized that decisions regarding individuals who are uncertain or fall into multiple predictive categories should not be solely made by the developing team. Instead, there should be inclusivity and ethical considerations to protect the rights and well-being of these individuals.
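To make "predictive multiplicity" concrete, the sketch below (an illustrative assumption, not material from the session; it uses synthetic data and scikit-learn) trains two equally plausible models that reach near-identical accuracy yet disagree on a slice of individuals:

```python
# Hypothetical illustration of predictive multiplicity: two models with
# near-identical accuracy that nonetheless disagree on some individuals.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
models = []
for _ in range(2):
    # Bootstrap resampling stands in for two "equally good" training runs.
    idx = rng.choice(len(X_tr), size=len(X_tr), replace=True)
    models.append(LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx]))

accs = [m.score(X_te, y_te) for m in models]
preds = [m.predict(X_te) for m in models]
conflict = np.mean(preds[0] != preds[1])

print(f"accuracies: {accs[0]:.3f} vs {accs[1]:.3f}")        # nearly identical
print(f"models disagree on {conflict:.1%} of individuals")  # the contested slice
```

Which outcome those contested individuals receive is precisely the decision that, it was argued, should not rest with the development team alone.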
Policy, regulation, and market rules were mentioned as important factors to address in order to limit the circulation of deepfake tools. Evidence was provided for this, citing the common use of deepfake voices to run scams over WhatsApp in Brazil. It was argued that effective policies and regulations need to be implemented to tackle the challenges of deepfake technology.
Promoting digital literacy and increasing traceability were seen as positive steps towards addressing the challenges posed by AI. These measures can enable individuals to better understand and navigate the digital landscape, while also enhancing accountability and transparency.
In conclusion, it was acknowledged that there is no single solution to address the impact of AI. Instead, a series of initiatives and rules should be promoted to ensure the responsible use of AI and mitigate potential harms. By focusing on data quality, accountability, fairness, inclusivity, and ethical considerations, along with effective policies and regulations, society can navigate the challenges and reap the benefits of AI technology.
Audience
Advancements in AI technology have led to the development of systems capable of mimicking human voices and generating messages that are virtually indistinguishable from those produced by actual individuals. While this technological progress opens up new possibilities for communication and interaction, it also raises concerns about the potential misuse of generative AI for impersonation in cybercrime.
The ability to mimic voices and generate realistic messages allows malicious actors to deceive individuals in various ways. For example, they can impersonate someone known to the target, such as a relative or a friend, to request money or engage in other forms of scams. This poses a significant threat, as victims can easily fall for these manipulated and convincing messages, believing them to be genuine.
Given the potential harm and impact of the misuse of generative AI for impersonation in cybercrime, there is a growing consensus on the need for regulation and discussion to address this issue effectively. It is crucial to establish guidelines and frameworks that ensure the responsible use of AI technology and protect individuals from deceptive practices.
By implementing regulations, policymakers can help deter and punish those who misuse generative AI for malicious purposes. This includes imposing legal measures that specifically address the impersonation and fraudulent use of AI-generated messages. Additionally, discussions among experts, policymakers, and industry stakeholders are essential to raise awareness, share knowledge, and explore potential solutions to mitigate the risks associated with the misuse of AI technology.
The concerns surrounding the misuse of generative AI for impersonation in cybercrime align with the Sustainable Development Goals (SDGs), particularly SDG 9 (Industry, Innovation, and Infrastructure) and SDG 16 (Peace, Justice, and Strong Institutions). These goals emphasize the importance of promoting innovation while ensuring the development of robust institutions that foster peace, justice, and security.
In conclusion, while advancements in AI technology have brought about remarkable capabilities, they have also introduced new challenges regarding the potential misuse of generative AI for impersonation in cybercrime. To address these concerns effectively, regulation and discussion are crucial. By establishing guidelines, imposing legal measures, and fostering open dialogues, we can strive for the responsible use of AI technology and protect individuals from the harmful consequences of impersonation in the digital sphere.
Heloisa Candello
Generative AI and large language models have the potential to significantly enhance conversational systems. These systems possess the capability to handle a wide range of tasks, allowing for parallel communication, fluency, and multi-step reasoning. Moreover, their ability to process vast amounts of data sets them apart. However, it is important to note that there is a potential risk associated with the use of such systems, as they may produce hallucinations and false information due to a lack of control over the model.
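One mitigation Candello describes later in the transcript is restricting the system to a closed scope, answering only from a known client corpus. A minimal sketch of that policy follows (the corpus, matching method, and threshold are illustrative assumptions, not IBM's implementation):

```python
# Hedged sketch of closed-scope answering: reply only from a fixed corpus
# and refuse out-of-scope prompts instead of generating free text.
from difflib import SequenceMatcher

CORPUS = {
    "what is my business health score": "The score summarises your questionnaire answers on revenue, expenses and growth.",
    "how is my advertising doing": "Advertising reach is not tracked yet; it could be added to the questionnaire.",
}

def answer(prompt: str, threshold: float = 0.6) -> str:
    # Match the prompt against known questions; below the threshold, refuse.
    best, best_score = None, 0.0
    for question in CORPUS:
        score = SequenceMatcher(None, prompt.lower(), question).ratio()
        if score > best_score:
            best, best_score = question, score
    if best_score < threshold:
        return "I can only answer questions about your business questionnaire."
    return CORPUS[best]

print(answer("How is my advertising doing?"))  # in scope: corpus answer
print(answer("Tell me about dinosaurs"))       # out of scope: safe refusal
```

The design choice is the trade-off Candello names: a closed scope gives up open-ended fluency in exchange for answers that can always be traced back to the client's own corpus.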
In order to ensure that vulnerable communities are not negatively impacted by the application of AI technologies, careful consideration is required. AI systems have the capacity to misalign with human expectations and the expectations of specific communities. Therefore, transparency, understanding, and probe design are crucial for mitigating any harmful effects that may arise. It is essential for AI systems to align with user values, and the models selected should accurately represent the data pertaining to their intended users.
In addition, the design of responsible generative AI systems must adhere to certain principles. This will help to ensure that the models are built in a way that is responsible and ethical. By considering productivity, fast performance, speed, efficiency, and faithfulness in the design of AI systems, their impact on vulnerable communities can be effectively addressed.
Overall, exercising caution when utilizing generative AI and large language models in conversational systems is essential. While these systems have the potential to greatly improve communication, the risks of producing hallucinations and false information must be addressed. Additionally, considering the impact on vulnerable communities and aligning user values with the selected models are key factors in responsible AI design. By following these principles, the potential benefits of these technologies can be harnessed while minimizing any potential harm.
Roberto Zambrana
The discussion explores multiple aspects of artificial intelligence (AI) and its impact on society, education, ethics, regulation, and crime. One significant AI tool mentioned is ChatGPT, which rapidly gained popularity and attracted hundreds of millions of users within weeks of its launch last year. This indicates the increasing penetration of generative AI in society.
The potential of AI is seen as limitless and exciting by students and learners. Once users realized the possibilities of AI, they started using it for various activities. The versatility of AI allows it to be combined with other forms of AI, enhancing its potential further.
However, there are conflicting views on AI. Some individuals perceive AI as harmful and advocate for its avoidance, while others express enthusiasm and desire to witness further advancements in AI technology.
The ethical and regulatory discussions surrounding AI have emerged relatively recently, with a focus on addressing the evolving challenges and implications. The ethical aspects of AI usage and the establishment of a regulatory framework have gained attention within the past five years.
In the academic field, AI has brought about drastic changes. Many individuals are utilizing AI, potentially even for cheating or presenting work not developed by students themselves. This development has led to teachers and students organizing webinars and seminars to share their knowledge and experiences with AI.
The prohibition of AI tools is not considered a solution by the speakers. Instead, they advocate for adapting to new skills and tools that AI brings. They draw parallels with the emergence of pocket calculators, which necessitated adapting and evolving curricula to incorporate these tools. As AI tools reduce time and effort on various tasks, students need to acquire new skills pertinent for the future.
It is emphasized that regulation alone cannot resolve all AI-related issues. AI, particularly generative AI, can be employed for harmful purposes like mimicking voices, and existing laws may not be equipped to address these new possibilities. Hence, a comprehensive approach encompassing both regulation and adaptation to the new reality of generative AI is imperative.
In conclusion, the discussion highlights the increasing impact of AI on society, education, ethics, regulation, and crime. The rapid penetration of generative AI tools like ChatGPT signifies the growing influence of AI in society. While AI holds unlimited potential and excites students and learners, there are conflicting views on its impact, with concerns about its harmful effects. The ethical and regulatory discussions around AI are relatively recent. The academic field is experiencing significant changes due to the adoption of AI, necessitating the acquisition of new skills by students. Prohibiting AI tools is not the solution; instead, adapting to the new skills and tools that AI offers is necessary. Regulation alone is insufficient to address AI-related challenges, as AI can be misused for harmful purposes. Overall, a well-rounded approach encompassing both regulation and adaptation is needed to navigate the complex landscape of AI.
Reinaldo Ferraz
The networking session on generative AI commenced with a diverse panel of speakers who shared their insights. Heloisa Candello from IBM Research and Caio Machado from Instituto Vero and Oxford University participated remotely, while Roberto Zambrana and Matheus Petroni were physically present. Each speaker brought a unique perspective to the discussion, addressing various aspects of generative AI.
The session began with Heloisa Candello expressing her appreciation for being a part of the esteemed panel. She highlighted the significance of generative AI for the wider community and shared her thoughts on its potential impact. Despite some initial technical issues with the microphone, Heloisa’s remarks eventually became audible to the audience.
Following Heloisa’s presentation, Roberto Zambrana offered his views from the technical community. He emphasized the practical applications and benefits, shedding light on the potential for innovation and growth. Roberto’s insights provided valuable perspectives from a technical standpoint.
Next, Caio Machado provided a different viewpoint, representing civil society and academia. Caio discussed the societal implications of generative AI and considered its impact on various sectors. His presentation drew attention to ethical concerns and raised questions about the involvement of civil society in the development and deployment of AI technologies.
Matheus Petroni then shared his insights, further enriching the discussion. Matheus contributed his thoughts and experiences related to generative AI, offering a well-rounded understanding of the subject.
By incorporating inputs from diverse stakeholders, the session presented a comprehensive view of generative AI. The speakers represented various sectors, including industry, academia, and civil society. This multidimensional approach added depth to the discussions and brought forth different perspectives on the topic.
Following the initial presentations, the audience had the opportunity to ask questions, albeit briefly due to time constraints. Only one question could be addressed, but this interactive engagement facilitated a deeper understanding of the topic among the participants.
In summary, the session on generative AI successfully united speakers from different backgrounds to explore the subject from multiple angles. Their valuable insights stimulated critical thinking and provided knowledge about the potential implications and future directions of generative AI. The session concluded with gratitude expressed towards the speakers and the audience for their participation and engagement.
Matheus Petroni
Advancements in artificial intelligence (AI) have the potential to revolutionise the field of usability and enhance user engagement. One prime example of this is Meta’s recent introduction of 28 AI personas modelled after public figures. These AI personas provide users with valuable advice and support, addressing usability challenges and improving user engagement. This development is a positive step forward, demonstrating how AI can bridge the gap between technology and user experience.
However, there are potential negative implications associated with AI chatbots. Users may inadvertently develop strong emotional relationships with these AI entities, which could be problematic if the chatbots fail to meet their needs or if users become overly dependent on them. It is crucial to carefully monitor and manage the emotional attachment users develop with AI chatbots to ensure their well-being and prevent harm.
In addition to the impact on user engagement and emotional attachment, the increase in AI-generated digital content poses its own challenges. With AI capable of creating vast amounts of digital content, it becomes imperative to have tools in place to discern the origin and nature of this content. The issue of disinformation becomes more prevalent as AI algorithms generate content that may be misleading or harmful. Therefore, improvements in forensic technologies are necessary to detect and label AI-generated content, particularly deepfake videos with harmful or untruthful narratives.
To address the challenges posed by AI-generated content, promoting a culture of robust fact-checking and content differentiation is vital. Presenting essential information alongside user interfaces can facilitate this process. By providing users with transparent and reliable information, they can make informed decisions about the content they consume. This approach aligns with the sustainable development goals of peace, justice, and strong institutions.
In conclusion, while AI advancements hold enormous potential for enhancing usability and user engagement, there are also potential risks and challenges associated with emotional attachment and AI-generated content. Carefully managing the development and deployment of AI technologies is essential to harness their benefits while mitigating potential drawbacks. By promoting transparent and informative user interfaces, investing in forensic technologies, and fostering a robust fact-checking culture, we can unlock the full potential of AI while safeguarding against potential negative consequences.
Session transcript
Reinaldo Ferraz:
Hello, good afternoon. We are going to start our networking session about generative AI, and for this session we will have different speakers contributing to our discussion here. We will have two online participants, Heloisa Candello from IBM Research and Caio Machado from Instituto Vero and Oxford University. We have Roberto Zambrana present in person, and also Matheus Petroni. So I will invite the online speakers to start our discussion. So Heloisa, could you please start your initial remarks?
Heloisa Candello:
Thank you, Diogo. I’m going to start. Hello everyone. I’m going to share my screen, and then we can start. Thank you so much for the introduction. One second. So I’m Heloisa Candello, a research scientist and manager at IBM Research Brazil. I lead a group called Human-Centered Responsible Tech, and we have several projects whose aim is to have social impact using AI. For the last eight years I have been conducting research at the intersection of HCI and AI, particularly in conversational systems. This picture illustrates one of my current projects, which aims to measure the social impact of financial initiatives using AI. In the area of conversational systems, we had several projects to understand the perception of text-based machine outputs. This is a series of examples of the main challenges we have been studying for a long time, and of how, with large language models, those challenges are amplified, and how we can take care of issues that existed before but that, with the new technologies, demand more attention and deeper thinking about their impact. For example, the first one I was mentioning was in 2017, when we measured how typographic text was perceived by humans in chatbots. We did a kind of Turing test to understand the humanness of machines. Then we worked with multi-agents and multi-bots, and how people collaborated with agents representing financial products to make investment decisions. With the same platform, we did an art exhibition where bots talked to each other and humans talked to the bots. This exhibition took place in a cultural venue in Brazil, using the same platform as the multi-bots we had before. In this one, we had three characters, Capitu, Bentinho, and Escobar, from a famous novel in Brazil. We measured how audiences perceived the interaction of those chatbots on the table: people typed, and a projector displayed the answers, which were designed and drawn on the table. We also looked at engagement, whether the chatbots asked people for their names and used direct address. We also did a piece where people looked at the paintings and actually talked to them, asking, oh, what is this yellow color? And then the system answered. So we can think about that now that we are going to reflect on prompts as well. Last year, we launched an exhibition in a science museum in Brazil where children can teach the robots: they give examples of how humans talk and similar phrasings of the same statement, so the robots can learn with them. We also have a kit for teachers to work with in schools. And finally, the last one is one of my recent studies, done in collaboration with a big bank in Brazil. We studied how people train chatbots in banks. Those people were the best employees from the call centers, and they trained Watson, the chatbot there. It is a room full of people making sure the bot will understand the clients, and there is a lot of articulation work happening there.
So how do the curators interact with each other to create those chatbot answers? We have a screen full of challenges that we have researched, and that many people research in the HCI community. For example, errors: how can we minimize and mitigate errors? Turn-taking, if you have more than one chatbot. The problem of interface humanization and how people can be deceived by bots. Scope visibility in conversational user interfaces: before, if the chatbot did not know how to answer, it would say, I don’t understand, or, please, can you repeat your question? With the new technologies this is not the behavior anymore, because the system always answers something. Malicious uses as well, and the resolution of ambiguities, something that the curators I just mentioned used to do every day. Transparency was also an issue, as were discrimination, harms, and bias, and we are going to talk more about this in this session. So with generative AI and the use of large language models, what changed? As I mentioned in the beginning, the scale is much higher: the ability to ingest and process huge amounts of data is enormous compared to the conversational systems we had before. The same model can be adapted to multiple tasks, which brings automation and also maybe different contexts. For example, we had a client that worked with cars, and for each car they had to build a different chatbot. With the new models we can use the same parameters and just change the model of the car, using the same corpus. Emergence as well, and scale: parallel communication, fluency, and multi-step reasoning, and certain models can keep learning. I’m going to focus more on conversational systems, the main area I come from. So now we can think about all those challenges, plus the additional challenge of hallucination, for example, and false and harmful language generation due to the lack of model control and safeguards. That is why we are now creating several platforms to control and fine-tune those models. Misalignment of expectations: you have the human expectation and what the model can actually deliver, so it may generate content that is not aligned with human expectations or the expectations of certain communities. We are going to talk in a little bit about vulnerable communities, so we can understand a little better which kinds of values we should look at. And lack of transparency: it is difficult to inspect, because of the quantity of data that is there and also how the algorithm was made. For example, in the exhibition I mentioned, with the three bots and the three heads people could interact with, if people typed something the bots did not recognize, the characters of this book, and it was a closed scope, not an open scope, just phrases from the book, then one of the chatbots would say, more coffee, or something like that. But in the case of generative AI you have hallucinations, and the answers are more reactive than proactive.
We experienced in some projects that conversational interfaces that are more proactive produce fewer errors, because it is more like a scripted conversation. Now this is not the reality anymore; the system is reactive to prompts. And if you design the prompt as a way to insert information, asking the system, based on large language models, what you want with more detail, you may increase the chance that the system will answer what you want. Automation, we talked about that, large data sets as well, and harmful language. In the case I showed you, which was a public space, we had characters that were all women, and quite a lot of unsuitable language was typed to the bots. Everything typed on the tablet did not actually show on the table, and the chatbots only answered with phrases of the book, but we saw it in the corpus, because we analyzed the corpus as well; we published a paper on that too. So harmful language is there, it is inherent; now it can just be more evident. Going on from that, I also mentioned this project we did. You can see it is from 2017, and I brought it on purpose to show that it is the same issue, in the sense that now we have conversational systems that are more eloquent and can deceive people. In this study, people looked at a conversational system with a financial agent, and they had to say whether the financial agent was a human or a machine, and why. We saw that when people received a text in a script typeface, like a handwriting typeface, they said: oh, it’s a machine anyway, but this one wants to deceive me. So most people said they were machines. What is the limit of being human? This is one thing we can think about. One of my favorite books is The Most Human Human, by Brian Christian, and I will come back to him later; he studied the Turing test, and instead of looking at machines that pretend to be humans, he looked at the qualities that humans should have to be human. What are the qualities that describe a human? Maybe we should pay more attention to that. OK, so when we look at transparency, and at whether it is a human or not, maybe we should think about communities for whom access to education, to AI education, to technology education, is not so close. For example, this is a community in Brazil of low-income, small-business women. They have access to technology because they have mobile phones; actually, they are paying for them in several installments. And their contact is through WhatsApp, for example. So we did an experiment with them, and we asked them what questions an AI needs to answer in order to be used effectively and trustworthily, to respect human rights, to be democratic, and so on. So we asked that. And what is the output of this system? These women are part of a financial education course run by an NGO. When they enter the course they answer a questionnaire; when they leave the course they answer another; and after three and after six months they answer follow-up questionnaires. So we worked with the NGO, took those questionnaires, and redesigned the questions to put them in a chatbot, and the women answered it.
And while they were answering, they were answering about their business: questions related to women’s empowerment, to business growth, and to revenue. But the main purpose of the system and the questionnaires was to extract indicators to measure the social impact of the program. We tested it with 70 women, and as an output they could see the health of their business on a scale. But when we tested it, several of them got a zero, for example. And why zero? One of them said: this result means nothing to me; it is not a zero that will keep me engaged in my business. The index was zero because my business is not really running. I’m not going to say it’s dying; I’m going to say it’s being born. I would like to know how my advertising is doing. About the zero, this is important because for some of them it was unexciting and very frustrating to see zero, and we needed to understand why. For one of them, the ex-husband paid the rent and she counted that as an expense, but in the end she had profits as well. Things that seem so small make a lot of difference, because they are intrinsic to the context. Other women wanted the chatbot to tell them things like: I would like to know how my advertising is doing, whether I am on the right path, and what the recommendations are. We asked about their vision of the future, and one said: ah, I like this, I want to consider answering this, because it makes me reflect. And it means, for example: ah, the score I can improve. Yeah, but I don’t have a structure yet. So this is delicate, because maybe they are not at a stage where they feel good about that. There were also some misunderstandings about terms; for example, the words for education and politeness are similar in Portuguese. And religion is an interesting fact. We asked the NGO, should we take this question out? And they said religion is one of the main things the women disagree about, because they are at more or less the same economic level, the same status, but when we put people from different religions in the same WhatsApp group, we usually have friction there. OK. So how can we legitimate what the chatbot answers? Maybe in the future, and this is a provocation paper that we wrote, we could have a score for each kind of generative system, and with this score we could see how legitimate it is, how transparent it is, and where the data came from, right? So in our project we used closed scopes, closed domains, to avoid hallucinations, or at least mitigate them a little, because then at least the corpus comes from the clients. And the third point I would like to mention, I’m almost finishing, is the expectation alignment I mentioned: if we have generative systems, how can the values of people, the values we collected in the field, be aligned with the values of other stakeholders as well? And the AI is there in the middle.
So here’s an example of a call center. We expect productivity, fast performance, speed, efficiency, faithfulness, and we need all of that. But then, when we look at the model, we need to choose models aligned with that: a model that reduces hallucinations and that has data representative of the public that is going to use it, right? So I’m going to end, yes, with a joke, and with this: we have some design principles we can think about. How can we build generative AI systems in a responsible way? So thank you so much.
Diogo Cortiz:
Thank you, Heloisa, for your great presentation and for sharing your wonderful work with us. So we had a view from the industry. Now I invite Roberto to bring a perspective from the technical community on these topics. So please, Roberto. Thank you very much.
Reinaldo Ferraz:
I think you should use the mic for the online people. Thank you.
Roberto Zambrana:
Thank you very much, Diogo. It’s a pleasure for me to be with this distinguished panel. Sorry? It’s okay, right? It’s listening, okay. I think it will be nice.
I totally agree with Heloisa and her intervention. So I would like to shift my comments a little, to how generative AI specifically emerged, since we have had artificial intelligence for many years now in different forms: machine translators, image recognition software, and other forms of AI. But I think one game changer indeed was ChatGPT. And it’s not because there aren’t other tools; there are many. But this one was, I will say, the first that was presented, I think in October last year. And in a matter of maybe weeks, many people started to use it, were thrilled using this tool, and then spread the word. And in, I don’t know, maybe weeks, it passed from thousands of users to hundreds of millions of users. So this one, indeed, I would say, is a particular phenomenon to analyze. I don’t remember any other tool that penetrated society so rapidly. And I will say there is a factor that perhaps contributed to the use of this tool. It’s not just the fact that many people had already used different bots. In this case, initially, many people were experimenting; but once they realized the potential of this tool, everyone started to use it for many, many other activities. I mean, formal activities, where now, in some cases in the academic world, we can even talk about cheating, or presenting elements that are not necessarily developed by the students, learners, et cetera. But I will say that many people felt that this tool was really without limits. And again, it can be applied in different ways, now combined with some other forms of AI. Actually, there are people who are even making money now; they found this as a way of making money. What I can talk about, my particular perspective, is related to the technical side and to academia, because I have been a teacher for the last 20 years, more or less, at the university, mostly in IT-related subjects. And as happened in some other areas, in our case the teachers and the students, when they learned about this tool, of course they were thrilled. And I would like to comment on this story, because perhaps this happened in other places too, but in my country, and maybe not only in my university, the people who encountered this tool wanted to formally tell others about it, and started to organize webinars, seminars, and things like that, in a way trying to present themselves as experts in this field. Many people started to feel like that just because they used it and discovered this fantastic tool, and they wanted everyone to know. So I think that’s another important part that we need to reflect on.
The other comment I wanted to make is that, yes, AI has been with us for several years now, but the ethical aspects and the regulatory framework have been discussed, I will say, maybe for the last five years. And I can witness to that, because I was a member of the MAG during the past three years, last year was my last year as a MAG member, and I had a chance to see how the discussion regarding the regulation of AI was evolving, and how it reached the academic sector, regarding all these possibilities, or even the negative impacts this may cause. And I think we are in that moment now, back in Bolivia, and perhaps in the region, or even in the world, with, again, different sides of the coin. There are people who feel that this is like the devil and we should try to avoid it, maybe even prohibit the use of this tool, because it is teaching bad things to our learners, because the learners are trying to pass for persons they are not, et cetera. You understand my point regarding this. And then, of course, there is the other side, which would actually love to have this even more evolved. And when we talk about regulation, and about adjusting policies that will apply even in the academic sector, I think that shouldn’t be the way. I always like to give this example, knowing that we should respect the differences between the scenarios: if we remember back in the 60s and 70s, maybe no one here is going to remember that moment, we were using the slide rule, and of course one of the skills we required from our students was to know how to manage that kind of tool. But then pocket calculators appeared, so immediately it was important to adjust the curriculum designs in the different areas, to evolve what our learners needed to learn. And I think that’s the kind of reflection we need to do at the university. It’s not about prohibiting the use of these kinds of tools, but about adjusting the new skills we need and want our students to have in the near future, knowing that we now have tools like this one that are going to reduce a lot of the time of many of the activities that our students can do, and of course our teachers, and of course the academic community as a whole. So I will stop there, thank you very much.
Reinaldo Ferraz:
Thank you, Roberto. So we had views from industry and from the technical community. Now I invite Caio Machado to give us a perspective from civil society, but also from academia. Welcome, Caio, and the floor is yours.
Caio Machado:
Thank you very much. It’s great seeing all of you. I’m going to quickly put up a slide with my contacts, but I won’t use slides for my speech; it’s just to have an opportunity to network with the folks over in Japan. So if anyone wants to reach out, I’d be glad to continue our conversations later on. I hope you are seeing the slide okay. Yeah. Can I get a nod? Yeah, you can see it. That’s perfect. Thank you. Great. So, my concerns when we’re talking about generative AI and the title of our talk, Synthetic Realities. Let’s lay down a premise here. I think of issues related to artificial intelligence in three major layers: the data, its quality, the diversity of the data set, whatever is used to train and develop the models; the engineering of the models themselves; and a final layer, which is deployment. That’s when we take a tool, throw it into society, and it behaves in ways that are unexpected. I think a great case for that, and it’s kind of a cliché case, it’s an algorithmic tool, not even AI from what I understand, is the COMPAS case, where algorithmic tools were used in certain states in the United States. On the one hand, the algorithm was biased, so we do have an issue in the bottom layers in terms of the data and the development of that tool. But also, judges started using something that was intended to attribute risk to defendants to determine the severity of the sentences. So what was intended for one purpose, once it was thrown out into the world, people incorporated it and it was embedded into society in different ways. And that is harder for us to foresee, and I think that is an issue much greater than we were discussing. I do agree that hallucination, error, all of this is a very severe problem, but we’re not thinking as much about what happens once the AI is out in the world. For example, I know that lawyers and judges around the world are using generative AI. What is the impact of that, when a judge decides to pay $20 a month to use ChatGPT and all of a sudden ChatGPT is deciding the cases and setting precedents? So I think that’s a big concern. My second concern, again addressing the issue of synthetic realities, is not so much the fabrication of extremely realistic content, which is an issue, I acknowledge, deepfakes and so on, but I think that will be addressed in the midterm with new mechanisms of developing trust. What I’m really concerned about is how these tools become the infrastructure of access to information. The same way we use Google to access information today: you get 10 results, and depending on the words you put in, you get different results for
I think there’s still little debate on how we can ensure at the development level means of accountability and fairness at the deployment level. So metrics, ways of keeping people from using the AI tools for unintended purposes. This is a more conceptual proposition. I don’t see any, I’m throwing this issue to the engineers. As a lawyer, I can throw it to the engineers that you think of solutions. But this was something I was discussing with some folks here at the School of Engineering is how can we think of fairness metrics and somehow have that dialogue with the user and have the user think through how the AI is being deployed. And that also speaks to what was mentioned before. on AI literacy and tech literacy in general. And finally, just to point to some of the work that we’re doing right now, academically, I’m at Oxford. But right now, I’m also a fellow at the School of Engineering here, learning a lot with the engineers. And we’re thinking a lot about the uncertainty around different models of machine learning, where, OK, you might have 95% of accuracy across different models. But then you have that 5%, where you’re getting predictive multiplicity. And what do you do with these people? And who has the legitimacy to decide what should be done with these people? So you can look at the work from Professor Flavio Calmon, Lucas Monteiro. They’re really going off into this topic. And we’re working together. And for me, the fundamental question here is, OK, there’s a whole section that algorithmic tools, a section of the population, or users, or you name it, of the data, that the algorithmic tools don’t know what to do with. And who should be able to decide? And so far, obviously, this is being answered by the team developing those models. But once this is deployed in society, the effects aren’t restricted to code. These have social ethical effects, which perhaps should be discussed in other spaces as well. With that, I’ll conclude my speech. And thank you once again for having me. Please feel free to reach out so we can continue the conversation. Thank you, Caio.
Reinaldo Ferraz:
Thank you, Caio. We have five more minutes, so I invite Matheus to give his contribution to the session.
Matheus Petroni:
Amazing. Thank you so much, Diogo. So hello, everyone. My name is Matheus Petroni. I’m a master’s degree student at the Pontifical Catholic University of São Paulo, in the field of design, human-computer interaction, and artificial intelligence. I’m also actively engaged as a user experience designer in the Latin American industry. So I will add just a few things here to bring more of this user-centric perspective, and to not repeat the other remarks, with which I am aligned. On one hand, there are plenty of expectations concerning the potential benefits of these advancements. Even with content generated by AI being considered synthetic realities, the proximity to users’ actual experiences is so striking that it has the potential to overcome longstanding challenges within the usability domain, such as the learning curve associated with new digital technologies, the enhancement of engagement through personalized experiences, and a more accessible way to obtain knowledge. This potential value extends to diverse domains, such as education, health care, well-being support services, digital communications, and even customer support. The human-like AI techniques showcased in specific chatbots serve as a prime illustration of this trend. Meta’s recent introduction of 28 AI personas modeled after well-known public figures is a case in point. The aim is to provide users with valuable advice within the realm of each celebrity’s expertise. In doing so, it significantly broadens the scope of engagement and diversifies the ways through which individuals can access digital support to address their needs. On the other side, despite the promises these innovations hold, numerous concerns deserve our attention before we take further steps. In a world where a significant part of digital content could be created partially, or even entirely, by AI in the next few years, providing tools for users to discern the origin and
In conclusion, I believe that there are huge room for improvements in our forensic technologies to detect and label content created by generated AI, sometimes to indicate the user about its nature, sometimes to prevent the dissemination of content that threatens human rights, democracy, or propagates misinformation. As example, the same case of artists being used for personalized chatbots as Meta launched could be applied for artists performing deep fake videos with harmful or untruthful narratives, a phenomenon that is increasingly prevalent. Say that, I invite you to reconsider the significance of presencing essential informations alongside the user interface, promoting a robust culture of fact-checking and content differentiation. These emerging challenges require collective efforts from government, society, and research to safeguard democratic values and individual freedom in the face of this rapidly evolving landscape. So that’s it for me, thank you so much.
Reinaldo Ferraz:
Thank you, Matheus. So we had inputs from different stakeholder groups, and now we have time for just one question, if someone wants to ask. Yes, please, you can go to the mic.
Audience:
Oh, the mic is on. Okay, thank you very much. My name is Valerius, I’m representing KCGI, a university here, and I’m a master’s degree student. My question, and maybe one point I would like to speak about, is how generative AI can be used for crime, and about cybersecurity. As we all know, we can now generate images and chat with LLMs. My thinking is that now we can also mimic voices, and what is stopping bad people, people who really want to do harm, from using those tools, for example, to generate somebody’s grandma’s voice, or to generate my voice and call my parents requesting money, or something closely related to that? I just think this is a point that needs further discussion and maybe regulation: how are we going to deal with this possible crime? In my eyes, this is going to grow extremely fast in the next couple of years, when the algorithms become much more efficient and the output will be barely recognizable by human beings. Thank you.
Diogo Cortiz:
Thank you. So, Roberto, do you want to start answering?
Roberto Zambrana:
Sure, I will go back to my previous point. I will say that it is really, really hard to think that regulation
is going to resolve everything; rather, we will have to come up with some creative ways of dealing with those kinds of examples. Everything needs to change now; we need to adjust to this new reality. I can talk about the academic area, I’m not an expert in crime, of course, but just to take an example: it will now be hard to consider that an image and a voice are concrete evidence of a crime, due to these new possibilities. And that is currently fixed in our laws. So that’s an example of the things that need to be changed based on that reflection, and I will say that this will have to happen in all the different areas, thank you.
Diogo Cortiz:
Okay, so Caio, please, the floor is yours.
Caio Machado:
Yeah, just to quickly complement: that’s already a reality, for sure in the US, for sure in Brazil. The use of deepfake voices to run scams over WhatsApp in Brazil is very, very common and becoming even more common. So that’s something we need to deal with. I think we can look back at the knife: we have had knives around for thousands of years, and still we created laws, and that hasn’t prevented people from stabbing each other, meaning that the tools are around and will be used for good and for bad. I think that policy, not only crime but regulation, market regulation, all sorts of rules we can think of, needs to be addressed to limit the circulation of these tools in whatever context they’re used for criminal purposes, to increase traceability, and, through public policy, to promote digital literacy, sorry, it’s late here, and to get people to mistrust these audios and have other means of checking. So it’s more of an ecosystem solution, let’s say, than passing one rule that will outlaw the misuse of deepfakes, voice, video, you name it. We don’t have a silver bullet; it’s a series of initiatives and rules that we need to promote.
Reinaldo Ferraz:
Thank you, Caio. So our time is over. I’d like to thank all the speakers and the audience, and the session is closed. Thank you.
Speakers
Audience
Speech speed
157 words per minute
Speech length
241 words
Speech time
92 secs
Arguments
Concern about misuse of generative AI for impersonation in cyber crime
Supporting facts:
- With advancements in AI technology, it is possible to mimic voices and generate messages that sound indistinguishable from human beings
- Such technology can be used to impersonate individuals, for example to request money from relatives or in other forms of scams
Topics: Generative AI, Impersonation, Cyber Crime
Report
Advancements in AI technology have led to the development of systems capable of mimicking human voices and generating messages that are virtually indistinguishable from those produced by actual individuals. While this technological progress opens up new possibilities for communication and interaction, it also raises concerns about the potential misuse of generative AI for impersonation in cybercrime.
The ability to mimic voices and generate realistic messages allows malicious actors to deceive individuals in various ways. For example, they can impersonate someone known to the target, such as a relative or a friend, to request money or engage in other forms of scams.
This poses a significant threat, as victims can easily fall for these manipulated and convincing messages, believing them to be genuine. Given the potential harm and impact of the misuse of generative AI for impersonation in cybercrime, there is a growing consensus on the need for regulation and discussion to address this issue effectively.
It is crucial to establish guidelines and frameworks that ensure the responsible use of AI technology and protect individuals from deceptive practices. By implementing regulations, policymakers can help deter and punish those who misuse generative AI for malicious purposes. This includes imposing legal measures that specifically address the impersonation and fraudulent use of AI-generated messages.
Additionally, discussions among experts, policymakers, and industry stakeholders are essential to raise awareness, share knowledge, and explore potential solutions to mitigate the risks associated with the misuse of AI technology. The concerns surrounding the misuse of generative AI for impersonation in cybercrime align with the Sustainable Development Goals (SDGs), particularly SDG 9 (Industry, Innovation, and Infrastructure) and SDG 16 (Peace, Justice, and Strong Institutions).
These goals emphasize the importance of promoting innovation while ensuring the development of robust institutions that foster peace, justice, and security. In conclusion, while advancements in AI technology have brought about remarkable capabilities, they have also introduced new challenges regarding the potential misuse of generative AI for impersonation in cybercrime.
To address these concerns effectively, regulation and discussion are crucial. By establishing guidelines, imposing legal measures, and fostering open dialogues, we can strive for the responsible use of AI technology and protect individuals from the harmful consequences of impersonation in the digital sphere.
Caio Machado
Speech speed
151 words per minute
Speech length
1367 words
Speech time
542 secs
Arguments
There are three major layers to consider when discussing AI – data quality, model engineering and deployment
Supporting facts:
- The COMPAS case is a great example, where an algorithmic tool used for risk assessment started being used to determine sentence severity
Topics: Artificial Intelligence, Data Quality, AI Deployment
AI and lack of accountability can contribute to disinformation or malinformation
Supporting facts:
- Lack of proofreading mechanisms and quality control can result in distorted reality perception
Topics: Artificial Intelligence, Accountability, Disinformation, Malinformation
There is little debate on ensuring accountability and fairness at the AI deployment level
Topics: Artificial Intelligence, Accountability, Fairness
Policy, regulation, and market rules need to be addressed to limit the circulation of deepfake tools
Supporting facts:
- The use of deepfake voices to run scams over WhatsApp in Brazil is very common
Topics: Policy, Regulation, Deepfake
Promote digital literacy and increase traceability
Topics: Digital Literacy, Traceability
There is no silver bullet, only a series of initiatives and rules can be promoted
Topics: Initiatives, Regulation
Report
In the discussion about the impact of artificial intelligence (AI), several key areas were highlighted. The first area of focus was the importance of data quality, model engineering, and deployment in AI systems. An example provided was the COMPAS case, where an algorithmic tool used for risk assessment began being used to determine the severity of sentences.
This case illustrates the potential consequences of relying on AI systems without ensuring the accuracy and quality of the underlying data and models. Another concern was how AI tools become the infrastructure for accessing information. It was noted that, similar to how Google search results differ based on the keywords used, it becomes harder to verify and compare information when it is presented as a single, compact answer by a chatbot.
This raises questions about the reliability and transparency of the information provided by AI systems. The lack of accountability in AI systems was identified as a major issue that can contribute to the spread of disinformation or misinformation. Without proper proofreading mechanisms and quality control, distorted perceptions of reality can arise, leading to potential harm.
It was argued that there should be a focus on ensuring accountability and fairness at the AI deployment level to mitigate these risks. Furthermore, the discussion highlighted the need for more inclusive and ethical approaches to handling uncertainty and predictive multiplicity in AI models.
It was emphasized that decisions regarding individuals who are uncertain or fall into multiple predictive categories should not be solely made by the developing team. Instead, there should be inclusivity and ethical considerations to protect the rights and well-being of these individuals.
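To make predictive multiplicity concrete, the following sketch, a hypothetical illustration rather than anything presented in the session, trains several classifiers of similar overall accuracy on synthetic data and flags the individuals on whom they disagree. These contested cases are exactly the ones the discussion suggests should be escalated beyond the development team.

```python
# Illustrative sketch of predictive multiplicity (synthetic data, not from the session).
# Models with similar overall accuracy can still disagree on individuals; those
# individuals are candidates for human review rather than silent automated decisions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    GradientBoostingClassifier(random_state=0),
]
predictions = []
for model in models:
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", round(model.score(X_test, y_test), 3))
    predictions.append(model.predict(X_test))

predictions = np.array(predictions)  # shape: (n_models, n_individuals)
contested = predictions.min(axis=0) != predictions.max(axis=0)
print(f"{contested.mean():.1%} of individuals receive conflicting predictions")
# Policy sketch: route contested cases to human review instead of letting
# whichever model happened to be deployed decide.
```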
Policy, regulation, and market rules were mentioned as important factors to address in order to limit the circulation of deepfake tools. Evidence was provided for this, citing the common use of deepfake voices to run scams over WhatsApp in Brazil.
It was argued that effective policies and regulations need to be implemented to tackle the challenges of deepfake technology. Promoting digital literacy and increasing traceability were seen as positive steps towards addressing the challenges posed by AI. These measures can enable individuals to better understand and navigate the digital landscape, while also enhancing accountability and transparency.
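What "increasing traceability" could mean in practice was not specified in the session. As one hedged illustration, the sketch below fingerprints a media file with a cryptographic hash and records it in a simple provenance log (the log format and file names are our own assumptions), so that a recipient can later check whether a circulating file matches a registered original.

```python
# Minimal traceability sketch (illustrative; the session did not prescribe a mechanism).
# A SHA-256 fingerprint lets anyone verify that a circulated file is byte-identical
# to an original that a publisher registered.
import hashlib
import json
import time
from pathlib import Path

LOG = Path("provenance_log.jsonl")  # hypothetical append-only log

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register(path: str, publisher: str) -> None:
    """Append a provenance record for a file to the log."""
    record = {"sha256": fingerprint(path), "publisher": publisher, "ts": time.time()}
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

def verify(path: str) -> bool:
    """Check whether a file's fingerprint appears in the log."""
    if not LOG.exists():
        return False
    digest = fingerprint(path)
    return any(json.loads(line)["sha256"] == digest for line in LOG.open())
```

A hash proves only integrity, not authorship; provenance standards such as C2PA go further by binding signed metadata to the media itself.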
In conclusion, it was acknowledged that there is no single solution to address the impact of AI. Instead, a series of initiatives and rules should be promoted to ensure the responsible use of AI and mitigate potential harms. By focusing on data quality, accountability, fairness, inclusivity, and ethical considerations, along with effective policies and regulations, society can navigate the challenges and reap the benefits of AI technology.
Diogo Cortiz
Speech speed
154 words per minute
Speech length
1247 words
Speech time
485 secs
Arguments
Generative AI like ChatGPT has rapidly penetrated society
Supporting facts:
- ChatGPT was presented late last year and within weeks it had hundreds of millions of users
Topics: Artificial Intelligence, ChatGPT, Society
AI has unlimited potential which thrills students and learners.
Supporting facts:
- Once users realized the potential of this tool, everyone started to use it for various activities.
- AI can be applied in different ways and combined with other forms of AI
Topics: Artificial Intelligence, Education
AI is drastically changing the academic field.
Supporting facts:
- Many people are using AI in the academic world, even potentially for cheating or for presenting work not actually developed by the students.
- Teachers and students were thrilled when they learned about this tool, and many started to organize webinars and seminars to share their knowledge.
Topics: Artificial Intelligence, Education
The ethical aspects and regulatory framework of AI have only been discussed in the last five years.
Supporting facts:
- The discussion regarding the regulation of AI has been evolving.
Topics: Artificial Intelligence, Ethics, Regulation
There are conflicting views on AI, with some viewing it as harmful and others wanting more advanced AI.
Supporting facts:
- Some people feel AI is harmful and should be avoided.
- Others love AI and want to see it evolve further.
Topics: Artificial Intelligence, Society, Ethics
Regulation alone cannot resolve all AI-related issues
Supporting facts:
- Generative AI can be used in harmful ways, such as mimicking voices
- Current laws may not be equipped to deal with new AI possibilities
Topics: Artificial Intelligence, Regulation, Crime, Cybersecurity
Need to adapt to this new reality of generative AI
Supporting facts:
- AI can now generate images, voice and chat with algorithms
- These capabilities could be used for harmful purposes
Topics: Artificial Intelligence, Adaptation, Crime, Cybersecurity
Report
The discussion explores multiple aspects of artificial intelligence (AI) and its impact on society, education, ethics, regulation, and crime. One significant AI tool mentioned is ChatGPT, which rapidly gained popularity and attracted hundreds of millions of users within weeks of its launch last year.
This indicates the increasing penetration of generative AI in society. The potential of AI is seen as limitless and exciting by students and learners. Once users realized the possibilities of AI, they started using it for various activities. The versatility of AI allows it to be combined with other forms of AI, enhancing its potential further.
However, there are conflicting views on AI. Some individuals perceive AI as harmful and advocate for its avoidance, while others express enthusiasm and desire to witness further advancements in AI technology. The ethical and regulatory discussions surrounding AI have emerged relatively recently, with a focus on addressing the evolving challenges and implications.
The ethical aspects of AI usage and the establishment of a regulatory framework have gained attention within the past five years. In the academic field, AI has brought about drastic changes. Many individuals are utilizing AI, potentially even for cheating or presenting work not developed by students themselves.
This development has led to teachers and students organizing webinars and seminars to share their knowledge and experiences with AI. The prohibition of AI tools is not considered a solution by the speakers. Instead, they advocate for adapting to new skills and tools that AI brings.
They draw parallels with the emergence of pocket calculators, which necessitated adapting and evolving curricula to incorporate these tools. As AI tools reduce time and effort on various tasks, students need to acquire new skills pertinent for the future. It is emphasized that regulation alone cannot resolve all AI-related issues.
AI, particularly generative AI, can be employed for harmful purposes like mimicking voices, and existing laws may not be equipped to address these new possibilities. Hence, a comprehensive approach encompassing both regulation and adaptation to the new reality of generative AI is imperative.
In conclusion, the discussion highlights the increasing impact of AI on society, education, ethics, regulation, and crime. The rapid penetration of generative AI, like ChatGPT, signifies the growing influence of AI in society. While AI holds unlimited potential and excites students and learners, there are conflicting views on its impact, with concerns about its harmful effects.
The ethical and regulatory discussions around AI are relatively recent. The academic field is experiencing significant changes due to the adoption of AI, necessitating the acquisition of new skills by students. Prohibiting AI tools is not the solution; instead, adapting to the new skills and tools that AI offers is necessary.
Regulation alone is insufficient to address AI-related challenges, as AI can be misused for harmful purposes. Overall, a well-rounded approach encompassing both regulation and adaptation is needed to navigate the complex landscape of AI.
Heloisa Candello
Speech speed
140 words per minute
Speech length
3027 words
Speech time
1299 secs
Arguments
Generative AI and large language models have the potential to drastically upscale conversational systems.
Supporting facts:
- The scale of processing large amounts of data with these systems is much higher.
- They can adapt to multiple tasks.
- These systems can handle parallel communication, fluency, and multi-step reasoning.
- Their use increases the risk of hallucinations and false information due to lack of model control.
Topics: Artificial Intelligence, Generative AI, Large language models, Conversational Systems
The application of AI technologies requires careful consideration of their potential impact on vulnerable communities.
Supporting facts:
- AI systems can misalign with human expectations or the expectations of certain communities.
- Transparency, understanding, and probe design of AI systems are key to mitigating harmful effects.
- Candello’s experiment with low-income women running small businesses offered insights into their business health, but it was also an emotional journey for participants, highlighting the need for human-centric considerations
Topics: Artificial Intelligence, Ethics in AI, Social impact, Vulnerable Communities
Report
Generative AI and large language models have the potential to significantly enhance conversational systems. These systems possess the capability to handle a wide range of tasks, allowing for parallel communication, fluency, and multi-step reasoning. Moreover, their ability to process vast amounts of data sets them apart.
However, it is important to note that there is a potential risk associated with the use of such systems, as they may produce hallucinations and false information due to a lack of control over the model. In order to ensure that vulnerable communities are not negatively impacted by the application of AI technologies, careful consideration is required.
AI systems have the capacity to misalign with human expectations and the expectations of specific communities. Therefore, transparency, understanding, and probe design are crucial for mitigating any harmful effects that may arise. It is essential for AI systems to align with user values, and the models selected should accurately represent the data pertaining to their intended users.
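One widely used mitigation for the hallucination risk noted above is to ground a system's answers in vetted source text and flag answers with no support. The toy sketch below stands in for that idea using TF-IDF similarity; the corpus, threshold, and wording are hypothetical, and production systems would use semantic retrieval and citations rather than lexical overlap.

```python
# Toy grounding check (illustrative, not from the session): an answer is only
# surfaced if it is sufficiently similar to some trusted source passage.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

TRUSTED_SOURCES = [  # hypothetical vetted corpus
    "The company's refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm local time.",
]

def grounded(answer: str, threshold: float = 0.3) -> bool:
    """Return True if the answer resembles at least one trusted passage."""
    vectorizer = TfidfVectorizer().fit(TRUSTED_SOURCES + [answer])
    sources = vectorizer.transform(TRUSTED_SOURCES)
    candidate = vectorizer.transform([answer])
    return cosine_similarity(candidate, sources).max() >= threshold

draft = "Refunds are accepted within 30 days of purchase."
print(draft if grounded(draft) else "[flagged: no supporting source found]")
```

The control point, not the similarity metric, is what matters: the model is not allowed to be the sole authority on facts presented to the user.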
In addition, the design of responsible generative AI systems must adhere to certain principles, which helps ensure that models are built responsibly and ethically. By considering productivity, performance, speed, efficiency, and faithfulness in the design of AI systems, their impact on vulnerable communities can be effectively addressed.
Overall, exercising caution when utilizing generative AI and large language models in conversational systems is essential. While these systems have the potential to greatly improve communication, the risks of producing hallucinations and false information must be addressed. Additionally, considering the impact on vulnerable communities and aligning user values with the selected models are key factors in responsible AI design.
By following these principles, the potential benefits of these technologies can be harnessed while minimizing any potential harm.
Matheus Petroni
Speech speed
162 words per minute
Speech length
746 words
Speech time
277 secs
Arguments
Advancements in AI have the potential to overcome challenges within the usability domain and enhance user engagement
Supporting facts:
- Meta’s recent introduction of 28 AI personas modeled after public figures as a method to provide valuable advice and support to users
Topics: Artificial Intelligence, Usability, User Engagement
Users may inadvertently develop strong emotional relationships with AI chatbots, which could be problematic if they fail to meet their needs or if users become overly dependent on them
Topics: Artificial Intelligence, Emotional Attachment, Dependence
With the increase in digital content created by AI, tools to help users discern the origin and nature of this content become imperative
Topics: Artificial Intelligence, Digital Content, Disinformation
Report
Advancements in artificial intelligence (AI) have the potential to revolutionize the field of usability and enhance user engagement. One prime example of this is Meta’s recent introduction of 28 AI personas modeled after public figures. These AI personas provide users with valuable advice and support, addressing usability challenges and improving user engagement.
This development is a positive step forward, demonstrating how AI can bridge the gap between technology and user experience. However, there are potential negative implications associated with AI chatbots. Users may inadvertently develop strong emotional relationships with these AI entities, which could be problematic if the chatbots fail to meet their needs or if users become overly dependent on them.
It is crucial to carefully monitor and manage the emotional attachment users develop with AI chatbots to ensure their well-being and prevent harm. In addition to the impact on user engagement and emotional attachment, the increase in AI-generated digital content poses its own challenges.
With AI capable of creating vast amounts of digital content, it becomes imperative to have tools in place to discern the origin and nature of this content. The issue of disinformation becomes more prevalent as AI algorithms generate content that may be misleading or harmful.
Therefore, improvements in forensic technologies are necessary to detect and label AI-generated content, particularly deepfake videos with harmful or untruthful narratives. To address the challenges posed by AI-generated content, promoting a culture of robust fact-checking and content differentiation is vital.
Presenting essential information alongside user interfaces can facilitate this process. By providing users with transparent and reliable information, they can make informed decisions about the content they consume. This approach aligns with the sustainable development goals of peace, justice, and strong institutions.
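As a small, hedged illustration of the tooling gap described here, the sketch below inspects an image's EXIF "Software" tag, one place where some generation tools record themselves. The file name is hypothetical, and because metadata is trivially stripped this is a weak first signal rather than forensic proof; robust labeling depends on signed provenance standards such as C2PA.

```python
# Weak heuristic for labeling image provenance (illustrative only):
# some tools record themselves in EXIF metadata, but metadata is easily
# removed, so treat this as a first signal, not forensic evidence.
from PIL import Image
from PIL.ExifTags import TAGS

def describe_origin(path: str) -> str:
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag IDs to human-readable names.
    fields = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    software = str(fields.get("Software", ""))
    if software:
        return f"Metadata reports creation software: {software}"
    return "No creation software recorded; origin undetermined"

print(describe_origin("example.jpg"))  # hypothetical file
```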
In conclusion, while AI advancements hold enormous potential for enhancing usability and user engagement, there are also potential risks and challenges associated with emotional attachment and AI-generated content. Carefully managing the development and deployment of AI technologies is essential to harness their benefits while mitigating potential drawbacks.
By promoting transparent and informative user interfaces, investing in forensic technologies, and fostering a robust fact-checking culture, we can unlock the full potential of AI while safeguarding against potential negative consequences.
Reinaldo Ferraz
Speech speed
157 words per minute
Speech length
501 words
Speech time
192 secs
Report
The networking session on generative AI commenced with a diverse panel of speakers who shared their insights. Heloisa Candello from IBM Research and Caio Machado from Instituto Vero and Oxford University participated remotely, while Roberto Zambrana and Matheus Petroni were physically present.
Each speaker brought a unique perspective to the discussion, addressing various aspects of generative AI. The session began with Heloisa Candello expressing her appreciation for being a part of the esteemed panel. She highlighted the significance of generative AI for the wider community and shared her thoughts on its potential impact.
Despite some initial technical issues with the microphone, Heloisa’s remarks eventually became audible to the audience. Following her presentation, Roberto Zambrana offered his industry-oriented views on generative AI. He emphasized the practical applications and benefits, shedding light on the potential for innovation and growth.
Roberto’s insights provided valuable perspectives from an industry standpoint. Next, Caio Machado provided a different viewpoint, representing civil society and academia. Caio discussed the societal implications of generative AI and considered its impact on various sectors. His presentation drew attention to ethical concerns and raised questions about the involvement of civil society in the development and deployment of AI technologies.
Matheus Petroni then shared his insights, further enriching the discussion. He contributed his thoughts and experiences related to generative AI, offering a well-rounded understanding of the subject. By incorporating inputs from diverse stakeholders, the session presented a comprehensive view of generative AI.
The speakers represented various sectors, including industry, academia, and civil society. This multidimensional approach added depth to the discussions and brought forth different perspectives on the topic. Following the initial presentations, the audience had the opportunity to ask questions, albeit briefly due to time constraints.
Only one question could be addressed, but this interactive engagement facilitated a deeper understanding of the topic among the participants. In summary, the session on generative AI successfully united speakers from different backgrounds to explore the subject from multiple angles.
Their valuable insights stimulated critical thinking and provided knowledge about the potential implications and future directions of generative AI. The session concluded with gratitude expressed towards the speakers and the audience for their participation and engagement.