Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153

12 Oct 2023 04:30h - 05:15h UTC

Event report

Speakers and Moderators

Speakers:
  • Heloisa Candello, IBM Research, Private Sector, Latin America
  • Caio Machado, University of Oxford, Civil Society, Western Europe
  • Diogo Cortiz, Brazilian Network Information Center, Technical Community, Latin America
  • Hiroshi Yamaguchi, University of Tokyo, Civil Society, Asia
Moderators:
  • Diogo Cortiz, Brazilian Network Information Center, Technical Community, Latin America
  • Reinaldo Ferraz, Brazilian Network Information Center, Technical Community, Latin America

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Caio Machado

In the discussion about the impact of artificial intelligence (AI), several key areas were highlighted. The first was the importance of data quality, model engineering, and deployment in AI systems. The COMPAS case was offered as an example: an algorithmic tool built for risk assessment that came to be used to determine the severity of sentences. The case illustrates the potential consequences of relying on AI systems without ensuring the accuracy and quality of the underlying data and models.

Another concern was that AI tools are becoming the infrastructure through which information is accessed. Whereas a Google search returns a list of results that can be compared against one another, a chatbot presents a single, compact answer, which makes the information harder to verify. This raises questions about the reliability and transparency of the information provided by AI systems.

The lack of accountability in AI systems was identified as a major issue that can contribute to the spread of disinformation or misinformation. Without proper proofreading mechanisms and quality control, distorted perceptions of reality can arise, leading to potential harm. It was argued that there should be a focus on ensuring accountability and fairness at the AI deployment level to mitigate these risks.

Furthermore, the discussion highlighted the need for more inclusive and ethical approaches to handling uncertainty and predictive multiplicity in AI models, the situation in which equally accurate models disagree about the same individual. Decisions about individuals whose predictions are uncertain or contested should not be made solely by the developing team; inclusive and ethical deliberation is needed to protect the rights and well-being of those affected.

Policy, regulation, and market rules were mentioned as important levers for limiting the circulation of deepfake tools. As evidence, the common use of deepfake voices to run scams over WhatsApp in Brazil was cited. Effective policies and regulations, it was argued, need to be implemented to tackle the challenges of deepfake technology.

Promoting digital literacy and increasing traceability were seen as positive steps towards addressing the challenges posed by AI. These measures can enable individuals to better understand and navigate the digital landscape, while also enhancing accountability and transparency.
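
To make the notion of traceability concrete, the sketch below (not from the session; the signing key, the field names `source` and `generator`, and the sample values are all hypothetical) shows one way a publisher could attach a signed provenance record to a piece of content so that anyone downstream can check where it came from and whether it was altered. Production systems such as C2PA content credentials apply the same principle with certificate-based signatures rather than a shared secret.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

def make_provenance_record(content: bytes, source: str, generator: str) -> dict:
    """Build a signed record describing where a piece of content came from."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "source": source,        # e.g. the publishing organisation
        "generator": generator,  # e.g. "human" or the name of an AI tool
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record is untampered and matches the content."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and hashlib.sha256(content).hexdigest() == record["sha256"])

article = b"Example article text."
rec = make_provenance_record(article, source="news.example.org", generator="human")
assert verify_provenance(article, rec)              # intact content verifies
assert not verify_provenance(b"Altered text", rec)  # tampered content fails
```

Because the record travels with the content, this kind of traceability survives reposting for as long as the record does.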

In conclusion, it was acknowledged that there is no single solution to address the impact of AI. Instead, a series of initiatives and rules should be promoted to ensure the responsible use of AI and mitigate potential harms. By focusing on data quality, accountability, fairness, inclusivity, and ethical considerations, along with effective policies and regulations, society can navigate the challenges and reap the benefits of AI technology.

Audience

Advancements in AI technology have led to the development of systems capable of mimicking human voices and generating messages that are virtually indistinguishable from those produced by actual individuals. While this technological progress opens up new possibilities for communication and interaction, it also raises concerns about the potential misuse of generative AI for impersonation in cybercrime.

The ability to mimic voices and generate realistic messages allows malicious actors to deceive individuals in various ways. For example, they can impersonate someone known to the target, such as a relative or a friend, to request money or engage in other forms of scams. This poses a significant threat, as victims can easily fall for these manipulated and convincing messages, believing them to be genuine.

Given the potential harm and impact of the misuse of generative AI for impersonation in cybercrime, there is a growing consensus on the need for regulation and discussion to address this issue effectively. It is crucial to establish guidelines and frameworks that ensure the responsible use of AI technology and protect individuals from deceptive practices.

By implementing regulations, policymakers can help deter and punish those who misuse generative AI for malicious purposes, including through legal measures that specifically address impersonation and fraud involving AI-generated messages. Discussions among experts, policymakers, and industry stakeholders are equally essential to raise awareness, share knowledge, and explore solutions that mitigate the risks of misuse.

The concerns surrounding the misuse of generative AI for impersonation in cybercrime align with the Sustainable Development Goals (SDGs), particularly SDG 9 (Industry, Innovation, and Infrastructure) and SDG 16 (Peace, Justice, and Strong Institutions). These goals emphasize the importance of promoting innovation while ensuring the development of robust institutions that foster peace, justice, and security.

In conclusion, while advancements in AI technology have brought remarkable capabilities, they have also introduced new risks of impersonation in cybercrime. Addressing these risks requires both regulation and sustained discussion: by establishing guidelines, imposing legal measures, and fostering open dialogue, the responsible use of AI can be promoted and individuals protected from the harmful consequences of impersonation in the digital sphere.

Heloisa Candello

Generative AI and large language models have the potential to significantly enhance conversational systems. These systems possess the capability to handle a wide range of tasks, allowing for parallel communication, fluency, and multi-step reasoning. Moreover, their ability to process vast amounts of data sets them apart. However, it is important to note that there is a potential risk associated with the use of such systems, as they may produce hallucinations and false information due to a lack of control over the model.
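
In practice, the "lack of control" problem is often mitigated by grounding: checking each claim in a model's answer against trusted source text and flagging anything unsupported. A minimal sketch of the idea follows, assuming plain-text sources and a crude sentence-level fuzzy match; the helper names (`supported`, `flag_unsupported`) and the 0.75 threshold are illustrative, not something discussed in the session.

```python
import difflib
import re

def supported(claim: str, source: str, threshold: float = 0.75) -> bool:
    """Return True if some span of the source text closely matches the claim."""
    claim = claim.lower().strip()
    words = source.lower().split()
    n = max(len(claim.split()), 1)
    for i in range(len(words)):
        # Compare the claim against a sliding window of similar length.
        window = " ".join(words[i:i + n + 3])
        if difflib.SequenceMatcher(None, claim, window).ratio() >= threshold:
            return True
    return False

def flag_unsupported(answer: str, source: str) -> list:
    """Split an answer into sentences and return those not grounded in the source."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer) if s]
    return [s for s in sentences if not supported(s, source)]

source = "The session took place at IGF 2023 and discussed generative AI."
answer = "The session discussed generative AI. It was attended by 5,000 people."
print(flag_unsupported(answer, source))  # ['It was attended by 5,000 people.']
```

Real deployments replace the fuzzy match with retrieval and entailment models, but the control point is the same: a claim is only surfaced if it can be traced back to a source.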

Careful consideration is required to ensure that vulnerable communities are not negatively impacted by the application of AI technologies. AI systems can become misaligned with human expectations, including the expectations of specific communities. Transparency, understanding, and probe design are therefore crucial for mitigating any harmful effects. AI systems should align with user values, and the models selected should accurately represent the data pertaining to their intended users.

In addition, the design of responsible generative AI systems must adhere to certain principles, which help ensure that models are built responsibly and ethically. By weighing productivity, performance, efficiency, and faithfulness in the design of AI systems, their impact on vulnerable communities can be effectively addressed.

Overall, exercising caution when utilizing generative AI and large language models in conversational systems is essential. While these systems have the potential to greatly improve communication, the risks of producing hallucinations and false information must be addressed. Additionally, considering the impact on vulnerable communities and aligning user values with the selected models are key factors in responsible AI design. By following these principles, the potential benefits of these technologies can be harnessed while minimizing any potential harm.

Diogo Cortiz

The discussion explores multiple aspects of artificial intelligence (AI) and its impact on society, education, ethics, regulation, and crime. One significant AI tool mentioned is ChatGPT, which rapidly gained popularity, reaching a hundred million users within roughly two months of its launch last year. This indicates the increasing penetration of generative AI in society.

Students and learners see the potential of AI as limitless and exciting. Once users realized what AI could do, they began applying it to a wide range of activities, and its versatility allows it to be combined with other forms of AI, extending its potential further.

However, there are conflicting views on AI. Some individuals perceive AI as harmful and advocate for its avoidance, while others express enthusiasm and desire to witness further advancements in AI technology.

The ethical and regulatory discussions surrounding AI are relatively recent: the ethics of AI usage and the establishment of a regulatory framework have gained attention only within the past five years.

In the academic field, AI has brought about drastic changes. Many individuals are using AI, potentially even to cheat or to present work not developed by the students themselves. This development has led teachers and students to organize webinars and seminars to share their knowledge and experiences with AI.

The prohibition of AI tools is not considered a solution by the speakers. Instead, they advocate for adapting to new skills and tools that AI brings. They draw parallels with the emergence of pocket calculators, which necessitated adapting and evolving curricula to incorporate these tools. As AI tools reduce time and effort on various tasks, students need to acquire new skills pertinent for the future.

It is emphasized that regulation alone cannot resolve all AI-related issues. AI, particularly generative AI, can be employed for harmful purposes like mimicking voices, and existing laws may not be equipped to address these new possibilities. Hence, a comprehensive approach encompassing both regulation and adaptation to the new reality of generative AI is imperative.

In conclusion, the discussion highlights the increasing impact of AI on society, education, ethics, regulation, and crime. The rapid adoption of generative AI tools like ChatGPT signals their growing influence in society. While AI excites students and learners with its seemingly unlimited potential, views on its impact conflict, and concerns about harmful effects persist. The ethical and regulatory debates around AI are relatively recent, and the academic field is undergoing significant change that demands new skills from students. Prohibiting AI tools is not the solution; adapting to the skills and tools AI offers is. And because regulation alone cannot address every challenge, including misuse such as voice mimicry, a well-rounded approach encompassing both regulation and adaptation is needed to navigate the complex landscape of AI.

Reinaldo Ferraz

The networking session on generative AI commenced with a diverse panel of speakers who shared their insights. Heloisa Candello from IBM Research and Caio Machado from Instituto Vero and Oxford University participated remotely, while Roberto Zambrana and Matheus Petroni were physically present. Each speaker brought a unique perspective to the discussion, addressing various aspects of generative AI.

The session began with Heloisa Candello expressing her appreciation for being part of the panel. She highlighted the significance of generative AI for the wider community and shared her thoughts on its potential impact. Despite some initial technical issues with the microphone, her remarks eventually became audible to the audience.

Following Heloisa’s presentation, Roberto Zambrana offered his industry-oriented views on generative AI. He emphasized the practical applications and benefits, shedding light on the potential for innovation and growth. Roberto’s insights provided valuable perspectives from an industry standpoint.

Next, Caio Machado provided a different viewpoint, representing civil society and academia. Caio discussed the societal implications of generative AI and considered its impact on various sectors. His presentation drew attention to ethical concerns and raised questions about the involvement of civil society in the development and deployment of AI technologies.

Matheus Petroni then shared his insights, further enriching the discussion. He contributed his thoughts and experiences related to generative AI, offering a well-rounded understanding of the subject.

By incorporating inputs from diverse stakeholders, the session presented a comprehensive view of generative AI. The speakers represented various sectors, including industry, academia, and civil society. This multidimensional approach added depth to the discussions and brought forth different perspectives on the topic.

Following the initial presentations, the audience had the opportunity to ask questions, albeit briefly due to time constraints. Only one question could be addressed, but this interactive engagement facilitated a deeper understanding of the topic among the participants.

In summary, the session on generative AI successfully united speakers from different backgrounds to explore the subject from multiple angles. Their valuable insights stimulated critical thinking and provided knowledge about the potential implications and future directions of generative AI. The session concluded with gratitude expressed towards the speakers and the audience for their participation and engagement.

Matheus Petroni

Advancements in artificial intelligence (AI) have the potential to revolutionise the field of usability and enhance user engagement. One prime example of this is Meta’s recent introduction of 28 AI personas modelled after public figures. These AI personas provide users with valuable advice and support, addressing usability challenges and improving user engagement. This development is a positive step forward, demonstrating how AI can bridge the gap between technology and user experience.

However, there are potential negative implications associated with AI chatbots. Users may inadvertently develop strong emotional relationships with these AI entities, which could be problematic if the chatbots fail to meet their needs or if users become overly dependent on them. It is crucial to carefully monitor and manage the emotional attachment users develop with AI chatbots to ensure their well-being and prevent harm.

In addition to the impact on user engagement and emotional attachment, the increase in AI-generated digital content poses its own challenges. With AI capable of creating vast amounts of digital content, it becomes imperative to have tools in place to discern the origin and nature of this content. The issue of disinformation becomes more prevalent as AI algorithms generate content that may be misleading or harmful. Therefore, improvements in forensic technologies are necessary to detect and label AI-generated content, particularly deepfake videos with harmful or untruthful narratives.
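
As an illustration of how forensic detection and declared provenance might be combined into a user-facing label, consider the sketch below. The detector is a stand-in stub, and `label_content`, `fake_detector`, the `generator` field, and the 0.8 threshold are all made up for illustration; none of this describes a specific product.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ContentItem:
    data: bytes
    provenance: Optional[dict]  # a declared origin record, if the publisher attached one

def label_content(item: ContentItem,
                  detector: Callable[[bytes], float],
                  threshold: float = 0.8) -> str:
    """Combine declared provenance with a forensic detector score into a UI label."""
    if item.provenance and str(item.provenance.get("generator", "")).startswith("ai"):
        return "Labelled by publisher: AI-generated"
    score = detector(item.data)  # detector estimates P(content is synthetic)
    if score >= threshold:
        return f"Flagged as likely AI-generated ({score:.0%} confidence)"
    return "No AI-generation signal detected"

# Stand-in detector for the example; a real system would use a trained classifier.
def fake_detector(data: bytes) -> float:
    return 0.93 if b"deepfake" in data else 0.05

print(label_content(ContentItem(b"ordinary video bytes", None), fake_detector))
print(label_content(ContentItem(b"suspicious deepfake bytes", None), fake_detector))
print(label_content(ContentItem(b"anything", {"generator": "ai-video-tool"}), fake_detector))
```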

To address the challenges posed by AI-generated content, promoting a culture of robust fact-checking and content differentiation is vital. Presenting essential provenance information within user interfaces can facilitate this process: when users are given transparent and reliable signals about a piece of content’s origin, they can make informed decisions about what they consume. This approach aligns with SDG 16 (Peace, Justice, and Strong Institutions).

In conclusion, while AI advancements hold enormous potential for enhancing usability and user engagement, there are also potential risks and challenges associated with emotional attachment and AI-generated content. Carefully managing the development and deployment of AI technologies is essential to harness their benefits while mitigating potential drawbacks. By promoting transparent and informative user interfaces, investing in forensic technologies, and fostering a robust fact-checking culture, we can unlock the full potential of AI while safeguarding against potential negative consequences.
