Town Hall: How to Trust Technology

17 Jan 2024 13:15h - 14:00h

Event report

AI and immersive technologies will fundamentally change how humanity interacts with society, government and even the environment.

How can we meet the challenge presented by the complex risks we face while building trust in our technological future?

Join this interactive town hall with leading voices to understand why it is essential to build trust in technology.

More info @ WEF 2024.

Disclaimer: This is not an official record of the WEF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the WEF YouTube channel.

Full session report

Ayanna Howard

Dr. Howard's remarks covered a range of topics in artificial intelligence (AI) and robotics. A central point was the human tendency to place excessive trust in technology despite its known flaws. Her research on human trust in robotics during emergency scenarios, dating to 2011, supports this observation: participants overwhelmingly followed a robot's directions even when those directions conflicted with visible exit signs, and even after the robot had exhibited faulty behavior. This illustrates overtrust in technology, where people set aside their own common sense.

An alternative framing, however, holds that the real problem lies in how we react when technology fails or makes mistakes. Rather than focusing solely on how much we trust technology, Dr. Howard suggests we should also attend to how we respond and adapt when it fails. She cited airplane crashes as cases where public reaction has been disproportionate. On this view, improving our ability to react appropriately to technological failure is crucial.

Another key point was the need to build human emotional intelligence (EQ) into robots and AI tools to help prevent errors. AI tools such as ChatGPT are valuable and widely used, but they are not flawless; their mistakes can lead to harmful outcomes, such as errors in legal briefs. To mitigate these risks, it was argued that robots and AI tools should incorporate human EQ, and that their use may need to be limited in some settings.

The absence of necessary rules and regulations in the AI industry was also highlighted. The industry today resembles electricity in its early days: it lacks the standards and certifications that other industries have since developed. This is problematic because anyone with minimal knowledge can build an AI system and connect it to machinery, potentially endangering the consumers who trust those products. Stronger rules, certifications, and validation in the AI industry are essential for consumer safety and trust.

The convergence of the digital and physical worlds in robotics and AI was also discussed. Cloud connectivity and real-time learning have accelerated this convergence, yet the implications of a world where digital personas take physical form remain uncertain. It is therefore important to consider the societal and ethical implications as the technology progresses.

The analysis also emphasized the need to design AI systems on the assumption that flaws and bad actors exist. Cybersecurity has long adhered to a zero-trust approach, assuming the presence of attackers and compromised components; AI system design has not yet widely adopted the same posture, and it should. One way to make the analogy concrete is sketched below.
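The sketch shows one way an AI component's output can be treated as untrusted input and validated before it touches the physical world. It is illustrative only; the action names and the connected-device scenario are hypothetical, not anything described in the session.

```python
# A zero-trust guard around a model that proposes actions for a connected
# device: the model's raw output is parsed defensively and checked against an
# explicit allowlist before anything is executed.

ALLOWED_ACTIONS = {"lights_on", "lights_off", "read_temperature"}  # hypothetical

def execute_model_action(raw_output: str) -> str:
    """Validate a model-proposed action and refuse anything unrecognized."""
    action = raw_output.strip().lower()
    if action not in ALLOWED_ACTIONS:
        # Fail closed: an unexpected output is treated as a flaw or an attack.
        raise ValueError(f"Refusing unrecognized action: {action!r}")
    return f"executed {action}"

if __name__ == "__main__":
    print(execute_model_action("Lights_On"))       # accepted after normalization
    try:
        execute_model_action("open_front_door")    # not on the allowlist
    except ValueError as err:
        print(err)
```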

Establishing policies and regulations governing the use of AI was deemed crucial for accountability and trust. Such policies provide the necessary standards and define the expectations and consequences for companies utilizing AI. However, for consistency and clarity, these policies should be uniform across different regions and jurisdictions.

Lastly, the analysis stressed the importance of collaboration between technologists and experts from other fields. Collaborating with professionals from diverse disciplines enables technologists to gain a comprehensive understanding of both the risks and benefits associated with technology. This interdisciplinary collaboration is crucial for a holistic appraisal of technology.

Overall, the analysis surfaced a consistent set of themes: the human tendency to overtrust technology, the importance of how we react when it fails, the case for building human EQ into AI systems, the need for rules and regulations, the accelerating convergence of the digital and physical worlds, the value of designing AI on the assumption that flaws exist, and the necessity of collaboration across disciplines.

Mustafa Suleyman

The discussion revolves around artificial intelligence (AI) and large language models (LLMs). One viewpoint argues that people should be critical and skeptical of LLM technology and ask tough questions of it, because LLMs are probabilistic and can produce different responses to the same prompt; the older mental model of trusting technology by default may therefore not apply.

Another viewpoint, by contrast, highlights two elements that can build trust in LLM technology: the factual accuracy ("IQ") of the models and the emotional connection ("EQ") users form with them. How often a model is factually correct can be formally measured, while emotional connection plays a significant role in decision-making. Trust, it is argued, can be established by attending to both.
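The "IQ" half of that claim can be made concrete: factual accuracy is typically estimated by scoring a model's answers against a reference set of questions with known answers. Below is a minimal sketch; the ask_model function is a placeholder for a real LLM call, and the reference pairs are invented for illustration.

```python
# Minimal factual-accuracy evaluation via exact-match scoring. Real benchmarks
# use far larger reference sets and more forgiving answer matching, but the
# principle is the same.

reference_set = [  # hypothetical question/answer pairs
    ("In what year did the Berlin Wall fall?", "1989"),
    ("What is the chemical symbol for gold?", "Au"),
]

def ask_model(question: str) -> str:
    """Placeholder for a call to an actual LLM API."""
    raise NotImplementedError

def factual_accuracy(qa_pairs) -> float:
    """Fraction of questions the model answers exactly correctly."""
    correct = sum(
        1 for question, answer in qa_pairs
        if ask_model(question).strip().lower() == answer.lower()
    )
    return correct / len(qa_pairs)
```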

Mustafa Suleyman, a key figure in the discussion, shares a positive outlook on the progress and capabilities of AI systems. He predicts a 99.9% accuracy rate in factual outputs from AI within the next three years. Moreover, he anticipates that AI will evolve from a one-shot question-answer engine to a provider of accurate predictions that can take actions on our behalf. Suleyman believes in the potential of AI to support and assist humans in various tasks, envisioning a future where everyone will have their own personal AI.

Transparency, accountability, and careful consideration of the values embedded in the emotional side of AI models are stressed as essential for building trust. EQ plays a significant role in decision-making, and LLMs are becoming more dynamic and interactive.

The discussion touches upon the obsession with artificial general intelligence (AGI) in Silicon Valley. While some have a negative sentiment towards this obsession, others see AGI as having the potential to address societal challenges such as food, climate, transportation, education, and health.

The integration of AI systems into various fields is highlighted, with Mustafa Suleyman expressing trust in technology. He has observed a significant improvement in the quality of AI models over the past two years and regularly uses them to access knowledge in a fluent conversational style. Additionally, the advent of large language models has lowered the barrier to accessing information, making it easier to ask AI models wide-ranging questions.

The potential risks and challenges of AI are also discussed. It is suggested that stress testing of models is necessary to identify flaws and weaknesses. Attention is drawn to the need for caution when AI deals with sensitive or conflicting information. Additionally, the inherent risk of AI systems improving without human oversight is raised, along with the potential for AI to be misused in elections.

Regulation of AI is deemed necessary due to the increasing risks associated with its deployment. However, there is a debate regarding the appropriate legislative approach, with some arguing that formal regulations may not be needed at present. Bias and fairness testing are highlighted as areas of focus, and there is growing concern regarding potential risks, such as biohazards, posed by AI.

The discussion emphasizes the importance of transparency, testing, and evaluation in the development and deployment of AI systems. The ability to estimate uncertainties, be aware of errors, and communicate confidence intervals is seen as a way to increase trustworthiness. It is acknowledged that AI systems are held to a higher standard than humans, particularly in fields like healthcare and autonomous vehicles.
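The point about communicating confidence intervals is easy to illustrate: given the pass/fail outcomes of an evaluation run, a bootstrap produces an interval that can be reported alongside the headline accuracy. The sketch below uses made-up outcome data.

```python
import random

def bootstrap_ci(outcomes, n_resamples=10_000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the mean of 0/1 outcomes."""
    means = sorted(
        sum(random.choices(outcomes, k=len(outcomes))) / len(outcomes)
        for _ in range(n_resamples)
    )
    lower = means[int(n_resamples * alpha / 2)]
    upper = means[int(n_resamples * (1 - alpha / 2)) - 1]
    return lower, upper

# Hypothetical evaluation run: 1 = correct answer, 0 = incorrect answer.
outcomes = [1] * 87 + [0] * 13
low, high = bootstrap_ci(outcomes)
print(f"accuracy {sum(outcomes) / len(outcomes):.2f}, 95% CI ({low:.2f}, {high:.2f})")
```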

Finally, the discussion turns to progress and efficiency gains in AI models, with a focus on making models that perform better while being smaller, and on the positive implications of this trend for the open-source ecosystem and for startups. Suleyman expresses awe at the pace of development and says he would hesitate to bet against any technological advance.

Overall, the discussion explores the complex and multifaceted nature of AI and large language models. It raises important points about trust, regulation, transparency, testing, and the future implications of AI in various fields and society as a whole.

Audience

During the session, audience members raised questions about several aspects of technology and artificial intelligence (AI). One questioner criticized Silicon Valley's preoccupation with artificial general intelligence (AGI), arguing that the obsession with AGI diverts attention from more pressing issues such as climate change and the full automation of manufacturing, and suggesting that the industry shift its focus toward those challenges. This stance aligns with the Sustainable Development Goals (SDGs) of Climate Action and Decent Work and Economic Growth.

The potential misuse of AI also emerged as a prominent topic. One audience member, who works for a company that underwrites cyber-insurance risk, voiced apprehension that while AI built with good intentions may earn trust, the same technology can be misused by bad actors. The discussion emphasized the need for vigilant monitoring and regulation to prevent malicious uses of AI.

Another question concerned the deployment of AI and the weighing of potential risks and misuse. It was argued that even where legislation and rules are in place, they may not suffice to prevent misuse. The questioner asked about deployment risks that may have been overlooked, stressing that the potential negative consequences must be considered alongside the societal benefits. This perspective aligns with the SDG of Industry, Innovation, and Infrastructure.

The audience was also curious about how trustworthiness in AI is assessed. Drawing a parallel with human trust, which rests on a person's capability and character, one questioner asked whether a similar framework of trustworthiness could be applied to AI. The exchange underlined the importance of context: trust is only meaningful when it is contextual, so trust in AI should be evaluated within specific situations and applications.

However, doubts were raised about mapping a human behavioral framework onto AI. One audience member asked whether there are dangers in applying a human framework of behavior to AI, a question that highlights the difficulty of understanding and predicting the behavior of AI systems through human patterns.

In conclusion, the audience contributions highlighted concerns about Silicon Valley's focus on AGI, the case for prioritizing climate change and the automation of manufacturing, and the potential misuse of AI, all pointing to the importance of weighing wider societal impacts and risks. The questions about trust in AI and the applicability of human behavioral frameworks underscored how complex assessing AI's trustworthiness remains.

Ben Thompson

In a discussion on trust in technology, Ben Thompson raises thought-provoking questions and offers distinctive insights. One key question is whether people need to be told about their over-reliance on technology, or whether that responsibility rests with technologists themselves.

Thompson argues that it is crucial to consider people’s actual preferences and behaviors, rather than solely relying on their stated feelings, when examining trust in technology. Many individuals express mistrust in technology but continue to use it extensively, revealing a discrepancy between their stated opinions and their actions. This idea of “revealed versus stated preferences” sheds light on the true level of trust people have in technology.

Thompson also questions the boundaries between the digital and physical worlds. He wonders if there is a clear distinction between the two realms or if they are becoming increasingly intertwined. This inquiry highlights the evolving nature of technology and its impact on our daily lives.

Additionally, Thompson raises concerns about potential job losses, particularly in the digital space, due to emerging technologies. With advancements in automation and artificial intelligence, there is genuine fear that certain roles may become obsolete. This concern emphasizes the need to consider the socio-economic implications of technological progress.

Within the realm of artificial intelligence, Thompson speculates on the trajectory of artificial general intelligence (AGI), asking whether it will make greater strides in the digital space than in the physical world.

Trust in technology also raises questions about the role of government regulation and policy support. Thompson expresses doubts about relying solely on government regulation and suggests exploring additional approaches to establishing trust in technology, questioning whether current regulatory measures are adequate to ensure technology's trustworthiness.

Transparency in AI usage and development is another area that Thompson examines. He challenges the idea that companies should expose their full prompts for the sake of transparency. This viewpoint raises important questions about the trade-offs between transparency and the protection of proprietary information in AI technology development.

Furthermore, Thompson suggests that excessive regulation may hinder the development of certain technological benefits and capabilities. This concern highlights the delicate balance between regulation and innovation in the technology sector.

Lastly, Thompson emphasizes the need for the tech industry to effectively communicate the importance and excitement of technology development to the public. Despite potential issues and challenges, bridging the gap between technological advancements and public perception is crucial. This observation underscores the significance of public education and engagement in ensuring the positive reception of technology.

In conclusion, Ben Thompson’s discussion provokes critical analysis of trust in technology. His questions and insights shed light on the complexities surrounding this topic. The interplay between trust, societal implications, regulation, and the role of technology in our lives calls for ongoing dialogue and examination.

Speaker statistics

Speaker             Speech speed        Speech length   Speech time
Ayanna Howard       222 words/minute    2093 words      565 secs
Audience            179 words/minute    383 words       128 secs
Ben Thompson        218 words/minute    1741 words      479 secs
Mustafa Suleyman    210 words/minute    4211 words      1201 secs