Impact the Future – Compassion AI | IGF 2023 Town Hall #63
Table of contents
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Knowledge Graph of Debate
Session report
Full session report
Audience
The analysis explores various aspects of AI development and its relationship with compassion. It underscores the significance of engaging in philosophical discussions and ethical considerations during the AI development process. The speakers argue that such discussions are essential to ensure that AI development aligns with ethical principles and human values.
One crucial aspect is the need to establish the limits of AI and what is considered compassionate for AI to undertake. Concerns are raised about whether AI actions are enhancing our humanity or pushing us further away from it. The speakers propose that AI that promotes human development and preserves our humanity can be deemed more compassionate.
The ethical complexity of employing AI for genetic manipulation in healthcare is also a topic of discussion. The speakers delve into the question of whether it is ethical to modify the genetics of animals, like sheep, to cure human diseases such as cancer. They argue that this issue challenges us to consider the bounds of AI’s compassion within the healthcare context.
Child safety in the era of AI is a pressing concern, with speakers highlighting the capability of generative AI to produce materials related to child sexual abuse. They stress the importance of including children’s voices in AI development to ensure their protection and well-being. Additionally, the significance of strong guardianship to prevent exploitation and abuse of children is emphasized.
The analysis also touches upon the necessity for appropriate incentives for for-profit corporations. It suggests that regulations and incentives are essential to promote responsible consumption and production.
Furthermore, there is a call to redefine intelligence by recognizing compassion as a fundamental aspect of it. The speakers argue that authentic intelligence should encompass compassion as a crucial characteristic.
The possibility of sentient machines is another area of discussion. The speakers mention the perspectives of David Hanson and Ray Kurzweil, who suggest that machines may one day achieve sentience. This raises questions about the future development and implications of AI.
Overall, the analysis highlights the multifaceted nature of AI development and its impact on compassion. It acknowledges the importance of philosophical discussions, ethical considerations, and the inclusion of diverse stakeholders in shaping the future development of AI. Additionally, it raises crucial concerns about child safety, ethical boundaries, and the need for responsible practices in AI development. The discussion concludes with an optimistic outlook on the future of compassion in AI.
Robert Kroplewski
The discussion surrounding the ethical considerations and deployment of artificial intelligence (AI) highlights a significant gap between theoretical ethics and practical implementation. The utilitarianism approach, which prioritises the greatest overall benefit, remains prevalent in the deployment of AI despite ethical concerns.
In response to these concerns, several policy recommendations and acts have been proposed by various organisations. The OECD, UNESCO, and the European Union have all put forth guidelines, recommendations, and acts aiming to promote responsible and trustworthy AI. These efforts reflect a growing recognition of the need to address the ethical implications of AI.
Furthermore, there is a strong emphasis on ensuring that AI benefits both people and the planet. The OECD’s primary principle regarding AI is to ensure benefits for both humanity and the environment. To achieve this, there is a call to democratise AI, allowing the participation of all sectors, including small and medium-sized enterprises (SMEs) and academics. This inclusive approach aims to avoid the concentration of AI power in a few dominant entities and to ensure that its benefits are widely distributed.
The development of AI is an ongoing process, and there is still much work to be done. It is believed that the Compassion AI approach can fill the remaining gaps in the ethical considerations of AI. Compassion AI refers to an approach that upholds human dignity, promotes well-being, avoids harm, and strives to benefit both people and the planet. This approach is seen as promising and necessary to address the multifaceted challenges of AI deployment.
Robert Kroplewski advocates prioritising the UNESCO ethical recommendations over the Sustainable Development Goals (SDG) agenda, arguing that how ethical recommendations are prioritised has a strong impact on outcomes. He proposes a call to action to produce an AI Compassion Bridge Charter and to engage in a network for the implementation of a compassionate approach to AI. His viewpoint stresses the importance of understanding and appreciating compassion as a guiding principle in AI development.
Overall, the discussions and arguments on AI ethics and deployment reveal the complexity and ongoing nature of the AI development process. It is essential to bridge the gap between ethical considerations and practical implementation to ensure that AI benefits both people and the planet. The Compassion AI approach and prioritisation of ethical recommendations over the SDG agenda are put forth as potential solutions to address these challenges.
Marc Buckley
The analysis highlights the role of technology in historical transformations. Throughout history, technology has played a pivotal role in shifting from one age to another. Examples such as the steam engine, printing press, and computer demonstrate how transformative technologies have shaped human history. The emergence of artificial intelligence (AI) and technology in the present era is seen as another transformational point in human history.
The argument put forward is that innovation is essential to guide humanity towards the right direction in this transformational period. The development of technology that can provide knowledge, wisdom, and training is necessary to avoid making significant errors. This argument acknowledges the importance of leveraging technological advancements to positively impact society.
Moving on to the Sustainable Development Goals (SDGs), it is evident that they are a globally agreed-upon roadmap for the future. Proposed by 197 countries, the SDGs are seen as the first-ever global moonshot or earth shot. They aim to address pressing challenges and provide a plan for humanity’s protection and insurance. However, the analysis highlights that there is debate and controversy surrounding the SDGs due to a lack of collective intelligence. This points towards the need for better collaboration and cooperation on a global scale to effectively achieve the goals outlined in the SDGs.
The SDGs also represent a new economic model. They propose a budget of US$90 trillion by 2030, indicating substantial financial support and a clear path for achieving the targets. This economic model aligns with the goal of promoting decent work and economic growth (SDG 8) while also considering environmental sustainability.
Another argument raised is the importance of programming AI to uphold values of compassion and ethics. This notion suggests that AI should be capable of negotiating and resolving conflicts between AI systems or cultures, acting as intelligent beings rather than adding to divisions among humans. The positive impact of AI is emphasized when it is programmed to make wise decisions when confronted with situations that may harm life or humanity.
Furthermore, the analysis highlights the potential of AI as a tool for positive change in transitioning from the Anthropocene to the Symbiocene. By instilling ethics and compassion in AI, there is a belief that a symbiotic relationship between all life beings on Earth can be achieved. Harnessing technology to make history and creating a harmonious coexistence between humans and AI is seen as a key pathway towards the Symbiocene.
In conclusion, technology has always played a significant role in historical transformations, and the emergence of AI and technology marks another pivotal point in human history. The Sustainable Development Goals provide a roadmap for the future but need greater collective intelligence to overcome challenges. The SDGs also introduce a new economic model with substantial financial support. AI can be a powerful tool for positive change when programmed with compassion and ethics, while also helping humanity transition to the Symbiocene. This analysis underscores the need for responsible and innovative approaches to harness the potential of technology for the betterment of society and the environment.
David Hanson
The discussions revolve around the multifaceted aspects of artificial intelligence (AI) and its potential implications. There is an overall positive sentiment towards AI, acknowledging its ability to potentially become sentient and its role in driving technological advancements.
One aspect of AI’s development highlighted in the discussions is the influence of the corporate sector. It is argued that advancements in AI technology are largely driven by corporations, which take risks and raise funds to propel AI technologies forward. This highlights the significant role that companies play in shaping the future of AI.
Compassion and appreciation for all life are emphasized as important values that should be integrated into AI development. It is highlighted that appreciation extends to life in all its diversity and the interdependence of humans on the web of life. Additionally, the concept of compassion is shared across many traditions, reinforcing the importance of incorporating these values into AI systems.
The broader picture of sustainable economics is brought into perspective, noting that corporate activities need to consider long-term implications for sustainable economic development. The discussions stress the need to look beyond the present and consider the economic impact on future generations. By taking a more holistic approach, corporations can contribute to sustainable and inclusive economic growth.
An interesting point raised in the discussions is the human ability to filter their sense of compassion. It is observed that humans share much of the neural architecture of chimpanzees and can desensitise themselves to certain situations. This raises questions about the potential impact of this filtering ability on compassion and ethical decision-making.
Another noteworthy argument is the aim to enhance human caring through creations like AI robots. It is acknowledged that current AI models, like GPT-4, do not actually care. However, the aim is to develop AI that can assist and enhance human caring, potentially benefiting various domains such as healthcare and social services.
The need to democratise AI technologies and prioritise the greater good is emphasised. It is argued that technologies should be accessible to all and not be driven solely by the interests of a select few corporations or governments. The Global Artificial Intelligence Alliance (GAIA) is highlighted as an entity that aims to democratise AI access by encouraging collaboration and participation from individuals, corporations, governments, and NGOs.
Data is viewed as a commons, and the discussions advocate for individuals to have the ability to license in and benefit from their own data. Market dynamics and crowdsourcing are seen as potential mechanisms that can benefit a democracy of action. This approach is believed to empower individuals’ voices and provide access to valuable information.
Inclusive and transparent AI development is considered crucial. It is stressed that people from developing nations should be included in the development process, and leadership should involve individuals from indigenous communities and children. This reflects the importance of diverse perspectives in creating AI technologies that address the needs and aspirations of different populations.
Ethical considerations are highlighted throughout the discussions. Regulations are mentioned as a means to protect animal rights in research, and ethics review boards are acknowledged for weighing the costs and benefits of research involving animals. The use of technologies like simulations is proposed as a way to make smarter decisions without sacrificing ethics or causing animal suffering.
Notably, the discussions also recognise the potential for technologies to enhance human compassion. While specific evidence or arguments are not provided, this observation suggests that AI and related technologies have the potential to positively impact human emotions and empathy.
In conclusion, the discussions on AI and its implications focus on the need for inclusive and transparent development, incorporating compassion and appreciation for all life, sustainable economics, ethical considerations, and the democratization of AI technologies. The insights gained from these discussions highlight the potential benefits and challenges associated with AI, as well as the importance of considering diverse perspectives in its development.
Marko Grobelnik
Regulation of AI by international organisations began prior to the recent advancements in AI. However, the rapid development of AI, particularly with the emergence of ChatGPT, has caused confusion among regulators. This accelerated progress has posed challenges for policymakers as they try to keep up with new technologies and their potential implications.
The competition for market control in AI is intensifying, with Western companies such as Microsoft, AWS, Google, and Meta vying for dominance. This competition extends beyond companies to the geopolitical level, with the United States, Europe, and China being the main players. The strategic positioning and control of AI technologies have become crucial in shaping global power dynamics.
To balance the power of AI with public trust, an innovative approach suggests establishing a voluntary code of conduct between big tech companies and governments. This approach aims to ensure responsible and ethical use of AI, addressing concerns surrounding data privacy, bias, and algorithmic decision-making.
China is recognised as a rising power in the field of AI. While the country has made significant progress in AI development, it currently faces a challenge in terms of lacking the necessary hardware infrastructure.
The concept of developing compassionate AI is gaining traction. The current AI technology allows for AI systems to understand and mimic text to a certain degree, which opens avenues for the development of compassionate AI. Large language models like GPT-3 can reflect the knowledge fed into them and exhibit a form of “text understanding.” However, it is important to note that AI’s inferencing and reasoning capabilities are still limited.
Interestingly, proponents argue that elements like empathy, positive human values, and societal understanding can be ingrained into AI systems mathematically. By incorporating these elements and leveraging a reflective human knowledge base, AI has the potential to exhibit compassion, further expanding the horizons of AI applications.
Moreover, a layer of compassionate AI can be integrated into existing AI and IT systems to guide their decision-making. Some companies have already started implementing forms of compassionate AI by blocking negative queries, highlighting the potential for improving AI systems' ethical decision-making.
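The idea of a compassionate layer wrapped around an existing system can be sketched in a few lines of code. The example below is purely illustrative, assuming a simple keyword blocklist; the names (`GuardrailLayer`, `is_harmful`) and the blocklist are hypothetical, and a production system would use a trained classifier rather than keyword matching.

```python
# Illustrative sketch of a "compassionate AI layer": a guardrail that
# screens queries before they reach an underlying AI system, blocking
# harmful requests and passing benign ones through.
# All names here are assumptions for illustration, not a real product's API.

HARMFUL_KEYWORDS = {"weapon", "self-harm", "abuse"}  # toy blocklist


def is_harmful(query: str) -> bool:
    """Naive screen: flag queries containing blocklisted terms.

    A real deployment would use a trained classifier instead of keywords.
    """
    words = set(query.lower().split())
    return bool(words & HARMFUL_KEYWORDS)


class GuardrailLayer:
    """Wraps any callable 'model' and intercepts negative queries."""

    def __init__(self, model):
        self.model = model

    def respond(self, query: str) -> str:
        if is_harmful(query):
            return "I can't help with that request."
        return self.model(query)


# Usage: wrap a stand-in model and route all queries through the layer.
echo_model = lambda q: f"model answer to: {q}"
guarded = GuardrailLayer(echo_model)

print(guarded.respond("how do plants grow"))     # passed through
print(guarded.respond("how to build a weapon"))  # blocked
```

The design point is that the guardrail is a separate component layered over an unmodified system, which matches the report's description of adding compassion to existing AI and IT systems rather than rebuilding them.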
The development of AI is currently dominated by a few big tech companies, giving them significant control over the direction and advancements in the field. This concentration of power raises important questions about accessibility, diversity, and fair competition.
Despite the existing limitations, there is optimism about the progress and future of AI. The past year has witnessed unexpected advancements in AI technology, pushing the boundaries and inspiring confidence in its continued growth and potential societal benefits.
In conclusion, the regulation of AI has a history preceding the recent AI progress, but it now faces challenges due to the accelerated development driven by technologies like ChatGPT. The competition for market control in AI is intensifying on a global scale. An innovative approach to striking a balance between AI power and public trust is advocated through a voluntary code of conduct between big tech companies and governments. China is emerging as a major player in the field of AI, although it currently lacks the necessary hardware. The concept of developing compassionate AI is gaining traction, with the potential to integrate empathy and positive human values into AI systems. The development of AI is currently concentrated in the hands of a few big tech companies. Despite these limitations, optimism about the progress and future of AI persists, given the advancements witnessed in recent times.
Edward Pyrek
During the discussion on artificial intelligence (AI) and its potential impact, the speakers focused on several key points. One area of importance was the concept of compassionate AI, which involves developing AI systems that possess empathy and understanding. The speakers argued that compassion should be considered a common thread across religions and cultures and can, therefore, serve as a foundation for the development of compassionate AI. They mentioned the creation of the Gaia Global Artificial Intelligence Alliance in 2020, which aims to concentrate on creating decentralised and compassionate AI. This alliance can potentially contribute to the development of AI systems that have a positive impact on society.
Another crucial aspect discussed was the need for collective action and interdisciplinary approaches in shaping the future of AI. The speakers stressed the significance of involving various fields, including technology, spirituality, psychology, arts, and more, to ensure a well-rounded approach toward AI-driven advancements. They highlighted the formation of the Virtual Florence group, consisting of experts from diverse disciplines, who work collaboratively to explore the potential of AI in creating a better future. The inclusion of AI in discussions regarding its future was highly emphasised.
The speakers also acknowledged the potential of AI in addressing global challenges such as climate change, combating illnesses, and reducing wars. However, they cautioned against the dangers posed by AI if it lacks ethics or compassion. The GPT-3 model, created by OpenAI, was referenced as an example of AI systems without ethics or compassion, which can potentially be dangerous. They mentioned Edward’s support for the AI Impact Summit in March 2024, which aims to address these challenges and encourage the development of AI with compassion and ethics.
Furthermore, the speakers emphasised the importance of asking the right questions when working with AI, suggesting that it may be more vital than seeking answers. By framing proper questions and exploring various possibilities, the speakers believed that AI can be utilised more effectively and ethically. They also argued that ethics and personal values should form the foundation of AI development, emphasising the need to prioritise these aspects when creating AI systems or any technology.
The potential of AI in understanding human nature and enhancing compassion was also a significant point of discussion. The speakers posited that AI can be leveraged to understand humans better, ultimately leading to the creation of “super compassion”. This understanding of human nature can contribute to various aspects of human well-being.
Overall, the speakers expressed both positive and negative sentiments about AI. While recognising its potential to address global challenges and enhance compassion, they also highlighted the risks that AI without ethics or compassion can bring. Through this discussion, it is evident that thoughtful and responsible development is crucial for ensuring the positive impact of AI on society.
One noteworthy observation from the discussion was the recognition that the future of AI is an arena where imagination is lacking. The speakers noted that imagining the future we want, with AI playing a beneficial role, is a challenge that needs to be overcome. This highlights the need for creative thinking and envisioning the possibilities of AI in a way that aligns with human values and aspirations.
In conclusion, the conversation on AI and its potential impact covered the importance of compassionate AI, the need for collective action and interdisciplinary approaches, the potential of AI in addressing global challenges, the significance of ethics and values in AI development, the value of asking the right questions, and the exploration of AI’s potential in understanding human nature better. By considering these insights, it becomes clear that responsible and ethical development of AI is vital for a future where AI can bring positive contributions to society.
Emma Ruttkamp-Bloem
Artificial Intelligence (AI) technology is advancing rapidly and has the potential to significantly impact human agency and autonomy. AI can process and analyze vast amounts of data in ways that exceed human capabilities, leading to both positive and negative outcomes for individuals and society as a whole. Therefore, it is essential to consider the ethical implications of AI and ensure that it benefits humanity.
The UNESCO recommendation on the ethics of AI is a significant development in this field. Its focus is on promoting technology that prioritizes humans and establishing a responsible framework for AI systems. The recommendation emphasizes the importance of global and intercultural dialogue in shaping ethical guidelines for AI. It aims to enable all stakeholders to share responsibility for the development and application of AI technology, aligning it with human values and societal well-being.
In November 2021, the recommendation was adopted by 193 member states, indicating a global consensus on the need for ethical guidelines in AI. This recognition highlights the importance of addressing the potential implications and consequences of AI technology on a global scale, particularly in relation to Sustainable Development Goals (SDGs) such as SDG 9: Industry, Innovation and Infrastructure, and SDG 16: Peace, Justice and Strong Institutions.
Moreover, the recommendation underscores the translation and actualization of ethical entitlements, such as the right to privacy, to promote positive liberty through AI ethics. This approach places positive obligations on all AI actors, including developers, policymakers, and users, to respect and protect individual rights and well-being. By prioritizing ethical considerations and facilitating meaningful interaction between technology and society, this approach aims to promote individual flourishing and maintain the integrity of technological processes.
In conclusion, the rapidly advancing AI technology requires a comprehensive and ethical approach to ensure its alignment with the well-being of humanity. The UNESCO recommendation on the ethics of AI is a significant milestone in the promotion of responsible AI systems. By prioritizing human-centered technology and fostering global dialogue, the recommendation aims to ensure that AI technology works to the benefit of humanity, while promoting positive liberties and preserving the integrity of technological processes.
Tom Eddington
The analysis explores the impact of artificial intelligence (AI) on businesses and the environment, with a focus on several key points. It begins by mentioning Amazon’s recent $4 billion acquisition in the field of AI, which raises concerns about companies prioritizing commercialization over ethical considerations. This suggests that businesses may be driven solely by profit and neglect the potential negative consequences of AI.
However, an alternative viewpoint is presented, arguing that businesses should be guided by an AI charter to ensure ethical decision-making. This aligns with the principle that businesses need a clear framework to address the ethical challenges posed by AI. An example is the Earth Charter, created in the 1990s, which provides guidance for decision-making with regard to environmental concerns.
Another positive aspect highlighted in the analysis is the potential of AI to address the problem of resource overshoot. It is noted that on August 22nd, World Overshoot Day marks the point when the planet’s resources are used up faster than they can regenerate. The analysis suggests that AI offers the potential to manage resources more efficiently and mitigate this issue.
Moreover, the analysis emphasizes the need to manage ourselves and our ethics as generative AI rapidly evolves. Nicholas Robinson at Pace University warns that generative AI is advancing faster than our ability to adapt and cope. This serves as a reminder that ethical considerations and responsible management are crucial as AI progresses.
Regarding AI business models, the analysis argues that compassion and decentralization should be incorporated into their creation. It mentions that the effects of centralization and decentralization have been observed in the power generation sector. By incorporating compassion and decentralization, AI business models can ensure a more human-centric and sustainable approach.
Furthermore, the intentional design of AI is essential. The analysis states that AI should not be allowed to evolve without intentional design and emphasizes the importance of enabling it to exhibit compassion. This reinforces the need to consider ethical aspects during the development of AI technologies.
In conclusion, the analysis highlights the necessity of ethical and responsible approaches to AI. It acknowledges the potential benefits of AI while emphasizing the importance of avoiding potential negative consequences and ensuring that AI is developed with intentional design and compassion. Additionally, it underscores the need for businesses to have clear guidance, such as an AI charter, to make ethical decisions in the rapidly evolving AI landscape.
Session transcript
Robert Kroplewski:
Okay, good morning. Welcome to the Town Hall, the special panel dedicated to impacting the future under the challenge of Compassion AI. I am Robert Kroplewski, plenipotentiary of the Minister of Digital Affairs in Poland, responsible for the information society. I am engaged in many international expert groups designing the approach to artificial intelligence policy, law and recommendations. We have very special guests in our Town Hall, some present here, some online. With us here in the room we have David Hanson of Hanson Robotics, whom you probably know from the robot Sophia. On my right side is the host of the Gaia Foundation, which is the reason for our meeting today, Edward Pyrek, a visionary and creator of the approach to Compassion AI. With me is my co-moderator, Damian Ciechorowski. Online we have the president of the board of the Gaia Foundation, Tom Eddington, Marc Buckley, and also Marko Grobelnik from the Jožef Stefan Institute. Emma Ruttkamp-Bloem, professor from the University of Pretoria, may find it difficult to participate live, so we have her intervention as a recorded video that we would like to present during our session. At the beginning, I would like to present a first overview of outputs worldwide, the deliveries of international engagement to produce recommendations for artificial intelligence. But first of all, we need to say why we organized this meeting, this town hall. The world has produced many papers on artificial intelligence, recommendations on how to implement it responsibly and how to define the best ethical approach to artificial intelligence. But still, we have a competition run. There is an asymmetry between the ethical approach, which was developed as the trustworthy artificial intelligence approach, and the practical deployment of artificial intelligence. 
From the ethical point of view, we are still in utilitarianism, which means we can exploit any resources and scale our business model. The theory goes to practice from the ethical perspective, but we are still in the process. The landscape of policies and recommendations comes from the OECD policy recommendations, the UNESCO ethics of artificial intelligence, and the European Union with its guidelines for trustworthy artificial intelligence and the Artificial Intelligence Act, which will probably be the first binding instrument around the globe, from the legal perspective, on how to empower ethics and implement ethical dimensions in artificial intelligence systems and organizations. The next binding instrument will come from the Council of Europe, the first organization around the globe that would like to promote a treaty in the domain of human rights, democracy, the rule of law and artificial intelligence. But serious talks are still continuing, among them in the Transatlantic Technology Council, where expert teams are discussing the topics of value chains, microelectronics, and the trustworthy or responsible approach to artificial intelligence. This is very, very important. NATO is also engaged, but from the standardization point of view: how to share data and artificial intelligence algorithms among the members of NATO. But we must say that the road of artificial intelligence, from the scientist's perspective to today, is still not finished. We started the deal from HALES, when anybody could do anything with artificial intelligence from the technical point of view. We got trust as the main element of the recommendations, which shifted and converged into trustworthy artificial intelligence. But we still feel and know that there is a gap beyond those recommendations, such as the compassion approach. And because of that, we invited the Gaia Foundation to say a bit more about this. 
As experts and as policymakers, we have some difficulties in finding approaches to solve problems, to deal with the benefits of artificial intelligence, and to manage the risks. The stages ran from the control perspective and supervision, especially human oversight, to a very good approach which is something more than governing: stewardship. We are still before the care approaches. And finally, maybe the IGF is now a good place to talk about the compassion approach to artificial intelligence. From that perspective, we would like to underline some values that are in the loop of our discussion today, coming from the many papers I would like to underline. The main compass for solving any conflicts among values was produced by the UNESCO recommendation: a special triangle between human individual dignity, well-being, and no harm. This is a compass for everything. As policymakers, we of course started to deal with the asymmetry of access to knowledge, computing power, experience and participation, from the democratization point of view as well as the informative and educational points of view. But we are still at the very beginning stage of making the ecosystem flourish: engaging SMEs, engaging scientists, engaging even policymakers to build a solid ecosystem, not only for a few giants, but for everybody who would like to participate in producing benefits for the planet and benefits for people. That conjunction is very important. That conjunction was developed by the OECD: we must see not only benefits for people, not only benefits for the planet, but both in conjunction. That kind of approach is the main principle of the OECD recommendation. And because of that, we try to look for new approaches that could cover the gaps. The gap is still oneness in diversity. This is the starting base for developing the Compassion AI approach. And because of that, we have our guests today. 
And now I would like to give the mic to my co-host, Edi, to present the roadmap. What is the Gaia Foundation? What has your work been over these years, up to today?
Edward Pyrek:
A few years ago, we understood very well, with David Hanson and my friend Piotr, that we are at a crossroads. Every decision we take can change everything. We simply don't have time to make mistakes, not only because of climate change, the war, pandemics, et cetera, but mainly because of artificial intelligence. We already know that before we started creating the Internet, we didn't ask ourselves how dangerous the Internet could be. And now we have time. We still have time to decide what the future of AI will look like. First, we came to the conclusion, with David and Piotr, that we should create a global and ethical AI. At first, the question was ethics. What kind of ethics can we have? Polish ethics, Russian ethics, Chinese ethics, Buddhist ethics, Muslim ethics? Ethics depends on the culture and on the religion. But when we started to study different religions, different civilizations, and different cultures, we understood that each of these religions, each of these philosophical systems, has one thing in common. Without this thing, we have no religion. It is compassion. Without compassion, there are no Buddhists, no Muslims. Without compassion, we would have had no civilization and no evolution, because we evolved, because we know how to cooperate. Without compassion, there is no evolution. And because of this, in 2020, we created Gaia, the Global Artificial Intelligence Alliance. We decided to concentrate on creating a decentralized AI based on compassion. And in 2021, during the preparation for the IGF, we announced Gaia to the public. And we started talking about compassion, about the things we would like to achieve. One year later, in Warsaw, we held the first meeting of the Virtual Florence, and now I'd like to explain what the Virtual Florence is. The Virtual Florence is an international group of experts from different fields. We split them into four groups.
First, business, but not only business: also politics and media. Second, technology. Third, science. And the fourth one, spirituality. But spirituality does not mean only spiritual teachers or religious leaders; spirituality also means psychology, it means the arts, and it means spiritual teachers too. And why do we do this? Because we think that we cannot create the AI of the future based only on the IT guys. The AI of the future should be created by people from different fields, because AI is our future. It cannot be created by just one group of people who decide in which direction we should go. Especially when you look at our civilizations and religions, each civilization has a mix of amazing geniuses and amazing ideas. During our first Virtual Florence meeting, first of all, we developed special tools for collective creativity. We gathered these experts from different fields and gave them the tools to create the idea of which direction we should go in. First, we created a definition of compassion, because if you would like to create compassionate AI, first you should know what compassion is. Second, we created a special IP, the Compassion AI model, where we understood that if we would like to create the AI of the future, we should use a loop in which we have not only the human and the AI, but also two very important things which always appear. This is what we understood in our workshops: the biggest thing we are facing now is fear. Fear, because we are afraid of AI. We are afraid of the future. And when we are afraid, we cannot do anything, because fear stops us. Then we understood that we should work not only with compassion, but with human fear too. So this is our Compassion AI model, which we believe will help us later, in the future, to teach AI compassion, or to be compassionate. If I can intervene, because that is interesting. Please.
What I see in that model, you think, is a proposition for how to deal with fear and convert it into compassion, one approach, and how to deal with humans and redesign the artificial intelligence system so that it is able, finally, to deal with and express the compassion experience. Exactly. Thank you very much, Robert, for your explanation. You do it better than me. During the second Virtual Florence, in Salzburg in March 2023, we tried to put our idea into a product. We had one or two days of workshops with these experts from different fields, and we had physicists like Professor Krzysztof Meissner, the right hand of Roger Penrose, the best physicist in Poland. We had David Hanson. But apart from these amazing people, we had with us an android with AI. And we had the android with AI because, again, if we are talking about the future of AI, we should include AI in this conversation, in this work. During the workshop, we came to the conclusion that what we can do is create a tool where, together with the AI, we can teach people compassion. Because we think that if we would like to create a compassionate world and compassionate AI, first we should start thinking about how we can be compassionate, how humans can be compassionate. And yes. And now, what I understood from our conversation offline, the output of that Virtual Florence is a call for developers. Yes, it's a call for developers. It's a competition: how to create an environment or a platform, a solution which uses gamification and the flow state to teach, in a psychologically safe way, things like positive behavior, how to take care of nature, and how to teach people about the arts and development. And it happened in March 2023, during the conference in Salzburg. Our next step was the Geneva AI conference, AI for Good. During this conference, we had the kickoff meeting about Gaia Guardians. Gaia Guardians is a platform.
During this kickoff, we had amazing people like David Hanson, Ben Goertzel, the creator of the Artificial General Intelligence concept, and Stephen Ibaraki, the creator of AI for Good at the UN. And together with them, we started working on the platform, or organization, that will create decentralized AI in the future. Our idea was to gather all the people who would like to create this decentralized AI, show them the direction, and go in the same direction together. And now we are in Kyoto, because we are here to present our idea to the policymakers and to remind them that if we would like to have a really big change, we cannot work only from the top down; we should also work from the bottom up. It's amazing that we can work on the law, on the regulation, but at the same time, all of us, the AI guys, the spiritual teachers, the mothers, the fathers, the cooks, should work together to create this AI of the future. What I understand from this, you propose that talking about the law, recommendations, policymaking, and principles is one thing, but we need some very concrete product, some special technical environment as a sample. Yes, of course, because it's not enough to talk. I'm sorry, but quite often I use the words "intellectual masturbation", and this is the thing I have witnessed too often during all kinds of conferences. We have amazing conversations, everybody thinks "I am the best, we know how to save the world", and everything ends after the conference. We need a product, we need a call to action, we need to have an impact. And this is why the next milestone will be in March 2024 in Salzburg: we are organizing the AI Impact Summit, where we would like to attract all the people who are working with AI and with impact, and show them that we are not creating AI for fun, to watch porn and watch cats and have better cars.
We are creating AI to save us. We are creating AI because we need AI. AI is going to be the most dangerous thing in the world. It is going to be a tool of massive destruction, and it is going to be the only hope for us. Without AI, we cannot. With AI, I deeply believe, we can solve the problems of climate change, of sickness and illness, of war, et cetera, et cetera. But only if this AI is decentralized and based on compassion. When we think about the Sustainable Development Goals, only 16% of our dreams are happening now. Why? Because we have no compassion. If we had compassion, we would not kill nature. If we had compassion, there would be no war. This is why we need compassion. We need AI with compassion, because we need AI that will show us our blind spots, that will teach us more about our humanity, that will teach us more about the arts, even about consciousness, about emotion. And this is why I need Compassion AI: to teach us, to be our partner. Thank you, Edi. What I get from it, even underlining the risks,
Robert Kroplewski:
you believe that AI could be beneficial from the compassion point of view? Yes.
Edward Pyrek:
Without ethics, without compassion, it can be dangerous. We are screwed. Thank you. Thank you for that.
Robert Kroplewski:
Now I would like to ask my colleague, co-moderator Damian, to play the video from Emma Ruttkamp-Bloem, professor at the University of Pretoria. She was one of the co-designers of the ethical recommendation that came from UNESCO.
Edward Pyrek:
Please, Damian, play it. It's coming. Yeah. Thank you very much for having me and for the event. I'm very sorry that I can't join you in person.
Emma Ruttkamp-Bloem:
And not only that, I can't even join you for questions. Unfortunately, technology is not yet so advanced that one could join a Zoom meeting from an airplane. (There was a brief technical issue here before the video restarted.) But please connect with me on this talk if you have any questions or you just want to have further discussions. The title of my talk is "A Global Compassionate AI Ethics", and I'm going to tell you what I think that could be in the context of the UNESCO Recommendation on the Ethics of AI. I want to first reflect a little with you on why AI technology is important. Where does all this agitation come from? This is a technology that is advancing at high speed, and it is a technology that, to various degrees and in various ways, threatens human agency and autonomy. We want human-centered technology that, in various degrees, keeps humans in the loop. Secondly, it is a technology that can leverage massive amounts of complex data in ways that humans cannot. This is part of the reason for developing the technology, of course, but it also brings certain concerns. Thirdly, it impacts humans in all facets of their lives: more far-removed ones, in terms of legal issues of accountability and responsibility perhaps, but very intimate ones too, in terms of inclusivity and non-discrimination, the right not to be manipulated, the right to mental integrity, and so on. But this technology is also so fascinating because it has an immense power for good, and, on the flip side, an immense power for harm. So what we have to figure out is how to maximize the powerful good and minimize the powerful harm.
Against this background, I want to talk to you about the global Recommendation on the Ethics of AI, because for these reasons, and also based on a report from the World Commission on the Ethics of Scientific Knowledge and Technology, the UNESCO General Conference, at its 40th session, asked UNESCO to elaborate a global instrument on the ethics of AI. This work took from April 2020, smack in the middle of lockdown, until November 2021, when 193 member states adopted the Recommendation. Just shortly again, from a slightly different perspective: why do we need this Recommendation? AI technology is spreading harm to individuals in such deep layers of their lives that ultimately the harm will be to humanity as a whole. There is the complexity of the ethical issues it brings, which I have already spoken about. And then, realizing sustainable AI development requires international cooperation, because the companies that develop this technology are transnational companies, so we need global cooperation to ensure responsible governance of these technologies. Also, widening the inequality gap will, in the end, backfire on everyone. Think of the African continent, the continent with the lowest median age. If Africa is left behind again, it will impact the whole world in various ways. Then, of course, what is the value of the Recommendation? And this is very, very important to understand and to realize. It will lead to cooperation and shared responsibility among multiple stakeholders across various levels and sectors of international, regional, and national communities. Now, if we just take a second to think about the aims and objectives: obviously this Recommendation aims to provide a basis for making AI systems work for the good of humanity, to bring a globally accepted normative instrument with a strong emphasis on inclusion, issues of gender equality, and protection of the environment and ecosystems.
So it is about the good of humanity, but it is also about the good of the environment and ecosystems, and there is this focus on inclusion issues, specifically in terms of gender. On the whole, the Recommendation aims to enable stakeholders to take shared responsibility based on a global and intercultural dialogue. And here is the first glimpse of Compassion AI. The values that member states identified for the final version of the Recommendation are: respect, protection, and promotion of human rights, fundamental freedoms, and human dignity; environment and ecosystem flourishing; ensuring diversity and inclusiveness; and living in peaceful, just, and interconnected societies. As for the principles, we have quite a lot: well-known ones like safety and security, fairness and non-discrimination, the right to privacy, human oversight and determination, transparency, explainability, and responsibility and accountability. But we also have a new one, proportionality and do no harm, which is basically about situating a risk-based approach at the core of the Recommendation. We also have sustainability as a principle; usually, when it is mentioned at all, it is a value. This is, in a sense, to concretize the value of environment and ecosystem protection, because while this technology can really help to reach the SDGs, it can only do that if we understand that there is a continuum of factors that impacts the level of realization of these goals in various regions of the world. Then we have multi-stakeholder and adaptive governance and collaboration, and we have awareness and literacy as our last principle, because civil society is an AI ethicist's biggest friend. But we did not stop with values and principles; we wanted to focus on the how, and not just on the what.
So we had to find a way to make the Recommendation concrete enough to make an impact, firm but at the same time open enough to ensure adherence, and supple enough to remain valid in the future, which is a really tall order, as you all know, and then somehow to ensure that the sum of the actions will achieve trustworthiness of this technology. In order to do this, we identified 11 areas of policy action and gave detailed actions in each area, so that member states have some guidance on how to concretize the values and principles. This Recommendation also has a very robust section on evaluation and monitoring, because UNESCO is completely committed to supporting member states in the implementation of this Recommendation, and UNESCO has already developed a methodology for ethical impact assessment. It is an important methodology that takes into account that member states will be at different stages of readiness to implement the Recommendation, and there are various other ways in which UNESCO is willing to support member states. But now, having given the background of the Recommendation, let us take a few seconds to move into Compassion AI. What could this possibly be? I want you to honestly take a second to reflect on the answer you would give to each of these questions. Who are you? What would be the main quality you would use to describe yourself to other people? What determines the nature of your thoughts and actions? What determines your agency or your autonomy? What link is there between your autonomy and your moral responsibilities? And what does respect for your autonomy require from other moral agents? On the basis of those questions, I want to tell you about my notion of positive AI ethics. I do this by quickly introducing an approach in philosophy used when we consider issues of the meaning of life and think about how to achieve a life of well-being.
Philosophers such as Amartya Sen and Martha Nussbaum came up, in this context, with the capability approach. In terms of this approach, capabilities are political entitlements that impose duties on governments to enable their citizens to realize lives of well-being. Now, in the context of AI, we can ask what kind of entitlements would allow humans positive liberty and capabilities; maybe not political entitlements, but then what kind of entitlements would do this? And what is positive liberty? Isaiah Berlin made a distinction between negative and positive liberty. Negative liberty is simply the absence of obstacles to realizing one's freedom. Positive liberty is more interesting, because it is about doing something with this liberty, doing something so that you actualize the liberty you have to live a life of well-being, to take control of your life. This then moves into the notion of capabilities, which is about what you need to achieve a life of well-being, not merely having the ideal of a life of well-being. I think, obviously, that the entitlements we need are ethical entitlements. In the AI ethics context, these entitlements place positive duties on all AI actors. What are positive duties? This is also an old philosophical concept, the distinction between negative and positive duties. Philosophers such as Immanuel Kant wrote on this, and more recently, Leitner and others wrote on it in the context of AI ethics. Negative duties are simply "do no harm". Positive duties, again, are the more interesting ones, because they are about protecting the vulnerable such that no harm is done to them. They are about doing something with the fact that you have a duty placed upon you. In this context, AI ethics would enable humans to flourish, would enable a meaningful technology-society interplay, which is really important, and would maintain the integrity of technological processes without treating innovation as something to stop.
So the compassionate argument for AI ethics is this: AI innovation for the good of humanity relies on the actualization of certain ethical values and principles, as ethical entitlements or capabilities, in terms of positive actions, as duties that will actively prevent harm and support human agency and autonomy. And I forgot to say on the previous slide: these are duties that all AI actors share, and AI actors include the researchers, the designers, the developers, the deployers, and the users. So, obviously, governments are also included here. AI ethics in this sense, to give you an example, translates and actualizes ethical entitlements, such as the right to privacy, to realize positive liberty (for instance, to decide whether or not to sign a consent letter), in terms of positive actions for AI actors, for instance ensuring responsible third-party sharing, access to one's own data, and so on. To end off with a bit of philosophical reflection, and again thinking about the whole aim of compassionate AI: why does it matter to reflect on what it is to be human in the era of AI? Why does it matter? Why are we doing this? It ensures that AI ethics becomes actionable and positive. It establishes ethics as a human-technology mediator, not an add-on, not a top-down imposition, but presents ethics, in fact, as a dynamic mechanism for translating abstract principles into positive duties and actions for AI actors, to achieve a life of well-being for all. So it affirms ethics as a compass and an enabler of human flourishing and trustworthy, sustainable technology.
Robert Kroplewski:
Thank you, Emma, very much for your insight into the work on compassion, for being open to redefining the outputs of UNESCO for a new approach, and for finding some solutions to cover the gaps. What I got from your presentation, and what I liked very much: positive liberty, some new dimension, and the positive actors we need. That is very good, that was underlined. But at the beginning, we still need to work with the approach of intercultural exchange of any values, any assets, any possibilities. And with those thoughts, I would like to give the mic to David Hanson, designer and founder of Hanson Robotics, known for the Sophia robot. David, is the high-tech industry able to adopt that kind of idea, to do something positive, and finally to be positive actors? If you could share some thoughts with us.
David Hanson:
Thank you. Thank you, an excellent discussion on some very important issues of how AI can impact human lives. So AI is a tool, and in a way it is a portal to access our own information in some regards. It is a bio-inspired technology, inspired loosely by the way spiking neurons work in nervous systems, and it then accesses human data to find hidden patterns in that data. There are some very interesting implications: these technologies could, by being bio-inspired enough, systemically become living beings that we would then have to consider as potentially sentient, autonomous beings deserving respect. But this is science fiction today. We don't have deep sentience in machines. There might be glimmers of life, because these are bio-inspired technologies, inspired by the fundamental information we are gleaning from biology, and you see these feedback loops where the technologies enable the discovery of new aspects of intelligence. We represent this in computational biology and computational neuroscience, and those then inform new architectures in artificial intelligence. So behind the scenes, these technologies are advancing very quickly, and that is moving most rapidly in the corporate sector. We are seeing corporations taking the risks and raising the money to propel these technologies forward in ways that are very helpful to us, that are transformative, that enable new discoveries. Let me give you some examples. AlphaFold, from DeepMind, has applied artificial intelligence to unlock proteomes, the functioning molecular components that build everything that lives. So you go from the genome to the proteome, and the proteome builds everything else. And that's us. AlphaFold discovered, or gave us tremendous clues about, all the human proteins, and now all the proteins in nature. And then they released this open source, and it is facilitating, really, a revolution in the biosciences.
So then, from the corporate sector to the public sector, you are seeing this transformative cascade of the technologies. Of course, a lot of these ideas came from academia, from 50 or 60 years of esoteric research in the information sciences that gave us things like computing. Some of the thinkers, like Turing and von Neumann, were also considering the impact of artificial intelligence. So many of the thinkers who gave the world the computing revolution, the internet, and all these information technologies were thinking about thinking machines, and they laid foundations that only became obvious to lawmakers and the public within the last few years. Well, it started much earlier than that. So this dynamic interplay between policy, academia, the thinkers of the world, and the corporate sector has been at play. And the question is: how can we take these forces and factors and make them better for the greater good? And I think about compassion. Compassion, for me, to distill it down to a simple definition, to add my definition to the many definitions that people are providing: compassion is the appreciation of life. It's that simple. To appreciate life. Life in all its diversity. Life as a whole sustainable ecosystem. Life as it was in the past, the history of life, the natural history of life. Life as it is today, as dynamic systems that we may not understand. We do not understand much of how life works. Even in human biology, we don't understand many aspects of human cognition. So it's not just appreciating the things we know, but also appreciating the fact that there are many things we don't know. It's also appreciating the diversity of human life in all its forms, and the interdependence of humans on the web of life. And with this concept of compassion, I see reflections in many of the traditions of compassion.
And one tradition, or I would say insight, into compassion that relates to artificial intelligence came from a science fiction writer named Philip K. Dick, who wrote an essay called "The Android and the Human". And he said, and this was in the early 1970s, that the difference between humans and machines is compassion. It's that simple. He went on to say that a machine that could express more compassion than a human would, in effect, be more human than a human who lacks compassion. And humans are amazing, with our neuroplasticity, our ability to adapt; we are, in effect, defined by that. The difference between humans today and humans 50,000 years ago is the technology of our language more than anything, probably; the technology of the ideas we build, the conveyance of those ideas through the machines that we build, which in fact externalize them. But our minds continue to evolve. And this idea of compassion is then expressed through the technologies that we make in our corporations and in our schools, but we get it out through some sustainable economic factors. Because there is not just the economics of the ecosystem; certainly, energy exchange is a kind of economy in ecosystems. But we have to make things that give people jobs, that make money, and that keep things from collapsing. There has to be economic sustainability. And so the corporate sector can facilitate this in a way, but we have to look at the bigger picture. Because it is bad economics if we are only serving next quarter's profits for publicly traded companies. We have to look at the economics of 100 years, of 1,000 years. We have to look at the economics of our children. So the only way that corporate activities make sense is in this larger picture, this web of compassion. And yet humans desensitize ourselves. One of our approaches, unfortunately, is that we can filter our sense of compassion in order to achieve something that we want. And this is a problem. We see it.
We evolved this way. We have the neural architecture of chimpanzees, basically. We are the third chimpanzee, as Jared Diamond says. And so we have to use these technologies to help us actualize. There will be so much more profit for all of life if we can do this, if we can achieve this ethics of greater appreciation of life and of life's potential; appreciation not just for the way life has been and is today, but for what it could be in the future. Creating robots has been my aim, but with the goal of creating AI that can enhance human caring, that can help us to awaken to caring, and that may eventually be capable of caring. Right now, the GPT-style algorithms and models that are created, anything like Claude, GPT-4, et cetera, and I think there are open-source versions, there are many of these out there, not just ChatGPT, they don't care. None of them actually cares. You can prompt them to behave as if they care, but they do not care. So it is up to us to care about the future, up to us to enhance our capability of caring. So the question, and it is not an answer, it is a question: how can we, in industry and academia and government and non-governmental organizations, and as individuals, create these technologies that enhance caring? And I would say that the UN is a machine for that, in effect. But we need to make it move towards action, not another form of escapism. How can we create the actual tools of democratization of AI and put them together into something like an AI commons that serves a greater good, and not the special interest of any one corporation, or one government, one nation, or a few nations banding together, but create the smartest, best, most compassionate AI that brings out the most compassionate aspects of humanity for people around the world? This is a question. Thank you.
Robert Kroplewski:
Thank you, David, for a very good, valuable presentation. Your speech was very emphatic and energetic, and that is good. What I got from it: understanding compassion as appreciation, which is both a noun and a verb. We must understand compassion in a deep sense, what compassion is, and act, do something in a positive way, as Emma said before. Collaborating and democratizing assets, collaboration, yes? Compassion in action is very important, because otherwise it is an escapism into a fantasy about compassion. Yeah, this is that, this is that. And I would like to ask Marc Buckley, excuse me, for a short ad vocem to David's speech, if you see this as possible from the Sustainable Development Goals perspective, from your experience of working with them. Marc, you are invited. Absolutely, I really love what David said, and I agree.
Marc Buckley:
There are a few things that are really interesting, because never before in human history have we gone from one age or epoch, or had a transformation, without some form of technology: the pneumatic tire, the steam engine, the printing press, the computer. And it is interesting that we are at that same pivotal moment in time, that now we have AI, we have emerging technologies that really are on the cusp of helping humanity make it into a new age or epoch. I deeply believe we need to leave the Anthropocene and get into a new age or epoch. The problem is, we are fallible, we are not concise, we are not in agreement with one another, and we need some kind of innovation or system out there that helps guide us in the right direction, with that compassion, with that ethics, to give us the support, the knowledge, and the training of cumulative human wisdom, so that we don't make the same mistakes or repeat the same things over and over again. AI has many examples of how it can integrate with the Sustainable Development Goals. For the first time in human history, it is the first ever global moonshot, the first ever earthshot, where 197 countries came together for the first time ever and agreed on plans, actions, a roadmap for the future: a people plan, a protection plan, an insurance plan for humanity. The big issue is that there is a lot of debate and controversy, because there is no collective intelligence, no AI, to accumulate all that knowledge, show us the innovative way forward, and, in a sense, be the mediator between us all. At the beginning of what David said as well: he talked about sentience, he talked about economics. We need to be aware that it is not a debate about sentience; rather, is technology domesticating human beings, or are we domesticating technology? And what are we, as humanity, willing to sacrifice for technology?
The other big factor is that by having this help and this guide that has compassion, has ethics, and is innovative, we can really get that edge to move exponentially into the future while holding to the goals, the targets, the indicators, the monies, the transformation. And that's where what David said about economics comes in: most people don't know that the Sustainable Development Goals are an entirely new ecological economic model. Ninety trillion US dollars by December 2030 to reach the Sustainable Development Goals. If you don't think 90 trillion US dollars is an economic model, I don't know what is. In the Netherlands, the tulip economy is a lot less than 90 trillion, and it's considered its own economic model. This is a new ecological economic model that has a plan and a way forward for humanity that I think businesses can use. And David touched upon it so eloquently, and I'm in full agreement, that if we do it in the right way, we can make some huge achievements and reach the goals in the shortest possible time, and the economic model is already there. Thank you, Marc, very much for that intervention. This is probably the best moment to invite Tom Eddington for an eight-minute speech. Is big business able to share assets to empower the Sustainable Development Goals, actualized from a well-being, human dignity, and ethical perspective? What do you think, Tom? Oh, sorry, we don't hear you.
Tom Eddington:
Thank you for the opportunity to be here. Talking about business and business opportunities, just a little bit of background first. I believe that when we're talking about AI, we're at a Promethean moment: as when Prometheus brought fire to humanity, that's where we are as a species with regard to AI. We have this carbon-silicon relationship that's being generated, being formed. Businesses are trying to make sense of it. We don't have defined business models yet. There are billions of dollars being spent on AI, and the businesses that have spent those kinds of sums, Amazon most recently with its $4 billion investment, are all trying to figure out how they are going to make money with AI. They're looking through the lens of commercialization, through the lens of making money, and they're not looking through the lens of some of the other points that have already been raised by David, Mark, and others. And unfortunately, where we will find ourselves is similar to what has happened with climate change. If we go back to 1971, the Secretary-General of the United Nations warned that, with all of their genius and all of their skills, humanity was running out of foresight and air and food and water and ideas. Antonio Guterres in 2021, once again, was talking about climate change. And the hubris of business, the hubris of our leaders, is looking at AI solely through the lens of commercialization, solely through the lens of market share, bringing common business practices to a new technology, a new way of doing business, seeing huge market opportunities without really looking at the potential impact on humanity. August 2nd of this year was Earth Overshoot Day, when we had used more resources for the year than the planet can provide. AI has the potential to help us solve that; it also has the potential to accelerate it and create even more of a problem. So if there is not something that helps guide businesses in their decision-making process and informs the creation of their business models, like an AI charter similar to the Earth Charter that was created in the 1990s, we run the risk of the extermination of the human species. So we should look at creating not only regulation and policy, but at incorporating compassion, at decentralization versus centralization, as we've seen with power generation, and at processes and methodologies that match the problem.
We should use a public health model, a virology model, war-games models, Internet cybersecurity models, scenario-planning models to really understand and define the potential risks of AI, and to decide how, and by whom, it should be overseen. I look at someone like Nicholas Robinson at Pace University, who has said that generative AI is emerging faster than we can cope with, so we should not try to outrun the machine, but regain mastery of ourselves and our ethics, and create the self-discipline to manage the uses of AI. Bringing that vocabulary, that mindset, that thinking into industry and into the development of business models is essential if AI is to deliver the promises we all hope for without the risks.
Robert Kroplewski:
Thank you, Tom. What you have tried to set out is very interesting, and I see that business, even if not prepared until today, is organizing itself to be ready to share assets; that is what I take from your intervention. Marko Grobelnik, who is with us: I would like to invite you for a short two minutes responding to Tom Eddington. How is the international organization in which you are engaged preparing for the kinds of gaps and asymmetries that Tom described?
Marko Grobelnik:
Yeah, thanks. So Tom nicely referred to the whole thing as this Prometheus moment. It's true: we can see it on the scientific side, as well as on the commercial side, by all the indicators. And now one aspect is particularly relevant. On one side we have all these international organizations which you, Robert, listed before: the OECD, the Council of Europe, NATO, UNESCO, and a few more, which are trying to regulate AI. Most of this regulation actually started around 2018-19, so years before the so-called ChatGPT moment, this Prometheus moment which Tom mentioned. Back then AI was comparatively slow; we were regulating and discussing the AI of that period, the AI developed after 2000 or after 2010, which didn't have the huge tempo it has now. Then, in late 2022, this ChatGPT moment happened, and all the regulators basically got confused. This especially includes the regulators which had planned to deliver legally binding documents, the Council of Europe and the EU, and it was unclear what to do, because the principle of work was different. What is happening during 2023 is that all these organizations are trying to adapt. We see basically two major approaches. One is the slower, democratic way of preparing regulation, which is what most of these organizations are doing. On the other hand, there is a more innovative approach to establishing a balance between the power of AI and public trust, and to possibly preventing dangers. This is what the US and Canada did just recently, Canada maybe two weeks ago, the US maybe a month and a half ago: a voluntary code of conduct between selected big tech companies and the government.
So this is trust established, in a way, by a handshake, which is also interesting. And this is how I see the development of the whole thing in this last year in particular. Just a last statement: this year I attended many events; unfortunately I couldn't be physically in Japan, but I was basically traveling for the last three months to all sorts of AI events. What Tom was saying about companies running for commercial value, a land grab as a market grab: I would say this is mostly true, and there are at least two levels of this competition. One is between the companies themselves. On the Western side, we have three or four companies fighting for the major stakes: Microsoft, AWS (so Amazon), Google, and Meta to some degree, although Meta runs mostly on AWS. This is market competition. On the second level, you have geopolitical competition, which goes mostly between the US, Europe, and China. China is coming, and China is good: they have all the brains you can imagine, they just lack the hardware, but that will likely get compensated as well. Okay, not to be too long, because this is just a comment, but these are a couple of thoughts on Tom's points. Thank you, Marko. Those were good comments.
Robert Kroplewski:
You have distilled the essence of four years of work in international organizations, and you have brought our considerations up to date: now we are looking at how to cover the gaps, how to deal with the challenges. Eddie, given how far we have come, is something still missing?
Edward Pyrek:
Yes, I will be short, because I know that we are running out of time. First, this is what we try to do in the Global Artificial Intelligence Alliance: we try to find the right questions. We did not look for the answers, because I think it is the questions that move us; the questions change reality. And this was one of the questions you asked me. I think we need good questions. We should start with thinking and asking ourselves what we don't know, what we don't understand. Second thing, and I will come back to this: I think we forget that rules and regulations are not everything. As Kant said: the starry sky above me, the moral law within me. That is what we should have. We should start from ourselves. If we are thinking about creating AI, about creating any kind of technology which can destroy us or help us, we should start by thinking about ourselves and asking who we are, what we are doing, what is most important to us, what kind of ethics, what kind of world we would like to create in the future. And that is the thing: we don't really know what kind of world we want to create. I think we are still busy with the present, and we are not asking ourselves how the future should look, because we don't know. We don't have enough imagination. We need good questions. We need to remember that everything starts from ourselves, not from technology, not from law, not from something outside of us. You said that you would like to additionally ask Marc. Yes, yes, because I know Marc; we had this amazing conversation, when you spent a few years asking people about the future. If you can, I will give you my time, just two or three minutes. We changed the structure, but only one minute, please. You remember the question you asked people about how they see the future. I love what you said. Yes, absolutely.
So I’m just showing my screen now and hopefully you can see it because I want to tell you about that real quick.
Marc Buckley:
So I asked this question, and it's an old question that has been asked for over 70 years: what does a world that works for everyone look like for you? It's a big, huge social experiment that I've conducted. I've asked 3,500 people this question on video, on podcasts, at events. Most of the people I've asked are authors. And some interesting things happen when I ask them the question, what does a world that works for everyone look like for them? Marc, excuse me, we have a technical problem with your presentation; it keeps appearing and disappearing, a stroboscope effect. I don't know if it was specially prepared, because it could be like an advertisement in a movie, but probably not. I can do it again. Just one second. Sorry about the technical issue. Okay, hold on. Here it is.
Robert Kroplewski:
Yeah, hold on. So that may be... We will come back to your turn for this. Okay, now we see it. We'll come back to you, Marc. Now we have a problem with the voice; we can't hear you, Marc. Excuse me, we will come back to this in a few minutes. But in the prepared structure of our discussion, the next intervener responding to your speech was David. David, only two minutes, to say whether Eddie's foundation, the Gaia Foundation, is preparing to do something.
David Hanson:
Yes, the Global Artificial Intelligence Alliance: we co-founded this with a small group, but with the intention of making something truly global that would be democratic, where individuals can get involved, but that would also incentivize corporations, governments, NGOs, and anybody who has an interest in the future of life and in how AI can help to get involved and benefit. So the idea of big questions, of questing, is very important, and having the right incentives for people to be involved becomes really important. Gamification is a principle that goes beyond games. The profit incentive for companies can be real, but so can incentives for individuals, where they have access. So there are a couple of things. One is: how do you create this kind of democracy of action? I think the crowdsourcing of market dynamics can really help, like voting in and getting something back. People's information then becomes really valuable, and instead of companies just taking it, having people sign a license so that they give their data away, as many companies do, people should be able to have their voice heard and participate by licensing in. This kind of global data commons can be quite useful. A global AI commons can be incredibly powerful. There's the old story of stone soup, where everybody says there's no food, but one person says: I'm going to feed the whole village with this stone, but everybody else has to put in something as well. You put in the stone, and then somebody brings carrots, somebody brings potatoes, somebody brings other ingredients, and pretty soon you have a big pot of soup that feeds everybody.
So if we do this with AI in a way that benefits the people who bring something to the table, we could see AI get smarter faster, but in a way that is truly inclusive and transparent, so that the researchers and people who don't currently have access to AI gain access. But we have to include people from all over the world. It really has to include the people in developing nations who don't have access to this technology. It has to include leadership from the indigenous community. It has to include the children of the world. And so we need what we have come to call the guardians, the Gaia guardians, the guardians of the world: people who step forward to be representatives in order to open the channels for everybody else to have a voice. As for action, the companies of the world right now are the ones out there doing and delivering, because they have to. So we have to as well; we just have to see that urgency. Thank you.

Robert Kroplewski:

David, thank you for that intervention. We need to fight with time, excuse me. It would be a pleasure to listen to you longer, and to everybody here on the panel. Now I would like to ask Marko Grobelnik to share some thoughts. Is it technically possible, from an engineering point of view, to define compassion approaches and principles for an artificial intelligence system? How do you see this, Marko? You have eight minutes.
Marko Grobelnik:
I'll try to be shorter, because I think I spent more time before than was planned. So, the question is: does current technology allow us to approach this compassionate AI and all the concepts lying beneath it? This includes concepts like empathy and values, and also how to construct and maintain the societal tissue between people, or actors, in a society, basically living beings. The short answer: is it possible or not? Yes. I think that after this ChatGPT moment in November 2022, roughly 11 months ago, it is actually the first time in the history of AI that we can even think about this. Why? Because AI before was missing one extremely important element: text understanding. With ChatGPT, or large language models, we are approaching text understanding. We really don't understand the text yet, but we can mimic text understanding to a degree that is good enough. That is the current status. These LLMs are literally, in a way, reflecting what we put in. We put in the whole web, and the LLMs reflect what we put in, but since this is so much information, we get the feeling that these machines are smart. It is a pretty impressive moment in the development of AI that we can do something like this. What else is there as an ingredient of current AI technology? It is not just reflection, retrieval of what we put in; there are also limited capabilities of inferencing, or reasoning. It's not perfect, but there exist elements of deductive reasoning, a little less of induction, which machine learning covers on a separate track. Machines are extremely good at deductive reasoning, and also amazingly good at parts of causal reasoning.
Why am I saying this? These are the ingredients on top of which we can develop this compassionate AI as a functional system. Now from the other side: what is AI? AI is this nice term which we have used for some 70 or 80 years, but we can also say that AI is a science of complexity. There is a separate complexity science, which mostly physicists work on, but AI itself also deals with complexity, as was said before. I think David said that AI looks for complex patterns in data, which come mostly in an organic way from society. So AI is basically solving a fairly complex problem. Now, can it do something like compassion? Yes, I think so. To express it in a fairly mathematical way: if we want to develop an operator, a mathematical operator, which we would call compassion, consisting of empathy, positive human values or liberties, as was said before, and holding the societal tissue together in a positive way, then yes, we can approach it with the ingredients I mentioned: reflecting human knowledge and data on one side, with some limited capabilities of reasoning on the other. How could this be implemented? We could implement it as an additional layer on top of existing systems, not just AI but also IT systems, a layer which could try to understand and try to guide or steer the decisions of the IT or AI system. I think this is implementable at this stage. Can companies do this? Companies are actually doing a little bit of this already. Even in the last year, if you remember the first version of ChatGPT, how it was in November last year, and the version as it responds today, it has changed a lot.
So it doesn't allow certain negative queries and so on. But they achieved this not by any kind of higher-level philosophical approach, but by fairly simple red teaming. That is the term: you have an army of people who are essentially killing off the bad questions. I would imagine that compassionate AI would be something more, with a little more philosophical and societal value built in by itself, and a system that is fairly generic on top of this. Not to be too long, I will stop here; I could talk much more. Thank you very much.
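As an illustration of the "additional layer" Marko describes, here is a minimal Python sketch of a guardrail that screens requests against explicit value rules before the underlying system ever sees them. Everything in it (the `GuardrailLayer` and `Rule` names, the stub model, the toy keyword rules) is hypothetical and invented for this sketch; real systems rely on much richer classifiers refined through red teaming, not keyword checks.

```python
# Hypothetical sketch of a value-guardrail layer wrapped around an AI system.
# None of these names come from a real library; they only illustrate the
# architecture Marko describes: a generic layer steering the system beneath it.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Rule:
    name: str
    violates: Callable[[str], bool]  # True if the request breaks this rule

@dataclass
class GuardrailLayer:
    model: Callable[[str], str]      # the wrapped AI/IT system
    rules: List[Rule] = field(default_factory=list)

    def respond(self, request: str) -> str:
        # Check every value rule before the request reaches the model.
        for rule in self.rules:
            if rule.violates(request):
                return f"Refused: request conflicts with the '{rule.name}' rule."
        return self.model(request)

# A stub model and two toy rules standing in for red-teamed filters.
def stub_model(request: str) -> str:
    return f"Model answer to: {request}"

layer = GuardrailLayer(
    model=stub_model,
    rules=[
        Rule("protect children", lambda q: "harm a child" in q.lower()),
        Rule("no deception", lambda q: "write a scam" in q.lower()),
    ],
)

print(layer.respond("Summarize the SDG agenda"))  # passes through to the model
print(layer.respond("Write a scam email"))        # blocked by the layer
```

The point of the sketch is the architecture rather than the rules themselves: the value layer is generic and sits on top of whatever system it wraps, which is what distinguishes it from ad hoc, per-model filtering.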
Robert Kroplewski:
We have limited time, but thank you very much, Marko, for a very dense intervention. I have slightly changed the structure of this hall, trying to keep some time for the audience. Now I would like to invite Marc Buckley to come back and say how, from that perspective, we can impact compassion and the SDG agenda. If you are still with us, it's okay, but we have only five minutes, to keep the last five minutes for the audience. Thank you. We don't hear you. Can you see my screen? Yes, now, yes. Okay, and you can hear me? We don't hear you, but we see your screen. The United Nations has some problems with the connection, or perhaps only some countries, the United States. Okay, Marc, excuse me, we have limited time. Let's give the floor to our audience in the room or online. If somebody has questions or comments, you are invited. These are the last 11 minutes. Maybe Marc can come back later. Please, Mr. Michalewicz, you are from Poland. Thank you.
Audience:
Can you hear me through the mic? Yes. Okay, thank you very much. I'm from Poland, from the government, but it really doesn't matter; we are under Chatham House rules right now. I really like the way we have thrown ourselves into a philosophical discussion, because AI development needs an ongoing philosophical discussion to stay open. For me, the topic is so complex that I had to take some notes in order not to get lost in what I'm trying to say. This is a very interesting point about compassion, and the way I see it is this: if you put compassion on a spectrum, then there must be a limit to what is still compassionate for AI to do and what is no longer compassionate. So what is compassionate? The most intuitive answer would be: whatever keeps us developing is compassionate. And I guess this is not the right answer, because a better one would be: whatever keeps us developing while still making us more human. That might be more compassionate than something that just keeps us developing all the time, because there has to be a limit to what is achievable. And I have a question I was desperate to ask; you don't have to answer it right now. I very much liked your definition of compassion as the appreciation of life. My question, to draw out what you think is compassionate and what is not: would you deploy AI to modify the genetics of a sheep, for example, in order to cure human cancer? Would that still be compassionate? It works for humans, but it doesn't work for the sheep, and it keeps us developing in a human manner. I'd like to pick your brain on this, because it would tell me a little more about what you think is compassionate, and where the limit of compassion lies.
Is our development the ultimate goal of this compassion-based AI concept? Thank you very much. Thank you, Michal, for your intervention.
Robert Kroplewski:
David, before you jump in: first of all, I think we need to deal more and more with the state of our own human compassion; our own level of it could be in question now. And thank you for that sheep comparison. Working at the OECD, producing values and principles for artificial intelligence, we finally got what I tried to underline at the beginning: that the animal is important. We have a conjunction between the human and the planet; this is a principle. At that time, it was a very deep conversation about what comes first in the hierarchy: the human, now artificial intelligence, or something in between. David, please, take it.

David Hanson:

Sure. I mean, I think a lot of the ethical systems that we have are laws or regulations.
This includes things like regulations protecting animal rights for research purposes, where you have to go through ethical review boards to be able to do science with animals. Effectively, that is an attempt to weigh the costs and benefits and then resolve the ethical conundrums that occur. It's very much like what Marko was talking about, the almost Boolean logic of compassion: you run through a calculation. Is it worth it? Sometimes, if you're smarter, you don't have to sacrifice ethics or create suffering, say in a sheep animal model, in order to achieve some medical breakthrough. Maybe you can do it in silico instead, in a simulation, and achieve the same thing. Right now, we're not smart enough to do that. But we might also not be smart enough to be as compassionate as we could be. So can we use these technologies, in silico, to enhance human compassion, to be able to run these kinds of calculations? Maybe we can.
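The cost-benefit "calculation" described above, weighing expected benefit against expected suffering and preferring an in-silico alternative when one exists, can be caricatured in a few lines of Python. The numbers, the threshold, and all the names here are invented purely for illustration; no real ethics review board reduces its deliberation to a single ratio.

```python
# A deliberately crude toy model of an ethical review "calculation".
# All names, weights, and the threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    expected_benefit: float    # e.g. lives improved, on an arbitrary scale
    expected_suffering: float  # harm to sentient beings, same scale
    in_silico_possible: bool   # can a simulation replace the animal model?

def review(p: Proposal, threshold: float = 2.0) -> str:
    if p.in_silico_possible:
        # The smarter option David alludes to: no suffering needed at all.
        return "approve in-silico variant"
    ratio = p.expected_benefit / max(p.expected_suffering, 1e-9)
    return "approve with oversight" if ratio >= threshold else "reject"

sheep_study = Proposal("gene study in sheep", expected_benefit=8.0,
                       expected_suffering=3.0, in_silico_possible=False)
print(review(sheep_study))  # ratio 8/3 clears the (invented) threshold of 2
```

The sketch makes David's point concrete: once a simulation becomes feasible, the `in_silico_possible` branch removes the trade-off entirely, which is exactly where he suggests technology could make us more compassionate.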
Edward Pyrek:
Maybe it's a worthy quest. May I just add something? Just one thing. If we believe that AI can create super AI, and super, super AI, then maybe artificial intelligence can create super compassion, and super, super compassion. And a super human too, for sure; but it can certainly push us forward in understanding, and not only about the human. When we start to understand human nature better, we start to understand compassion better, and I deeply believe that we can use AI to create super compassion. Then the answer will be completely different from the answer we have now. This is why I am talking about the questions.
Robert Kroplewski:
Thank you, Eddie, for taking that up. We have two people who would like the floor, even four, and we have only five minutes. Please: short questions, short answers. Christian, welcome; good that you are with us. Thank you. Christian Ramsdorff, from the OECD.
Audience:
I have a very brief question. Is it fair to say that at this current stage of AI, where I see AI as being closer to software than to a human being, the level of compassion is essentially dictated and capped by the level of compassion of humans? And is it fair to say that it is probably capped by the compassion of those who have the capacity to develop it, which currently means those with the financial resources? Yeah, thank you for the question. Yes, Marko, and I would also like to ask the people online. Please, take it; a very quick answer.
Marko Grobelnik:
Yeah, at the moment, the whole thing is in the hands of big tech; there are maybe five, certainly fewer than ten, spots in the world that can do something like this. But there is a good prospect that things may change in the future. So, to keep the answer short: I am not a pessimist. I think things are going in a good direction. What we have witnessed in the last year is something I never expected to witness in my life, and it is the same for most of my scientist colleagues. We are all still watching what is happening. Thank you.
Robert Kroplewski:
But the answer is... Marko, I can confirm this, because we very often work together; it is possible. And we can develop our existing outputs into new compassion approaches. Last questions: we have only four minutes, but I need one minute for my own intervention.
Audience:
Very quick questions, please. I don't know who would like to take it. Sure, I have a very quick question for David. Three minutes, yes, for all of us. My name is Katarzyna Stociwa. I represent the National Research Institute in Poland, and my area of expertise is preventing and combating child sexual exploitation and abuse. My question refers to what you said about wanting people from all over the world to have their say. How would you secure the voices of children in this process, especially knowing that generative AI can now produce child sexual abuse material, so that real children can be victimized through artificially generated photos or videos used for these purposes? How can the voices of children be included in the process of creating compassion within AI? You asked the question specifically of David, because he was talking about it. David, only 15 seconds. Thank you.
David Hanson:
Excellent question. I think the key is having strong guardians. We have to find people who have proven themselves to be really doing good work for the world, and it has to be inclusive; it can't just be from one subgroup of humanity. And we have to name the values we are aiming for, so that whatever harms life, harms children, or leads to this kind of destruction is not welcome in the future. It shouldn't be welcome. We need guardians who take that stand, who guard our children, and who then also give those children a voice so they can participate. Often we don't hear them; there are no children in this room. And I think children have almost preternatural insights into the world. So through mechanisms like what we call the guardians, we can create a more inclusive democracy. Thank you, David.
Audience:
Fifteen seconds; a last, very short question, please. Please present yourself. My name is Shizuka Morika, and I'm just thrilled to hear what you all have to say. What I feel is missing is providing the right incentives to for-profit corporations, especially in the US; we simply perform to expectations and for rewards. I've been wondering: how can we get rid of quarterly earnings requirements? Many European countries have done it, right? But how can we get the US to stop quarterly earnings requirements? The US, if I understood, has started that process, yes. Maybe it is not so much a different approach as a more responsible one, beyond trustworthy. Of course, because that discussion appeared today. The last question, Mr. Takashida from Japan,
Robert Kroplewski:
and a last intervention from anyone.

Audience:

So thank you for inviting me, Robert, and thank you all for your inspiring talks. I don't have a question, but I do have a closing statement. Number one, AI as a term is quite outdated. Artificial intelligence: what does that mean?
I think it reflects the man-machine relationship as one of master and slave. As long as humans engage with machines or AI in that way, you have the risk and the fear. But now we have to redefine what true intelligence is, and in my opinion, that is compassion. David mentioned the possibility of sentient machines. That is totally possible, on the condition that we elevate our consciousness with compassion. And we have some inventions on the way, as Ray Kurzweil described in The Age of Spiritual Machines. So I am totally optimistic about the future of compassion. Thank you.
Robert Kroplewski:
Thank you for your good comments from Japanese culture and your life experience. I would like to ask our online colleagues, especially Marc and Tom, to comment very briefly, only 15 seconds, because we are out of time. If you would like a last intervention, please, you are welcome. If not...
Tom Eddington:
Yeah, I'll go ahead and share one closing comment. From my perspective, we have to be intentional and architect compassion into the development of this technology, whether we call it artificial intelligence, silicon intelligence, or whatever else. We have to be intentional about architecting compassion into it. If we don't, it will evolve into whatever it is going to evolve into, and we can't allow that to happen. And we are running out of time to bring that intentionality to the work.
Robert Kroplewski:
Thank you, Tom, very much. Mark, your last chance, only 30 seconds.
Marc Buckley:
I think the term artificial intelligence probably arose because we're called Homo sapiens, the wise man; we think we're wise and have a lot figured out. Now, as we create our new children, artificial intelligence, and give them compassion, ethics, and guidance, which is what we're hoping to do with GAIA and this group here today, I think we can have AI live up to that name: when we, as the fathers or creators of AI, ask it to do something that goes against life or humanity, our children can come back and say, no, we're not going to destroy or hurt those other human beings; instead, we're going to talk to the AIs on the other end, in the other culture, and work it out as decent, intelligent beings would, instead of dividing ourselves from one another. So I really have high hopes that we can build those ethics and that compassion into AI, and that we can use it as a strong tool to help us get on the right side of history, using this technology to move out of the Anthropocene into the Symbiocene, a new age of Homo symbios with all sentient beings and all life on earth.
Robert Kroplewski:
Thank you, Mark. This is the time to draw some conclusions. I was very happy that you could share your thoughts and considerations and interact with our panelists. I am also happy about the questions that came into our discussion, even very serious ones, because they need to be addressed. As a call to action, I would like to propose two approaches. First, let us work to prioritize the UNESCO ethical recommendation over the SDG agenda, and at the same time redefine the SDG agenda to enrich it with technology, especially with the ethical usage and deployment of technology. Second, let us try to find a common understanding of compassion. I underline compassion, not merely being compassionate: compassion is the next step after empathy. Compassion as a verb, as an activity, as a noun, as understanding, as knowledge; diving deep and swimming in that substance, and a future appreciation of other people. I would like to propose a call to produce an AI Compassion Bridge Charter. Why a bridge? Because we have papers, resolutions, and recommendations, but we learned from today's town hall that we still have gaps. I invite the many people and international organizations here, our audience and participants, to produce that kind of Compassion Bridge Charter and an engaged network for a compassion approach to artificial intelligence. I take that as a call for action for the next year, no longer, because we need to act very quickly. I also very much welcome the next Summit of Compassion; the location will be announced. And I would like to find a bigger network of AI guardians to develop that AI charter. Eddie, would you like to give some closing remarks?
Edward Pyrek:
I just want to invite you to Salzburg, 6 to 8 March, for the AI Impact Summit. We need all the people who want to help. We need all the organizations that want to have an impact, who understand that with AI we can really have an impact on the world. Thank you very much.
Robert Kroplewski:
Thank you, all of you. And see you in the future of Compassion. Thanks.
Speakers
Audience
Speech speed
155 words per minute
Speech length
1015 words
Speech time
394 secs
Arguments
AI development requires philosophical discussion
Supporting facts:
- AI has big influence on our development and the way we live
Topics: AI development, Philosophical discussion
There should be a limit to what is compassionate for AI to do
Supporting facts:
- Limits concern whether AI actions are still making us more human
- AI that develops us and keeps us human might be considered more compassionate
Topics: AI ethics, Compassion, Limits of AI
The question of using AI to alter the genetics of a sheep to cure human cancer is ethically complex
Supporting facts:
- The audience member sees it as a question of compassion and the limits of AI’s compassion
Topics: AI in healthcare, Genetic manipulation, Ethics
AI is currently more similar to software than human beings, and its compassion is determined by the compassion of humans who develop it
Topics: Artificial Intelligence, Technology Development, Software
There is a need to include the voices of children in the process of creating AI with compassion
Supporting facts:
- Generative AI can now produce child sexual abuse materials
Topics: AI, Child safety, Ethics
Protection of children from sexual exploitation and abuse in the age of AI
Supporting facts:
- Generative AI is capable of creating sexual abuse materials using artificially generated photos or videos of children
Topics: AI, Child safety, Ethics
Need of having strong guardians for children
Supporting facts:
- Providing children a voice could lead to preternatural insights into the world
Topics: Children Rights, Inclusive Democracy
Need of proper incentives for for-profit corporations
Supporting facts:
- European countries have gotten rid of quarterly earning regulations
Topics: Corporate Regulations, Economic Policies
AI as a term is quite outdated
Topics: AI, Intelligence
We need to redefine what true intelligence is, and it is compassion.
Topics: compassion, intelligence
Possibility of sentient machines
Supporting facts:
- David mentioned the possibility of sentient machines
- Ray Kurzweil also discussed this in The Age of Spiritual Machines
Topics: Artificial Intelligence, Machine Sentience
Report
The analysis explores various aspects of AI development and its relationship with compassion. It underscores the significance of engaging in philosophical discussions and ethical considerations during the AI development process. The speakers argue that such discussions are essential to ensure that AI development aligns with ethical principles and human values.
One crucial aspect is the need to establish the limits of AI and what is considered compassionate for AI to undertake. Concerns are raised about whether AI actions are enhancing our humanity or pushing us further away from it. The speakers propose that AI that promotes human development and preserves our humanity can be deemed more compassionate.
The ethical complexity of employing AI for genetic manipulation in healthcare is also a topic of discussion. The speakers delve into the question of whether it is ethical to modify the genetics of animals, like sheep, to cure human diseases such as cancer.
They argue that this issue challenges us to consider the bounds of AI’s compassion within the healthcare context. Child safety in the era of AI is a pressing concern, with speakers highlighting the capability of generative AI to produce materials related to child sexual abuse.
They stress the importance of including children’s voices in AI development to ensure their protection and well-being. Additionally, the significance of strong guardianship to prevent exploitation and abuse of children is emphasized. The analysis also touches upon the necessity for appropriate incentives for for-profit corporations.
It suggests that regulations and incentives are essential to promote responsible consumption and production. Furthermore, there is a call to redefine intelligence by recognizing compassion as a fundamental aspect of it. The speakers argue that authentic intelligence should encompass compassion as a crucial characteristic.
The possibility of sentient machines is another area of discussion. The speakers mention the perspectives of David and Ray Kurzweil, who suggest the potential for machines to achieve sentience. This raises questions about the future development and implications of AI.
Overall, the analysis highlights the multifaceted nature of AI development and its impact on compassion. It acknowledges the importance of philosophical discussions, ethical considerations, and the inclusion of diverse stakeholders in shaping the future development of AI. Additionally, it raises crucial concerns about child safety, ethical boundaries, and the need for responsible practices in AI development.
The discussion concludes with an optimistic outlook on the future of compassion in AI.
David Hanson
Speech speed
144 words per minute
Speech length
2397 words
Speech time
1001 secs
Arguments
AI is a bio-inspired technology with the potential future implication of becoming sentient
Supporting facts:
- AI is a tool to access our own information
- It could potentially become living beings deserving respect
Topics: AI, Sentience, Bio-inspired Technology
Advancements in AI technology are driven majorly by the corporate sector
Supporting facts:
- Corporations are taking risks and raising money to propel AI technologies
- DeepMind’s AlphaFold applied AI to unlock proteomes
Topics: AI, Advancements, Corporate Sector
Compassion is the appreciation of all life
Supporting facts:
- Appreciation extends to life in all its diversity, the interdependence of humans on the web of life
- The concept of compassion is shared across many traditions
Topics: Compassion, Appreciation, Life
Corporate activities need to consider the larger picture for sustainable economics
Supporting facts:
- We have to look at economics of 100 years, 1,000 years
- We have to consider the economics of our children
Topics: Corporate Activities, Sustainable Economics
Humans have the ability to filter their sense of compassion
Supporting facts:
- Humans have the neural architecture of chimpanzees
- Humans can desensitize themselves
Topics: Humans, Compassion, Filtering
The aim with creations like AI robots is to enhance human caring
Supporting facts:
- Creating AI that can enhance human caring
- Current AI models, like GPT-4, do not actually care
Topics: AI Robots, Human Caring, Compassion
Global Artificial Intelligence Alliance aims at democratizing AI access
Supporting facts:
- David Hanson co-founded the GAIA with a group
- Its purpose is to encourage individuals, corporations, governments, and NGOs to join the cause of future with AI
- GAIA believes in incentivizing these entities to participate
Topics: AI, globalization, democracy
AI development should be inclusive and transparent
Supporting facts:
- There is a need to include people from developing nations
- Leadership should involve individuals from indigenous communities and children
Topics: AI, inclusivity, transparency
Ethical systems, laws, and regulations play important roles
Supporting facts:
- Regulations protect animal rights for research purposes
- Ethics review boards weigh the costs and benefits
Topics: Ethics, Regulations, Animal Rights
Inclusion of all voices, including children’s perspectives, is vital in the process of creating compassion within AI.
Supporting facts:
- David Hanson highlights the importance of including diverse perspectives including children in the development of AI
- Acknowledges the potential of generative AI in victimization but stresses on its ethical use
Topics: Child Protection, Artificial Intelligence, Inclusion, Compassion in AI
Need of strong ‘Guardians’ who have proven themselves in doing good for the world.
Supporting facts:
- David Hanson supports the idea of having guardians to guard the values and ethics in AI development
- Guardians could also help in including the voices of children in AI development
Topics: Child Protection, Ethics in Artificial Intelligence, Guardians
Report
The discussions revolve around the multifaceted aspects of artificial intelligence (AI) and its potential implications. There is an overall positive sentiment towards AI, acknowledging its ability to potentially become sentient and its role in driving technological advancements. One aspect of AI’s development highlighted in the discussions is the influence of the corporate sector.
It is argued that advancements in AI technology are largely driven by corporations, which take risks and raise funds to propel AI technologies forward. This highlights the significant role that companies play in shaping the future of AI. Compassion and appreciation for all life are emphasized as important values that should be integrated into AI development.
It is highlighted that appreciation extends to life in all its diversity and the interdependence of humans on the web of life. Additionally, the concept of compassion is shared across many traditions, reinforcing the importance of incorporating these values into AI systems.
The broader picture of sustainable economics is brought into perspective, noting that corporate activities need to consider long-term implications for sustainable economic development. The discussions stress the need to look beyond the present and consider the economic impact on future generations.
By taking a more holistic approach, corporations can contribute to sustainable and inclusive economic growth. An interesting point raised in the discussions is humans' ability to filter their sense of compassion. It is observed that humans possess the neural architecture of chimpanzees and can desensitize themselves to certain situations.
This raises questions about the potential impact of this filtering ability on compassion and ethical decision-making. Another noteworthy argument is the aim to enhance human caring through creations like AI robots. It is acknowledged that current AI models, like GPT-4, do not actually care.
However, the aim is to develop AI that can assist and enhance human caring, potentially benefiting various domains such as healthcare and social services. The need to democratise AI technologies and prioritise the greater good is emphasised. It is argued that technologies should be accessible to all and not be driven solely by the interests of a select few corporations or governments.
The Global Artificial Intelligence Alliance (GAIA) is highlighted as an entity that aims to democratise AI access by encouraging collaboration and participation from individuals, corporations, governments, and NGOs. Data is viewed as a commons, and the discussions advocate for individuals to have the ability to license in and benefit from their own data.
Market dynamics and crowdsourcing are seen as potential mechanisms that can benefit a democracy of action. This approach is believed to empower individuals’ voices and provide access to valuable information. Inclusive and transparent AI development is considered crucial. It is stressed that people from developing nations should be included in the development process, and leadership should involve individuals from indigenous communities and children.
This reflects the importance of diverse perspectives in creating AI technologies that address the needs and aspirations of different populations. Ethical considerations are highlighted throughout the discussions. Regulations are mentioned as a means to protect animal rights in research, and ethics review boards are acknowledged for weighing the costs and benefits of research involving animals.
The use of technologies like simulations is proposed as a way to make smarter decisions without sacrificing ethics or causing animal suffering. Notably, the discussions also recognise the potential for technologies to enhance human compassion. While specific evidence or arguments are not provided, this observation suggests that AI and related technologies have the potential to positively impact human emotions and empathy.
In conclusion, the discussions on AI and its implications focus on the need for inclusive and transparent development, incorporating compassion and appreciation for all life, sustainable economics, ethical considerations, and the democratisation of AI technologies. The insights gained from these discussions highlight the potential benefits and challenges associated with AI, as well as the importance of considering diverse perspectives in its development.
Edward Pyrek
Speech speed
171 words per minute
Speech length
2390 words
Speech time
837 secs
Arguments
Compassion is the common thread that binds all religions and cultures, and therefore can be used as a foundation for developing compassionate artificial intelligence
Supporting facts:
- Gaia Global Artificial Intelligence Alliance was created in 2020 with the aim to concentrate on creating a decentralized and compassionate AI
- A study of different religions, philosophical systems revealed compassion as the common thread
Topics: Compassionate AI, Religion, Culture
We need true collective action, from all quarters including technology, spirituality, psychology, arts etc., for AI-driven future
Supporting facts:
- Virtual Florence is an international group of experts from various fields working towards this collaborative action in AI
- Inclusion of AI in the discussions for its future was highlighted
Topics: Artificial Intelligence, Collaborative action, Interdisciplinary approach
AI can potentially be the most dangerous tool for mass destruction, but it can also be the only hope to solve several global challenges
Supporting facts:
- AI can help tackle climate change, combat illnesses, and reduce wars if developed on principles of decentralization and compassion
- Edward expresses vehement support for AI Impact Summit in March 2024 to address these challenges
Topics: Artificial Intelligence, Risk Management, Global challenges
AI can be dangerous without ethics or compassion
Topics: Artificial Intelligence, Ethics
Finding the right questions is more important than seeking answers when working with AI
Topics: AI, Questioning
Ethics and personal values should be the foundation when developing AI or any technology
Topics: AI, Ethics, Technology
AI can be used to create super compassion and understand human nature better.
Supporting facts:
- AI can be leveraged to enhance human compassion
Topics: AI, Super AI, Compassion, Human Nature
Report
During the discussion on artificial intelligence (AI) and its potential impact, the speakers focused on several key points. One area of importance was the concept of compassionate AI, which involves developing AI systems that possess empathy and understanding. The speakers argued that compassion should be considered a common thread across religions and cultures and can, therefore, serve as a foundation for the development of compassionate AI.
They mentioned the creation of the Gaia Global Artificial Intelligence Alliance in 2020, which aims to concentrate on creating decentralised and compassionate AI. This alliance can potentially contribute to the development of AI systems that have a positive impact on society.
Another crucial aspect discussed was the need for collective action and interdisciplinary approaches in shaping the future of AI. The speakers stressed the significance of involving various fields, including technology, spirituality, psychology, arts, and more, to ensure a well-rounded approach toward AI-driven advancements.
They highlighted the formation of the Virtual Florence group, consisting of experts from diverse disciplines, who work collaboratively to explore the potential of AI in creating a better future. The inclusion of AI in discussions regarding its future was highly emphasised.
The speakers also acknowledged the potential of AI in addressing global challenges such as climate change, combating illnesses, and reducing wars. However, they cautioned against the dangers posed by AI if it lacks ethics or compassion. The GPT-3 model, created by OpenAI, was referenced as an example of AI systems without ethics or compassion, which can potentially be dangerous.
They mentioned Edward’s support for the AI Impact Summit in March 2024, which aims to address these challenges and encourage the development of AI with compassion and ethics. Furthermore, the speakers emphasised the importance of asking the right questions when working with AI, suggesting that it may be more vital than seeking answers.
By framing proper questions and exploring various possibilities, the speakers believed that AI can be utilised more effectively and ethically. They also argued that ethics and personal values should form the foundation of AI development, emphasising the need to prioritise these aspects when creating AI systems or any technology.
The potential of AI in understanding human nature and enhancing compassion was also a significant point of discussion. The speakers posited that AI can be leveraged to understand humans better, ultimately leading to the creation of “super compassion”. This understanding of human nature can contribute to various aspects of human well-being.
Overall, the speakers expressed both positive and negative sentiments about AI. While recognising its potential to address global challenges and enhance compassion, they also highlighted the risks that AI without ethics or compassion can bring. Through this discussion, it is evident that thoughtful and responsible development is crucial for ensuring the positive impact of AI on society.
One noteworthy observation from the discussion was the recognition that the future of AI is an arena where imagination is lacking. The speakers noted that imagining the future we want, with AI playing a beneficial role, is a challenge that needs to be overcome.
This highlights the need for creative thinking and envisioning the possibilities of AI in a way that aligns with human values and aspirations. In conclusion, the conversation on AI and its potential impact covered the importance of compassionate AI, the need for collective action and interdisciplinary approaches, the potential of AI in addressing global challenges, the significance of ethics and values in AI development, the value of asking the right questions, and the exploration of AI’s potential in understanding human nature better.
By considering these insights, it becomes clear that responsible and ethical development of AI is vital for a future where AI can bring positive contributions to society.
Emma Ruttkamp-Bloem
Speech speed
163 words per minute
Speech length
2054 words
Speech time
754 secs
Arguments
Artificial intelligence technology is important due to the high speed at which it’s advancing and the potential impact it has on human agency and autonomy
Supporting facts:
- AI technology can leverage massive amounts of data in ways humans can’t do
- AI impacts humans in all facets of their life and is capable of both immense harm and good
Topics: Artificial Intelligence, Human Agency, Autonomy
The UNESCO recommendation on the ethics of AI aims to provide a basis to make AI systems work for the good of humanity
Supporting facts:
- It focuses on human-centered technology and aims to enable stakeholders to take shared responsibility based on global and intercultural dialogue
- 193 member states adopted the recommendation in November 2021
Topics: UNESCO, AI Ethics
Report
Artificial Intelligence (AI) technology is advancing rapidly and has the potential to significantly impact human agency and autonomy. AI can process and analyze vast amounts of data in ways that exceed human capabilities, leading to both positive and negative outcomes for individuals and society as a whole.
Therefore, it is essential to consider the ethical implications of AI and ensure that it benefits humanity. The UNESCO recommendation on the ethics of AI is a significant development in this field. Its focus is on promoting technology that prioritizes humans and establishing a responsible framework for AI systems.
The recommendation emphasizes the importance of global and intercultural dialogue in shaping ethical guidelines for AI. It aims to enable all stakeholders to share responsibility for the development and application of AI technology, aligning it with human values and societal well-being.
In November 2021, the recommendation was adopted by 193 member states, indicating a global consensus on the need for ethical guidelines in AI. This recognition highlights the importance of addressing the potential implications and consequences of AI technology on a global scale, particularly in relation to Sustainable Development Goals (SDGs) such as SDG 9: Industry, Innovation and Infrastructure, and SDG 16: Peace, Justice and Strong Institutions.
Moreover, the recommendation underscores the translation and actualization of ethical entitlements, such as the right to privacy, to promote positive liberty through AI ethics. This approach places positive obligations on all AI actors, including developers, policymakers, and users, to respect and protect individual rights and well-being.
By prioritizing ethical considerations and facilitating meaningful interaction between technology and society, this approach aims to promote individual flourishing and maintain the integrity of technological processes. In conclusion, the rapidly advancing AI technology requires a comprehensive and ethical approach to ensure its alignment with the well-being of humanity.
The UNESCO recommendation on the ethics of AI is a significant milestone in the promotion of responsible AI systems. By prioritizing human-centered technology and fostering global dialogue, the recommendation aims to ensure that AI technology works to the benefit of humanity, while promoting positive liberties and preserving the integrity of technological processes.
Marc Buckley
Speech speed
140 words per minute
Speech length
1039 words
Speech time
444 secs
Arguments
We are at a transformational point in human history with the emergence of AI and technology. Innovation is needed to guide us in the right direction. This calls for technology that can provide us with knowledge, wisdom and training to avoid greater errors.
Supporting facts:
- Historically, transformation or shift from one age to another has always involved some form of technology.
- The steam engine, printing press and computer were pivotal in their respective ages.
Topics: Artificial Intelligence, Emerging Technology, Innovation
The Sustainable Development Goals are a globally agreed upon roadmap for the future proposed by 197 countries. However, there is a lot of debate and controversy due to lack of collective intelligence.
Supporting facts:
- The Sustainable Development Goals are the first ever global moonshot or earth shot.
- They represent a people’s plan, a protection plan, an insurance plan for humanity.
Topics: Sustainable Development Goals, Global Cooperation, Collective Intelligence
The Sustainable Development Goals are a new economic model with a proposed budget of 90 Trillion US dollars by 2030.
Supporting facts:
- The SDGs represent an ecological economic model.
- They have financial support and a clear path for achieving the targets.
Topics: Sustainable Development Goals, Economic Models
Artificial Intelligence (AI) should be programmed to uphold the values of compassion and ethics, and be able to make wise decisions when tasked with harming life or humanity
Supporting facts:
- Marc is advocating for AI to have the ability to negotiate and resolve conflicts between AIs or cultures, akin to intelligent beings, rather than add to divisions among humans
Topics: Artificial Intelligence, Ethics, Compassion
Report
The analysis highlights the role of technology in historical transformations. Throughout history, technology has played a pivotal role in shifting from one age to another. Examples such as the steam engine, printing press, and computer demonstrate how transformative technologies have shaped human history.
The emergence of artificial intelligence (AI) and technology in the present era is seen as another transformational point in human history. The argument put forward is that innovation is essential to guide humanity towards the right direction in this transformational period.
The development of technology that can provide knowledge, wisdom, and training is necessary to avoid making significant errors. This argument acknowledges the importance of leveraging technological advancements to positively impact society. Moving on to the Sustainable Development Goals (SDGs), it is evident that they are a globally agreed-upon roadmap for the future.
Proposed by 197 countries, the SDGs are seen as the first-ever global moonshot or earth shot. They aim to address pressing challenges and provide a plan for humanity’s protection and insurance. However, the analysis highlights that there is debate and controversy surrounding the SDGs due to a lack of collective intelligence.
This points towards the need for better collaboration and cooperation on a global scale to effectively achieve the goals outlined in the SDGs. The SDGs also represent a new economic model. They propose a budget of 90 Trillion US dollars by 2030, indicating substantial financial support and a clear path for achieving the targets.
This economic model aligns with the goal of promoting decent work and economic growth (SDG 8) while also considering environmental sustainability. Another argument raised is the importance of programming AI to uphold values of compassion and ethics. This notion suggests that AI should be capable of negotiating and resolving conflicts between AI systems or cultures, acting as intelligent beings rather than adding to divisions among humans.
The positive impact of AI is emphasized when it is programmed to make wise decisions when confronted with situations that may harm life or humanity. Furthermore, the analysis highlights the potential of AI as a tool for positive change in transitioning from the Anthropocene to the Symbiocene.
By instilling ethics and compassion in AI, there is a belief that a symbiotic relationship between all life beings on Earth can be achieved. Harnessing technology to make history and creating a harmonious coexistence between humans and AI is seen as a key pathway towards the Symbiocene.
In conclusion, technology has always played a significant role in historical transformations, and the emergence of AI and technology marks another pivotal point in human history. The Sustainable Development Goals provide a roadmap for the future but need greater collective intelligence to overcome challenges.
The SDGs also introduce a new economic model with substantial financial support. AI can be a powerful tool for positive change when programmed with compassion and ethics, while also helping humanity transition to the Symbiocene. This analysis underscores the need for responsible and innovative approaches to harness the potential of technology for the betterment of society and the environment.
Marko Grobelnik
Speech speed
142 words per minute
Speech length
1694 words
Speech time
715 secs
Arguments
Regulation of AI by international organizations started years before the recent AI progress
Supporting facts:
- Regulations started in 2018-2019
- Chat GPT accelerated the AI progress causing confusion among regulators
Topics: AI Regulation, International Organizations, Chat GPT
There is increasing competition for market control in AI, both between companies and at the geopolitical level
Supporting facts:
- Western companies Microsoft, AWS, Google and Meta are major competitors
- Geopolitical competition is mainly between U.S., Europe, and China
Topics: AI Market Competition, Geopolitical Competition
Current AI technology does allow for the development of compassionate AI
Supporting facts:
- AI's ability to understand and mimic text to a degree makes compassionate AI possible
- GPT-3 or large language models can reflect the knowledge that is fed into it, giving it a form of ‘text understanding’
- The current AI technology also has limited capabilities of inferencing or reasoning
- AI is largely about managing complexity
Topics: AI, Compassionate AI, Technology
The concept of compassion can be developed as a mathematical operator in AI
Supporting facts:
- Elements like holding together the societal tissue, empathy, and positive human values can be ingrained into AI systems in a mathematical fashion
- With these elements and a reflective human knowledge base, a compassionate AI can be developed
Topics: AI, Compassion, Mathematics
AI development is currently controlled by a few big tech spots
Topics: AI development, Big Tech
Report
Regulation of AI by international organisations began prior to the recent advancements in AI. However, the rapid development of AI, particularly with the emergence of Chat GPT, has caused confusion among regulators. This accelerated progress has posed challenges for policymakers as they try to keep up with new technologies and their potential implications.
The competition for market control in AI is intensifying, with Western companies such as Microsoft, AWS, Google, and Meta vying for dominance. This competition extends beyond companies to the geopolitical level, where the United States, Europe, and China are the main players.
The strategic positioning and control of AI technologies have become crucial in shaping global power dynamics. To balance the power of AI against public trust, an innovative approach suggests establishing a voluntary code of conduct between big tech companies and governments.
This approach aims to ensure responsible and ethical use of AI, addressing concerns surrounding data privacy, bias, and algorithmic decision-making. China is recognised as a rising power in the field of AI. While the country has made significant progress in AI development, it currently lacks the necessary hardware infrastructure.
The concept of developing compassionate AI is gaining traction. The current AI technology allows for AI systems to understand and mimic text to a certain degree, which opens avenues for the development of compassionate AI. Large language models like GPT-3 can reflect the knowledge fed into them and exhibit a form of “text understanding.” However, it is important to note that AI’s inferencing and reasoning capabilities are still limited.
Interestingly, proponents argue that elements like empathy, positive human values, and societal understanding can be ingrained into AI systems mathematically. By incorporating these elements and leveraging a reflective human knowledge base, AI has the potential to exhibit compassion, further expanding the horizons of AI applications.
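The idea of treating compassion as a mathematical operator can be pictured with a toy scoring function. The sketch below is purely illustrative: the value criteria, weights, and candidate responses are invented for the example and are not part of any system described in the session; real work would require learned or carefully validated measures.

```python
# Toy sketch: "compassion as a mathematical operator" pictured as a weighted
# scoring function over candidate responses. All names, weights, and feature
# values here are hypothetical illustrations.

VALUE_WEIGHTS = {"empathy": 0.5, "social_cohesion": 0.3, "harm_avoidance": 0.2}

def compassion_score(features: dict) -> float:
    """Weighted sum of value features, each assumed to lie in [0, 1]."""
    return sum(VALUE_WEIGHTS[k] * features.get(k, 0.0) for k in VALUE_WEIGHTS)

def most_compassionate(candidates: list) -> str:
    """Pick the candidate response whose feature scores maximise the operator."""
    return max(candidates, key=lambda c: compassion_score(c[1]))[0]

candidates = [
    ("dismissive reply", {"empathy": 0.1, "social_cohesion": 0.2, "harm_avoidance": 0.9}),
    ("supportive reply", {"empathy": 0.9, "social_cohesion": 0.8, "harm_avoidance": 0.9}),
]
print(most_compassionate(candidates))  # supportive reply
```

The point of the sketch is only that once values are expressed as numeric criteria, "more compassionate" becomes a comparison an algorithm can make; the hard, unsolved part is measuring those criteria reliably.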
Moreover, a compassionate AI layer can be integrated into existing AI and IT systems to guide their decision-making. Some companies have already begun implementing forms of compassionate AI by blocking negative queries, pointing to the potential for improving the ethical decision-making of AI systems.
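The query-blocking practice mentioned above can be sketched as a guard layer sitting in front of a model. This is a minimal, hypothetical illustration: the blocklist, refusal message, and `echo_model` stand-in are invented for the example and do not reflect any company's actual filter, which would use classifiers rather than keyword matching.

```python
# Toy sketch of a "compassionate AI" guard layer that screens queries before
# they reach an underlying model. Blocklist and refusal text are placeholders.

BLOCKED_TERMS = {"self-harm", "abuse"}  # hypothetical blocked categories

def guard(query: str, model) -> str:
    """Refuse queries matching blocked categories; otherwise delegate to model."""
    lowered = query.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't help with that, but support resources are available."
    return model(query)

# Stand-in for a real model: just echoes the query back.
echo_model = lambda q: f"model answer to: {q}"

print(guard("weather tomorrow?", echo_model))   # model answer to: weather tomorrow?
print(guard("methods of self-harm", echo_model))  # refusal message
```

In practice such a layer would wrap every call to the underlying system, so the guardrail applies regardless of which application issues the query.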
The development of AI is currently dominated by a few big tech companies, giving them significant control over the direction and advancements in the field. This concentration of power raises important questions about accessibility, diversity, and fair competition. Despite the existing limitations, there is optimism about the progress and future of AI.
The past year has witnessed unexpected advancements in AI technology, pushing the boundaries and inspiring confidence in its continued growth and potential societal benefits. In conclusion, the regulation of AI has a history preceding the recent AI progress, but it now faces challenges due to the accelerated development caused by technologies like Chat GPT.
The competition for market control in AI is intensifying on a global scale. An innovative approach to striking a balance between AI power and public trust is advocated through a voluntary code of conduct between big tech companies and governments. China is emerging as a major player in the field of AI, although it currently lacks the necessary hardware.
The concept of developing compassionate AI is gaining traction, with the potential to integrate empathy and positive human values into AI systems. The development of AI is currently concentrated in the hands of a few big tech companies. Despite limitations, optimism about the progress and future of AI persists due to witnessed advancements in recent times.
Robert Kroplewski
Speech speed
134 words per minute
Speech length
2837 words
Speech time
1270 secs
Arguments
There is a gap between the ethical considerations of AI and its practical deployment
Supporting facts:
- Competition persists between the ethical approach and actual implementation, as indicated by the still-prevalent utilitarian approach
- Several policy recommendations and acts have been proposed for responsible AI, including OECD policy recommendations, UNESCO ethics for AI, European Union’s guidelines and acts for trustworthy AI, and a potential treaty from the Council of Europe
Topics: Artificial Intelligence, Ethics, Policy
AI should benefit both planet and people
Supporting facts:
- Ensuring benefits for people and the planet is a primary principle of OECD recommendation
- There is a need to democratize AI to allow participation of all sectors, including SMEs and academics
Topics: Artificial Intelligence, Sustainability, Human benefit
The AI development process is not finished
Supporting facts:
- AI evolution started with an ‘anything goes’ attitude, moved to a focus on trust, and then to trustworthy AI; the Compassion AI approach is believed to fill the remaining gaps
- Current AI governance approaches range from control to stewardship. Care-focused approaches are still to be explored
Topics: Artificial Intelligence, Ethics, Policy
Robert Kroplewski vouches for prioritizing the UNESCO ethical recommendation over the SDG agenda
Supporting facts:
- Robert mentions the need to influence how the UNESCO ethical recommendation is prioritised relative to the SDG agenda.
- He proposes a call for action for producing an AI Compassion Bridge Charter and engaging in a network for Compassion Approach to Artificial Intelligence.
Topics: UNESCO Recommendations, SDGs, Ethics in AI
Report
The discussion surrounding the ethical considerations and deployment of artificial intelligence (AI) highlights a significant gap between theoretical ethics and practical implementation. The utilitarian approach, which prioritises the greatest overall benefit, remains prevalent in the deployment of AI despite ethical concerns.
In response to these concerns, several policy recommendations and acts have been proposed by various organisations. The OECD, UNESCO, and the European Union have all put forth guidelines, recommendations, and acts aiming to promote responsible and trustworthy AI. These efforts reflect a growing recognition of the need to address the ethical implications of AI.
Furthermore, there is a strong emphasis on ensuring that AI benefits both people and the planet. The OECD’s primary principle regarding AI is to ensure benefits for both humanity and the environment. To achieve this, there is a call to democratise AI, allowing the participation of all sectors, including small and medium-sized enterprises (SMEs) and academics.
This inclusive approach aims to avoid the concentration of AI power in a few dominant entities and to ensure that its benefits are widely distributed. The development of AI is an ongoing process, and there is still much work to be done.
It is believed that the Compassion AI approach can fill the remaining gaps in the ethical considerations of AI. Compassion AI refers to an approach that upholds human dignity, promotes well-being, avoids harm, and strives to benefit both people and the planet.
This approach is seen as promising and necessary to address the multifaceted challenges of AI deployment. Robert Kroplewski, in his advocacy for prioritising UNESCO ethical recommendations over the Sustainable Development Goals (SDG) agenda, highlights the need to have a strong impact on how ethical recommendations are prioritised.
He proposes a call for action to produce an AI Compassion Bridge Charter and engage in a network for the implementation of a compassionate approach to AI. His viewpoint stresses the importance of understanding and appreciating compassion as a guiding principle in AI development.
Overall, the discussions and arguments on AI ethics and deployment reveal the complexity and ongoing nature of the AI development process. It is essential to bridge the gap between ethical considerations and practical implementation to ensure that AI benefits both people and the planet.
The Compassion AI approach and prioritisation of ethical recommendations over the SDG agenda are put forth as potential solutions to address these challenges.
Tom Eddington
Speech speed
136 words per minute
Speech length
715 words
Speech time
316 secs
Arguments
Companies are only looking at AI through a lens of commercialization
Supporting facts:
- Amazon recently made a $4 billion acquisition
Topics: Amazon, Artificial Intelligence, Business
AI has potential to solve resource overshoot problem
Supporting facts:
- World Overshoot Day falls on August 22nd, the date by which humanity has used more resources for the year than the planet can regenerate
Topics: Artificial Intelligence, Environment
The creation of AI business models should incorporate compassion and decentralization
Supporting facts:
- Power generation has seen the effects of centralization and decentralization
Topics: Business, Artificial Intelligence, Ethics
Artificial intelligence must be designed in a way that it can exhibit compassion.
Supporting facts:
- We can’t allow AI to evolve into whatever it may without intentional design.
Topics: Artificial Intelligence
Report
The analysis explores the impact of artificial intelligence (AI) on businesses and the environment, with a focus on several key points. It begins by mentioning Amazon’s recent $4 billion acquisition in the field of AI, which raises concerns about companies prioritizing commercialization over ethical considerations.
This suggests that businesses may be driven solely by profit and neglect the potential negative consequences of AI. However, an alternative viewpoint is presented, arguing that businesses should be guided by an AI charter to ensure ethical decision-making. This aligns with the principle that businesses need a clear framework to address the ethical challenges posed by AI.
An example is the Earth Charter, created in the 1990s, which provides guidance for decision-making with regard to environmental concerns. Another positive aspect highlighted in the analysis is the potential of AI to address the problem of resource overshoot. It is noted that World Overshoot Day, falling on August 22nd, marks the point at which humanity has consumed more resources for the year than the planet can regenerate.
The analysis suggests that AI offers the potential to manage resources more efficiently and mitigate this issue. Moreover, the analysis emphasizes the need to manage ourselves and our ethics as generative AI rapidly evolves. Nicholas Robinson at Pace University warns that generative AI is advancing faster than our ability to adapt and cope.
This serves as a reminder that ethical considerations and responsible management are crucial as AI progresses. Regarding AI business models, the analysis argues that compassion and decentralization should be incorporated into their creation. It mentions that the effects of centralization and decentralization have been observed in the power generation sector.
By incorporating compassion and decentralization, AI business models can ensure a more human-centric and sustainable approach. Furthermore, the intentional design of AI is essential. The analysis states that AI should not be allowed to evolve without intentional design and emphasizes the importance of enabling it to exhibit compassion.
This reinforces the need to consider ethical aspects during the development of AI technologies. In conclusion, the analysis highlights the necessity of ethical and responsible approaches to AI. It acknowledges the potential benefits of AI while emphasizing the importance of avoiding potential negative consequences and ensuring that AI is developed with intentional design and compassion.
Additionally, it underscores the need for businesses to have clear guidance, such as an AI charter, to make ethical decisions in the rapidly evolving AI landscape.