Western and Chinese researchers identify ‘red lines’ to prevent AI existential risks

Chinese and Western scientists have identified ‘red lines’ in AI development: key thresholds that must not be crossed if existential risks are to be avoided.


At a meeting in Beijing last week, Western and Chinese AI experts issued a stark warning that addressing the threats posed by the powerful technology requires global cooperation equivalent to the Cold War effort to avert nuclear warfare.

The group of international AI scientists identified ‘red lines’ in AI development, including the creation of bioweapons and the launch of cyberattacks. As reported by the Financial Times in the days following the conference, the academics cautioned that a collaborative approach to AI safety was needed to prevent ‘catastrophic or even existential risks to humanity within our lifetimes.’ ‘In the depths of the Cold War, international scientific and governmental coordination helped avert a thermonuclear catastrophe. Humanity again needs to coordinate to avert a catastrophe that could arise from unprecedented technology,’ the statement said.

Why does it matter?

Signatories to the statement include prominent experts such as Geoffrey Hinton and Yoshua Bengio, Turing Award winners considered ‘AI’s godfathers’; Stuart Russell, a computer science professor at the University of California, Berkeley; and Andrew Yao, one of China’s most renowned computer scientists. The stark remarks followed the ‘International Dialogue on AI Safety’ held in Beijing last week, where the attendance of Chinese government officials suggested tacit government support for the conference and its topics.

US President Joe Biden and China’s President Xi Jinping agreed to open a dialogue on AI safety when they met in San Francisco in November. Earlier that month, China and the US were among the 28 countries hosted by UK Prime Minister Rishi Sunak at the first AI Safety Summit, where major AI companies pledged to collaborate on addressing the existential risks posed by advanced AI.