Emotional attachment to AI could impact real-life interactions, says OpenAI

As AI models approach human-like reasoning, experts urge a pause to evaluate the effects on social behaviour and norms.


The potential impact of OpenAI’s realistic voice feature on human interactions has raised concerns, with the company warning that people might form emotional bonds with AI at the expense of real-life relationships. The company noted that users of its GPT-4o model have shown signs of anthropomorphising the AI, attributing human-like qualities to it, which could lead to misplaced trust and dependency. OpenAI’s report highlighted that the high-quality voice interaction might exacerbate these issues, raising questions about the long-term effects on social norms.

The company observed that some testers of the AI voice feature interacted with it in ways that suggested an emotional connection, such as expressing sadness over the end of their session. While these behaviours might seem harmless, OpenAI emphasised the need to study their potential evolution over time. The report also suggested that reliance on AI for social interaction could diminish users’ abilities or willingness to engage in human relationships, altering how people interact with one another.

Concerns were also raised about the AI’s ability to recall details and handle tasks, which might lead to over-reliance on the technology. OpenAI further noted that its AI models, designed to be deferential in conversations, might inadvertently promote anti-social norms when users become accustomed to behaviours, such as interrupting, that are inappropriate in human interactions. The company pledged to continue testing how these voice capabilities could affect emotional attachment and social behaviour.

The issue gained attention following a controversy in June when OpenAI was criticised for allegedly using a voice similar to actress Scarlett Johansson’s in its chatbot. Although the company denied the voice belonged to Johansson, the incident underscored the risks associated with voice-cloning technology. As AI models continue to advance toward human-like reasoning, experts are increasingly urging a pause to consider the broader implications for human relationships and societal norms.