Affective computing: The governance challenges

29 Nov 2022 12:05h - 13:05h


Event report

The field of affective computing is rapidly expanding. Technologies that recognise, interpret, and simulate human emotions are deployed in education, transportation, hiring, entertainment, and even people's digital love lives. However, the complexity of human emotions, the lack of scientific evidence, the diversity of global contexts, and human vulnerability are just some of the challenges to making affective computing technologies reliable.

Affective computing is not a single technology but a field of knowledge encompassing various applications that can detect, simulate, and organise data about human affective states or emotions. Not all affective computing involves artificial intelligence (AI) systems. For example, users can self-report their feelings to mood-tracker applications or reply to content moderation satisfaction surveys. Because they provide insight into users' internal states, these applications fall within the affective computing debate, but they do not rely on AI to infer emotions that users have not stated themselves. The affective computing that does rely on AI was the focus of this panel.

The last decade has brought excitement about the use of AI models for emotion recognition. AI can be applied to signals such as heart rate, facial expressions, or language that may signify anger, happiness, or excitement. The economy of emotion apps and algorithms reliant on AI is growing. Affective computing is used in education to track students' mood and attention, in policing to detect deception, and in job interviews to gauge applicants' feelings about a company. The first problem is growing human reliance on AI to read human emotions better than humans themselves can, without sufficient scientific evidence that this is possible.
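
To make concrete the kind of inference these products perform, here is a minimal sketch that fuses a physiological signal (heart rate) with crude keyword cues into a single emotion label. The thresholds, keyword lists, and labels are illustrative assumptions, not any vendor's actual method; the sketch mainly shows how coarse such inferences can be.

    # Hypothetical sketch of the kind of inference commercial emotion AI
    # performs: fusing a physiological signal with crude language cues.
    # Thresholds, keyword lists, and labels are illustrative assumptions.

    ANGER_WORDS = {"furious", "hate", "unacceptable"}
    JOY_WORDS = {"great", "love", "wonderful"}

    def infer_emotion(heart_rate_bpm: int, utterance: str) -> str:
        """Return a coarse emotion label from two weak signals."""
        words = set(utterance.lower().split())
        aroused = heart_rate_bpm > 100  # elevated arousal, cause unknown

        if words & ANGER_WORDS:
            return "anger" if aroused else "irritation"
        if words & JOY_WORDS:
            return "joy" if aroused else "contentment"
        # High arousal with no language cue is ambiguous: exercise,
        # excitement, and fear all raise heart rate.
        return "excitement" if aroused else "neutral"

    print(infer_emotion(112, "This is unacceptable"))  # -> anger
    print(infer_emotion(112, "Just finished a run"))   # -> excitement (a guess)

Note how the second call labels a post-exercise heart rate as excitement: the signal alone cannot distinguish physical exertion from an emotional state.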

Can AI sufficiently evaluate human emotions? Research suggests that, currently, it cannot. When shown a picture of a person frowning with tense eyebrows, a system will often label the subject's internal state as anger or disgust. However, humans tend to make the same expressions when concentrating on a hard mental task or when very excited, for example, during a sports game. The problem is conflating observations and drawing wrong conclusions, because human emotions are difficult to describe and label. An important distinction is that facial expressions are really just facial movements, and alone they are not enough to know someone's inner feelings. When a company uses an image of a person smiling, AI will equate that with joy, because facial movements are assumed to be universal. Research was cited showing that universal emotional expression does not exist. Reliability and false positives become a serious problem when affective computing systems trained on wrong assumptions are relied on to make decisions. Emotions comprise an ensemble of hundreds of signals, from facial expressions to movements, body postures, choice of words, and different abilities and special needs, and are therefore situated and relational. Affective computing cannot account for all of this.
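
The following minimal sketch illustrates this underdetermination. The action units come from the Facial Action Coding System (AU4, brow lowerer; AU7, lid tightener; AU12, lip corner puller), but the candidate-state table is an illustrative assumption: one set of facial movements is consistent with several internal states, so a classifier that always outputs a single label will produce false positives.

    # Why facial movements underdetermine emotion: the same observed
    # action units (AUs) are consistent with several internal states.
    # The candidate-state table below is an illustrative assumption.

    CANDIDATE_STATES = {
        frozenset({"AU4", "AU7"}): ["anger", "concentration", "intense excitement"],
        frozenset({"AU12"}): ["joy", "politeness", "embarrassment"],
    }

    def plausible_states(observed_aus: set[str]) -> list[str]:
        """Return every internal state consistent with the observed movements."""
        for aus, states in CANDIDATE_STATES.items():
            if aus <= observed_aus:
                return states
        return ["unknown"]

    # A system that always reports the first candidate ("anger" for a
    # frown with tense eyebrows) yields a false positive whenever the
    # person is in fact concentrating or excited.
    print(plausible_states({"AU4", "AU7"}))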

Another big challenge, related to the fact that emotions are not universally expressed, is that affective computing is often built for global use. Because there is no one-size-fits-all, developing models that try to infer internal emotional states across different regions of the world should never be the goal. Currently, training data and final products come mainly from the Global North and are then deployed in the Global South. This causes problems because transplanting systems from one region to another omits the particularities that should be taken into account, from individual user needs to different regulatory regimes.

As a consequence, users can be harmed through denial of consequential services, risk of physical or emotional harm, or infringement of human rights. All of these stem from over-reliance on affective computing without knowledge of its reliability. To prevent harm, caution is important, and to that end, Microsoft developed the 4Cs ethical guidelines for affective computing systems: communication, consent, calibration, and contingency. Other soft-law and non-binding guidelines exist, but over the last year it has become clear that strong regulation is needed. The European Union's General Data Protection Regulation and the proposed AI Act were criticised for still allowing affective computing rather than banning it or placing it under a moratorium until it becomes reliable. Regulation needs to keep pace with economic developments and protect human rights, including privacy, dignity, and autonomy.
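
As one hedged illustration of how the 4Cs might be operationalised in practice, the sketch below encodes them as a pre-deployment checklist. The field names and checks are our own assumptions for illustration, not Microsoft's actual specification.

    # A hypothetical pre-deployment checklist encoding the 4Cs
    # (communication, consent, calibration, contingency). Field names
    # and checks are illustrative assumptions, not Microsoft's spec.

    from dataclasses import dataclass

    @dataclass
    class AffectiveSystemReview:
        discloses_inference_to_users: bool    # communication
        obtains_informed_consent: bool        # consent
        validated_on_target_population: bool  # calibration
        has_human_fallback: bool              # contingency

        def may_deploy(self) -> bool:
            """Deployment is blocked unless every safeguard is in place."""
            return all((
                self.discloses_inference_to_users,
                self.obtains_informed_consent,
                self.validated_on_target_population,
                self.has_human_fallback,
            ))

    review = AffectiveSystemReview(True, True, False, True)
    print(review.may_deploy())  # False: not calibrated for the deployment context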

By Jana Misic

 
