Emotion Detection Software

Affective Computing 101

When the human brain processes information, it takes millions of neurons and an intricate cognitive network to decode whatever we are simultaneously hearing, seeing, and feeling. In everyday social situations, we try (and sometimes fail) to interpret the behavior and affective reactions of others by decoding verbal and non-verbal signals. When it comes to affective reactions, non-verbal signals can say even more than words, as they transmit both conscious and unconscious emotions. Just think of somebody blushing or breaking into a sweat during a fight-or-flight situation. Recognizing and interpreting affective reactions is therefore a complex, multimodal perception process.

Since the late 1990s, the field of affective computing has been aiming to make human emotions – in all of their complexity – recognizable and measurable for algorithms and intelligent systems. Consequently, affective computing is an interdisciplinary field, operating at the interface of computer science (AI and machine learning, among others), cognitive science, and psychology. But why do we humans need machines to analyze our emotions in the first place? By perceiving, processing, and – in the next step – simulating human emotions, affective computing applications can help individuals who face challenges when communicating or interpreting affective reactions, as we will see later. Furthermore, they aim to give machines emotional intelligence in order to improve the human-machine interface – which, by the way, is not as futuristic as it sounds: it’s an everyday experience, for instance when driving a vehicle.

From emotion to data

At Fraunhofer IIS, we understand and apply affective computing, also known as Emotion AI, by combining technical expertise with know-how in psychology and physiology (read more in our Emotion Analysis 101). In our experience, the analysis of both conscious and unconscious affective reactions can be an asset in subject studies and other use cases – however challenging the latter are to detect. For it is not just our conscious statements, but the whole array of unconscious, psycho-physical reactions that decide how we perceive our environment and how we react and interact: with consumer goods, in social situations, or while driving in different traffic situations, for example.

Data acquisition and data analysis for Emotion AI, therefore, are based on multimodal study designs, using speech signals, facial expressions, and bodily reactions (measured by sensor solutions and/or image analysis). There are two approaches to acquiring data, both involving experiments with human test subjects: either knowing how they feel and then labeling the data, or putting them in specific situations to trigger various emotional reactions. An example of the latter method is to trigger different emotional states in an exposure cabin or in a driving simulator. Just like an exposure cabin, our driving simulator is fully equipped with cameras, lighting, and systems for multimodal biosignal acquisition (e.g., heart and breathing rate), which is useful, for example, when assessing cognitive overload while driving. During post-processing, the fusion and analysis of the multimodal data follow: algorithms intelligently evaluate the data, meaning they select, weight, and combine the various signals for each test subject. Compared to including discrete parameters only, the multimodal approach makes the results more robust and gives a holistic and more accurate picture of the subject’s affective state. Finally, the algorithm trained for the classification of emotional states is implemented in product-ready software.
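The select-weight-combine step described above can be sketched as a minimal late-fusion pipeline: each modality's features are normalized so they become comparable, multiplied by a weight, and concatenated before classification. The signal names, weights, toy measurements, and the nearest-centroid classifier below are illustrative assumptions, not the actual Fraunhofer IIS software.

```python
import numpy as np

def zscore(x):
    """Normalize one modality's feature block so modalities are comparable."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9)

def fuse(heart, breathing, face, weights=(1.0, 1.0, 2.0)):
    """Late fusion: normalize each modality, weight it, and concatenate."""
    blocks = [zscore(b) * w for b, w in zip((heart, breathing, face), weights)]
    return np.hstack(blocks)

# Toy measurements for six subjects in two states (0 = calm, 1 = stressed).
labels = np.array([0, 0, 0, 1, 1, 1])
heart = np.array([[60, 62], [58, 61], [61, 60],
                  [82, 80], [79, 83], [81, 78]], dtype=float)   # bpm-like
breathing = np.array([[14, 13], [15, 14], [13, 15],
                      [20, 21], [21, 19], [19, 20]], dtype=float)
face = np.array([[0.1, -0.2], [0.0, 0.3], [-0.1, 0.1],
                 [1.9, 2.1], [2.2, 1.8], [2.0, 2.2]])           # expression scores

X = fuse(heart, breathing, face)

# Nearest-centroid classification stands in for the trained classifier.
centroids = {c: X[labels == c].mean(axis=0) for c in (0, 1)}

def classify(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

predictions = [classify(x) for x in X]
```

Weighting the facial-expression block more heavily here is only one possible design choice; in practice, the per-subject weighting the article describes would itself be learned from the study data.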

Affective computing: A tool of great power – and great responsibility

Another field where intelligent data analysis and Emotion AI are widely applied is the healthcare sector. One example of an application in this field is ERIK, a joint research project aimed at helping autistic children recognize and interpret emotions, and respond accordingly. The project involves a humanoid robot that interacts with the children in real time while performing sensor-based and software-supported analysis of facial expressions. In another research project, the monitoring system PainFaceReader automatically detects pain in patients who are unable to communicate, for example due to dementia, during acute post-operative care, or in palliative care. These patients may not be able to communicate their pain episodes, leading to an undertreatment of pain symptoms.

Both affective computing applications improve the understanding of and response to emotions in cases where communication barriers exist. They are designed for vulnerable people, which is why the potential misuse of these technologies needs to be part of the public discourse – just like their potential usefulness. Emotion AI is by design based on sensitive, private, and very personal data. In the context of healthcare applications, patients’ privacy rights are also part of the equation. A high level of data security, therefore, needs to be a prerequisite of affective computing technologies and must be monitored on an ongoing basis. In the case of ERIK and PainFaceReader, for instance, the underlying camera-based emotion analysis software SHORE® uses anonymized metadata only and complies with ethical guidelines and data protection regulations.

From black box to white box

Another core issue of Emotion AI is, as with all machine learning methods, the quality of the data (not just the issue of quantity, which machine learning experts will always respond to with “more, more, more”). With skewed data, AI algorithms can reproduce biases, meaning they exhibit higher failure rates and inaccuracies for certain groups, leading to possible unfairness in real-world applications. In the case of facial expression recognition, for example, bias can relate to age, with the algorithm performing best for young test subjects and less accurately for older ones (cf. Pahl et al. 2022). Especially when applied in the context of healthcare solutions and patient care, age – just like gender and ethnicity – has to be identified as a source of bias and remedied as early as possible (see Deuschel et al. 2021 for a more detailed analysis). Algorithms need to be evaluated for fairness, considering that they and their training data are the basis of all real-world applications and potentially high-impact decisions.
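A first, deliberately simplified step in such a fairness evaluation is to disaggregate recognition accuracy by subgroup and compare the results. The synthetic labels, the group names, and the 0.1 tolerance below are illustrative assumptions for the sketch, not values from the cited studies.

```python
import numpy as np

# Synthetic evaluation data: per-subject group membership, ground-truth
# emotional state, and the model's prediction.
groups = np.array(["young", "young", "young", "young",
                   "older", "older", "older", "older"])
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])

def group_accuracy(y_true, y_pred, groups):
    """Accuracy per subgroup -- the basic building block of a bias audit."""
    return {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}

acc = group_accuracy(y_true, y_pred, groups)
gap = max(acc.values()) - min(acc.values())

# Flag the model if the accuracy gap between groups exceeds a tolerance.
biased = gap > 0.1
```

The same disaggregation generalizes from age to gender or ethnicity by changing the `groups` array; a real audit would additionally compare error types (false positives vs. false negatives), not only overall accuracy.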

In light of the digitization of everyday life, and with respect to the acceptance of machine learning methods, the inner workings of these technologies – how they generate their knowledge, how they draw their conclusions, and what their limitations are – need to be part not only of the professional discourse, but also of general knowledge. Or to use the metaphor of the “Comprehensible Artificial Intelligence” project group: AI systems should evolve from being a black box to being a white box – not just for experts, but with regard to public acceptance, strategic decisions, and their application in industry (learn more on the issue of Trustworthy AI).

Image copyright: Fraunhofer IIS / Bianca Möller

Grit Nickel

Grit is a content writer at Fraunhofer IIS and a science communication specialist. She has 6+ years of experience in research and holds a PhD in German linguistics.