SMART SENSING insights

How Affective Computing Can Help Build Trust into Technology

Robots and artificial intelligences are only as good as we train them to be. So why not train them to adapt to us, their human counterparts? That’s the vision of affective computing: incorporating emotional intelligence into AI systems, robots, vehicles, and other IoT technologies. The goal is that these devices can not only process information but also recognize, understand, and respond to human emotions, and in doing so build trust in technology.

Designers of future computing can continue with the development of computers that ignore emotions, or they can take the risk of making machines that recognize emotions, communicate them, and perhaps even ‘have’ them, at least in the ways in which emotions aid in intelligent interaction and decision making.

Rosalind W. Picard (2003): Affective computing: challenges. International Journal of Human-Computer Studies, 59(1–2), 55–64.

Affective computing improves human-machine interaction and makes it more effective by creating personalized user experiences and enabling more empathetic AI systems. It thus has the potential to revolutionize the way we interact with machines, improving user health and overall productivity. With this potential impact in sight, it’s no wonder affective computing technologies have picked up momentum in recent years. These aspects have been widely discussed. But another crucial benefit of affective computing often goes unnoticed: how it can help build trust in technology. Here are some scenarios:

Service robots should build trust before running at full capacity.

Service robots are increasingly used in various industries, for example as cleaning robots, robotic assistants in health care facilities, or delivery robots for packages or food. Their presence is far from common, though. Until it is, it’s important for robots to adapt their behavior accordingly. If a service robot approaches a human, it is not wise to do so at full speed – at least not the first time around. Until the human knows what to expect and has built trust in the device and its usefulness, affective computing means adapting the robot’s behavior to the human’s unfamiliarity and possible skepticism.
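This ramp-up behavior can be sketched as a simple control rule. Everything below – the class, the trust score, the speed values, the reaction labels – is a hypothetical illustration of the idea, not a real robotics API:

```python
# Hypothetical sketch: a service robot that raises its approach speed only
# as the human's estimated trust grows. All names and numbers are
# illustrative assumptions.

class TrustAwareRobot:
    MAX_SPEED = 1.5  # m/s, full operating speed

    def __init__(self):
        self.trust = 0.2  # start cautiously with an unfamiliar human

    def approach_speed(self):
        """Scale the approach speed by the current trust estimate."""
        return self.MAX_SPEED * self.trust

    def observe_reaction(self, reaction):
        """Update the trust estimate from the human's observed reaction."""
        if reaction == "calm":
            self.trust = min(1.0, self.trust + 0.2)
        elif reaction == "startled":
            self.trust = max(0.1, self.trust - 0.3)

robot = TrustAwareRobot()
print(f"first approach: {robot.approach_speed():.2f} m/s")   # slow and careful
for reaction in ["calm", "calm", "calm", "calm"]:
    robot.observe_reaction(reaction)
print(f"after repeated calm encounters: {robot.approach_speed():.2f} m/s")
```

The point of the sketch is only the shape of the logic: the robot never starts at full capacity, and a startled reaction immediately pulls the speed back down.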

Autonomous cars should adapt their driving behavior to the affective state of the driver.

Autonomous cars have information that the driver and passenger do not have. Imagine driving in fog and being hesitant about passing another car. Your car might be able to overtake safely when driving in autonomous mode thanks to the environmental data it has. As the driver, though, you’re not necessarily aware of the additional information your car has and whether you can trust its assessment. Here’s where affective computing comes into play: Instead of letting you just close your eyes and hope for the best, your autonomous car will adapt its driving behavior to your anxiety. It will also provide additional information on why it’s safe to overtake.

Please don’t make me yell at you!

Have you ever called a service hotline that operates with an interactive voice response (IVR) system and ended up yelling at the other end of the call out of pure frustration? With IVR systems, you’re not actually talking to a live agent (i.e., another human), but to a voice recognition system. These AI-based systems primarily focus on converting speech into text. Wouldn’t it be great if they integrated affective computing techniques as a standard feature? The IVR system could then adapt its behavior depending on the caller’s affective state. If frustration is detected, the system can respond with empathy and patience or offer alternative solutions – instead of just maintaining its ever-calm voice. Since most users will not differentiate between AIs capable of affective computing and those that are not, they are unaware of the limitations of voice recognition systems without integrated affective computing features. The take-home message will be: I do not want to talk to an AI again. And that’s too bad, given that IVR systems are employed to ensure the efficiency and accessibility of service hotlines.
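The adaptation itself can be as simple as a dialogue policy keyed to an estimated frustration level. In this minimal sketch, the score is assumed to come from some upstream voice-emotion model; the function name, thresholds, and strategy labels are invented for illustration:

```python
# Hypothetical sketch of an IVR that adapts its dialogue strategy to the
# caller's detected frustration. The frustration score is assumed to be
# produced by an upstream voice-emotion model and lies in [0, 1].

def choose_response(frustration: float) -> str:
    """Pick a dialogue strategy based on an estimated frustration level."""
    if frustration > 0.8:
        return "transfer_to_human_agent"
    if frustration > 0.5:
        return "apologize_and_offer_alternatives"
    return "continue_standard_menu"

print(choose_response(0.2))  # relaxed caller
print(choose_response(0.9))  # caller about to start yelling
```

A real system would of course smooth the score over time and combine it with dialogue context, but the trust-building step is the same: the system visibly reacts to the caller’s state instead of ignoring it.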

Alexa?!

A long day at work is behind you. You get home and just manage to get out an annoyed “Alexa, turn on some music”. And then the loudest, most stressful music of all time comes on? Not if affective computing technologies train voice assistants to adapt to the user’s emotions. Instead of randomly playing a song like “Battery” by Metallica, Alexa could then recognize that you are stressed or tired and choose something like “Here Comes the Sun” by the Beatles from the “calming” playlist. Users who find that the assistant takes their emotional state into account would then be more likely to build trust in this technology.

AI should shift the focus to enrichment instead of replacement

AI is also becoming more and more commonplace in industry, for example on the assembly line. Here, affective computing could greatly improve the work of employees and demonstrate the benefits of AI. A system could, for instance, detect an employee’s stress or exhaustion and respond by slowing the assembly line or suggesting a short break. In addition, the system can offer personalized feedback and support to promote employee well-being. Such measures would increase efficiency and safety and also show that the technology takes the needs and well-being of employees seriously. This could increase trust in the technology and improve acceptance of assembly line technologies.
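The adjustment described above can be sketched as one small decision rule. The stress estimate is assumed to come from wearable or camera-based affect sensing; the function, thresholds, and speed factors are hypothetical:

```python
# Hypothetical sketch: adapt assembly-line speed and give a recommendation
# based on an employee's estimated stress level. The stress value is assumed
# to come from an upstream affect-sensing system and lies in [0, 1].

def adjust_line(base_speed: float, stress: float):
    """Return (line_speed, recommendation) for a given stress estimate."""
    if stress > 0.8:
        return base_speed * 0.5, "suggest a short break"
    if stress > 0.5:
        return base_speed * 0.8, "slow down and check in"
    return base_speed, "keep current pace"

speed, advice = adjust_line(base_speed=60.0, stress=0.9)  # units/hour, high stress
print(speed, advice)
```

The design choice here is that the system only enriches the workflow (pacing, breaks, feedback) rather than replacing the worker, which is exactly the framing the heading calls for.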

Image copyright: Unsplash /Alex Knight


Get in touch with us

If you would like to learn more or if you are interested in a joint development project, please contact us: affective-computing@iis.fraunhofer.de

Jaspar Pahl
Head of strategic research topic Affective Computing & Senior Scientist Digital Health and Analytics | Fraunhofer IIS