SMART SENSING insights
A human and a robotic hand in a human-machine romance: a vision of how affective computing can help build trust into technology.

How Affective Computing Can Help Build Trust into Technology

Robots and artificial intelligences are only as good as we train them to be. So why not train them to adapt to us, their human counterparts? That’s the vision of affective computing: incorporating emotional intelligence into AI systems, robots, vehicles, and other IoT technologies. The goal is for these devices not only to process information, but also to recognize, understand, and respond to human emotions – and in that way build trust into technology.

Designers of future computing can continue with the development of computers that ignore emotions, or they can take the risk of making machines that recognize emotions, communicate them, and perhaps even ‘have’ them, at least in the ways in which emotions aid in intelligent interaction and decision making.

Rosalind W. Picard (2003): Affective computing: challenges. International Journal of Human-Computer Studies, 59(1–2), 55–64.

Affective computing improves human-machine interaction by personalizing user experiences and fostering more empathetic AI systems, ultimately making interactions more effective. Thus, affective computing has the potential to revolutionize the way we interact with machines, improving user well-being and overall productivity. With this potential impact in sight, it’s no wonder affective computing technologies have picked up momentum in recent years. While these aspects have been widely discussed, another crucial benefit of affective computing often goes unnoticed: How it can help build trust into technology. Here are some scenarios:

Service robots should build trust before running at full capacity.

Service robots are increasingly used in various industries, e.g., as cleaning robots, robotic assistants in health care facilities, or delivery robots for packages or food. Their presence is far from common, though. Until it is, it’s important for robots to adapt their behavior accordingly. If a service robot approaches a human, it is not wise to do so at full speed – at least not the first time around. Until the human knows what to expect and has built trust in the device and its usability, affective computing can be used to adapt the robot’s behavior to the human’s unfamiliarity and possible skepticism.
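
What such an adaptation could look like is easy to sketch. The following minimal Python example is purely illustrative – the function name, the familiarity ramp, and all thresholds are assumptions, not a real robot API. In practice, the unease estimate would come from an affect-recognition model fed by cameras or microphones.

```python
# Illustrative sketch: a service robot caps its approach speed until a
# person has become familiar with it. All names and numbers are assumed.

MAX_SPEED_MPS = 1.5  # full operating speed in metres per second


def approach_speed(prior_encounters: int, detected_unease: float) -> float:
    """Return a cautious approach speed.

    prior_encounters: how often this person has interacted with the robot.
    detected_unease: affect estimate in [0, 1], where 1 = visibly uneasy.
    """
    # Ramp up with familiarity: 25% speed on first contact,
    # full speed after roughly four encounters.
    familiarity = min(1.0, 0.25 + 0.25 * prior_encounters)
    # Slow down further if the person currently looks uneasy.
    return MAX_SPEED_MPS * familiarity * (1.0 - 0.5 * detected_unease)
```

A first-time encounter with a calm person would yield a quarter of the full speed; visible unease halves that again. The point is not the exact numbers but that trust is earned gradually rather than presumed.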

Autonomous cars should adapt their driving behavior to the affective state of the driver.

Autonomous cars have information that the driver and passenger do not have. Imagine driving in fog and being hesitant about passing another car. Your car might be able to overtake safely when driving in autonomous mode thanks to the environmental data it has. As the driver, though, you’re not necessarily aware of the additional information your car has and whether you can trust its assessment. Here’s where affective computing comes into play: Instead of letting you just close your eyes and hope for the best, your autonomous car will adapt its driving behavior to your anxiety. It will also provide additional information on why it’s safe to overtake.

Please don’t make me yell at you!

Have you ever called a service hotline that operates with an interactive voice response (IVR) system and ended up yelling at the other end of the call out of pure frustration? With IVR systems, you’re not actually talking to a live agent (i.e., another human), but to a voice recognition system. These AI-based systems primarily focus on converting speech into text. Wouldn’t it be great if affective computing techniques were integrated as a standard feature? Then the IVR system would adapt its behavior depending on the caller’s affective state. If frustration is detected, the system can respond with empathy and patience, or offer alternative solutions – instead of only maintaining its ever-calm voice. Since most users do not differentiate between AIs capable of affective computing and those without it, they are unaware of the limitations of voice recognition systems that lack such features. The take-home message will be: I do not want to talk to an AI again. And that’s too bad, given that IVR systems are employed to ensure the efficiency and accessibility of service hotlines.
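
As a thought experiment, the dialog policy of such a frustration-aware IVR system could be as simple as the sketch below. The function name, the frustration score, and the thresholds are all hypothetical; a real system would derive the score from a speech-emotion model.

```python
# Illustrative sketch of an IVR dialog policy that reacts to a caller's
# estimated frustration instead of repeating the same calm prompt.
# The frustration score (0..1) and all thresholds are assumptions.


def next_ivr_action(frustration: float, failed_attempts: int) -> str:
    """Pick the next dialog step from the caller's affective state."""
    if frustration > 0.8 or failed_attempts >= 3:
        # The caller is clearly fed up: stop looping and hand over.
        return "transfer_to_human_agent"
    if frustration > 0.5:
        # Mild frustration: acknowledge it and reduce menu depth.
        return "apologize_and_simplify_menu"
    return "continue_standard_dialog"
```

Even this toy policy captures the key idea: the system stops insisting on its script once the caller’s affective state signals that the script is failing.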

Alexa?!

A long day at work is behind you. You get home and just manage to get out an annoyed “Alexa, turn on some music”. Then the loudest, most stressful music of all time comes on? Not if affective computing technologies train voice assistants to adapt to the user’s emotions. Instead of randomly playing a song like “Battery” by Metallica, Alexa could then recognize that you are stressed or tired and choose something like “Here Comes the Sun” by the Beatles from the “calming” playlist. Users who see that the assistant acknowledges their emotional state are more likely to trust this technology.

AI should shift the focus to enrichment instead of replacement.

AI is also becoming more and more commonplace in industry, particularly on assembly lines. Here, affective computing could significantly improve employee performance and demonstrate the benefits of AI. For example, a system could detect an employee’s stress or exhaustion and respond by slowing the assembly line or recommending a short break. In addition, the system can offer personalized feedback and support to promote employee well-being. Such measures would increase efficiency and safety and also show that the technology takes the needs and well-being of employees seriously. This could increase trust in the technology and improve acceptance of assembly line technologies.
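
The core of such a stress-aware line controller could be sketched as follows. This is a deliberately simplified illustration under assumed names and thresholds – not a real industrial control interface; the stress estimate would come from wearable or camera-based sensing.

```python
# Illustrative sketch: adapting assembly line settings to an operator's
# estimated stress level (0..1). All values here are assumptions.


def line_adjustment(stress: float, minutes_since_break: int) -> dict:
    """Map an operator's affective state to line speed and break advice."""
    if stress > 0.8:
        # High stress: slow down noticeably and suggest a break now.
        return {"line_speed_factor": 0.6, "suggest_break": True}
    if stress > 0.5 or minutes_since_break > 120:
        # Elevated stress or a long stretch without rest: ease off.
        return {"line_speed_factor": 0.8,
                "suggest_break": minutes_since_break > 120}
    return {"line_speed_factor": 1.0, "suggest_break": False}
```

The design choice worth noting is that the system only ever slows down or suggests rest – it never pushes the operator harder, which is exactly the enrichment-over-replacement stance argued for above.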

With AI being introduced into different applications and sectors of life and industry, discussions (and one’s own thoughts) seem to be wavering between “The sky is the limit” and “What’s gonna happen to us humans!?”. But as these few scenarios illustrate, it’s really worthwhile to focus on the potential benefits. Yet, there is a risk that AI can be used to enhance assembly line productivity without considering worker well-being, or implemented in service and smart home applications without prioritizing data safety. (As we’ve been living with the latter for quite some time already, we know that this is a major concern.) As with any emerging technology: that’s where careful consideration and appropriate regulation need to happen BEFORE it is implemented (read more on the issue in our Affective Computing 101).


Image copyright: iStock

Grit Nickel

Grit is a content writer at Fraunhofer IIS and a science communication specialist. She has 6+ years of experience in research and holds a PhD in German linguistics.

Anna Chiwona

Anna is a working student at Fraunhofer IIS. She holds a bachelor’s degree in media management and has experience in copywriting and social media. She contributes texts with a focus on affective computing and digital health.


Get in touch with us

Do you have any questions about our product portfolio, or would you like to learn more about our customized services? Do not hesitate to contact me: nadine.lang-richter@iis.fraunhofer.de

Nadine Lang

Dr. rer. nat. Nadine Lang
Group Leader Medical Data Analysis
Digital Health and Analytics | Fraunhofer IIS
