ECSCW 2022 Workshop

Appropriate Trust in Human-AI Interactions

About the workshop

Artificial intelligence (AI) systems are increasingly used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Users therefore need an appropriate level of trust so that they know when to rely on a system and when to override it. While research has extensively addressed how to foster trust in human-AI interactions, the lack of standardized procedures for studying human-AI trust hinders the interpretation of results and cross-study comparisons. As a result, our fundamental understanding of human-AI trust remains fragmented.

This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying trust in AI, addressing the following open questions:

  • What does trust mean between humans and AI in different contexts?
  • How can we establish and convey a calibrated level of trust in interactions with AI?
  • How can we develop a standardized framework to address new challenges?

Schedule

All times are in Western European Summer Time (WEST).

09:30-09:45 | Welcome, agenda, introduction
09:45-10:00 | Speed meeting
10:00-10:50 | Presentations: Submissions + research challenges
10:50-11:10 | Activity break
11:10-11:20 | Introduction and set-up of working groups
11:20-12:30 | Group activity, part I
12:30-13:30 | Lunch
13:30-15:00 | Panel discussion
15:00-15:15 | Activity break
15:15-16:15 | Group activity, part II
16:15-16:45 | Sharing experiences & discussions from the group work
16:45-17:00 | Next steps and closing

Panel Discussion on Trust, Transparency, Explainability, Accountability and Responsibility of AI

Several strands of HCI research currently revolve around notions of trust, transparency, explainability, accountability, and responsibility in AI-embedded technologies. In many cases, these terms are even used interchangeably, raising the question of where we should draw the lines between them. Our panel discussion will address this question, which aligns with the workshop activities on developing a flexible framework for designing appropriate trust in human-AI interactions.

Speakers & Panelists

Motahhare Eslami

Carnegie Mellon University

Finale Doshi-Velez

Harvard Paulson School of Engineering and Applied Sciences

Marisa Tschopp

scip AG

Philipp Wintersberger

TU Wien

Motahhare Eslami

Carnegie Mellon University

Motahhare Eslami is an assistant professor at the School of Computer Science, Human-Computer Interaction Institute (HCII), and Institute for Software Research (ISR) at Carnegie Mellon University. Her work draws on human-computer interaction, social computing, and data mining techniques to empower users of algorithmic systems, particularly those who belong to marginalized communities or whose decisions impact those communities, to make transparent, fair, and informed decisions in interaction with algorithmic systems. Her work has been recognized with a Google Ph.D. Fellowship, a Best Paper Award at ACM CHI, and an Honorable Mention Award at ACM CSCW, and has been covered in mainstream media such as Time, The Washington Post, Huffington Post, the BBC, Fortune, and Quartz. Her research is supported by the NSF (Fairness in AI, AI Institute, Future of Work), Amazon, Google, Facebook, and Cisco.

Finale Doshi-Velez

Harvard Paulson School of Engineering and Applied Sciences

Finale Doshi-Velez is a Gordon McKay Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She completed her MSc from the University of Cambridge as a Marshall Scholar, her PhD from MIT, and her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability.

Marisa Tschopp

scip AG

Marisa Tschopp is a researcher at scip AG, Ambassador and Chief Research Officer at Women in AI NPO, and Co-Chair of the IEEE Trust and Agency in AI Systems committee. She researches AI from a psychological perspective, addressing a variety of questions about psychological phenomena with a particular interest in ethical implications. Her research focuses on trust, performance measurement of conversational AI (A-IQ), agency, leadership, and issues of gender equality in AI. As an organizational psychologist, she has experience in social and educational settings with a particular passion for digital teaching and learning trends. She has studied and taught at various universities in Germany, Canada and Switzerland. She has published various media articles, book chapters and essays and is a frequent speaker at conferences and events (including TEDx) worldwide. Marisa holds a Master’s degree in Psychology of Excellence in Business and Education from Ludwig Maximilian University of Munich, Germany, and a BA degree in Business Psychology with a focus on market and consumer psychology. As an associate researcher in the Social Processes Lab at the Leibniz Institut für Wissensmedien (IWM), she investigates the perceived relationships between humans and machines.

Philipp Wintersberger

TU Wien

Philipp Wintersberger is a researcher at TU Wien (Vienna University of Technology). He obtained his doctorate in Engineering Science from Johannes Kepler University Linz, specializing in human-machine cooperation. His publications focus on trust in automation, attentive user interfaces, transparency of driving algorithms, and the UX and acceptance of automated vehicles. He has co-organized multiple workshops at well-known HCI conferences, served as Technical Program Chair for AutomotiveUI’21, and is a member of the AutomotiveUI steering committee.

Important dates

Submission deadline

May 04, 2022

Notification

May 08, 2022

Workshop

27 June 2022 (Hybrid: Coimbra, Portugal, and online)

Call for participation

This one-day workshop aims to provide a forum for researchers, practitioners, and activists to discuss challenges in building trust and to start working on solutions that are practical and viable to adapt to different AI interaction contexts. Topics include, but are not limited to:

  • Definitions of trust and reliance.
  • Interpersonal trust and lessons from the social sciences.
  • Qualitative and quantitative methods for building and evaluating trust.
  • Challenges of designing appropriate trust and trade-offs with other objectives.
  • Solutions (and their limitations) for promoting appropriate trust (e.g., XAI, control mechanisms, human agency, communicating uncertainty, etc.).
  • Safety mechanisms for when trust is broken.

We invite anyone interested in participating to submit a paper of up to 4 pages (not including references). The template can be found at https://www.acm.org/publications/taps/word-template-workflow. Papers should critically reflect upon the authors’ experiences in a field or research area related to the challenges they face when building trust in AI interactions. Authors’ prior experience does not have to be specifically concerned with these challenges, but position papers are expected to demonstrate how that experience is relevant to the workshop’s topic and can be applied within the workshop’s context.

Submissions should be sent to fatemeh.alizadeh@uni-siegen.de in PDF format. Position papers will be reviewed based on their relevance and potential to contribute to the workshop. At least one co-author of each accepted paper must register for the ECSCW 2022 conference to attend the workshop.

What you can get out of this workshop

All workshop participants will be provided with the latest approaches to trust in artificial intelligence (as downloadable content prior to the workshop and as short talks during it) and will have the opportunity to interact and collaborate with other participants. We will form small working groups to brainstorm and work on problems relevant to this emerging field. In addition, workshop participants will become part of an exchange group that serves as a support line when help is needed in dealing with uncommon situations.

FAQs

Do papers have to include previous work, or can they include a case study?
Case studies or new approaches to literature review are fine as long as there is a clear link to trust in AI-enabled systems.

Can I submit a paper describing a potential research idea?
Absolutely. We encourage you to discuss planned and future work at the workshop, but please submit a scientific proposal that focuses on the research questions and methods. However, be aware that your ideas will be discussed publicly afterwards.

Can I attend the workshop if I do not have an accepted paper?
The short answer is no. You must have an accepted paper to attend the workshop. However, once all submissions have been reviewed, the organizing committee will discuss the possibility of opening the workshop to participants without accepted work. Our goal is to strike a balance between workshop size, interactivity, and depth of discussion. Please keep an eye on the website for an update.

Organizers

Fatemeh Alizadeh

University of Siegen

Fatemeh Alizadeh (main contact) is a PhD student and research associate at the Institute for Information Systems and New Media, University of Siegen. In her research, she combines her knowledge of HCI with her background in computer engineering and AI to study unexpected situations involving intelligent systems. Her main research interest is improving the understandability, explainability, and trustworthiness of AI-embedded technologies.

Oleksandra Vereschak

Sorbonne Université

Oleksandra Vereschak is a PhD student at ISIR, Sorbonne Université. Her main focus of interest is users’ trust in AI, which situates her work in the interdisciplinary domain of human-AI interaction. She predominantly focuses on AI-based systems that assist human decision making in high-risk contexts such as medical, recruiting, and credit decisions. Drawing on her social sciences background, she studies not only what influences human trust but also how to improve the experimental protocols used to evaluate it.

Dominik Pins

Fraunhofer Institute for Applied Information Technology (FIT)

Dominik Pins is a PhD student and research associate at the Fraunhofer Institute for Applied Information Technology (FIT) in the department of Human-Centered Engineering and Design. As a usability engineer with a sociological background, he focuses his research on user needs and practices regarding trust and privacy in the home environment and on the design of trustworthy technologies, specifically AI systems.

Gunnar Stevens

University of Siegen

Gunnar Stevens is a Professor of Information Systems at the University of Siegen and Co-Director of the Institute for Consumer Informatics at Bonn-Rhein-Sieg University of Applied Sciences. He has been researching and publishing in the fields of HCI, CSCW, usable security, and digital consumer protection for many years. For his research, he received the IBM Eclipse Innovation Award in 2005 and the PhD Award of the IHK Siegen-Wittgenstein in 2010.

Gilles Bailly

Sorbonne Université

Gilles Bailly is a CNRS researcher at ISIR, Sorbonne Université. His research is at the crossroads of human-computer interaction (HCI), skill acquisition, decision making, artificial intelligence (AI), and robotics. He designs novel interaction techniques (desktop, mobile, gestural, etc.) and builds predictive models of performance and knowledge, with a focus on the transition from novice to expert behavior.

Baptiste Caramiaux

Sorbonne Université

Baptiste Caramiaux is a CNRS researcher at ISIR, Sorbonne Université. He conducts research in human-computer interaction (HCI), examining how machine learning (or artificial intelligence) algorithms can be used in fields such as the performing arts, health, and pedagogy. He is particularly interested in learning technologies as they become integrated into communities of practice, and sees technology as a reflective tool that allows people to question their practice, learn, and express themselves.
