
ERROR

PROJECT DATA

Title: Evaluating tRust weaRing Off in Robots
Duration: 36 months
Starting Date: 01/09/2023

PROJECT SYNOPSIS

This project aims to investigate computational mechanisms for calibrating people's trust in robots, particularly when robots make errors that can breach such trust. Trust is a fundamental construct that influences the outcome of human-robot interaction and is affected by several factors. However, strategic erroneous robot behaviors (such as deception) can be intentionally introduced to foster the perception of human-like qualities in robots and to induce people to adopt healthier and safer lifestyles (e.g., in care and assistive facilities). To this end, we will investigate how robot deception may affect people's trust in robots. We intend to examine whether and how different types of robotic deception change people's perceptions of robots. Moreover, we want to distinguish the changes in trust dynamics that occur when robots use deceptive techniques from those that occur when robots make actual errors. Finally, we aim to provide mechanisms to minimize the loss of, or to recover, people's trust in robots after they have been deceived. In particular, we will investigate whether endowing robots with Theory of Mind abilities, i.e., the cognitive ability to infer others' mental and emotional states (beliefs, desires, thoughts, emotions), can minimize people's loss of trust in robots.

The guidelines provided by this project will allow robots to calibrate people's trust, supporting the development of successful, long-lasting interactions. Understanding the factors that influence trust in robots, such as reliability and intentionality, will pave the way for more trustworthy robotic companions and assistants. Additionally, insights into deception in human-robot interaction can inform robust safeguards against manipulative or malicious behavior from AI systems. Overall, this research will contribute significantly to fostering trust, improving user experience, and establishing ethical guidelines for the future of human-robot interaction.

Publications

– A. Rossi, R. Esposito, D. Marocco, S. Rossi, "Error: Evaluating Trust Wearing Off in Robots", Proceedings of the 2nd International Workshop on Multidisciplinary Perspectives on Human-AI Team Trust, co-located with the 11th International Conference on Human-Agent Interaction (HAI 2023), Gothenburg, Sweden, December 4-7, 2023. https://ceur-ws.org/Vol-3634/

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force.

PARTNERS

– Università degli Studi di Napoli Federico II

FUNDING

Air Force Office of Scientific Research (AFOSR) 

Project identifier: FA8655-23-1-7060
