Relative Effects of Positive and Negative Explanations on Satisfaction and Performance in Human-Agent Teams
Improvements in agent capabilities and the increasing availability of computing platforms and Internet connectivity allow for more effective and diverse collaboration between human users and automated agents. To increase the viability and effectiveness of human-agent collaborative teams, there is a pressing need for research that enables such teams to maximally leverage the relative strengths of human and automated reasoners. We study virtual, ad hoc teams, each comprising a human and an agent, that collaborate over several episodes, where each episode requires them to complete a set of tasks drawn from given task types. Team members are initially unaware of their partners' capabilities, and the agent, acting as the task allocator, must adapt the allocation process to maximize team performance. The focus of the current paper is on analyzing how explanations of allocation decisions affect both user performance and the human workers' outlook, including factors such as motivation and satisfaction. We investigate the effect of explanations provided by the agent allocator to the human on performance and on key factors reported by the human teammate in surveys. Survey factors include the effect of explanations on motivation, explanatory power, and understandability, as well as satisfaction with and trust/confidence in the teammate. We evaluated a set of hypotheses on these factors, under positive-, negative-, and no-explanation scenarios, through experiments conducted with Amazon Mechanical Turk (MTurk) workers.
Copyright (c) 2023 Bryan Lavender, Sami Abuhaimed, Sandip Sen
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.