Abstract
This paper considers a human-agent system in which an operator performs a pattern
recognition task with the support of an automated decision aid. The objective is to make this
human-agent system operate as effectively as possible.
Effectiveness increases when reliance on both the operator and the aid is appropriate. We
studied whether this objective can be furthered by letting not only the operator but also the
aid calibrate trust in order to make reliance decisions. In addition, the aid's calibration of
trust in the reliance decision-making capabilities of both the operator and itself is expected to
contribute through reliance decision making at a metalevel, which we call metareliance decision
making.
In this paper we present formalizations of these two approaches: a reliance decision-making
model (RDMM) and a metareliance decision-making model (MetaRDMM), respectively. A combination of
laboratory and simulation experiments shows significant improvements over reliance decision
making performed by operators alone.