Abstract
When engaging in social interaction, people rely on their ability to reason about other people’s mental states, including goals, intentions, and beliefs. This theory of mind ability allows them to more easily understand, predict, and even manipulate the behavior of others. People can also use their theory of mind to reason about the theory of mind of others, which allows them to understand sentences like “Alice believes that Bob does not know that she wrote a novel under a pseudonym”. But while the usefulness of higher orders of theory of mind is apparent in many social interactions, empirical evidence so far suggests that people do not use this ability spontaneously when playing games, even when doing so would be highly beneficial.
In this lecture, we discuss experiments in which we have attempted to encourage participants to engage in higher-order theory of mind reasoning by letting them play games against computational agents: the one-shot competitive Mod game; the turn-taking game Marble Drop; and the negotiation game Colored Trails. It turns out that we can entice people to use second-order theory of mind in Marble Drop and Colored Trails, and even third-order theory of mind in the Mod game. We discuss different methods of estimating participants’ reasoning strategies in these games, some based only on their moves across a series of games, others based on reaction times or eye movements. When, in the future, hybrid intelligence combines people, robots, and software agents into teams, it will be beneficial if the computational members of the team develop a reasonable theory of mind about each of their human colleagues.