Abstract
To create a robot with a mind of its own, we extended a formalized version of a model that explains affect-driven interaction with mechanisms for goal-directed behavior. We ran simulation experiments with intelligent software agents and found that the agents preferred affect-driven decision options over rational decision options, even in situations where choosing an option with low expected utility is irrational. This behavior runs counter to current decision-making models, which generally carry a hedonic bias and always select the option with the highest expected utility.