Abstract
To achieve more believable interactions with artificial agents, dialogue must be not only relevant but also emotionally appropriate and consistent. This paper presents a comprehensive system that models the emotional states of both the user and the agent to dynamically adapt dialogue utterance selection. A Partially Observable Markov Decision Process (POMDP) with an online solver models user reactions in real time, and the model decides the emotional content of the next utterance based on rewards from the user and the agent. Previous approaches are extended by jointly modeling user and agent emotions, maintaining this model over time with a memory, and enabling interactions with multiple users. A proof-of-concept user study demonstrates that the system can deliver and maintain distinct agent personalities during multiparty interactions.
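The core loop the abstract describes, tracking a belief over the user's emotional state and selecting the utterance type that maximizes expected reward, can be sketched as follows. This is an illustrative sketch only: the emotional states, observation probabilities, and reward values are invented for the example (they are not from the paper), and myopic one-step action selection stands in for the full online POMDP solver.

```python
# Minimal sketch: Bayesian belief tracking over hypothetical user emotional
# states, with one-step expected-reward action selection as a stand-in for
# an online POMDP solver. All numbers below are illustrative assumptions.

STATES = ["happy", "neutral", "frustrated"]
ACTIONS = ["empathetic", "neutral_reply", "humorous"]

# P(observation | state): likelihood of each observed user cue per state.
OBS_MODEL = {
    "smile":   {"happy": 0.70, "neutral": 0.25, "frustrated": 0.05},
    "frown":   {"happy": 0.05, "neutral": 0.25, "frustrated": 0.70},
    "silence": {"happy": 0.20, "neutral": 0.50, "frustrated": 0.30},
}

# R(state, action): combined user/agent reward for each utterance type.
REWARDS = {
    "happy":      {"empathetic": 0.2, "neutral_reply": 0.5, "humorous": 0.8},
    "neutral":    {"empathetic": 0.4, "neutral_reply": 0.6, "humorous": 0.5},
    "frustrated": {"empathetic": 0.9, "neutral_reply": 0.3, "humorous": -0.5},
}

def update_belief(belief, observation):
    """Bayes update of the belief over user emotional states."""
    posterior = {s: belief[s] * OBS_MODEL[observation][s] for s in STATES}
    norm = sum(posterior.values())
    return {s: p / norm for s, p in posterior.items()}

def choose_action(belief):
    """Pick the utterance type with the highest expected immediate reward."""
    return max(ACTIONS,
               key=lambda a: sum(belief[s] * REWARDS[s][a] for s in STATES))

belief = {s: 1.0 / len(STATES) for s in STATES}  # start with a uniform belief
belief = update_belief(belief, "frown")          # user shows a negative cue
action = choose_action(belief)                   # -> "empathetic"
```

A real system would plan several steps ahead with an online solver and maintain a joint belief over user and agent emotions; this sketch only shows the belief-update and reward-maximization skeleton.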
Copyright Notice
The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.