The Adaptive Affective Loop: How AI Agents Can Generate Empathetic Systemic Experiences
Rampino L.;
2021-01-01
Abstract
Affect and emotions play a crucial role in any human experience. However, these components are often overlooked in the design of user experiences with virtual AI agents. In this paper, we investigate the ability of AI agents, in particular virtual assistants, to adapt to users' emotional states in different interaction scenarios. Currently, AI agents such as Google Home, Alexa, and Siri support only very limited forms of affective interaction, despite showing great potential for their implementation. This gap reflects the lack of specific theoretical models for affective and empathetic interactions with AI assistants, as well as current technological limitations. In this work, we address the first issue, i.e., the lack of theoretical models, from an experience design perspective. We present the Adaptive Affective Loop, a revised version of the Affective Loop model from social robotics, in which we introduce two new concepts. First, the use of distributed interfaces for AI agents, where all IoT elements controlled by the agent can be leveraged to generate affective interactions. Second, the integration of learning and adaptive features, which allow the AI agent to assess the effectiveness of its affective response and to adapt it over time, in order to generate custom empathetic responses for each user. We apply the model to three interaction scenarios: direct, predefined, and indirect interactions with the AI agent. We discuss the benefits and limits of our model, and we address the challenges future designers will face while envisioning experiences for such new affective scenarios.
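The abstract describes the loop only at a conceptual level: sense the user's emotional state, respond through whichever distributed IoT interface the agent controls, re-sense the user, and adapt the response per user over time. The sketch below is a minimal, illustrative rendering of that cycle under assumed details; the class, the channel names, the emotion labels, and the bandit-style weight update are hypothetical placeholders chosen for the example, not an API or algorithm defined in the paper.

```python
# Hypothetical sketch of an Adaptive Affective Loop agent: names and update
# rule are illustrative assumptions, not the authors' implementation.
import random

CHANNELS = ["voice", "smart_light", "speaker_music", "thermostat"]  # assumed IoT endpoints


class AdaptiveAffectiveLoop:
    def __init__(self):
        # Per-user effectiveness weight for each distributed interface.
        self.weights = {ch: 1.0 for ch in CHANNELS}

    def sense_emotion(self, user_signal: str) -> str:
        # Placeholder affect recognition; a real agent would use speech/vision models.
        return "stressed" if "sigh" in user_signal else "neutral"

    def respond(self, emotion: str) -> str:
        # Mostly exploit the channel that has worked best for this user,
        # occasionally explore another one.
        if random.random() < 0.1:
            return random.choice(CHANNELS)
        return max(self.weights, key=self.weights.get)

    def adapt(self, channel: str, before: str, after: str) -> None:
        # Close the loop: reinforce channels whose response improved the user's state.
        reward = 1.0 if (before == "stressed" and after == "neutral") else -0.2
        self.weights[channel] += 0.1 * reward


loop = AdaptiveAffectiveLoop()
before = loop.sense_emotion("user sighs while asking for the weather")
channel = loop.respond(before)                   # e.g. dim the lights, play calming music
after = loop.sense_emotion("user calmly says thanks")  # re-sense after the affective response
loop.adapt(channel, before, after)               # update the per-user preference weights
```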
File | Size | Format
---|---|---
Paper_Colombo_Rampino.pdf (restricted access, publisher's version) | 1.52 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.