Data-Driven Architecture to Encode Information in the Kinematics of Robots and Artificial Avatars
Coraggio, Marco; di Bernardo, Mario
2024-01-01
Abstract
We present a data-driven control architecture designed to encode specific information, such as the presence or absence of an emotion, in the movements of an avatar or robot driven by a human operator. Our strategy leverages a set of human-recorded examples as the core for generating information-rich kinematic signals. To ensure successful object grasping, we propose a deep reinforcement learning strategy. We validate our approach using an experimental dataset obtained during the reach-to-grasp phase of a pick-and-place task.
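The core idea of the abstract — modulating an operator's motion with a human-recorded, information-rich example so that the resulting kinematics carry the desired signal (e.g., an emotion) — can be illustrated with a minimal sketch. This is only a toy assumption of how such a blend might look; the function name, the convex-combination rule, and the trajectories are hypothetical and are not the architecture proposed in the paper, which additionally uses deep reinforcement learning to guarantee grasp success.

```python
import numpy as np

def blend_trajectories(operator_traj, example_traj, alpha=0.5):
    """Hypothetical blend: convex combination of two equally sampled
    3-D end-effector trajectories. alpha = 0 returns the operator's
    motion unchanged; alpha = 1 returns the recorded example."""
    operator_traj = np.asarray(operator_traj, dtype=float)
    example_traj = np.asarray(example_traj, dtype=float)
    assert operator_traj.shape == example_traj.shape
    return (1.0 - alpha) * operator_traj + alpha * example_traj

# Toy usage: two straight-line reaches toward nearby targets,
# sampled at 50 time steps (illustrative data, not from the dataset).
t = np.linspace(0.0, 1.0, 50)[:, None]
operator = t * np.array([0.3, 0.0, 0.2])   # operator's raw reach
example = t * np.array([0.3, 0.1, 0.2])    # emotion-labeled example
blended = blend_trajectories(operator, example, alpha=0.4)
```

In the paper's setting, a learned component would replace the fixed `alpha` blend and correct the final approach so the object is still grasped successfully.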