Data-Driven Architecture to Encode Information in the Kinematics of Robots and Artificial Avatars

Coraggio, Marco; di Bernardo, Mario
2024-01-01

Abstract

We present a data-driven control architecture designed to encode specific information, such as the presence or absence of an emotion, in the movements of an avatar or robot driven by a human operator. Our strategy leverages a set of human-recorded examples as the basis for generating information-rich kinematic signals. To ensure successful object grasping, we complement this with a deep reinforcement learning approach. We validate our method using an experimental dataset obtained during the reach-to-grasp phase of a pick-and-place task.
Keywords: data-driven control; machine learning; human-in-the-loop control
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14246/1402
Notice: the data displayed here have not been validated by the university.
