The Role of Closed-Loop Hand Control in Handshaking Interactions

In this paper we investigate the role of haptic feedback in human-robot handshaking by comparing different force controllers. The basic hypothesis is that human handshaking force control balances an intrinsic (open-loop) and an extrinsic (closed-loop) contribution. We use an underactuated anthropomorphic robotic hand, the Pisa/IIT hand, instrumented with a set of pressure sensors that estimate the grip force applied by humans. In a first set of experiments, we ask subjects to mimic a given force profile applied by the robot hand, to understand how humans perceive and reproduce a handshaking force.
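
To make the hypothesized balance concrete, here is a minimal sketch of a grip force command that blends an intrinsic (open-loop) reference profile with an extrinsic (closed-loop) pressure-feedback correction. The function names, blending weight, and gain are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: commanded grip force as a blend of an intrinsic
# (open-loop) force profile and an extrinsic (closed-loop) correction
# driven by the pressure-sensor estimate. Names and gains are assumed.

def grip_force_command(t, f_sensed, f_profile, alpha=0.6, kp=2.0):
    """Return a grip force command [N].

    t         -- time since handshake onset [s]
    f_sensed  -- grip force estimated from the hand's pressure sensors [N]
    f_profile -- callable t -> desired force [N], the open-loop reference
    alpha     -- weight of the intrinsic (open-loop) contribution
    kp        -- proportional gain of the extrinsic (closed-loop) term
    """
    f_ref = f_profile(t)
    correction = kp * (f_ref - f_sensed)          # closed-loop feedback
    return alpha * f_ref + (1.0 - alpha) * (f_ref + correction)
```

Sweeping alpha from 1 toward 0 moves the controller from purely intrinsic (feedforward only) toward increasingly feedback-dominated behavior, which is the balance the hypothesis concerns.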

On the Role of Stiffness and Synchronization in Human-Robot Handshaking

This paper presents a system for soft human-robot handshaking, using a soft robot hand in conjunction with a lightweight, impedance-controlled robot arm. Using this system, we study how different factors influence the perceived naturalness, and give the robot different personality traits. Capitalizing on recent findings regarding handshake grasp force regulation, and on studies of the impedance control of the human arm, we investigate the role of arm stiffness as well as the kinaesthetic synchronization of human and robot arm motions during the handshake. The system is implemented using a lightweight anthropomorphic arm, with a Pisa/IIT SoftHand wearing a sensorized silicone glove as the end-effector.
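
As a concrete reference for what varying "arm stiffness" means here, the following is a minimal sketch of a joint-space impedance law of the kind an impedance-controlled lightweight arm runs; the gains and the 7-DoF dimension are placeholder assumptions, not the paper's parameters.

```python
import numpy as np

# Illustrative joint-space impedance law: torque tracks a desired
# trajectory through a virtual spring-damper. The stiffness matrix K
# is the quantity varied to change the handshake's perceived firmness.

def impedance_torque(q, dq, q_des, dq_des, K, D):
    """tau = K (q_des - q) + D (dq_des - dq)"""
    return K @ (q_des - q) + D @ (dq_des - dq)

# Assumed example: a 7-DoF arm with "soft" vs. "firm" settings.
n = 7
K_soft = np.diag([20.0] * n)     # compliant, yielding handshake
K_firm = np.diag([200.0] * n)    # stiff, assertive handshake
D = 2.0 * np.sqrt(K_soft)        # element-wise damping for the soft case
```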

Deep Generative Video Compression

The use of deep generative models for image compression has led to impressive performance gains over classical codecs, while neural video compression is still in its infancy. Here, we propose an end-to-end, deep generative modeling approach to compressing temporal sequences, with a focus on video. Our approach builds on variational autoencoder (VAE) models for sequential data and combines them with recent work on neural image compression.
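
As an assumed illustration of what such a sequential VAE can look like, the sketch below encodes each frame into a latent conditioned on a recurrent state that carries temporal context, and returns per-frame reconstructions and KL terms. Layer sizes and architecture are illustrative, not the paper's model.

```python
import torch
import torch.nn as nn

# Minimal sequential VAE sketch: a per-frame encoder/decoder whose
# latent code is conditioned on a GRU state summarizing past frames.

class SeqVAE(nn.Module):
    def __init__(self, frame_dim=1024, latent_dim=64, hidden_dim=256):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.rnn = nn.GRUCell(frame_dim + latent_dim, hidden_dim)
        self.enc = nn.Linear(frame_dim + hidden_dim, 2 * latent_dim)
        self.dec = nn.Linear(latent_dim + hidden_dim, frame_dim)

    def forward(self, frames):              # frames: (T, B, frame_dim)
        h = frames.new_zeros(frames.shape[1], self.hidden_dim)
        recons, kls = [], []
        for x in frames:                    # one time step at a time
            mu, logvar = self.enc(torch.cat([x, h], -1)).chunk(2, -1)
            z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterize
            recons.append(self.dec(torch.cat([z, h], -1)))
            kls.append(-0.5 * (1 + logvar - mu**2 - logvar.exp()).sum(-1))
            h = self.rnn(torch.cat([x, z], -1), h)
        return torch.stack(recons), torch.stack(kls)
```

In an actual compression pipeline the latents would additionally be quantized and entropy-coded; that machinery is omitted from this sketch.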

Smile Intensity Detection in Multiparty Interaction using Deep Learning

Recognizing expressions of emotion is important for enabling decision making in autonomous agents and systems designed to interact with humans. In this paper, we present our experience in developing a software component for smile intensity detection in multiparty interaction. First, the deep learning architecture and training process are described in detail. This is followed by an analysis of the results obtained from testing the trained network. Finally, we outline the steps taken to implement and visualize this network in a real-time software component.
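
For illustration, a minimal smile-intensity regressor might look like the sketch below: a small CNN over cropped face images that outputs a scalar intensity in [0, 1]. The architecture and sizes are assumptions for the sake of the example, not the network described in the paper.

```python
import torch
import torch.nn as nn

# Illustrative smile-intensity regressor: a small CNN over cropped
# face images producing one intensity value in [0, 1] per face.

class SmileIntensityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                  # global pooling
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, faces):       # faces: (B, 3, H, W) cropped faces
        return self.head(self.features(faces)).squeeze(-1)

# In a real-time multiparty component, each detected face crop would
# be batched and passed through the network once per video frame.
```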
