On the Role of Stiffness and Synchronization in Human-Robot Handshaking

This paper presents a system for soft human-robot handshaking, using a soft robot hand in conjunction with a lightweight, impedance-controlled robot arm. Using this system, we study how different factors influence the perceived naturalness, and give the robot different personality traits. Capitalizing on recent findings regarding handshake grasp force regulation, and on studies of the impedance control of the human arm, we investigate the role of arm stiffness as well as the kinaesthetic synchronization of human and robot arm motions during the handshake. The system is implemented using a lightweight anthropomorphic arm, with a Pisa/IIT SoftHand wearing a sensorized silicone glove as the end-effector.
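The role of arm stiffness can be illustrated with a minimal Cartesian impedance-control sketch: a restoring force pulls the arm toward a desired handshake trajectory, and the stiffness gain sets how firmly it tracks. All gains, masses, and trajectories below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def impedance_force(x, x_dot, x_des, xdot_des, K, D):
    """Impedance law: spring-damper force toward a desired trajectory,
    with stiffness K and damping D (illustrative values)."""
    return K * (x_des - x) + D * (xdot_des - x_dot)

# Simulate a 1-D "arm" tracking a sinusoidal handshake motion.
dt, m = 0.001, 1.0                               # time step [s], effective mass [kg]
t = np.arange(0.0, 2.0, dt)
x_des = 0.05 * np.sin(2 * np.pi * t)             # 1 Hz shake, 5 cm amplitude
xdot_des = 0.05 * 2 * np.pi * np.cos(2 * np.pi * t)

def simulate(K, D):
    """Semi-implicit Euler integration of the impedance-controlled mass."""
    x, x_dot, traj = 0.0, 0.0, []
    for xd, vd in zip(x_des, xdot_des):
        f = impedance_force(x, x_dot, xd, vd, K, D)
        x_dot += (f / m) * dt
        x += x_dot * dt
        traj.append(x)
    return np.array(traj)

stiff = simulate(K=500.0, D=40.0)                # "firm" arm
soft = simulate(K=50.0, D=10.0)                  # "compliant" arm

# Mean absolute tracking error: the stiffer arm follows the shake more closely.
err_stiff = np.abs(stiff - x_des).mean()
err_soft = np.abs(soft - x_des).mean()
```

Varying `K` between the two extremes is one simple way such a system could modulate perceived firmness during the shake.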

MakeSense: Automated Sensor Design for Proprioceptive Soft Robots

Soft robots have applications in safe human-robot interaction, manipulation of fragile objects, and locomotion in challenging and unstructured environments. In this paper, we present a computational method for augmenting soft robots with proprioceptive sensing capabilities. Our method automatically adds a minimal stretch-receptive sensor network to a user-provided soft robotic design, optimized to perform well under a set of user-specified deformation-force pairs. The sensorized robots are able to reconstruct their full deformation state under interaction forces. We cast sensor design as a sub-selection problem: from a large set of fabricable sensors, we select a minimal subset that minimizes the error in sensing the specified deformation-force pairs. Unique to our approach is the use of an analytical gradient of our reconstruction performance measure with respect to the selection variables. We demonstrate our technique on bending-bar and gripper examples, and illustrate more complex designs with a simulated tentacle.

Deep Generative Video Compression

The use of deep generative models for image compression has led to impressive performance gains over classical codecs, while neural video compression is still in its infancy. Here, we propose an end-to-end, deep generative modeling approach to compressing temporal sequences, with a focus on video. Our approach builds on variational autoencoder (VAE) models for sequential data and combines them with recent work on neural image compression.
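The objective such compression VAEs optimize can be sketched per frame as a rate-distortion trade-off: a distortion term (reconstruction error) plus a rate term (the KL divergence to the prior, which approximates the expected code length in bits). The linear "encoder" and "decoder", latent size, and trade-off weight below are illustrative assumptions; real models use deep networks trained end to end:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_bits(mu, sigma):
    """Rate term: KL( N(mu, sigma^2) || N(0, 1) ) summed over latents,
    converted from nats to bits."""
    kl_nats = 0.5 * (mu ** 2 + sigma ** 2 - 1.0 - 2.0 * np.log(sigma))
    return kl_nats.sum() / np.log(2.0)

# Toy "frame" and a hand-rolled linear encoder/decoder (illustrative only).
frame = rng.standard_normal(64)
W = rng.standard_normal((8, 64)) / 8.0       # encoder weights: frame -> 8-dim latent
mu, sigma = W @ frame, np.full(8, 0.5)       # Gaussian posterior parameters

z = mu + sigma * rng.standard_normal(8)      # reparameterized latent sample
recon = W.T @ z                              # crude linear "decoder"

distortion = ((frame - recon) ** 2).mean()   # reconstruction error (MSE)
rate = kl_bits(mu, sigma)                    # estimated bits for this frame
beta = 0.01                                  # rate-distortion trade-off weight
loss = distortion + beta * rate              # negative-ELBO-style objective
```

Sequential variants condition the posterior and prior on previous frames, so that only the temporal innovation has to be paid for in the rate term.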
