Improving VIP Viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis

Anamorphosis for 2D displays can provide viewer-centric perspective viewing, enabling 3D appearance, eye contact, and engagement by adapting dynamically in real time to a single moving viewer's viewpoint, at the cost of distorted viewing for other viewers. We present a method for constructing non-linear projections that combine anamorphic rendering of selected objects with normal perspective rendering of the rest of the scene. Our study defines a scene of five characters, one of which is selectively rendered in anamorphic perspective.

Smile and Laugh Dynamics in Naturalistic Dyadic Interactions: Intensity Levels, Sequences and Roles

Smiles and laughs have been the subject of many studies over the past decades, owing to their frequent occurrence in interaction and to their social and emotional functions in dyadic conversations. In this paper we extend previous work by providing a first study of the influence one interacting partner's smiles and laughs have on the interlocutor's, taking the intensities of these expressions into account. Our second contribution is a study of the patterns of laugh and smile sequences during the dialogs, again taking intensity into account. Finally, we discuss the effect of the interlocutor's role on smiling and laughing. To this end, we use a database of naturalistic dyadic conversations collected and annotated for the purpose of this study.

The Role of Closed-Loop Hand Control in Handshaking Interactions

In this paper we investigate the role of haptic feedback in human-robot handshaking by comparing different force controllers. The basic hypothesis is that human handshaking force control balances an intrinsic (open-loop) and an extrinsic (closed-loop) contribution. We use an underactuated anthropomorphic robotic hand, the Pisa/IIT hand, instrumented with a set of pressure sensors that estimate the grip force applied by humans. In a first set of experiments we ask subjects to mimic a given force profile applied by the robot hand, to understand how humans perceive and reproduce a handshaking force.

On the Role of Stiffness and Synchronization in Human-Robot Handshaking

This paper presents a system for soft human-robot handshaking, using a soft robot hand in conjunction with a lightweight, impedance-controlled robot arm. Using this system, we study how different factors influence the perceived naturalness, and give the robot different personality traits. Capitalizing on recent findings regarding handshake grasp force regulation, and on studies of the impedance control of the human arm, we investigate the role of arm stiffness as well as the kinaesthetic synchronization of human and robot arm motions during the handshake. The system is implemented using a lightweight anthropomorphic arm, with a Pisa/IIT SoftHand wearing a sensorized silicone glove as the end-effector.

Deep Generative Video Compression

The use of deep generative models for image compression has led to impressive performance gains over classical codecs, while neural video compression is still in its infancy. Here, we propose an end-to-end deep generative modeling approach to compressing temporal sequences, with a focus on video. Our approach builds on variational autoencoder (VAE) models for sequential data and combines them with recent work on neural image compression.
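To make the general idea concrete: a latent-variable video codec encodes each frame into a latent conditioned on temporal context, quantizes it, and measures the code length as the negative log-likelihood of the quantized latent under a prior. The following is a toy, untrained NumPy sketch of that pipeline, not the paper's model; the linear encoder, the unit-Gaussian prior, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frame, prev_latent, W_enc):
    # Illustrative "encoder": project the frame to a low-dimensional latent,
    # conditioned on the previous latent (temporal context).
    return np.tanh(W_enc @ frame + 0.5 * prev_latent)

def quantize(z, step=0.125):
    # Round latents to a fixed grid so they can be entropy-coded.
    return np.round(z / step) * step

def bits_under_gaussian_prior(z_q, sigma=1.0):
    # Approximate code length: negative log-likelihood (converted to bits)
    # of the quantized latent under a zero-mean Gaussian prior.
    nll_nats = 0.5 * np.sum((z_q / sigma) ** 2) \
        + z_q.size * 0.5 * np.log(2 * np.pi * sigma ** 2)
    return nll_nats / np.log(2)

# Toy "video": 5 frames of 64 pixels each, random encoder weights.
frames = rng.normal(size=(5, 64))
W_enc = rng.normal(scale=0.1, size=(8, 64))

prev = np.zeros(8)
total_bits = 0.0
for f in frames:
    z_q = quantize(encode(f, prev, W_enc))
    total_bits += bits_under_gaussian_prior(z_q)
    prev = z_q  # the decoder conditions on the same quantized latent

print(f"approximate bitstream size: {total_bits:.1f} bits")
```

In a trained codec the encoder, decoder, and prior are neural networks optimized jointly for a rate-distortion objective; the temporal conditioning above stands in for the sequential VAE structure the abstract refers to.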
