FaceMagic: Real-Time Facial Detail Effects on Mobile
We present a novel real-time face detail reconstruction method capable of recovering high quality geometry on consumer mobile devices.
PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation
Augmented reality devices enable new approaches for character animation; e.g., since character posing is three-dimensional in nature, interfaces with higher degrees of freedom (DoF) should outperform 2D interfaces. We present PoseMMR, which allows multiple users to animate characters in a mixed reality environment, much as a stop-motion animator manipulates a physical puppet frame by frame to create a scene. We explore how PoseMMR can facilitate immersive posing, animation editing, version control, and collaboration, and provide a set of guidelines to foster the development of immersive technologies as tools for collaborative authoring of character animation.
Deep Generative Video Compression
Deep generative models for image compression have yielded impressive performance gains over classical codecs, while neural video compression is still in its infancy. Here, we propose an end-to-end deep generative modeling approach to compressing temporal sequences, with a focus on video. Our approach builds upon variational autoencoder (VAE) models for sequential data and combines them with recent work on neural image compression.
David Dunn
JUNGLE: An Interactive Visual Platform for Collaborative Creation and Consumption of Nonlinear Transmedia Stories
JUNGLE is an interactive, visual platform for the collaborative manipulation and consumption of nonlinear transmedia stories.
Recycling a Landmark Dataset for Real-time Face Tracking with Low Cost HMD Integrated Cameras
Towards a Natural Motion Generator: a Pipeline to Control a Humanoid based on Motion Data
Light Field Video Synthesis Using Inexpensive Surveillance Camera Systems
We present a light field video synthesis technique that achieves accurate reconstruction given a low-cost, wide-baseline camera rig. Our system, INDiuM, integrates optical flow with methods for rectification, disparity estimation, and feature extraction, which we then feed to a neural network view synthesis solver with wide-baseline capability. A new bi-directional warping approach resolves reprojection ambiguities that would result from either backward or forward warping alone. The system enables the use of off-the-shelf surveillance camera hardware in a simplified and expedited capture workflow. We provide a thorough analysis of the refinement process and of the resulting view synthesis accuracy relative to the state of the art.
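The bi-directional warping idea can be illustrated with a toy sketch. The code below is a hypothetical 1-D illustration (it is not the paper's INDiuM solver; the function name, the blending rule, and the use of a simple per-pixel disparity are all assumptions): forward-warping the left view leaves holes where pixels never land, and backward-warping the right view samples ambiguously at occlusions, so blending the two warps fills gaps that either direction alone would leave.

```python
import numpy as np

def warp_bidirectional(left, right, disparity):
    """Toy 1-D view synthesis: blend a forward warp of the left view
    with a backward warp of the right view to form a middle view.
    (Hypothetical sketch; the real system operates on 2-D images.)"""
    n = left.shape[0]
    mid_fwd = np.zeros(n)
    mid_bwd = np.zeros(n)
    hit_fwd = np.zeros(n, dtype=bool)
    hit_bwd = np.zeros(n, dtype=bool)
    for x in range(n):
        # Forward warp: push each left pixel halfway along its disparity.
        xf = x - int(round(disparity[x] / 2))
        if 0 <= xf < n:
            mid_fwd[xf] = left[x]
            hit_fwd[xf] = True
        # Backward warp: pull the middle pixel from the right view.
        xb = x + int(round(disparity[x] / 2))
        if 0 <= xb < n:
            mid_bwd[x] = right[xb]
            hit_bwd[x] = True
    # Blend: average where both warps produced a value; otherwise fall
    # back to whichever warp landed, resolving single-direction holes.
    both = hit_fwd & hit_bwd
    mid = np.where(both, (mid_fwd + mid_bwd) / 2.0,
                   np.where(hit_fwd, mid_fwd, mid_bwd))
    return mid, hit_fwd | hit_bwd
```

With zero disparity the two warps agree and the middle view is simply the average of the inputs; with nonzero disparity, pixels visible to only one warp are recovered from the other direction instead of being left as holes.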
Appearance Capture and Modeling of Human Teeth
We present a system specifically designed for capturing the optical properties of live human teeth such that they can be realistically re-rendered in computer graphics.
Practical Dynamic Facial Appearance Modeling and Acquisition
We present a method to acquire dynamic properties of facial skin appearance, including dynamic diffuse albedo encoding blood flow, dynamic specular intensity, and per-frame high-resolution normal maps for a facial performance sequence.