State of the Art in Artistic Editing of Appearance, Lighting, and Material
We organize this complex and active research area into a structure tailored to academic researchers, graduate students, and industry professionals alike.
Optimizing Stereo-to-Multiview Conversion for Autostereoscopic Displays
We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays.
Automatic View Synthesis by Image-Domain-Warping
In this paper, a view synthesis method based on Image-Domain-Warping (IDW) is presented that synthesizes new views directly from stereoscopic 3D (S3D) video and operates fully automatically.
Distinguishing Texture Edges from Object Boundaries in Video
We address the problem of distinguishing texture edges from object boundaries by introducing a simple, low-level patch-consistency assumption that leverages the extra information present in video data to resolve this ambiguity.
Lighting Estimation in Outdoor Image Collections
Large-scale structure-from-motion (SfM) algorithms have recently enabled us to reconstruct highly detailed 3-D models of our surroundings simply by taking photographs. In this paper, we propose to leverage these reconstruction techniques and automatically estimate the outdoor illumination conditions for each image in an SfM photo collection.
High-Quality Capture of Eyes
Even though the human eye is one of the central features of individual appearance, its shape has so far been mostly approximated in our community with gross simplifications. In this paper we demonstrate that there is a lot of individuality to every eye, a fact that common practices for 3D eye generation do not consider.
Temporally Coherent Local Tone Mapping of HDR Video
Recent subjective studies have shown that current tone mapping operators either produce disturbing temporal artifacts or are limited in their local contrast reproduction capability. We address both of these issues and present an HDR video tone mapping operator that can greatly reduce the input dynamic range, while at the same time preserving scene details without causing significant visual artifacts.
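For readers unfamiliar with local tone mapping, the Python sketch below illustrates the general base/detail idea with simple temporal smoothing of the base layer. It is not the operator described above; the function name, filter choice, and parameters are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map_frame(luminance, prev_base=None, compression=0.5, temporal_alpha=0.8):
    """Illustrative local tone mapping of one HDR frame in the log-luminance domain.

    luminance: 2-D array of linear HDR luminance values (> 0).
    prev_base: temporally filtered base layer from the previous frame, or None.
    Returns (tone-mapped luminance scaled to [0, 1], updated base layer).
    """
    log_lum = np.log10(luminance + 1e-6)

    # Split into a smooth base layer and a detail layer (generic approach,
    # not the edge-aware filtering a production operator would use).
    base = gaussian_filter(log_lum, sigma=8.0)
    detail = log_lum - base

    # Temporal smoothing of the base layer to reduce flicker between frames.
    if prev_base is not None:
        base = temporal_alpha * base + (1.0 - temporal_alpha) * prev_base

    # Compress only the base layer; the detail layer preserves local contrast.
    compressed = compression * base + detail

    out = 10.0 ** compressed
    out = (out - out.min()) / (out.max() - out.min() + 1e-6)
    return out, base
```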
Content Retargeting Using Parameter-Parallel Facial Layers
We present a method to deconstruct the content of an actor’s facial expression into three layers using an additive composition function, transfer the content to parameter-parallel layers for the character, and reconstruct the character’s expression using the same composition function.
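As a rough illustration of the additive, parameter-parallel composition described above, the Python sketch below splits an expression into named layers, transfers each layer with its own function, and recomposes by summation. The layer names, parameter dimension, and transfer functions are hypothetical placeholders, not the paper's actual model.

```python
import numpy as np

# Hypothetical layer names; the paper's actual layer definitions may differ.
LAYERS = ("emotion", "speech", "eyelids")

def compose(layers):
    """Additive composition: an expression is the sum of its layer parameters."""
    return sum(layers[name] for name in LAYERS)

def retarget(actor_layers, transfer_fns):
    """Transfer each actor layer to the character's parameter-parallel layer,
    then rebuild the character expression with the same additive composition."""
    character_layers = {name: transfer_fns[name](actor_layers[name]) for name in LAYERS}
    return compose(character_layers)

# Toy usage: 50-D parameter vectors, identity transfer for speech/eyelids,
# scaled emotion intensity for the character.
actor_layers = {name: np.random.rand(50) for name in LAYERS}
transfer_fns = {"emotion": lambda x: 1.2 * x, "speech": lambda x: x, "eyelids": lambda x: x}
character_expression = retarget(actor_layers, transfer_fns)
```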
Hybrid Robotic/Virtual Pan-Tilt-Zoom Cameras for Autonomous Event Recording
“Win at Home and Draw Away”: Automatic Formation Analysis Highlighting the Differences in Home and Away Team Behaviors