We revisit some key steps of this workflow and propose semiautomatic methods for performing them.
We present a graphical authoring tool for creating complex narratives in large, populated areas with crowds of virtual humans.
We propose a framework for motion capture using sparse multi-modal sensor sets, including data obtained from optical markers and inertial measurement units.
We show quantitatively and qualitatively that our monocular approach reconstructs higher-quality lip shapes than previous monocular approaches, even for complex shapes such as a kiss or lip rolling.
In this paper, we therefore present the first approach for non-invasive reconstruction of an entire person-specific tooth row from only a sparse set of photographs of the mouth region.
We address the challenge of efficiently rendering massive assemblies of grains within a forward path-tracing framework.
We propose an iterative image-space reconstruction scheme that employs control variates to reduce variance.
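Control variates reduce variance by subtracting a correlated quantity whose expectation is known analytically. The following is a minimal, generic sketch of the technique on a toy integrand, not the paper's image-space scheme: it estimates E[e^U] for U ~ Uniform(0, 1) using g(u) = u (known mean 0.5) as the control variate, with the variance-minimizing coefficient c = Cov(f, g) / Var(g).

```python
import math
import random

def control_variate_estimate(n_samples, seed=0):
    """Estimate E[e^U], U ~ Uniform(0,1), with g(u) = u as control variate.

    Toy illustration of control variates only; names and setup are
    hypothetical, not taken from the paper.
    """
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n_samples)]
    f = [math.exp(u) for u in us]  # integrand samples; true mean is e - 1
    g = us                         # control variate samples; E[g] = 0.5

    mean_f = sum(f) / n_samples
    mean_g = sum(g) / n_samples
    cov_fg = sum((fi - mean_f) * (gi - mean_g)
                 for fi, gi in zip(f, g)) / n_samples
    var_g = sum((gi - mean_g) ** 2 for gi in g) / n_samples
    c = cov_fg / var_g  # coefficient that minimizes estimator variance

    # Corrected estimator: subtract the centered control variate.
    return mean_f - c * (mean_g - 0.5)
```

Because e^u and u are strongly correlated on [0, 1], the corrected estimator has a much smaller variance than the plain sample mean of f at the same sample count.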
We present an accessible graphical platform for content creators and end users to create a story world, populate it with smart characters and objects, and define narrative events that can be used to author digital stories.
Synthetic training significantly reduces the capture and annotation burden and, in principle, allows generating an arbitrary amount of data.
We present a simple and effective method for removing noise and outliers from such point sets.
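For context, one standard baseline for this task is statistical outlier removal: a point is discarded when its mean distance to its k nearest neighbors lies more than a chosen number of standard deviations above the average over all points. The sketch below illustrates that baseline, not the paper's own method; the function name and parameters are hypothetical.

```python
import math

def remove_outliers(points, k=4, std_ratio=1.0):
    """Statistical outlier removal for a small point set (O(n^2) sketch).

    Illustrative baseline only, not the paper's method. Keeps points whose
    mean k-nearest-neighbor distance is within mean + std_ratio * stddev.
    """
    mean_knn = []
    for p in points:
        # Distances to all other points, sorted; take the k closest.
        d = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(d[:k]) / k)

    n = len(points)
    mu = sum(mean_knn) / n
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_knn) / n)
    threshold = mu + std_ratio * sigma
    return [p for p, d in zip(points, mean_knn) if d <= threshold]
```

A point far from a dense cluster has a large mean neighbor distance and falls above the threshold, so it is filtered out while the cluster survives.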