We present a new approach to clothing simulation using low-dimensional linear subspaces with temporally adaptive bases. Our method exploits full-space simulation training data in order to construct a pool of low-dimensional bases distributed across pose space. For this purpose, we interpret the simulation data as offsets from a kinematic deformation model that captures the global shape of clothing due to body pose.
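The core idea of treating full-space simulation frames as offsets from a kinematic deformation model, then compressing those offsets into a low-dimensional basis, can be illustrated with a minimal PCA sketch. This is not the paper's implementation; the function names, the flattened-frame layout, and the use of a single global basis (rather than a pose-distributed pool) are assumptions for illustration.

```python
import numpy as np

def build_offset_basis(full_space_frames, kinematic_frames, rank=10):
    """Interpret full-space cloth frames as offsets from a kinematic
    deformation model and compress them with a rank-r PCA basis.
    Each frame is a flattened (3 * n_vertices,) row vector."""
    offsets = full_space_frames - kinematic_frames   # residual cloth detail
    mean = offsets.mean(axis=0)
    centered = offsets - mean
    # Right singular vectors of the centered data are the PCA directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:rank].T                              # shape (3n, rank)
    return mean, basis

def reconstruct(kinematic_frame, mean, basis, coords):
    """Recover a full-space frame from low-dimensional subspace coordinates."""
    return kinematic_frame + mean + basis @ coords
```

In this sketch the kinematic model supplies the global pose-driven shape and the subspace only has to represent the residual wrinkling, which is what makes a low rank viable.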
We present an efficient method for augmenting keyframed character animations with physically simulated secondary motion. Our method achieves a performance improvement of one to two orders of magnitude over previous work without compromising quality.
Our work on “motion brushes” provides a new workflow for the creation and reuse of 3D animation with a focus on stylized movement and depiction. Conceptually, motion brushes expand existing brush models by incorporating hierarchies of 3D animated content including geometry, appearance information, and motion data as core brush primitives that are instantiated using a painting interface.
We develop an algorithm for the efficient and stable simulation of large-scale elastic rod assemblies.
Our novel "generate-and-rank" approach rapidly and semi-automatically generates data-driven fight scenes from high-level text descriptions composed of simple clauses and phrases. From a database of captured motions and its associated motion graph, we first generate a "cascade" of plausible scenes.
This paper provides measurement and fitting methods that allow nonlinear models to be fit to the observed deformation of a particular cloth sample.
In this project, we developed a model of internal friction based on a reparameterization of Dahl’s model, and validated that it matches the important features of cloth hysteresis even with a minimal set of parameters.
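The classical Dahl model, which the project reparameterizes, can be sketched with a simple forward-Euler integration. This is a generic illustration of the standard model (with shape exponent one), not the paper's cloth-specific parameterization; the function name, the stiffness `sigma`, and the Coulomb limit `f_c` are illustrative assumptions.

```python
import math

def dahl_force_trajectory(xs, sigma=50.0, f_c=1.0):
    """Integrate the classical Dahl friction model along a displacement
    trajectory xs, using dF/dx = sigma * (1 - (F / f_c) * sign(dx)).
    The force saturates at +/- f_c and traces a hysteresis loop under
    cyclic loading. Returns the friction force at each sample."""
    f = 0.0
    forces = [f]
    for x_prev, x_next in zip(xs, xs[1:]):
        dx = x_next - x_prev
        s = 1.0 if dx >= 0 else -1.0
        f += sigma * (1.0 - (f / f_c) * s) * dx  # explicit Euler step
        forces.append(f)
    return forces

# Cyclic displacement: the force at x = 0 differs between the loading
# and unloading half-cycles, which is the hysteresis being modeled.
xs = [0.02 * math.sin(2.0 * math.pi * i / 400.0) for i in range(801)]
fs = dahl_force_trajectory(xs)
```

The hysteretic memory comes from the force depending on the direction of motion, not just the current displacement, which is the qualitative behavior observed in cloth.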
We present a technique for adding fine-scale detail and expressiveness to low-resolution, art-directed facial performances, such as those created manually with a rig, captured with markers, obtained by fitting a morphable model to video, or reconstructed from Kinect data using recent faceshift technology.
In this paper we present a linear face modelling approach that generalises to unseen data better than the traditional holistic approach while also allowing click-and-drag interaction for animation.