Augmented reality devices enable new approaches to character animation: because character posing is three-dimensional in nature, interfaces with higher degrees of freedom (DoF) should outperform 2D interfaces. We present PoseMMR, which allows multiple users to animate characters in a Mixed Reality environment, much as a stop-motion animator manipulates a physical puppet frame by frame to create a scene. We explore how PoseMMR can facilitate immersive posing, animation editing, version control, and collaboration, and provide a set of guidelines to foster the development of immersive technologies as tools for collaborative authoring of character animation.
Synthetic training significantly reduces the capture and annotation burden and in theory allows generation of an arbitrary amount of data.
We present a low-cost solution for yaw drift in head-mounted display systems that performs better than current commercial solutions and provides a wide capture area for pose tracking.
In this paper, we propose a depth image and video codec based on block compression that exploits typical characteristics of depth streams, drawing inspiration from S3TC texture compression and geometric wavelets.
We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion.
The interactive narrative guides guests through the immersive story with lighting and spatial audio design and integrates both walkable and air haptic actuators.
We present a novel real-time face detail reconstruction method capable of recovering high quality geometry on consumer mobile devices.
We present a real-time multi-view facial capture system facilitated by synthetic training imagery.
The work focuses on evaluating responses to a selection of synthesized camera-oriented reality-mixing techniques for AR, such as motion blur, defocus blur, latency, and lighting responsiveness.