We present an interactive design system that allows users to create sculpting styles and fabricate clay models using a standard 6-axis robot arm.
The use of deep generative models for image compression has led to impressive performance gains over classical codecs, while neural video compression is still in its infancy. Here, we propose an end-to-end, deep generative modeling approach to compress temporal sequences with a focus on video. Our approach builds upon variational autoencoder (VAE) models for sequential data and combines them with recent work on neural image compression.
JUNGLE is an interactive, visual platform for the collaborative manipulation and consumption of nonlinear transmedia stories.
We present a light field video synthesis technique that achieves accurate reconstruction given a low-cost, wide-baseline camera rig. Our system, called INDiuM, integrates optical flow with methods for rectification, disparity estimation, and feature extraction, which we then feed to a neural-network view synthesis solver with wide-baseline capability. A new bi-directional warping approach resolves reprojection ambiguities that would result from either backward or forward warping alone. The system and method enable the use of off-the-shelf surveillance camera hardware in a simplified and expedited capture workflow. A thorough analysis of the refinement process and resulting view synthesis accuracy over the state of the art is provided.
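The abstract does not detail how the bi-directional warp is computed, but the stated idea, combining forward- and backward-warped views so that neither direction's reprojection ambiguities dominate, can be sketched. The following is a minimal illustration, not the paper's method: it blends two single-direction warps using a confidence weight derived from forward-backward flow consistency (a standard occlusion cue). All function names and the nearest-neighbour sampling are illustrative assumptions.

```python
import numpy as np

def backward_warp(img, flow):
    """Sample img at positions displaced by flow (nearest-neighbour for brevity)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    x2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[y2, x2]

def bidirectional_blend(warp_from_left, warp_from_right, flow_lr, flow_rl):
    """Blend two single-direction warps, down-weighting pixels where the
    left->right and right->left flows disagree -- a cue that one of the
    warps is reprojecting an occluded or ambiguous region."""
    # Forward-backward consistency: flow_lr followed by flow_rl should cancel.
    err = np.linalg.norm(flow_lr + backward_warp(flow_rl, flow_lr), axis=-1)
    conf = np.exp(-err)[..., None]  # confidence in the left->right warp
    return conf * warp_from_left + (1.0 - conf) * warp_from_right
```

Where the flows are perfectly consistent the blend trusts one warp fully; where they disagree, weight shifts to the other direction, which is the essence of resolving ambiguities that a single-direction warp cannot.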
We present a novel algorithm to denoise deep Monte Carlo renderings, in which pixels contain multiple color values, each for a different range of depths.
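The key data structure here is the deep pixel: each pixel stores several color values, one per depth range. A minimal sketch of filtering such a buffer (my simplification, not the paper's algorithm, and using a fixed number of depth bins rather than true variable-length deep-pixel lists) is to average spatial neighbours within the same depth bin only, so colors never bleed across depth boundaries:

```python
import numpy as np

def denoise_deep_pixels(deep, radius=1):
    """Box-filter each depth bin of a deep image independently.
    deep: array of shape (H, W, B, 3) -- B color values per pixel,
    one per depth range (a simplification of true deep pixels)."""
    h, w, b, c = deep.shape
    out = np.zeros_like(deep)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            # Average neighbours bin-by-bin: samples in depth bin k are only
            # ever mixed with neighbouring samples in the same bin k.
            out[y, x] = deep[y0:y1, x0:x1].mean(axis=(0, 1))
        # (A real denoiser would use edge-aware weights instead of a box mean.)
    return out
```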
This paper presents a story version control and graphical visualization framework to enhance collaborative story authoring.
In this paper, we propose a scalable method for streaming light field video, parameterized on viewer location and time, that efficiently handles RAM-to-GPU memory transfers of light field video in compressed form, exploiting the GPU architecture to reduce latency.
We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images.
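The abstract does not specify the filter, but a common way to pre-filter a G-buffer whose samples have been scattered by depth-of-field or motion blur is a cross-bilateral filter that uses depth similarity as the edge-stopping weight. The sketch below is an illustrative assumption in that spirit, not the paper's method; `sigma_d` and the function names are hypothetical:

```python
import numpy as np

def prefilter_gbuffer(feature, depth, radius=2, sigma_d=0.1):
    """Cross-bilateral pre-filter: smooth a noisy G-buffer channel
    (e.g. normals or albedo) by averaging neighbours weighted by depth
    similarity, so sample noise from DoF / motion blur is reduced while
    depth discontinuities (geometric edges) are preserved."""
    h, w = depth.shape
    out = np.zeros_like(feature)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            # Gaussian weight on depth difference to the center pixel.
            d2 = (depth[y0:y1, x0:x1] - depth[y, x]) ** 2
            wgt = np.exp(-d2 / (2.0 * sigma_d ** 2))[..., None]
            out[y, x] = (wgt * feature[y0:y1, x0:x1]).sum(axis=(0, 1)) / wgt.sum()
    return out
```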
The work focuses on evaluating responses to a selection of synthesized camera-oriented reality-mixing techniques for AR, such as motion blur, defocus blur, latency, and lighting responsiveness.