Abstract
Lightfield video, as a high-dimensional function, is very demanding in terms of storage. As a result, lightfield video data, even in compressed form, typically do not fit in GPU or main memory unless the capture area, resolution, or duration is sufficiently small. Additionally, latency minimization, which is critical for viewer comfort in use cases such as virtual reality, places further constraints on many compression schemes. In this paper, we propose a scalable method for streaming lightfield video, parameterized on viewer location and time, that efficiently handles RAM-to-GPU memory transfers of lightfield video in compressed form, exploiting the GPU architecture to reduce latency. We demonstrate the effectiveness of our method on a variety of compressed animated lightfield datasets.
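To give a concrete sense of the kind of latency-hiding RAM-to-GPU transfer the abstract refers to, the following is a minimal, hypothetical CUDA sketch, not the paper's actual pipeline: compressed lightfield blocks are staged in pinned host memory and uploaded asynchronously on multiple streams, so uploads overlap with on-GPU decoding. The block size, stream count, and decodeBlock kernel are illustrative assumptions only.

// Hypothetical sketch (not the paper's method): overlapping host-to-device
// uploads of compressed lightfield blocks with on-GPU decoding, using
// pinned host memory and CUDA streams to hide transfer latency.
#include <cuda_runtime.h>
#include <cstdio>

#define NUM_STREAMS 2               // double-buffering: upload one block while decoding another
#define BLOCK_BYTES (4 << 20)       // assumed size of one compressed lightfield block

// Placeholder kernel standing in for a GPU-side decoder of a compressed block.
__global__ void decodeBlock(const unsigned char* compressed, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = (float)compressed[i];   // trivial stand-in for real decompression
}

int main() {
    unsigned char* hostBlocks[NUM_STREAMS];
    unsigned char* devCompressed[NUM_STREAMS];
    float* devDecoded[NUM_STREAMS];
    cudaStream_t streams[NUM_STREAMS];

    for (int s = 0; s < NUM_STREAMS; ++s) {
        // Pinned (page-locked) host memory enables truly asynchronous copies.
        cudaMallocHost((void**)&hostBlocks[s], BLOCK_BYTES);
        cudaMalloc((void**)&devCompressed[s], BLOCK_BYTES);
        cudaMalloc((void**)&devDecoded[s], BLOCK_BYTES * sizeof(float));
        cudaStreamCreate(&streams[s]);
    }

    const int numBlocks = 16;       // assumed number of blocks requested for the current view/time
    for (int b = 0; b < numBlocks; ++b) {
        int s = b % NUM_STREAMS;
        // In a real system the pinned buffer would be refilled from the compressed
        // stream here (after waiting on the stream before reusing the buffer).
        cudaMemcpyAsync(devCompressed[s], hostBlocks[s], BLOCK_BYTES,
                        cudaMemcpyHostToDevice, streams[s]);
        int threads = 256;
        int grid = (BLOCK_BYTES + threads - 1) / threads;
        decodeBlock<<<grid, threads, 0, streams[s]>>>(devCompressed[s], devDecoded[s], BLOCK_BYTES);
    }
    cudaDeviceSynchronize();

    for (int s = 0; s < NUM_STREAMS; ++s) {
        cudaStreamDestroy(streams[s]);
        cudaFree(devDecoded[s]);
        cudaFree(devCompressed[s]);
        cudaFreeHost(hostBlocks[s]);
    }
    printf("uploaded and decoded %d compressed blocks\n", numBlocks);
    return 0;
}

The double-buffered stream layout is only one plausible design choice; the key point it illustrates is that transfers issued on one stream can proceed while kernels on another stream keep the GPU busy.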
Copyright Notice
The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.