Abstract
Recent subjective studies have shown that current tone mapping operators either produce disturbing temporal artifacts or are limited in their local contrast reproduction capability. We address both of these issues and present an HDR video tone mapping operator that can greatly reduce the input dynamic range while preserving scene details without causing significant visual artifacts. To achieve this, we revisit the commonly used spatial base-detail layer decomposition and extend it to the temporal domain. We achieve high-quality spatiotemporal edge-aware filtering efficiently by using a mathematically justified iterative approach that approximates a global solution. Comparison with the state of the art, both qualitatively and quantitatively through a controlled subjective experiment, clearly shows our method’s advantages over previous work. We present local tone mapping results on challenging high-resolution scenes with complex motion and varying illumination. We also demonstrate our method’s capability of preserving scene details at user-adjustable scales, and its advantages for low-light video sequences with significant camera noise.
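For readers unfamiliar with the base-detail decomposition the abstract builds on, the sketch below illustrates the general idea on a single frame. It is not the paper's operator: the spatiotemporal, iteratively refined edge-aware filter is replaced here by OpenCV's per-frame bilateral filter as a stand-in, and the parameter values (base_compression, detail_scale, the filter sigmas) are illustrative assumptions.

```python
# Minimal single-frame base-detail tone mapping sketch (Durand/Dorsey-style),
# assuming a bilateral filter as the edge-aware smoother. The paper's method
# additionally extends the filter support into the temporal domain; that part
# is not reproduced here.
import numpy as np
import cv2

def tonemap_frame(hdr_rgb, base_compression=0.45, detail_scale=1.0):
    """Tone map one linear HDR frame (float32, HxWx3) via base-detail decomposition."""
    eps = 1e-6
    # Luminance (Rec. 709 weights), processed in the log domain.
    lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
    log_lum = np.log(lum + eps).astype(np.float32)

    # Edge-aware smoothing gives the base layer; the residual is the detail layer.
    base = cv2.bilateralFilter(log_lum, d=-1, sigmaColor=0.4, sigmaSpace=16)
    detail = log_lum - base

    # Compress only the base layer (reduces dynamic range), keep or boost detail
    # (preserves local contrast). Anchor the brightest base value near white.
    log_out = base_compression * (base - base.max()) + detail_scale * detail
    lum_out = np.exp(log_out)

    # Reapply color by scaling each channel with the luminance ratio.
    ratio = (lum_out / (lum + eps))[..., None]
    return np.clip(hdr_rgb * ratio, 0.0, 1.0)
```

In a video setting, the per-frame bilateral call above would be replaced by a filter whose support also spans neighboring frames (as the abstract describes), so that the base layer varies smoothly over time and temporal artifacts such as flickering are avoided.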
Copyright Notice
The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.