This paper addresses the problem of globally balancing colors between images. The input to our algorithm is a sparse set of desired color correspondences between a source and a target image. The global color space transformation problem is then solved by computing a smooth vector field in CIE Lab color space that maps the gamut of the source to that of the target. We employ normalized radial basis functions for which we compute optimized shape parameters based on the input images, allowing for more faithful and flexible color matching compared to existing RBF-, regression- or histogram-based techniques. Furthermore, we show how the basic per-image matching can be efficiently and robustly extended to the temporal domain using RANSAC-based correspondence classification. Besides interactive color balancing for images, these properties render our method extremely useful for automatic, consistent embedding of synthetic graphics in video, as required by applications such as augmented reality.
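To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of a normalized-RBF color transform: sparse Lab-space correspondences define a smooth displacement field, where kernel weights are solved so the field reproduces each correspondence exactly. A single Gaussian width `sigma` is assumed here, whereas the paper optimizes shape parameters per image.

```python
import numpy as np

def normalized_rbf_transform(src_pts, dst_pts, sigma=20.0):
    """Build a color transform from sparse correspondences in CIE Lab space.

    src_pts, dst_pts: (n, 3) arrays of corresponding Lab colors.
    Returns a function mapping (m, 3) Lab colors to balanced colors.
    Sketch only: a single fixed Gaussian width `sigma` stands in for the
    optimized per-image shape parameters described in the paper.
    """
    src_pts = np.asarray(src_pts, float)
    disp = np.asarray(dst_pts, float) - src_pts  # target displacement per center

    def kernels(x):
        # Gaussian kernel values between query colors x and the RBF centers
        d2 = ((x[:, None, :] - src_pts[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Solve for weights so the *normalized* interpolant reproduces each
    # correspondence exactly: (Phi / row_sums) @ W = disp
    Phi = kernels(src_pts)
    A = Phi / Phi.sum(axis=1, keepdims=True)
    W = np.linalg.solve(A, disp)

    def transform(colors):
        colors = np.asarray(colors, float)
        P = kernels(colors)
        B = P / P.sum(axis=1, keepdims=True)  # partition-of-unity weights
        return colors + B @ W  # apply the smooth displacement field

    return transform
```

Because the displacement field is a smooth blend of the per-correspondence shifts, colors far from any correspondence still receive a plausible, continuously varying correction rather than an abrupt one.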