An Implicit Neural Representation for the Image Stack: Depth, All in Focus, and High Dynamic Range

SIGGRAPH Asia 2023

    Chao Wang1,   Ana Serrano2,   Xingang Pan1, 3,   Krzysztof Wolski1,   Bin Chen1,  Karol Myszkowski1,   Hans-Peter Seidel1,   Christian Theobalt1,   Thomas Leimkühler1

1Max-Planck-Institut für Informatik   2University of Zaragoza   3Nanyang Technological University


Video



In everyday photography, physical limitations of camera sensors and lenses frequently lead to a variety of degradations in captured images such as saturation or defocus blur. A common approach to overcome these limitations is to resort to image stack fusion, which involves capturing multiple images with different focal distances or exposures. For instance, to obtain an all-in-focus image, a set of multi-focus images is captured. Similarly, capturing multiple exposures allows for the reconstruction of high dynamic range. In this paper, we present a novel approach that combines neural fields with an expressive camera model to achieve a unified reconstruction of an all-in-focus high-dynamic-range image from an image stack.
Our approach is composed of a set of specialized implicit neural representations, each tailored to a sub-problem along our pipeline: we use neural implicits to predict flow, which overcomes misalignments arising from lens breathing; depth and all-in-focus images, which account for depth of field; and tonemapping, which handles sensor responses and saturation. All are trained using a physically inspired supervision structure with a differentiable thin lens model at its core. An important benefit of our approach is its ability to handle these tasks simultaneously or independently, providing flexible post-editing capabilities such as refocusing and exposure adjustment. By sampling the three primary factors in photography within our framework (focal distance, aperture, and exposure time), we conduct a thorough exploration to gain valuable insights into their significance and impact on overall reconstruction quality.
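The thin lens model at the core of the supervision relates scene depth to defocus blur through the circle of confusion, whose diameter shrinks to zero at the focal distance and grows away from it. A minimal sketch of this standard relation (function name, argument conventions, and units are our own illustration, not taken from the paper):

```python
import numpy as np

def circle_of_confusion(depth, focus_dist, focal_len, aperture):
    """Diameter of the circle of confusion under the thin-lens model.

    All arguments share the same length unit (e.g. mm):
      depth      -- distance of the scene point from the lens
      focus_dist -- distance at which the lens is focused
      focal_len  -- focal length of the lens
      aperture   -- diameter of the lens aperture

    A point at `depth` is imaged as a blur disc of the returned diameter.
    """
    depth = np.asarray(depth, dtype=np.float64)
    # Standard thin-lens CoC: A * |d - s| / d * f / (s - f),
    # with d = depth, s = focus_dist, f = focal_len, A = aperture.
    return (aperture * np.abs(depth - focus_dist) / depth
            * focal_len / (focus_dist - focal_len))
```

Because this expression is a smooth function of depth (away from the lens plane), it is differentiable and can back-propagate a defocus-based reconstruction loss into a depth prediction, which is the role such a model plays in a pipeline like the one described above.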
Through extensive validation, we demonstrate that our method outperforms existing approaches in both depth-from-defocus and all-in-focus image reconstruction tasks. Moreover, our approach exhibits promising results in each of these three dimensions, showcasing its potential to enhance captured image quality and provide greater control in post-processing.

Bibtex


@article{wang2023neural,
  title     = {An Implicit Neural Representation for the Image Stack: Depth, All in Focus, and High Dynamic Range},
  author    = {Wang, Chao and Serrano, Ana and Pan, Xingang and Wolski, Krzysztof and Chen, Bin and Seidel, Hans-Peter and Theobalt, Christian and Myszkowski, Karol and Leimk{\"u}hler, Thomas},
  journal   = {ACM Transactions on Graphics},
  year      = {2023},
  volume    = {42},
  number    = {6},
  doi       = {10.1145/3618367},
  publisher = {ACM}
}