Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition

Jun 23, 2020
Hassan Abu Alhaija, Siva Karthik Mustikovela, Justus Thies, Matthias Nießner, Andreas Geiger, Carsten Rother
3DV

Neural rendering techniques promise efficient photo-realistic image synthesis while at the same time providing rich control over scene parameters by learning the physical image formation process. While several supervised methods have been proposed for this task, acquiring a dataset of images with accurately aligned 3D models is very difficult. The main contribution of this work is to lift this restriction by training a neural rendering algorithm from unpaired data. More specifically, we propose an autoencoder for the joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties. In contrast to a traditional graphics pipeline, our approach does not require manually specifying all scene properties, such as material parameters and lighting. Instead, we learn photo-realistic deferred rendering from a small set of 3D models and a larger set of unaligned real images, both of which are easy to acquire in practice. Simultaneously, we obtain accurate intrinsic decompositions of real images without requiring paired ground truth. Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
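The unpaired training idea described above can be sketched with two cycle-style reconstruction objectives: a real image is decomposed into intrinsics and re-rendered, while synthetic intrinsics are rendered and re-decomposed. The toy code below is only an illustration of this loss structure, not the paper's architecture; the networks are replaced by random linear maps, and the names `decompose`, `render`, and the 12-channel intrinsic representation are assumptions for the sketch (the paper additionally uses adversarial terms, omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two learned networks:
# `decompose` maps a real image to intrinsic shape/appearance channels,
# `render` maps intrinsic channels back to a photo-realistic image.
W_dec = rng.normal(size=(12, 3)) * 0.1   # image (3 ch) -> intrinsics (12 ch)
W_ren = rng.normal(size=(3, 12)) * 0.1   # intrinsics (12 ch) -> image (3 ch)

def decompose(img):           # img: (num_pixels, 3)
    return img @ W_dec.T      # -> (num_pixels, 12) intrinsic properties

def render(intr):             # intr: (num_pixels, 12)
    return intr @ W_ren.T     # -> (num_pixels, 3) image

# Unpaired data: real photos and intrinsics rendered from 3D models.
real_img = rng.random((64, 3))
synth_intr = rng.random((64, 12))

# Cycle objectives: real image -> intrinsics -> image should reconstruct
# the input; synthetic intrinsics -> image -> intrinsics likewise.
loss_real = np.mean((render(decompose(real_img)) - real_img) ** 2)
loss_synth = np.mean((decompose(render(synth_intr)) - synth_intr) ** 2)
total_loss = loss_real + loss_synth
print(total_loss > 0.0)
```

Because neither objective needs a real image paired with its ground-truth intrinsics, both loss terms can be computed from independently collected image and 3D-model sets.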

[Paper]  [Bibtex]