(Anhalt University of Applied Sciences, 2023) Kolodiazhna, Olena; Savin, Volodymyr; Uss, Mykhailo; Kussul, Nataliia
This paper addresses the problem of novel view synthesis using Neural Radiance Fields (NeRF) for scenes with dynamic illumination. NeRF training relies on a photometric consistency loss, i.e., pixel-wise agreement between a set of scene images and the intensity values rendered by NeRF. For reflective surfaces, image intensity depends on the viewing angle, and this effect is accounted for by using the ray direction as a NeRF input. For scenes with dynamic illumination, image intensity depends not only on position and viewing direction but also on time. We show that this factor degrades NeRF training with the standard photometric loss function, decreasing the quality of both image and depth rendering. To cope with this problem, we propose adding time as an additional NeRF input. Experiments on the ScanNet dataset demonstrate that NeRF with the modified input outperforms the original model and renders more consistent 3D structures. The results of this study could be used to improve the quality of training-data augmentation for depth prediction models (e.g., depth-from-stereo models) for scenes with non-static illumination.
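The sketch below illustrates the core idea described in the abstract: conditioning a NeRF MLP on time in addition to position and view direction, so the standard photometric loss (typically $\mathcal{L} = \sum_{r} \lVert \hat{C}(r) - C(r) \rVert_2^2$ over sampled rays $r$) can be satisfied even when illumination changes between frames. This is a minimal PyTorch sketch under stated assumptions, not the authors' exact architecture: the layer sizes, encoding frequencies, and the name `TimeConditionedNeRF` are illustrative.

```python
# Minimal sketch of a time-conditioned NeRF (assumptions: architecture details
# are hypothetical; only the extra time input reflects the paper's proposal).
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int) -> torch.Tensor:
    """Standard NeRF-style sinusoidal encoding of each input coordinate."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device)
    angles = x[..., None] * freqs                # (..., dim, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)             # (..., dim * 2 * num_freqs)


class TimeConditionedNeRF(nn.Module):
    def __init__(self, pos_freqs=10, dir_freqs=4, time_freqs=4, hidden=256):
        super().__init__()
        self.pos_freqs, self.dir_freqs, self.time_freqs = pos_freqs, dir_freqs, time_freqs
        pos_dim, dir_dim, time_dim = 3 * 2 * pos_freqs, 3 * 2 * dir_freqs, 1 * 2 * time_freqs
        # Density depends on position only; color additionally depends on
        # view direction and, following the paper's idea, on time.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        self.color_head = nn.Sequential(
            nn.Linear(hidden + dir_dim + time_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir, t):
        # xyz, view_dir: (..., 3); t: (..., 1) scalar time per sample.
        h = self.trunk(positional_encoding(xyz, self.pos_freqs))
        sigma = torch.relu(self.sigma_head(h))   # view/time-independent density
        cond = torch.cat([h,
                          positional_encoding(view_dir, self.dir_freqs),
                          positional_encoding(t, self.time_freqs)], dim=-1)
        rgb = self.color_head(cond)              # time-dependent radiance
        return rgb, sigma
```

Note the design choice in this sketch: time feeds only the color head, not the density head, so the recovered geometry stays static while the rendered radiance is free to vary with illumination over time.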