DP-NeRF models each type of blur using a 3D rigid transformation of the given camera.
Neural Radiance Fields (NeRF) have exhibited outstanding three-dimensional (3D) reconstruction quality via novel view synthesis from multi-view images and paired calibrated camera parameters. However, previous NeRF-based systems have been demonstrated under strictly controlled settings, with little attention paid to less ideal scenarios such as exposure noise, illumination changes, and blur. In particular, although blur frequently occurs in real situations, NeRF methods that can handle blurred images have received little attention. The few studies that have investigated NeRF for blurred images have not considered geometric and appearance consistency in 3D space, which is one of the most important factors in 3D reconstruction. This leads to inconsistency and degradation of the perceptual quality of the constructed scene.
Hence, this paper proposes DP-NeRF, a novel clean NeRF framework for blurred images, which is constrained by two physical priors. These priors are derived from the actual blurring process during image acquisition by the camera. DP-NeRF proposes a rigid blurring kernel to impose 3D consistency utilizing the physical priors, and an adaptive weight proposal to refine the color composition error in consideration of the relationship between depth and blur.
We present extensive experimental results for synthetic and real scenes with two types of blur: camera motion blur and defocus blur. The results demonstrate that DP-NeRF successfully improves the perceptual quality of the constructed NeRF while ensuring 3D geometric and appearance consistency. We further demonstrate the effectiveness of our model with a comprehensive ablation analysis.
Physical Scene Priors
Prior 1: A blurred image is generated during in-camera image acquisition.
Based on prior 1, we propose a ray rigid transformation (RRT) to mimic the blurring process of an image; the transformation is shared within each single view. It is defined as an SE(3) field in 3D space over camera poses, which imposes geometric and appearance consistency in 3D space. The parameters that define the SE(3) field are outputs of an MLP optimized during training.
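To illustrate the idea, the sketch below shows how an SE(3) transformation, parameterized by a 6-DoF screw axis such as an MLP might predict, can be applied to camera rays via the exponential map. This is a minimal NumPy sketch under our own assumptions, not the paper's implementation; the function names and the screw-axis parameterization are illustrative.

```python
import numpy as np

def se3_exp(screw):
    """Map a 6-DoF screw vector (omega, v) to a 4x4 SE(3) matrix
    via the exponential map (Rodrigues' rotation formula)."""
    omega, v = screw[:3], screw[3:]
    theta = np.linalg.norm(omega)
    if theta < 1e-8:  # near-zero rotation: pure translation
        T = np.eye(4)
        T[:3, 3] = v
        return T
    axis = omega / theta
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])  # skew-symmetric cross-product matrix
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    V = (np.eye(3) + (1.0 - np.cos(theta)) / theta * K
         + (theta - np.sin(theta)) / theta * (K @ K))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T

def transform_rays(origins, dirs, T):
    """Rigidly transform ray origins (points) and directions (vectors).
    origins, dirs: (N, 3) arrays; T: 4x4 SE(3) matrix."""
    R, t = T[:3, :3], T[:3, 3]
    return origins @ R.T + t, dirs @ R.T  # directions ignore translation
```

In the actual method, one such transformation per blur-kernel sample would be predicted per view, and the scene is rendered along each transformed ray before the colors are composed.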
Prior 2: The blurring process for all pixels in a blurred image occurs simultaneously.
We additionally propose coarse composition weights to integrate the colors rendered from each rigidly transformed ray into the blurry colors. These weights are also produced by an MLP whose encoder is shared with the MLP that defines the SE(3) field.
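The composition step can be sketched as blending the K colors rendered along the transformed rays with normalized weights. Again a minimal NumPy sketch under our own assumptions (softmax normalization of predicted logits is illustrative), not the paper's exact formulation, which further refines these weights with the adaptive weight proposal.

```python
import numpy as np

def compose_blurry_color(colors, logits):
    """Blend colors rendered along K rigidly transformed rays into one
    blurry pixel color using softmax-normalized composition weights.
    colors: (K, 3) rendered RGB values; logits: (K,) predicted weight logits."""
    w = np.exp(logits - logits.max())  # subtract max for numerical stability
    w = w / w.sum()                    # normalize so the weights sum to 1
    return (w[:, None] * colors).sum(axis=0)  # weighted average over rays
```

With uniform logits this reduces to a simple mean of the per-ray colors; learned, non-uniform weights let the model shape the effective blur kernel per pixel.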
DP-NeRF can thus model the blurring kernel while preserving geometric and appearance consistency in 3D space.
We present additional video results for synthetic and real scenes with two types of blur: camera motion blur and defocus blur. The dataset used in our paper is from Deblur-NeRF. The results demonstrate that DP-NeRF successfully improves the perceptual quality of the constructed NeRF while ensuring 3D geometric and appearance consistency. We also show video results of failure cases for object motion blur. As mentioned in the paper, object motion blur is an issue of temporal consistency; DP-NeRF cannot handle object motion blur due to its static-scene assumption.
@InProceedings{Lee_2023_CVPR,
author = {Lee, Dogyoon and Lee, Minhyeok and Shin, Chajin and Lee, Sangyoun},
title = {DP-NeRF: Deblurred Neural Radiance Field With Physical Scene Priors},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023},
pages = {12386-12396}
}