EditableNeRF: Editing Topologically Varying Neural Radiance Fields by Key Points

CVPR 2023



Abstract


Neural radiance fields (NeRF) achieve highly photo-realistic novel-view synthesis, but editing the scenes modeled by NeRF-based methods remains challenging, especially for dynamic scenes. We propose editable neural radiance fields that enable end-users to easily edit dynamic scenes and even support topological changes. Taking an image sequence from a single camera as input, our network is trained fully automatically and models topologically varying dynamics using our picked-out surface key points. End-users can then edit the scene by simply dragging the key points to desired new positions. To achieve this, we propose a scene analysis method that detects and initializes key points by considering the dynamics in the scene, and a weighted key points strategy that models topologically varying dynamics by jointly optimizing key points and weights. Our method supports intuitive multi-dimensional (up to 3D) editing and can generate novel scenes that are unseen in the input sequence. Experiments demonstrate that our method achieves high-quality editing on various dynamic scenes and outperforms the state of the art. Our code and captured data are available on this page.


Video


Method Overview

[Figure: pipeline overview]

A query point x in frame t is first warped into the canonical space by a warp field conditioned on a per-frame latent code βt. Next, we compute the key point weights of the canonical point x′ and use them to form a linear combination of all key point positions kt, called the weighted key points. The weighted key points and x′ are then fed into the NeRF MLP, and the output density and color are used for volumetric rendering. During training, optical flow and depth maps are used to supervise the key point positions.
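
To make the data flow concrete, below is a minimal PyTorch sketch of this per-point pipeline. All names (EditableNeRFSketch, warp_field, weight_net, nerf_mlp), network widths, and the number of key points are illustrative assumptions rather than the actual EditableNeRF implementation, and positional encoding, view directions, and the volumetric rendering step are omitted.

```python
# Minimal sketch of the described pipeline; hypothetical module names and sizes.
import torch
import torch.nn as nn


class EditableNeRFSketch(nn.Module):
    def __init__(self, num_key_points=3, latent_dim=8, hidden=128):
        super().__init__()
        # Warp field: maps a query point x and the frame latent code beta_t
        # to an offset that carries x into the canonical space (x' = x + dx).
        self.warp_field = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )
        # Key point weight network: predicts one weight per key point for x'.
        self.weight_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, num_key_points),
        )
        # NeRF MLP: conditioned on x' and the weighted key points,
        # outputs density and RGB color.
        self.nerf_mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (density, r, g, b)
        )

    def forward(self, x, beta_t, key_points_t):
        # x:            (N, 3) query points sampled along camera rays
        # beta_t:       (latent_dim,) latent deformation code of frame t
        # key_points_t: (K, 3) key point positions in frame t
        n = x.shape[0]
        beta = beta_t.expand(n, -1)
        # 1) Warp the query point into the canonical space.
        x_canonical = x + self.warp_field(torch.cat([x, beta], dim=-1))
        # 2) Compute key point weights of x' and form the weighted key points,
        #    a linear combination of all key point positions.
        w = torch.softmax(self.weight_net(x_canonical), dim=-1)  # (N, K)
        weighted_kp = w @ key_points_t                           # (N, 3)
        # 3) Feed the NeRF MLP with x' and the weighted key points.
        out = self.nerf_mlp(torch.cat([x_canonical, weighted_kp], dim=-1))
        density = torch.relu(out[..., :1])
        color = torch.sigmoid(out[..., 1:])
        return density, color


if __name__ == "__main__":
    model = EditableNeRFSketch()
    x = torch.rand(1024, 3)          # ray samples
    beta_t = torch.zeros(8)          # latent code for frame t
    key_points_t = torch.rand(3, 3)  # key point positions in frame t
    density, color = model(x, beta_t, key_points_t)
    print(density.shape, color.shape)
```

Under this sketch, editing would amount to rendering with the key point positions kt replaced by the user-dragged positions while keeping the trained networks fixed.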


Citation