NeRFshop: Interactive Editing of Neural Radiance Fields

I3D 2023

1Inria 2Université Côte d'Azur 3École Polytechnique 4Max Planck Institute

Abstract

Neural Radiance Fields (NeRFs) have revolutionized novel view synthesis for captured scenes, with recent methods allowing interactive free-viewpoint navigation and fast training for scene reconstruction. However, the implicit representations used by these methods — often including neural networks and complex encodings — make them difficult to edit. Some initial methods have been proposed, but they suffer from limited editing capabilities and/or from a lack of interactivity, and are thus unsuitable for interactive editing of captured scenes.

We tackle both limitations and introduce NeRFshop, a novel end-to-end method that allows users to interactively select and deform objects through cage-based transformations. NeRFshop provides fine scribble-based user control for the selection of regions or objects to edit, semi-automatic cage creation, and interactive volumetric manipulation of scene content thanks to our GPU-friendly two-level interpolation scheme. Further, we introduce a preliminary approach that reduces potential resulting artifacts of these transformations with a volumetric membrane interpolation technique inspired by Poisson image editing and provide a process that "distills" the edits into a standalone NeRF representation.

Video

Method

Selection and semi-automatic cage building

(a) Scribbling
(b) Reprojection
(c) Region Growing
(d) Cage Building
(e) Tetrahedralization

We introduce a scribble-based interface allowing the user to select objects — or a volumetric region represented as voxels — in a NeRF [Mildenhall 2020] scene that will be edited (a,b). The user is then free to extend the selection (c), before it is turned into a cage (d) that can be manipulated. The cage is used to create a tetrahedral structure (e), which steers the volumetric deformation process.
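The region-growing step (c) can be pictured as a flood fill over the voxelized selection: starting from the voxels hit by the user's scribbles, neighboring voxels are added while their density is high enough to belong to the same object. The sketch below is a minimal, illustrative version of this idea (plain 6-connected flood fill with a density threshold), not the paper's implementation; the function name and threshold criterion are assumptions.

```python
from collections import deque

def region_grow(density, seeds, threshold):
    """Illustrative region growing on a 3D voxel grid (not the paper's code):
    starting from seed voxels, add any 6-connected neighbor whose density
    exceeds `threshold`, i.e. a density-gated flood fill."""
    nx, ny, nz = len(density), len(density[0]), len(density[0][0])
    selected = set(seeds)          # seeds come from reprojected scribbles
    queue = deque(seeds)
    while queue:
        x, y, z = queue.popleft()
        # Visit the six face-adjacent neighbors.
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (x + dx, y + dy, z + dz)
            if (0 <= n[0] < nx and 0 <= n[1] < ny and 0 <= n[2] < nz
                    and n not in selected
                    and density[n[0]][n[1]][n[2]] > threshold):
                selected.add(n)
                queue.append(n)
    return selected
```

A real system would run this on the occupancy/density grid of the radiance field and let the user trigger growth steps interactively, as in step (c) above.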

Interactive volumetric editing

The cage created as described above allows the user to interactively manipulate the selected region. This includes both rigid transformations (i.e., all cage vertices are displaced by the same transformation) and non-rigid deformations (i.e., subsets of vertices are displaced independently). To do so, we propagate the cage vertex displacements into corresponding manipulations of the enclosed density volume with a two-level interpolation scheme that leverages the parallelism of the GPU and complies with the fast rendering routines of Instant-NGP [Müller 2022], on top of which our implementation is built.
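The core geometric idea behind tetrahedral-mesh deformation is that a point inside a tetrahedron can be expressed in barycentric coordinates of its four vertices; moving the vertices while keeping those coordinates fixed moves the point with the cage. The sketch below shows this mapping in isolation, as an assumption-laden illustration: it handles a single tetrahedron, uses a forward rest-to-deformed mapping, and omits the GPU-parallel, two-level lookup used in the actual system (which, for rendering, maps samples in the deformed region back to the original field).

```python
import numpy as np

def barycentric_coords(p, tet):
    """Barycentric coordinates of point `p` w.r.t. tetrahedron `tet` (4x3 array)."""
    # Solve [v1-v0 | v2-v0 | v3-v0] @ (b1, b2, b3) = p - v0 for the last
    # three coordinates; the first follows from the partition of unity.
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    b123 = np.linalg.solve(T, p - tet[0])
    return np.concatenate([[1.0 - b123.sum()], b123])

def deform_point(p, rest_tet, deformed_tet):
    """Map a sample point from the rest-pose tetrahedron to the deformed one
    by holding its barycentric coordinates fixed (illustrative sketch)."""
    b = barycentric_coords(p, rest_tet)
    return b @ deformed_tet  # weighted sum of the deformed vertices
```

For example, translating every cage vertex by the same offset (a rigid edit) translates every enclosed sample by that offset; displacing only some vertices bends the enclosed volume accordingly.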

Membrane-based correction and Distillation

Editing complete scenes introduces specific challenges, since inconsistent color and density artifacts can appear after (re)moving an object. To reduce these artifacts, we introduce a membrane-based interpolation approach inspired by Poisson Image Editing [Pérez 2003].
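The membrane idea from Poisson editing, reduced to its essence: values inside the affected region are treated as unknowns and filled by solving the Laplace equation with the surrounding (known) values as Dirichlet boundary conditions, which smoothly interpolates the boundary inward. The sketch below demonstrates this on a 1D array via Jacobi iteration; it is a didactic toy, not the paper's volumetric formulation, and the function name and masking convention are assumptions.

```python
def membrane_interpolate(values, mask, iters=2000):
    """Harmonic (membrane) interpolation on a 1D signal: entries where
    `mask` is True are unknowns, filled by repeatedly averaging their
    neighbors (Jacobi iteration for the discrete Laplace equation);
    entries where `mask` is False are fixed Dirichlet boundary values."""
    v = list(values)
    for _ in range(iters):
        new = list(v)
        for i in range(1, len(v) - 1):
            if mask[i]:
                # Discrete Laplace equation: each unknown is the mean
                # of its two neighbors at convergence.
                new[i] = 0.5 * (v[i - 1] + v[i + 1])
        v = new
    return v
```

In 1D the harmonic solution is simply the linear ramp between the boundary values; in the volumetric setting the same principle fills the region exposed by a removed object with colors and densities consistent with its surroundings.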

Similar to “layer flattening/export” in traditional image-editing tools, we provide a distillation step that collapses all performed edits and saves a hashgrid NeRF representation that can be loaded into established pipelines.
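To make the flattening analogy concrete, the sketch below shows the simplest possible form of "baking": querying the edited field at regular sample positions and storing the results in a standalone structure (here a naive dense grid). This is a heavily simplified assumption-based illustration; the actual distillation instead supervises a fresh hashgrid NeRF with such samples so the result loads in standard Instant-NGP-style pipelines.

```python
def distill_to_grid(edited_field, res):
    """Bake an edited density field into a dense voxel grid by querying it
    at voxel centers in the unit cube. Illustrative only: real distillation
    would train a new hashgrid NeRF on these samples rather than store them
    in a dense grid. `edited_field` is an assumed callable (x, y, z) -> density."""
    grid = [[[0.0] * res for _ in range(res)] for _ in range(res)]
    for i in range(res):
        for j in range(res):
            for k in range(res):
                # Voxel-center coordinates in [0, 1]^3.
                x = (i + 0.5) / res
                y = (j + 0.5) / res
                z = (k + 0.5) / res
                grid[i][j][k] = edited_field(x, y, z)
    return grid
```

The key property, shared with layer flattening, is that the output no longer depends on the edit history: all cage deformations and membrane corrections are absorbed into a single self-contained representation.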

Demos

Selection

Region Growing

Cage Building

Manipulation

Membrane-based Correction

Distillation

BibTeX

@Article{NeRFshop23,
  author  = {Jambon, Cl\'ement and Kerbl, Bernhard and Kopanas, Georgios and Diolatzis, Stavros and Leimk{\"u}hler, Thomas and Drettakis, George},
  title   = {NeRFshop: Interactive Editing of Neural Radiance Fields},
  journal = {Proceedings of the ACM on Computer Graphics and Interactive Techniques},
  number  = {1},
  volume  = {6},
  month   = {May},
  year    = {2023},
  url     = {https://repo-sam.inria.fr/fungraph/nerfshop/}
}

Acknowledgments and Funding

This research was funded by the ERC Advanced grant FUNGRAPH No 788065 http://fungraph.inria.fr. The authors are grateful to the OPAL infrastructure from Université Côte d’Azur for providing resources and support. The authors would also like to thank Adobe for their generous research and software donations. Finally, the authors would like to thank Hugues Bruyère (@smallfly) for his scene XmasBalls.

References

[Mildenhall 2020] Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., Ng, R., 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ECCV 2020.

[Müller 2022] Müller, T., Evans, A., Schied, C., Keller, A., 2022. Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. ACM Transactions on Graphics 41(4).

[Pérez 2003] Pérez, P., Gangnet, M., Blake, A., 2003. Poisson Image Editing. ACM Transactions on Graphics 22(3).