Our method combines the strengths of StyleGAN and volumetric neural field rendering to generate a 3D mesoscale texture that can be mapped onto objects and used in a path tracer (c). We train on datasets of synthetic patches (a); our method generates textures with artistic parameters (such as fur saturation and length) that can be used to create shell maps of arbitrary extent (b).

Abstract

We introduce MesoGAN, a model for generative 3D neural textures. This new graphics primitive represents mesoscale appearance by combining the strengths of generative adversarial networks (StyleGAN) and volumetric neural field rendering.

The primitive can be applied to surfaces as a neural reflectance shell: a thin volumetric layer above the surface whose appearance parameters are defined by a neural network. To construct the neural shell, we first generate a 2D feature texture using StyleGAN with carefully randomized Fourier features to support arbitrarily sized textures without repetition artifacts.
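
To illustrate the idea, here is a minimal sketch (in PyTorch, not the authors' code) of phase-randomized Fourier features: each feature channel is a sinusoid over continuous texture coordinates whose phase is re-randomized per synthesis, so sliding the coordinate window over an arbitrarily large extent never replays a fixed pattern. The function name and the fixed frequencies are illustrative; in StyleGAN3 the frequencies are learned.

import torch

def phase_randomized_fourier_features(coords, freqs):
    # coords: (H, W, 2) continuous texture coordinates.
    # freqs:  (C, 2) per-channel 2D frequency vectors (learned in practice).
    # Returns a (C, H, W) feature map with freshly sampled random phases.
    phases = 2.0 * torch.pi * torch.rand(freqs.shape[0])      # one phase per channel
    angles = 2.0 * torch.pi * (coords @ freqs.t()) + phases   # (H, W, C)
    return torch.sin(angles).permute(2, 0, 1)                 # (C, H, W)

# Usage: evaluate 64 feature channels over a 256x256 window of the plane.
ys, xs = torch.meshgrid(torch.linspace(0, 1, 256),
                        torch.linspace(0, 1, 256), indexing="ij")
coords = torch.stack([xs, ys], dim=-1)                        # (256, 256, 2)
features = phase_randomized_fourier_features(coords, torch.randn(64, 2) * 8.0)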

We augment the 2D feature texture with a learned height feature, which aids the neural field renderer in producing volumetric parameters from the 2D texture. To facilitate filtering, and to enable end-to-end training within the memory constraints of current hardware, we use a hierarchical texturing approach and train our model on multi-scale synthetic datasets of 3D mesoscale structures.
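
As a concrete illustration, the sketch below (a hypothetical helper, assuming PyTorch; not the paper's code) queries such a hierarchical feature texture: the feature map is stored as a mip pyramid, and a fractional level of detail blends the two nearest levels, which is what makes filtered, memory-bounded lookups possible during training.

import torch
import torch.nn.functional as F

def sample_hierarchical(mips, uv, level):
    # mips:  list of (1, C, H_i, W_i) feature textures, fine to coarse.
    # uv:    (N, 2) lookup coordinates in [-1, 1].
    # level: fractional mip level chosen from the pixel footprint.
    lo = int(level)
    hi = min(lo + 1, len(mips) - 1)
    t = level - lo
    grid = uv.view(1, -1, 1, 2)                               # grid_sample layout
    f_lo = F.grid_sample(mips[lo], grid, align_corners=False)
    f_hi = F.grid_sample(mips[hi], grid, align_corners=False)
    f = (1.0 - t) * f_lo + t * f_hi                           # blend adjacent levels
    return f.squeeze(-1).squeeze(0).t()                       # (N, C) features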

We propose one possible approach for conditioning MesoGAN on artistic parameters (e.g., fiber length, density of strands, lighting direction) and demonstrate and discuss integration into physically based renderers.
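
One simple way to realize such conditioning (a sketch under our own assumptions, not necessarily the paper's exact architecture) is to append the normalized artistic parameters to the latent code before the StyleGAN mapping network, so every style vector is a function of both the random seed and the controls.

import torch
import torch.nn as nn

class ConditionedMapping(nn.Module):
    # Hypothetical mapping network: latent z plus a small vector of
    # artistic parameters (e.g., fiber length, strand density) -> style w.
    def __init__(self, z_dim=512, p_dim=2, w_dim=512, depth=4):
        super().__init__()
        layers, d = [], z_dim + p_dim
        for _ in range(depth):
            layers += [nn.Linear(d, w_dim), nn.LeakyReLU(0.2)]
            d = w_dim
        self.net = nn.Sequential(*layers)

    def forward(self, z, params):
        # Concatenate artistic controls with the latent before mapping.
        return self.net(torch.cat([z, params], dim=-1))

w = ConditionedMapping()(torch.randn(1, 512), torch.tensor([[0.7, 0.3]]))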

Overview

Overview: We use the StyleGAN3 generator, extended with injected phase-randomized Fourier features, to produce a hierarchical feature texture. When mapped onto a surface, the texture conditions an MLP that infers density and reflectance values of the mesostructure in a volumetric shell above the surface. Given a point (e.g., on a primary ray) inside the shell, the inferred reflectance values can be used to evaluate transport (illustrated as yellow paths) between this point and another point on the top boundary of the shell.
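
To make the rendering step concrete, here is a minimal sketch of compositing along a ray segment through the shell (not the paper's implementation, which evaluates full transport; sample_features, the MLP output layout, and single-scattering compositing are assumptions for illustration).

import torch

def render_shell(mlp, sample_features, uv, heights, step):
    # uv:      (S, 2) surface coordinates of S samples along the ray.
    # heights: (S,) sample heights inside the shell, in [0, 1].
    # step:    marching step length between consecutive samples.
    feats = sample_features(uv)                               # (S, C) texture features
    x = torch.cat([feats, heights[:, None]], dim=-1)          # append height coordinate
    out = mlp(x)                                              # (S, 1 + 3): density + RGB
    sigma = out[:, 0].relu()
    albedo = out[:, 1:].sigmoid()
    alpha = 1.0 - torch.exp(-sigma * step)                    # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    return (trans[:, None] * alpha[:, None] * albedo).sum(dim=0)  # composited RGB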

Baseline Comparison

We compare our method against π-GAN, StyleNeRF, and EG3D; our representation is the only one that achieves high quality while maintaining view consistency (without the need for image-space operations), which is crucial for integration into a modern path tracer.

BibTeX


@Article{DNRGARD23,
  author       = "Diolatzis, Stavros and Novak, Jan and Rousselle, Fabrice and Granskog, Jonathan and Aittala, Miika and Ramamoorthi, Ravi and Drettakis, George",
  title        = "MesoGAN: Generative Neural Reflectance Shells",
  journal      = "Computer Graphics Forum",
  year         = "2023",
  url          = "http://www-sop.inria.fr/reves/Basilic/2023/DNRGARD23"
}

Acknowledgments and Funding

This research was funded by the ERC Advanced grant FUNGRAPH No 788065. The authors are grateful to Adobe for generous donations, to the OPAL infrastructure from Université Côte d'Azur, and for the HPC resources from GENCI–IDRIS (Grant 2022-AD011013518). The authors would also like to thank the anonymous reviewers for their valuable feedback and helpful suggestions.