Gaussian Splatting has become the method of choice for 3D reconstruction and real-time rendering of captured real scenes. However, fine appearance details need to be represented as a large number of small Gaussian primitives, which can be wasteful when geometry and appearance exhibit different frequency characteristics.
Inspired by the long tradition of texture mapping, we propose to use texture to represent detailed appearance where possible. Our main focus is to incorporate per-primitive texture maps that adapt to the scene in a principled manner during Gaussian Splatting optimization. We do this by proposing a new appearance representation for 2D Gaussian primitives with textures, where the size of a texel is bounded by the image sampling frequency and adapted to the content of the input images. We achieve this by adaptively upscaling or downscaling the texture resolution during optimization. In addition, our approach enables control of the number of primitives during optimization based on texture resolution. We show that our approach performs favorably against alternative solutions for textured Gaussian primitives in both image quality and total parameter count.
We extend the 2D Gaussian representation to include learnable per-primitive texture maps. By defining the texture maps in the local, axis-aligned space of each primitive, our textures are scale-independent and thus do not stretch as the primitives change size during training, maintaining their capacity to represent details at a specific scale.
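To make the local-frame parameterization concrete, below is a minimal PyTorch sketch of how such a texture lookup could work. The function name, tensor shapes, and the texel-size convention are our assumptions for illustration, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def sample_primitive_texture(texture: torch.Tensor,
                             p_local: torch.Tensor,
                             texel_size: float) -> torch.Tensor:
    """Sample a primitive's texture map at local, axis-aligned coordinates.

    texture:    (C, H, W) learnable texel grid of one primitive.
    p_local:    (N, 2) ray-primitive intersection points expressed in the
                primitive's local frame (same units as texel_size).
    texel_size: edge length of one texel in local units; because the
                lookup depends on texel_size rather than the primitive's
                current extent, resizing the primitive does not stretch
                the texture.
    Returns:    (N, C) bilinearly interpolated texture values.
    """
    c, h, w = texture.shape
    # Normalize local coordinates to [-1, 1]: the grid of W x H texels
    # covers (W * texel_size) x (H * texel_size) local units.
    half_extent = 0.5 * texel_size * torch.tensor([w, h], dtype=p_local.dtype)
    grid = (p_local / half_extent).view(1, 1, -1, 2)   # (1, 1, N, 2)
    out = F.grid_sample(texture.unsqueeze(0), grid,
                        mode="bilinear", align_corners=False)
    return out.view(c, -1).T                           # (N, C)
```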
We develop an upscaling and downscaling strategy that changes the texel size of each primitive during optimization in a content-aware manner, allowing us to allocate model capacity only where it is needed.
Downscaling: We apply a low-pass filter to the texture maps and compute the texel error between the filtered and original versions. If this error is low, we increase the texel size k of the primitive, effectively adopting the filtered version and reducing the number of texels used.
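A minimal sketch of this test, assuming a 2x2 average-pooling low-pass filter and an illustrative error threshold (the paper's exact filter and threshold may differ):

```python
import torch
import torch.nn.functional as F

def try_downscale(texture: torch.Tensor, texel_size: float,
                  error_threshold: float = 1e-3):
    """Halve a texture's resolution when a low-pass filtered version
    reproduces it closely enough. Assumes even H and W."""
    c, h, w = texture.shape
    # Low-pass: 2x2 average pooling, then bilinear upsampling back.
    pooled = F.avg_pool2d(texture.unsqueeze(0), kernel_size=2)
    filtered = F.interpolate(pooled, size=(h, w), mode="bilinear",
                             align_corners=False).squeeze(0)
    # Mean per-texel error between the filtered and original textures.
    texel_error = (filtered - texture).abs().mean()
    if texel_error < error_threshold:
        # The texture is smooth: keep the pooled (half-resolution) texels
        # and double the texel size so the primitive's coverage is unchanged.
        return pooled.squeeze(0), texel_size * 2.0
    return texture, texel_size
```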
Upscaling: We want to quantify how well each primitive fits the underlying scene. To do so, we compute the RGB error between the rendered images and the ground truth, and distribute the error values to the primitives according to their blending weights. If the accumulated error is high, we decrease the texel size k of the primitive, immediately increasing the number of texels and enabling the primitive to match finer-grained details.
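One way to implement this error attribution is sketched below, assuming the rasterizer exposes, per pixel, the indices and blending weights of the composited primitives; the names and shapes are hypothetical.

```python
import torch

def accumulate_primitive_error(pixel_error: torch.Tensor,
                               blend_weights: torch.Tensor,
                               prim_ids: torch.Tensor,
                               num_primitives: int) -> torch.Tensor:
    """Distribute per-pixel RGB error to primitives by blending weight.

    pixel_error:   (P,) mean |rendered - ground truth| per pixel.
    blend_weights: (P, K) alpha-blending weights of the K primitives
                   composited at each pixel.
    prim_ids:      (P, K) long tensor of those primitives' indices.
    Returns:       (num_primitives,) accumulated error per primitive.
    """
    prim_error = torch.zeros(num_primitives)
    weighted = blend_weights * pixel_error.unsqueeze(1)   # (P, K)
    prim_error.scatter_add_(0, prim_ids.reshape(-1), weighted.reshape(-1))
    return prim_error

# A primitive whose accumulated error stays high gets its texel size
# halved, immediately quadrupling its texel count.
```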
We assume that primitives that have a high texture resolution but still exhibit a high error lie in areas with complex underlying geometry. These regions need more primitives to be faithfully reconstructed, so we split such primitives into smaller ones while keeping the texel size fixed.
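The combined decision rule could look like the following sketch, where the minimum texel size stands in for the bound set by the image sampling frequency and the split itself (passed in as a callable) is assumed to follow the usual Gaussian Splatting densification split:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Primitive:
    prim_id: int
    texel_size: float

def adapt_or_split(prim: Primitive, prim_error: float,
                   high_error: float, min_texel_size: float,
                   split_fn: Callable[[Primitive], None]) -> None:
    """Illustrative decision rule: refine the texture while the texel
    size can still shrink; once it reaches its bound, treat the
    remaining error as geometric and split the primitive."""
    if prim_error <= high_error:
        return
    if prim.texel_size > min_texel_size:
        prim.texel_size /= 2.0   # upscaling: finer texels, same coverage
    else:
        split_fn(prim)           # texture is already at its finest: the
                                 # region needs more primitives, so split
                                 # while keeping the texel size fixed
```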
Our method provides scene reconstructions of high visual fidelity while using fewer, but more expressive primitives than the baseline.
By defining the texels in the range [-1, 1], the texture maps act as high-frequency offsets from the base colour of the primitives. This effect is showcased below, where we compare renderings with textures enabled and disabled.
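In code, combining the two could reduce to the following one-liner, where clamping the result to the displayable range is our assumption:

```python
import torch

def shade(base_color: torch.Tensor, texel: torch.Tensor) -> torch.Tensor:
    """base_color: (3,) per-primitive RGB; texel: (3,) sampled texture
    value in [-1, 1], acting as an offset around the base colour."""
    return (base_color + texel).clamp(0.0, 1.0)
```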
With our Adaptive Texel-Size Determination, the appearance of the texture maps is linked to the complexity of the underlying scene content. Primitives in simple regions have coarse textures, while those in regions with high-frequency appearance have fine, more complex textures. This is demonstrated below, where primitives are filtered according to whether their appearance is fine or coarse. Notice how the primitives with fine textures cover complex appearance, like the book covers, and miss coarse regions, like the wall, while the ones with coarse textures capture the general structure of the scene.
Several Gaussian texturing methods were released concurrently with ours. You can find their project websites here:
Textured Gaussians for Enhanced 3D Scene Appearance Modeling
BillBoard Splatting (BBSplat): Learnable Textured Primitives for Novel View Synthesis
@inproceedings{10.2312:sr.20251190,
booktitle = {Eurographics Symposium on Rendering},
editor = {Wang, Beibei and Wilkie, Alexander},
title = {{Content-Aware Texturing for Gaussian Splatting}},
author = {Papantonakis, Panagiotis and Kopanas, Georgios and Durand, Frédo and Drettakis, George},
year = {2025},
publisher = {The Eurographics Association},
ISSN = {1727-3463},
ISBN = {978-3-03868-292-9},
DOI = {10.2312/sr.20251190}
}
This work was funded by the European Research Council (ERC) Advanced Grant NERPHYS, number 101141721 https://project.inria.fr/nerphys. The authors are grateful to the OPAL infrastructure of the Université Côte d'Azur for providing resources and support, as well as Adobe and NVIDIA for software and hardware donations. This work was granted access to the HPC resources of IDRIS under the allocation AD011015561 made by GENCI. F. Durand acknowledges funding from Google, Amazon, and MIT-GIST.