Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering

Abstract

Neural rendering methods have significantly advanced photo-realistic 3D scene rendering in various academic and industrial applications. The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed by combining the benefits of primitive-based and volumetric representations.

However, it often produces heavily redundant Gaussians that try to fit every training view while neglecting the underlying scene geometry.

Consequently, the resulting model becomes less robust to significant view changes, texture-less areas, and lighting effects.

We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians and predicts their attributes on-the-fly based on viewing direction and distance within the view frustum. Anchor growing and pruning strategies, guided by the importance of the neural Gaussians, reliably improve scene coverage.
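To make the anchor mechanism concrete, the following is a minimal PyTorch sketch of how a per-anchor feature, combined with the relative viewing direction and distance, could be decoded into the attributes of the neural Gaussians spawned from that anchor. All names, tensor shapes, MLP widths, and the choice of k Gaussians per anchor are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn


class AnchorDecoder(nn.Module):
    """Decode view-adaptive neural Gaussians from anchor features.

    A minimal sketch of the anchor-based prediction described above.
    Feature width, MLP depth, the number of Gaussians per anchor (k),
    and the exact attribute parameterization are assumptions.
    """

    def __init__(self, feat_dim: int = 32, k: int = 10):
        super().__init__()
        self.k = k
        # Input: anchor feature + view direction (3) + view distance (1).
        in_dim = feat_dim + 3 + 1
        self.opacity_mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, k), nn.Tanh())
        self.color_mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 3 * k), nn.Sigmoid())
        self.cov_mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 7 * k))  # per Gaussian: 3 scale + 4 quaternion

    def forward(self, anchor_pos, anchor_feat, offsets, scaling, cam_pos):
        # anchor_pos: (N, 3)    anchor_feat: (N, F)
        # offsets:    (N, k, 3) learnable, local to each anchor
        # scaling:    (N, 3)    per-anchor scale    cam_pos: (3,)
        delta = anchor_pos - cam_pos                 # camera -> anchor
        dist = delta.norm(dim=-1, keepdim=True)      # (N, 1)
        view_dir = delta / dist                      # (N, 3)
        h = torch.cat([anchor_feat, view_dir, dist], dim=-1)

        # Attributes are decoded on the fly, so they vary with the viewpoint.
        opacity = self.opacity_mlp(h)                    # (N, k), in [-1, 1]
        color = self.color_mlp(h).view(-1, self.k, 3)    # (N, k, 3)
        cov = self.cov_mlp(h).view(-1, self.k, 7)        # (N, k, 7)

        # Neural Gaussian centers: anchor position plus scaled learnable offsets.
        xyz = anchor_pos.unsqueeze(1) + offsets * scaling.unsqueeze(1)
        return xyz, opacity, color, cov
```

In such a design, Gaussians whose decoded opacity falls below a threshold can be skipped before rasterization, and accumulated statistics of the decoded values could drive the anchor growing and pruning mentioned above; both details are assumptions of this sketch.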

We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering. We also demonstrate an enhanced capability to accommodate scenes with varying levels of detail and view-dependent observations, without sacrificing rendering speed.

Figure 1. Scaffold-GS represents the scene using a set of 3D Gaussians structured in a dual-layered hierarchy. Anchored on a sparse grid of initial points, a modest set of neural Gaussians is spawned from each anchor to dynamically adapt to varying viewing angles and distances.

Our method achieves rendering quality and speed comparable to 3DGS with a more compact model (last row of metrics: PSNR / storage size / FPS). Across multiple datasets, Scaffold-GS demonstrates greater robustness in large outdoor scenes and intricate indoor environments under challenging viewing conditions, e.g., transparency, specularity, reflection, texture-less regions, and fine-scale details.
