3D Gaussian Splatting (3DGS) has attracted significant attention for its high-quality novel view rendering, inspiring research to address real-world challenges. While conventional methods rely on sharp images for accurate scene reconstruction, real-world captures are often affected by defocus blur due to the finite depth of field, making it essential to account for this blur when building a realistic 3D scene representation. In this study, we propose CoCoGaussian, a Circle of Confusion-aware Gaussian Splatting framework that enables precise 3D scene representation using only defocused images. CoCoGaussian addresses the challenge of defocus blur by modeling the Circle of Confusion (CoC) through a physically grounded approach based on the principles of photographic defocus. Leveraging 3D Gaussians, we compute the CoC diameter from depth and learnable aperture information and generate multiple Gaussians to precisely capture the CoC shape. Furthermore, we introduce a learnable scaling factor that enhances robustness and provides more flexibility in handling unreliable depth in scenes with reflective or refractive surfaces. Experiments on both synthetic and real-world datasets demonstrate that CoCoGaussian achieves state-of-the-art performance across multiple benchmarks.
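For context, the CoC diameter referenced above is conventionally given by the thin-lens relation; the sketch below uses generic optics notation and is not necessarily the paper's exact formulation:
$$
c = A \cdot \frac{|d - d_f|}{d} \cdot \frac{f}{d_f - f},
$$
where $A$ is the aperture diameter, $f$ the focal length, $d_f$ the in-focus distance, and $d$ the depth of the scene point. Under such a model, points near the focal plane ($d \approx d_f$) yield a vanishing CoC and remain sharp, while depth and aperture jointly control how strongly out-of-focus points are blurred.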