Real-Time 3D Scene Generation using Optimized Neural Radiance Fields
Dr. R. Atul Kumar, Manepally Preethika, Kalakunta Srilochan, and Mogulla Hari Charan Reddy
Department of CSE (AI & ML), ACE Engineering College, Hyderabad, Telangana, India.
ABSTRACT: High-quality 3D scene reconstruction is challenging, especially when only a limited number of images are available. Neural Radiance Fields (NeRFs) represent a scene as a continuous volumetric function, allowing novel views to be synthesized from sparse inputs. The scene is encoded in a fully connected neural network that takes 3D coordinates and viewing directions as input and outputs density and color values. Differentiable volume rendering projects these predictions into 2D images, and techniques such as positional encoding and hierarchical sampling improve detail capture and computational efficiency. This approach enables accurate, high-fidelity visualization of complex real-world scenes while avoiding the memory costs of traditional voxel-based methods.

Keywords: Neural Radiance Fields (NeRF), 3D scene reconstruction, differentiable volume rendering, positional encoding, hierarchical sampling, view synthesis, implicit representation
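The positional encoding mentioned in the abstract maps each input coordinate to a vector of sines and cosines at exponentially increasing frequencies, which lets the network represent high-frequency scene detail. A minimal sketch of the standard NeRF-style encoding, assuming the common choice of 10 frequency bands and inclusion of the raw coordinates, is shown below (the function name and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Encode coordinates as gamma(p) = (p, sin(2^k * pi * p), cos(2^k * pi * p))
    for k = 0 .. num_freqs-1, applied elementwise (standard NeRF-style encoding)."""
    x = np.asarray(x, dtype=np.float64)
    feats = [x]  # keep the raw coordinates, as is common in NeRF implementations
    for k in range(num_freqs):
        freq = (2.0 ** k) * np.pi  # frequencies double at each band
        feats.append(np.sin(freq * x))
        feats.append(np.cos(freq * x))
    return np.concatenate(feats, axis=-1)

# A 3D point becomes a (3 + 2 * num_freqs * 3)-dimensional feature vector,
# i.e. 63 dimensions for num_freqs = 10.
point = np.array([0.1, -0.4, 0.7])
encoded = positional_encoding(point, num_freqs=10)
print(encoded.shape)  # (63,)
```

The encoded vector, rather than the bare 3D coordinate, is what the fully connected network consumes; an analogous (typically lower-frequency) encoding is applied to the viewing direction.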