Improving Neural Radiance Fields for More Efficient, Tailored, View-Synthesis
dc.contributor.author | Nair, Saeejith Muralidharan | |
dc.date.accessioned | 2024-09-17T14:32:02Z | |
dc.date.available | 2024-09-17T14:32:02Z | |
dc.date.issued | 2024-09-17 | |
dc.date.submitted | 2024-09-04 | |
dc.description.abstract | Neural radiance fields (NeRFs) have revolutionized novel view synthesis, enabling high-quality 3D scene reconstruction from sparse 2D images. However, their computational intensity often hinders real-time applications and deployment on resource-constrained devices. Traditional NeRF models can require days of training for a single scene and demand significant computational resources for rendering, with some implementations necessitating over 150 million network evaluations per rendered image. While various approaches have been proposed to improve NeRF efficiency, they often employ fixed network architectures that may not be optimal for all scenes. This research introduces NAS-NeRF, a new approach that employs generative neural architecture search (NAS) to discover compact, scene-specialized NeRF architectures. NAS, a technique for automatically designing neural network architectures, is investigated as a method for optimizing NeRFs by tailoring network architectures to the specific complexities of individual scenes. NAS-NeRF reformulates the NeRF architecture into configurable field cells, enabling efficient exploration of the architecture space while maintaining compatibility with various NeRF variants. Our method incorporates a scene-specific optimization strategy that considers the unique characteristics of each 3D environment to guide architecture search. We also introduce a quality-constrained generation approach that allows target performance metrics to be specified within the search process. Experiments on the Blender synthetic dataset demonstrate the effectiveness of NAS-NeRF in generating a family of architectures tailored to different efficiency-quality trade-offs. Our most efficient models (NAS-NeRF XXS) achieve up to a 23× reduction in parameters and 22× fewer FLOPs compared to baseline NeRF, with only a 5.3% average drop in structural similarity (SSIM). Meanwhile, our high-quality models (NAS-NeRF S) match or exceed baseline performance while reducing parameters by 2-4× and offering up to 1.93× faster inference. These results suggest that high-quality novel view synthesis can be achieved with more compact models, particularly when architectures are tailored to specific scenes. NAS-NeRF contributes to the ongoing research into efficient 3D scene representation methods, helping enable applications in resource-constrained environments and real-time scenarios. | |
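As a rough illustration of the idea described in the abstract, the sketch below shows a NeRF-style MLP whose width and depth are exposed as configuration parameters, so that a search procedure could sample compact, scene-specialized variants and keep the smallest one meeting a quality target. The names (FieldCell, hidden_dim, num_layers) and the positional-encoding details are assumptions for illustration, not the thesis's actual field-cell formulation.

```python
# Minimal sketch (assumed names, not the thesis implementation): a configurable
# NeRF "field cell" whose width and depth are search parameters for NAS.
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int) -> torch.Tensor:
    """Standard NeRF-style frequency encoding of 3D coordinates."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device)
    angles = x[..., None] * freqs                       # (..., 3, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                    # (..., 3 * 2 * num_freqs)


class FieldCell(nn.Module):
    """A NeRF-style MLP whose width and depth come from an architecture config."""

    def __init__(self, num_freqs: int = 10, hidden_dim: int = 64, num_layers: int = 4):
        super().__init__()
        in_dim = 3 * 2 * num_freqs
        layers, dim = [], in_dim
        for _ in range(num_layers):
            layers += [nn.Linear(dim, hidden_dim), nn.ReLU()]
            dim = hidden_dim
        self.trunk = nn.Sequential(*layers)
        self.head = nn.Linear(hidden_dim, 4)            # RGB + density
        self.num_freqs = num_freqs

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.head(self.trunk(positional_encoding(xyz, self.num_freqs)))


# A NAS loop would sample configs like these, train briefly per scene, and keep
# the smallest candidate that still satisfies a quality constraint (e.g. an SSIM floor).
xxs = FieldCell(hidden_dim=16, num_layers=2)            # very compact candidate
s = FieldCell(hidden_dim=64, num_layers=4)              # higher-quality candidate
print(sum(p.numel() for p in xxs.parameters()),
      sum(p.numel() for p in s.parameters()))
```

The parameter-count comparison at the end mirrors, in spirit, the efficiency-quality trade-off the abstract reports across the NAS-NeRF XXS and S families.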
dc.identifier.uri | https://hdl.handle.net/10012/21012 | |
dc.language.iso | en | |
dc.pending | false | |
dc.publisher | University of Waterloo | en |
dc.subject | neural radiance field | |
dc.subject | neural architecture search | |
dc.title | Improving Neural Radiance Fields for More Efficient, Tailored, View-Synthesis | |
dc.type | Master Thesis | |
uws-etd.degree | Master of Applied Science | |
uws-etd.degree.department | Systems Design Engineering | |
uws-etd.degree.discipline | Systems Design Engineering | |
uws-etd.degree.grantor | University of Waterloo | en |
uws-etd.embargo.terms | 0 | |
uws.contributor.advisor | Wong, Alexander | |
uws.contributor.advisor | Shafiee, Mohammad Javad | |
uws.contributor.affiliation1 | Faculty of Engineering | |
uws.peerReviewStatus | Unreviewed | en |
uws.published.city | Waterloo | en |
uws.published.country | Canada | en |
uws.published.province | Ontario | en |
uws.scholarLevel | Graduate | en |
uws.typeOfResource | Text | en |